NWChem long

damotbe
Project administrator
Project developer
Project scientist

Joined: 23 Jul 19
Posts: 178
Credit: 276,865,561
RAC: 2,334,804
Message 589 - Posted: 18 Feb 2020, 9:01:04 UTC

Dear crunchers,

Thank you very much! You are a really great help.

Initially, only two hundred "NWChem long" workunits were supposed to be submitted for testing. Your numerous results, and also the numerous failures, allow us to better evaluate the parameters for this new simulation. I still have to write some scripts for the new inputs, but I should be able to submit jobs soon. Don't hesitate to cancel in-progress "NWChem long" workunits.

Cheers,
Benoit
morgan

Joined: 27 Oct 19
Posts: 1
Credit: 148,504
RAC: 143
Message 590 - Posted: 18 Feb 2020, 10:53:00 UTC - in response to Message 589.  

Wrong decision to cancel all WUs :(

I had 2 good ones waiting for validation, and 1 in progress. No errors here!
The wingman for one of the pending WUs has done all his WUs with success, so I was waiting for my 5000-credit payment. That will now not happen, sad.
damotbe
Project administrator
Project developer
Project scientist

Joined: 23 Jul 19
Posts: 178
Credit: 276,865,561
RAC: 2,334,804
Message 591 - Posted: 18 Feb 2020, 12:04:32 UTC - in response to Message 590.  

Wrong decision to cancel all WUs :(

I had 2 good ones waiting for validation, and 1 in progress. No errors here!
The wingman for one of the pending WUs has done all his WUs with success, so I was waiting for my 5000-credit payment. That will now not happen, sad.


It's an application under test. For reasons of homogeneity and comparability, we will have to redo the calculations with the new parameters. Cancelling WUs therefore saves the volunteers' computing power.
PHILIPPE

Joined: 4 Jan 20
Posts: 25
Credit: 200,536
RAC: 1,130
Message 592 - Posted: 18 Feb 2020, 12:53:22 UTC - in response to Message 591.  
Last modified: 18 Feb 2020, 13:18:02 UTC

Do you intend to include a relation between the RAM requirement and the number of CPUs a work unit is using?
Some projects have defined such a relation: M = A + B * N,
where M is the RAM required, A and B are constants, and N is the number of CPUs used. The rule is enforced server-side.
So if a host doesn't have enough RAM, the work unit doesn't start.
Will your project do the same?
It seems that for NWChem it is 2048 MB (single core). If a host doesn't have this amount of memory, the server doesn't send any work unit. But is it the total memory present in the host, or the free memory available when the client request is made?
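The rule described above could be sketched like this (a minimal illustration only; A and B are hypothetical constants picked so that the single-core case matches the 2048 MB threshold, not values from any actual project):

```python
def required_ram_mb(n_cpus, a_mb=1280, b_mb=768):
    """M = A + B * N, with hypothetical constants A = 1280 MB, B = 768 MB per CPU."""
    return a_mb + b_mb * n_cpus

def can_send_work(free_ram_mb, n_cpus):
    # A scheduler applying this rule would refuse a host whose free RAM is below M.
    return free_ram_mb >= required_ram_mb(n_cpus)

print(required_ram_mb(1))      # 2048 -> matches the single-core threshold
print(can_send_work(2048, 1))  # True
print(can_send_work(2048, 2))  # False: M = 1280 + 2*768 = 2816 MB
```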
damotbe
Project administrator
Project developer
Project scientist

Joined: 23 Jul 19
Posts: 178
Credit: 276,865,561
RAC: 2,334,804
Message 593 - Posted: 18 Feb 2020, 20:39:41 UTC - in response to Message 592.  

Do you intend to include a relation between the RAM requirement and the number of CPUs a work unit is using?
Some projects have defined such a relation: M = A + B * N,
where M is the RAM required, A and B are constants, and N is the number of CPUs used. The rule is enforced server-side.
So if a host doesn't have enough RAM, the work unit doesn't start.
Will your project do the same?
It seems that for NWChem it is 2048 MB (single core). If a host doesn't have this amount of memory, the server doesn't send any work unit. But is it the total memory present in the host, or the free memory available when the client request is made?


The free memory available when the client request is made (as far as I understand).
PHILIPPE

Joined: 4 Jan 20
Posts: 25
Credit: 200,536
RAC: 1,130
Message 596 - Posted: 19 Feb 2020, 18:00:06 UTC - in response to Message 593.  

I recently tried to create a Linux virtual machine in VirtualBox to crunch your project,
just to see the difference from the way I usually do it.
Having less than 4 GB of RAM, I can't give more than half of it to the virtual machine, because the computer becomes sluggish.
But when I managed to configure my Ubuntu VM, I didn't receive any work unit, because the server declared it's not possible to send jobs to hosts with less than 2048 MB.
So I was wondering: why this limit?
Looking at the RAM footprint of some molecules in the Windows Task Manager, I saw only 30 MB at the beginning of an NWChem "short" task, and then it decreased slowly until the end.
I thought it a pity to have set such a high value.
I don't know the behaviour of NWChem "long"; it should be higher, of course.
But on reflection, you may be obliged to use this value: it is probably written in a strict protocol to give the work a solid scientific meaning. The environment of the numerical simulation has to be clearly determined to avoid methodological mistakes.
So, sorry for the disturbance, and carry on serenely...
damotbe
Project administrator
Project developer
Project scientist

Joined: 23 Jul 19
Posts: 178
Credit: 276,865,561
RAC: 2,334,804
Message 597 - Posted: 19 Feb 2020, 21:20:38 UTC - in response to Message 596.  

I recently tried to create a Linux virtual machine in VirtualBox to crunch your project,
just to see the difference from the way I usually do it.
Having less than 4 GB of RAM, I can't give more than half of it to the virtual machine, because the computer becomes sluggish.
But when I managed to configure my Ubuntu VM, I didn't receive any work unit, because the server declared it's not possible to send jobs to hosts with less than 2048 MB.
So I was wondering: why this limit?
Looking at the RAM footprint of some molecules in the Windows Task Manager, I saw only 30 MB at the beginning of an NWChem "short" task, and then it decreased slowly until the end.
I thought it a pity to have set such a high value.
I don't know the behaviour of NWChem "long"; it should be higher, of course.
But on reflection, you may be obliged to use this value: it is probably written in a strict protocol to give the work a solid scientific meaning. The environment of the numerical simulation has to be clearly determined to avoid methodological mistakes.
So, sorry for the disturbance, and carry on serenely...


Personally, I have a Linux VM, and in VirtualBox I allocated more memory than I needed in order to be able to take jobs. I found that the VM only takes what it actually consumes, so it's not crazy to allocate 2048 MB or even more for short jobs. It may at least be worth testing.

The current campaign is homogeneous at 2 GB of memory, but the remark is relevant and we'll have to find a better threshold next time.
ProDigit

Joined: 16 Nov 19
Posts: 44
Credit: 19,091,149
RAC: 119,568
Message 598 - Posted: 20 Feb 2020, 19:30:56 UTC
Last modified: 20 Feb 2020, 19:31:32 UTC

I'm not sure whether memory can be shared across cores on multicore hosts; 2 GB per core is a lot.
I haven't seen those numbers running natively on Linux (usually 1.7 GB is used out of 8 GB, with 2 CPU cores running QCPA and the rest of the RAM shared with the OS and 4 GPU threads).

I really wish you had made these batches for GPUs.
You'd have 200 tasks done in a day (20,000 even, if need be, on a single PC).
Aurum

Joined: 14 Dec 19
Posts: 38
Credit: 11,946,661
RAC: 2,014
Message 599 - Posted: 22 Feb 2020, 4:10:29 UTC
Last modified: 22 Feb 2020, 4:10:47 UTC

The native Linux app has been using 0.8-0.9 GB of RAM. Since RAM may be a limiting factor, I put a warning in my app_config, e.g. for Rosetta:
<app_config>
    <app>
        <name>rosetta</name>
        <!-- needs 5 MB L3 cache per WU -->
        <!-- needs 1.5 GB RAM per WU -->
        <!-- Xeon E5-2686 v4, 18c36t, 32 GB, 45 MB L3 cache -->
        <max_concurrent>9</max_concurrent>
    </app>
</app_config>
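If I read the numbers in the comments right, the 9 presumably comes from the L3 cache being the binding constraint rather than RAM; a quick sketch of that arithmetic:

```python
# Host from the comments above: 45 MB L3 cache, 32 GB RAM.
# Per-WU needs from the comments: 5 MB L3 cache, 1.5 GB RAM.
cache_limit = 45 // 5       # 9 tasks fit in L3 cache
ram_limit = int(32 / 1.5)   # 21 tasks fit in RAM

max_concurrent = min(cache_limit, ram_limit)
print(max_concurrent)  # 9 -> cache, not RAM, sets the limit
```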

I also set my swap file to 16 GB:
sudo swapoff -a
sudo dd if=/dev/zero of=/swapfile bs=1M count=16384
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
ProDigit

Joined: 16 Nov 19
Posts: 44
Credit: 19,091,149
RAC: 119,568
Message 677 - Posted: 10 Mar 2020, 21:29:34 UTC

RAM nowadays is cheap, especially DDR3.
My last Xeon server got maxed out with 4x 4 GB; the board doesn't support larger DIMMs. The 4x 4 GB cost me like $40, since I already had 2x 4 GB DIMMs in there.


©2020 Benoit DA MOTA - LERIA, University of Angers, France