yoyo wrote:

Workunits are NOT split based on different ranges!

ECM, the elliptic curve method, is a trial factoring method. For a batch we have to run many different trials, each starting with a different random seed.

So all computations are different trials. For "easy" factors it is therefore possible that the same factor is found with different random seeds.
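yoyo's point, that independent trials with different random seeds can stumble on the very same small factor, can be illustrated with a toy stage-1 Lenstra ECM. This is a sketch for illustration only, not the project's actual application; all function names below are made up.

```python
import random
from math import gcd, isqrt, log

def _add(P, Q, a, n):
    """Affine point addition mod n. A non-invertible denominator is the
    ECM 'success' event: its gcd with n is raised as ValueError."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                      # point at infinity
    if P == Q:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
    else:
        num, den = (y2 - y1) % n, (x2 - x1) % n
    g = gcd(den, n)
    if g != 1:
        raise ValueError(g)              # non-trivial gcd (or n itself)
    lam = num * pow(den, -1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def _mul(k, P, a, n):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = _add(R, P, a, n)
        P = _add(P, P, a, n)
        k >>= 1
    return R

def ecm_trial(n, B1, seed):
    """One stage-1 ECM trial on a random curve; returns a factor or None."""
    rng = random.Random(seed)
    # Random point (x, y) and coefficient a; b = y^2 - x^3 - a*x is implicit.
    P = (rng.randrange(n), rng.randrange(n))
    a = rng.randrange(n)
    try:
        for p in range(2, B1 + 1):
            if all(p % q for q in range(2, isqrt(p) + 1)):   # p is prime
                P = _mul(p ** int(log(B1, p)), P, a, n)
    except ValueError as e:
        g = e.args[0]
        if 1 < g < n:
            return g
    return None

# Different seeds are independent trials; for an "easy" (small) factor,
# several of them tend to rediscover the same factor of n = 91 = 7 * 13.
found = {s: ecm_trial(91, 100, s) for s in range(8)}
```

Some seeds fail (the trial returns `None`), but among the successful ones the small factors 7 and 13 show up repeatedly, which is exactly the duplication yoyo describes for a distributed batch.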

I understand, thank you.

May I just ask, though: what makes a factor "easy"?

Statistics: Posted by Tom Poleski — 06.12.2018 04:12

So all computations are different trials. For "easy" factors it is therefore possible that the same factor is found with different random seeds.

Statistics: Posted by yoyo — 04.12.2018 20:32

Michael.

Statistics: Posted by Michael H.W. Weber — 04.12.2018 12:34

yoyo wrote:

To find a factor, thousands of workunits have to be computed. This is done in parallel by hundreds of users, so if an easy factor exists, it may be found by many users in parallel. As soon as the server receives a found factor, it cancels all workunits not yet sent out, and when a client connects, it tells it to abort all workunits it has not yet started.
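The cancellation scheme described above can be sketched as simple batch bookkeeping. This is a hypothetical toy model, not the actual BOINC server code; class and workunit names are made up.

```python
class EcmBatch:
    """Toy model of a batch of ECM workunits for one number."""

    def __init__(self, wu_ids):
        self.unsent = set(wu_ids)    # generated but not yet handed out
        self.sent = set()            # handed out, possibly running
        self.factor = None

    def hand_out(self, wu_id):
        self.unsent.discard(wu_id)
        self.sent.add(wu_id)

    def report_factor(self, factor):
        """A client reported a factor: cancel everything not yet sent."""
        self.factor = factor
        cancelled, self.unsent = self.unsent, set()
        return cancelled

    def on_connect(self, unstarted_on_client):
        """On client contact, return which of its not-yet-started
        workunits it should abort (all of them once a factor is known)."""
        return set(unstarted_on_client) if self.factor else set()

batch = EcmBatch(["wu1", "wu2", "wu3", "wu4", "wu5"])
batch.hand_out("wu1")
batch.hand_out("wu2")
batch.report_factor(7)        # wu3..wu5 are cancelled server-side
batch.on_connect(["wu2"])     # a connecting client is told to abort wu2
```

Note that workunits already started on a client are left alone; only unsent and unstarted work is discarded, matching yoyo's description.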

But, as far as I understand, each workunit should be attempting a different range, no? Otherwise, would it not be a waste of resources? If multiple computers find the same factor in parallel, doesn't that mean multiple computers are computing the same workunit, or at least performing the same calculation on the same range of data across multiple workunits?

Perhaps I just do not understand enough about the mathematics behind it. Thank you for the reply.

Statistics: Posted by Tom Poleski — 04.12.2018 10:48

How can I avoid receiving those WUs with an expected running time of 2 days?

The one above was predicted to run 9 hours; it took 34 hours!

I changed the BAM local preferences to accept only work for 0.5 days, but I am still receiving "monsters" with a deadline of 5 days!

Considering the above and a ratio of about 1/3 (expected/actual runtime), the received WUs will never finish before the deadline!

So I need to abort. That is not the goal of the project, but it is the only solution, unless someone has an idea how to avoid getting the ECM_UC_...._NP_195..... workunits?

Best regards

Statistics: Posted by marsinph — 03.12.2018 15:07

Statistics: Posted by Tom Poleski — 03.12.2018 08:44

yoyo wrote:

Yes, those ecm_uc_1543642398_np_195_850e6* workunits run long. I see runtimes from 15 to 30 hours. RAM consumption is high, but will not exceed 1.8 GB.

As with every ECM workunit (not P1 or P2), a checkpoint is written at every 20%.

Each ECM workunit runs 5 curves, and each curve has 2 stages: stage 1 runs long and needs little RAM, while stage 2 runs shorter but needs much RAM. The runtime ratio between stage 1 and stage 2 is roughly 4:1.

Hello, thank you for the explanation.

For short WUs there is a checkpoint every 1200 seconds! Why not for the huge WUs, which take more than a thousand times as long to run?

Then, about RAM use of the huge WUs: on all three of my hosts (all with the same CPU/RAM), ECM takes only a few megabytes!

Sometimes about one gigabyte, but not for long (about 10 minutes)!

About your ratio of stage 1 to stage 2 (I do not consider RAM here), all very rough estimates:

With a 4:1 ratio, stage 2 takes 25% of stage 1's time; so if stage 1 takes 12 hours to run, stage 2 would need only 3 hours to finish. It does not!

Those monster WUs were at 20% after about 3 hours.

20 hours later, only 40%! So I think the ratio is not 4:1 but 1:4.
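Spelling out the extrapolation behind this complaint, using the progress figures quoted in the post:

```python
# Progress observations from the post
t1, p1 = 3.0, 0.20    # 20% done after about 3 hours
t2, p2 = 23.0, 0.40   # only 40% done some 20 hours later

rate = (p2 - p1) / (t2 - t1)    # 1% of the workunit per hour
eta = t2 + (1.0 - p2) / rate    # ~83 hours total at that rate
```

At roughly 83 hours (about 3.5 days of uninterrupted crunching), such a workunit leaves almost no margin against a four-day deadline.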

That means those WUs will finish AFTER the deadline (only four days!).

The small WUs have a deadline of 10 days; the monsters only four days (running 24/7, without restarts, logoffs, ...)!

I hope the granted credits will be proportionate!? On the same host (to be able to compare):

a P1 WU runs 29,300 s for a credit of 322.97 (ratio: 39.6 credits/hour),

and an ECM_xy_ or hc WU runs about 1.1 hours (3,900 s for 76.98 credits; ratio: 71 credits/hour).
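The two credit rates follow from simple arithmetic on the numbers given in the post:

```python
def credits_per_hour(credit, runtime_s):
    """Convert a (credit, runtime in seconds) pair to credits per hour."""
    return credit * 3600 / runtime_s

p1_rate = credits_per_hour(322.97, 29300)   # long P1 WU, ~39.7 credits/hour
ecm_rate = credits_per_hour(76.98, 3900)    # short ECM WU, ~71.1 credits/hour
```

The short-WU rate works out to about 1.8 times the long-WU rate, which is the "twice as much" claimed below.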

Twice as much for the small WUs!

It already shows: the longer the WU, the less credit!

And that is without considering the missing checkpoints, errors, and host restarts with loss of the crunched part.

I cannot do any update on my hosts that requires a restart, because the very long WUs block everything!

My conclusion: the next time I receive such a "monster", I will cancel it.

Sorry, it is not the goal of the research, but I will not block some hosts for nothing.

Look at my signature; you will see I crunch on several projects, as does my team.

I will let the monster finish on one host, hoping the credits reflect the running time.

The others still running I will abort if they need more than 8 hours to complete.

Once again, sorry, but I need to do maintenance on my hosts.

Suggestion: reduce the size of the WUs! The monster requires 266 TFLOPs, yes, two hundred sixty-six TERAFLOPs!

On CPU only!

For comparison, an i7-2600K overclocked to 4.0 GHz has a peak of 4.22 GFLOPS (and 13.5 GIOPS integer).
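Dividing the workunit's estimated operation count by that peak rate gives an idealized runtime, a back-of-the-envelope check using only the figures from this post (real runtimes are longer because sustained throughput is below peak):

```python
total_flop = 266e12    # 266 TFLOP estimated for the monster workunit
peak_flops = 4.22e9    # i7-2600K @ 4.0 GHz, per the post

seconds = total_flop / peak_flops   # ~63,000 s
hours = seconds / 3600              # ~17.5 h at theoretical peak
```

Even this best-case figure of about 17.5 hours already sits inside the 15-30 hour range yoyo reported, so the 34-hour observation is not surprising.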

Best regards

Statistics: Posted by marsinph — 02.12.2018 21:26

As with every ECM workunit (not P1 or P2), a checkpoint is written at every 20%.

Each ECM workunit runs 5 curves, and each curve has 2 stages: stage 1 runs long and needs little RAM, while stage 2 runs shorter but needs much RAM. The runtime ratio between stage 1 and stage 2 is roughly 4:1.
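Those figures imply the following breakdown. The 5 curves and the 4:1 ratio come from the post; the 20-hour total runtime is an assumed example for illustration:

```python
wu_hours = 20.0        # assumed total runtime, for illustration only
curves = 5             # per the post: 5 curves per workunit
stage1_share = 4 / 5   # stage 1 : stage 2 = 4 : 1

curve_hours = wu_hours / curves            # 4.0 h per curve, one checkpoint each
stage1_hours = curve_hours * stage1_share  # 3.2 h of low-RAM stage 1
stage2_hours = curve_hours - stage1_hours  # 0.8 h of high-RAM stage 2
```

This also explains the checkpoint spacing: with one checkpoint per finished curve (every 20%), a checkpoint on a 20-hour workunit only lands every 4 hours.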

Statistics: Posted by yoyo — 02.12.2018 19:41
