# Benchmark available for testing Lightroom Classic performance on Windows computers



## Roelof Moorlag (Dec 17, 2019)

Today Puget Systems made available a free benchmark for testing Lightroom Classic performance on Windows computers:
https://www.pugetsystems.com/labs/articles/PugetBench-for-Lightroom-Classic-1571/


----------



## clee01l (Dec 17, 2019)

I think this might be useful when people complain about how bad performance is. However, will it help diagnose what you might do to speed up your existing system without needing to upgrade hardware?
I am of the opinion that most of LR's ills on Windows stem from other garbage that prevents LR from accessing the resources it needs to be efficient.


----------



## PhilBurton (Dec 18, 2019)

Would it be within the scope of this forum if people could upload their benchmark results?  

One of the problems in the Windows world (unlike the Mac world) is the large number of hardware configurations, considering processor models, RAM (and RAM speed), motherboard models (chipset and PCI-E version), and of course graphics cards.  That said, it might be interesting to see which system configurations are the fastest.  

Phil


----------



## clee01l (Dec 18, 2019)

PhilBurton said:


> Would it be within the scope of this forum if people could upload their benchmark results?
> 
> One of the problems in the Windows world (unlike the Mac world) is the large number of hardware configurations, considering processor models, RAM (and RAM speed), motherboard models (chipset and PCI-E version), and of course graphics cards.  That said, it might be interesting to see which system configurations are the fastest.
> 
> Phil


I could see a forum topic dedicated to test results.


----------



## Victoria Bampton (Dec 18, 2019)

PhilBurton said:


> Would it be within the scope of this forum if people could upload their benchmark results?


Great idea Phil, yes, we'll make it a sticky topic so it's easy to find.


----------



## PhilBurton (Dec 18, 2019)

PhilBurton said:


> Would it be within the scope of this forum if people could upload their benchmark results?
> 
> One of the problems in the Windows world (unlike the Mac world) is the large number of hardware configurations, considering processor models, RAM (and RAM speed), motherboard models (chipset and PCI-E version), and of course graphics cards.  That said, it might be interesting to see which system configurations are the fastest.
> 
> Phil


And I somehow forgot to mention overclocking of the CPU, RAM, and separately the graphics card.

Phil


----------



## Jim Wilde (Dec 18, 2019)

Victoria Bampton said:


> Great idea Phil, yes, we'll make it a sticky topic so it's easy to find.


We might wait until they get it out of beta... I've just tried it, and it got all the way through before hanging on the final clean-up task, which I had to cancel and thus didn't get the benchmark scores. I did find the log of the individual test results, so I can do some comparison, but I can't calculate the overall benchmark scores.


----------



## Paul_DS256 (Dec 18, 2019)

PhilBurton said:


> Would it be within the scope of this forum if people could upload their benchmark results?


While the results would be interesting, I think we need to keep a couple of things in mind:

- This was developed by a company trying to sell Windows-based computers. It is also the first one I've seen that sells a configuration for LR. Their configuration does not seem to make use of or reference their benchmark. They also only ship in the USA.
- It is based on a specific set of tests which may or may not relate to a user's specific workflow. I'll leave it to the moderators to comment on the tests they selected, but I suspect they were chosen in relation to components they want to sell. Your individual mileage may vary.

So, IMHO, a nice idea, but I'm not sure of its real-world worth.


----------



## Jimmsp (Dec 18, 2019)

Paul_DS256 said:


> While the results would be interesting, I think we need to keep a couple of things in mind:
> 
> This was developed by a company trying to sell Windows-based computers. It is also the first one I've seen that sells a configuration for LR. Their configuration does not seem to make use of or reference their benchmark. They also only ship in the USA.
> It is based on a specific set of tests which may or may not relate to a user's specific workflow. I'll leave it to the moderators to comment on the tests they selected, but I suspect they were chosen in relation to components they want to sell. Your individual mileage may vary.
> So, IMHO, a nice idea, but I'm not sure of its real-world worth.



I have run the test once, and I am about to run it again -- I have 2 SSDs to try.
I also want to run it with the resource monitor running.

As to the company, I used their specs a couple of years ago to build my current desktop. While I didn't follow them precisely, they were "close enough" to give me a very good PC. I view them as a set of "good guidelines" for a build.

I'm not convinced, however, that comparing results from two different PCs has much value to me.


----------



## PhilBurton (Dec 18, 2019)

Jimmsp said:


> I have run the test once, and I am about to run it again -- I have 2 SSDs to try.
> I also want to run it with the resource monitor running.
> 
> As to the company, I used their specs a couple of years ago to build my current desktop. While I didn't use it precisely, it was "close enough" to give me a very good pc. I view it as a set of "good guidelines" for a build.
> ...


No benchmark is perfect. Some are far from perfect. And even a Lightroom benchmark assumes a certain workflow.

They do allow a comparison of _relative_ performance if the differences between two systems can be narrowed down to just one or two variables.

The only benchmark that would be truly perfect would be _my_ workflow run on a lot of different systems. Will that ever happen? Very unlikely.

If it doesn't cause a lot of work for Victoria, why not have a new forum dedicated to benchmark results? But going back to Jim's post, #7, it might be premature to set up that forum until the benchmark is no longer in beta and runs cleanly.

Phil


----------



## Jimmsp (Dec 18, 2019)

PhilBurton said:


> No benchmark is perfect. Some are far from perfect. And even a Lightroom benchmark assumes a certain workflow.
> 
> They do allow a comparison of _relative_ performance if the differences between two systems can be narrowed down to just one or two variables.
> 
> ...


I do think it is premature. The test crashed on its 4th try. It didn't freeze the system, just killed LR.
I had run it 3 times, quitting and restarting LR each time. I have restarted the OS, and will try it again in a while.

I agree with Phil - I'd like to benchmark my workflow. By running the resource monitor at the same time, I could see, for instance, that creating Smart Previews maxed out my CPU. However, I don't do that, so I could eliminate that part of the test.

But as long as we all run the same test, we certainly can compare our systems. I don't know, though, that I would learn anything useful if I knew Phil's system ran 25% faster (or slower) than mine. At this point in time, I am quite satisfied with the performance I get on my 32 MP Canon 90D raw photos.
Could it run faster for me? Sure. A 10% improvement here or there would be nice, but I don't intend to change my PC at this time to gain that 10%.


----------



## clee01l (Dec 18, 2019)

PhilBurton said:


> No benchmark is perfect. Some far from perfect. And even a Lightroom benchmark assumes a certain workflow.


If your workflow resulted in lower performance than the benchmark indicates, I would think you would want to change your workflow.



Sent from my iPad using Tapatalk


----------



## PhilBurton (Dec 19, 2019)

Jimmsp said:


> I do think it is premature. The test crashed on its 4th try. It didn't freeze the system, just killed LR.
> I had run it 3 times, quitting and restarting LR each time.  I have restarted the OS, and will try it again in a while.
> 
> I agree with Phil - I'd like to benchmark my workflow. By running the resource monitor at the same time, I could see, for instance that creating Smart Previews maxed out my CPU. However, I don't do that, so I could eliminate that part of the test.
> ...


If we can compare our systems, and do this comparison over enough systems, we might discern that our video card could be upgraded. Or that we should add more memory, or even go all the way and upgrade the motherboard/CPU, probably getting faster RAM in the process.

I'm planning to build a new system early next year to replace my 2013-vintage Intel 3930K/X79-chipset system, and I don't know if I really need to spend $300+ on an Nvidia 2060. Could I get by with a much less expensive 1000- or 1600-series card? Right now I have no real way to know.

Phil


----------



## Victoria Bampton (Dec 19, 2019)

There are certainly a lot of variables, but I do see some merit in having some benchmarks for comparison. Better that a third party is doing it, too, rather than Adobe.

I don't think we'll need a separate forum, just a way of presenting the information in a single thread. One post per set of system specs, then it's easy to scroll and compare. Discussions can take place in another thread. We'll wait until it's out of beta though. Feel free to ping me when it's out of beta if you spot it before I do, and I'll set up a template.


----------



## Paul_DS256 (Dec 19, 2019)

Jimmsp said:


> The test crashed on its 4th try. It didn't freeze the system, just killed LR


Which is concerning, since it is driving LR in some way that may not be supported and could lead to catalog corruption.


----------



## clee01l (Dec 19, 2019)

Paul_DS256 said:


> Which is concerning, since it is driving LR in some way that may not be supported and could lead to catalog corruption.



It certainly would be prudent to make a backup copy of the catalog before testing. Especially with beta software. 




----------



## Jimmsp (Dec 19, 2019)

Paul_DS256 said:


> Which is concerning, since it is driving LR in some way that may not be supported and could lead to catalog corruption.



The catalog it corrupts, if it corrupts one at all, is a new one. It never sees or uses what you have already established.
The photos it tests are also raws that come with the download.
At the end of the tests, it deletes the DNGs it creates and removes everything else from the catalog.


----------



## Jim Wilde (Dec 19, 2019)

Victoria Bampton said:


> There are certainly a lot of variables, but I do see some merit in having some benchmarks for comparison. Better that a third-party is doing it too, rather than Adobe.


Probably the biggest problem will be interpreting the results in order to identify which components need to be targeted for an upgrade. I managed to get a run through to completion, and the results were pretty average (and that's being generous!), though probably not surprising. The Windows desktop that I use (for testing only) is over 7 years old (i7-3770K processor), with a GPU which Lightroom supports for basic acceleration only. However, some of the benchmark tests will not use even a fully compatible GPU, whereas others could... so if I were to spend some money updating the system (very unlikely), what should I target first? A new CPU (probably with a new motherboard, which puts me at the top of a slippery slope leading to a complete new system) or a fully compatible GPU? I can easily see some users struggling to make a decision after running the benchmark test.


----------



## clee01l (Dec 19, 2019)

Jim Wilde said:


> ... Windows desktop that I use (for testing only) is over 7 years old (i7-3770K processor) and with a GPU which Lightroom only supports for basic acceleration only...


I don’t think the benchmark would or should be used to decide on upgrading a 7-year-old system. Rather, after benchmarking, the user would evaluate the merits of replacing a 7-year-old system, or live with the benchmark results. If I were evaluating a 2-4 year old system, then I might consider replacing some components for improved performance.






----------



## Jim Wilde (Dec 19, 2019)

Agreed, but I wouldn't be surprised if some users didn't at least consider a component upgrade of some kind, e.g. GPU.


----------



## Gnits (Dec 19, 2019)

I am currently working with a company to build a very high-spec custom workstation. It will have the Ryzen 9 3900X processor, a very high-end motherboard, 64GB of fast memory, very fast M.2 system and cache drives, and a graphics card recommended by Puget Systems. I am making a conscious decision to put together the fastest components I can buy, without getting into the totally exotic end of things. I will gladly publish my final build spec, run the benchmark, and publish the results. I am pleased a benchmark is available and will be really curious to see how my existing (old but decent) workstation compares to the new build. I would like to buy a system from Puget, but they will not sell me one (I live in Ireland). My main reason to upgrade is to make my Lightroom library and develop sessions as painless as possible and to future-proof my build, so I can optimise later, as and when putting in a more powerful graphics card will give me real-world performance benefits. I congratulate Puget Systems for publishing their benchmark.


----------



## PhilBurton (Dec 19, 2019)

Jim Wilde said:


> Agreed, but I wouldn't be surprised if some users didn't at least consider a component upgrade of some kind, e.g. GPU.


Or a video card upgrade of an existing system for the short term. In the long term, when building that new system, carry that new card over. A way to spread out the total spend.

Phil


----------



## Gnits (Dec 19, 2019)

I am splitting my upgrade into 3 phases.
1. Main processing box... in the next few weeks. Need faster Lightroom editing of Sony a7R III raw files.
2. Data (images and other data). Keeping existing primary and backup storage. Will explore the usefulness of 10Gb network storage. Maybe in a year's time, as current storage fills up.
3. New Adobe RGB monitor... maybe mid-2020.

Will monitor how graphics card usage evolves.


----------



## CloudedGenie7 (Dec 26, 2019)

I am also very interested in these results...

I am trying to isolate what caused my Lightroom installation to become unusable between October and now (Microsoft SurfaceBook 2 with i7, 16 GB RAM, 1TB SSD — Windows, drivers and Adobe all updated to the latest versions). It has been working hard processing the massive D850 raw files, but I have been able to do 8-12 photo panoramas in the past. Now it takes more than an hour to edit a single photo, while the CPU is at 50-80% and the memory accessed by LR seems to be limited to 4-6GB...

More importantly, I will be replacing my laptop with a more powerful machine due to the demands of my day job (and the inability to upgrade the memory in the SurfaceBook). I know that I need at least 64GB RAM to deal with some massive spreadsheets and datasets, which immediately means that I have to look at "workstations" rather than normal gaming laptops, especially if I want some future upgrade path. However, power and upgradability in these mobile workstations quickly add up to huge price tags... So I need to understand which components will have the biggest impact on Lightroom performance — do I spend more money on a faster CPU, a better graphics card, or a faster SSD? Will ECC vs non-ECC memory make a difference?

If I can resolve the issues I’m currently having, buying a powerful desktop and using the SurfaceBook at client sites ( with VPN to my home desktop) may become another option...

Christelle


----------



## Linwood Ferguson (Dec 26, 2019)

So is anyone interested in sharing the beta results? I ran it and got the below. Note the last step appears to hang (removing DNGs, if I recall), but I just left it for a while and it eventually finished. It also takes quite a while to run -- start it and go to dinner or something.

I'm not quite sure what to make of it; there's a lot of variation in there, from 32 to 92. And some of it is a bit surprising: for example, I would have expected Panorama and HDR merges to both be similarly CPU-constrained, but they are quite different.

I certainly agree with the observation that it is hard to draw conclusions from this -- specifically, to try to identify whether I have a particular bottleneck. Looking at this, I have no real idea what would make for a fast Library loupe scroll but a really slow Develop module scroll.

And yes, it's something of an ad for Puget Systems. But I have long wanted something like this, specifically to run on successive LR versions and see if there are changes. So it's nice that they are (so far) making it available.

Oh... one additional thought... where I think this would be really interesting, especially if we could start building a database of them, is where someone makes a single change and runs it again. E.g. if I were to get a faster GPU, how would these numbers change? (You would also want to run it several times before and after to see how consistent it is.) Or someone added memory, or overclocked memory, etc. See how specific changes impact these.


----------






## CloudedGenie7 (Dec 27, 2019)

Ferguson said:


> So is anyone interested in sharing the beta results?   I ran it and got the below.  Note the last step appears to hang (removing DNG's if I recall), but I just left for a while and it eventually finished.  It also takes quite a while to run -- start it and go to dinner or something.


The differences in Develop Module brush lag times between the three file types are significant... and brutal for poor D850 owners...


----------



## Linwood Ferguson (Dec 27, 2019)

But you know, I just went from a D800 as my largest-file camera to a Sony A7R IV, which is 61MP with 120MB files, and while it is not exactly zippy, I was quite pleasantly surprised to find it did not respond all that differently. Preview builds were substantially slower, but editing was not nearly as bad as I expected.

The one thing I've found over the years working with Lightroom is that it is a relative of Murphy -- whenever you think you understand it and know what's going to happen -- BAM -- it hits you upside the head with a surprise.


----------



## Danielx64 (Dec 27, 2019)

If it weren't for the fact that I am using Lightroom for DAM only (and don't have access to the Develop module), I would have run the benchmark to see how my upgraded PC would go.

I suppose I could create a dummy Adobe account to get a new 7 day trial.


----------



## CloudedGenie7 (Dec 27, 2019)

Ferguson said:


> But you know, I just went from a D800 as my largest file to a Sony A7Riv, which is 61mpix and 120MBs, and while it is not exactly zippy, I was quite pleasantly surprised to find it did not respond all that differently.  Preview builds were substantially slower, but editing was not nearly as bad as I expected.



That is interesting... I notice(d) a very definite difference in the way Lightroom and my laptop coped with the D800E files compared to the increased file size of the D850. I wonder if the .NEF format has something to do with it?

It may simply be because my laptop is now at the low end of what is feasible for Lightroom use, while my camera is being used to stress test computers...


----------



## CloudedGenie7 (Dec 27, 2019)

The benchmark test on my laptop finally completed...
15" SurfaceBook 2 (2017)
i7-8650 CPU, 16 GB RAM, 1TB SSD

It doesn't really tell me what the problem is... other than being SLOW...

Christelle


----------



## Paul_DS256 (Dec 27, 2019)

The more I look at this, the less value it holds for me. This may be the result of having been in the software industry during the database benchmark wars, the outcome of which was positive: an industry standard based on scenarios as close to meaningful as possible.

Any benchmark will be dependent on the version of the software being tested, the hardware being used, and what else the OS is having to deal with (e.g. antivirus, etc.). Adobe is increasing the frequency of releases, so what would it mean for something like Convert to DNG if LR increased the level of parallel processing based on an improved algorithm? This means the benchmark needs to be re-released in step with changes to LR.

Under the heading of 'your mileage may vary', a lot of these are one-off steps. I am not frequently switching from LIBRARY to DEVELOP, so who cares if I gain a few seconds in a different configuration? What I would like to see are the times to incrementally change the HIGHLIGHT control while viewing both the image and histogram with CLIPPING enabled. Oh, and I use a SECONDARY DISPLAY.

Unlike other benchmarks I've been around, there is no justification for these specific tests. Who did they consult? Who has validated them?

The danger? I expect the results from running the tests on your configuration will have a lower rating than Puget's, leading you to believe your system is under-performing for LR. This may be completely false.


----------



## Danielx64 (Dec 27, 2019)

Paul_DS256 said:


> The more I look at this, the less value it holds for me. This may be the result of having been in the software industry during the database benchmark wars, the outcome of which was positive: an industry standard based on scenarios as close to meaningful as possible.
> 
> Any benchmark will be dependent on the version of the software being tested, the hardware being used, and what else the OS is having to deal with (e.g. antivirus, etc.). Adobe is increasing the frequency of releases, so what would it mean for something like Convert to DNG if LR increased the level of parallel processing based on an improved algorithm? This means the benchmark needs to be re-released in step with changes to LR.
> 
> ...


You have a point. Many years ago I remember computer magazines using tools like PCMark and 3DMark to do all their testing.

I believe there was a module in PCMark (back in the day) that tested the performance of Adobe products.

I see many 3DMark tests being done these days, but not PCMark.


----------



## Paul_DS256 (Dec 27, 2019)

A bad benchmark is worse than no benchmark, IMHO.


----------



## Linwood Ferguson (Dec 27, 2019)

Paul_DS256 said:


> A bad benchmark is worse than no benchmark, IMHO.


While I understand that sentiment in some ways, almost any consistent benchmark can be used as an investigative tool.  That mine was 54 and Christelle's was 29 does not necessarily mean mine is nearly twice as fast, but if she (for example) added memory and three specific steps got much faster, we perhaps learn those are more sensitive to memory.  Conversely, if nothing changes, we may learn something as well.

What Puget has done is provide some data and a controlled way to run tests we could not run before -- it was always easy to time (say) building 500 smart previews, but not scrolling through the Develop module, something I have always found incredibly annoying (e.g. when you want to crop and straighten a bunch of shots).

Now, collecting data will be pointless if the test itself is still a moving target, e.g. if V0.9 gives significantly different results than V0.8, or if any version gives significantly different results when run again and again.

But I think we are wrong to discount this because it is not a diagnostic tool that yields a specific recommendation.

After all, we almost quite literally have nothing at present.
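If we do start collecting before/after results, a quick way to see which sub-tests moved after a single change is to take the ratio of each sub-test score across two runs. A minimal sketch in Python -- the sub-test names and scores below are made-up illustrations, not real PugetBench output:

```python
# Hypothetical sketch: compare per-test scores from two benchmark runs
# (e.g. before and after adding memory) to see which sub-tests moved.

def compare_runs(before: dict, after: dict) -> dict:
    """Return the after/before ratio for each sub-test present in both runs."""
    return {name: round(after[name] / before[name], 2)
            for name in before if name in after and before[name]}

# Illustrative numbers only -- not actual results from any system.
before = {"Import": 40.0, "Smart Previews": 35.0, "Develop Scroll": 20.0}
after  = {"Import": 44.0, "Smart Previews": 70.0, "Develop Scroll": 21.0}

for name, ratio in compare_runs(before, after).items():
    flag = "  <-- sensitive to the change?" if abs(ratio - 1.0) > 0.25 else ""
    print(f"{name}: x{ratio}{flag}")
```

Sub-tests whose ratio moves well away from 1.0 are the ones most sensitive to whatever was changed; ones that stay flat tell you something too.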


----------



## Jimmsp (Dec 27, 2019)

Ferguson said:


> So is anyone interested in sharing the beta results?


I am. But I am traveling right now, and the results are on my desktop.
As I recall, my scores (4 runs) were about 730, with a std of 3%.
This is on a desktop with specs similar to what Puget Systems recommended about 2 years ago.
I'll post my results on Sunday after I return.

And I agree, the biggest gains for us will be to see a change in a measured system when one component is upgraded.
I think I also learned something when I watched my Windows Resource monitor as it plowed through the tests.

Jim
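For anyone wanting to report the same kind of figure (a mean over several runs plus a relative spread), a tiny sketch of how it can be computed with the standard library -- the scores below are illustrative, not my actual results:

```python
# Sketch: summarize repeated benchmark runs as a mean and a relative
# spread (coefficient of variation, i.e. the "std of 3%" style figure).
from statistics import mean, stdev

def summarize(scores):
    """Return (mean, relative std in percent), each rounded to 1 decimal."""
    m = mean(scores)
    cv = stdev(scores) / m * 100  # sample std as a percentage of the mean
    return round(m, 1), round(cv, 1)

runs = [725.0, 735.0, 728.0, 732.0]  # illustrative scores from 4 runs
avg, spread = summarize(runs)
print(f"mean={avg}, spread={spread}%")
```

A small spread across runs is worth checking before comparing systems; if the same machine varies a lot run to run, cross-machine comparisons mean little.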


----------



## CloudedGenie7 (Dec 27, 2019)

I’m a few years past the stage where I would upgrade my computer because a benchmark test told me it was slower than the 98th percentile of the fastest gaming machines on the market...

Seeing that an i9-9900 with 64 GB is used as the 100-score benchmark, I didn’t expect particularly impressive numbers (especially after @Ferguson scored mid-50s), and given the pathetic performance I’ve been experiencing...

Comparing our numbers shows that something is not quite right, though. Based on a direct comparison, my laptop is better than his at doing auto WB and tone adjustments on the D850 files... Either practice makes perfect(!) or there is something else at play — given that I have about 9000 D850 raw files in my main catalogue (which should not impact the test). My laptop has never seen a Sony raw file before this test.

Christelle


----------



## Jim Wilde (Dec 27, 2019)

Ferguson said:


> So is anyone interested in sharing the beta results?   I ran it and got the below.  Note the last step appears to hang (removing DNG's if I recall), but I just left for a while and it eventually finished.  It also takes quite a while to run -- start it and go to dinner or something.
> 
> View attachment 13675


Sure. Here's my result, which I find surprising in comparison to yours. The overall scores are almost identical, although we get there in rather different ways, yet our specs certainly aren't... in theory I could substantially upgrade my CPU and GPU and quadruple my RAM (at double the speed), and that would make no difference? I find that difficult to believe, which would tend to suggest that something else is going on here.


----------



## CloudedGenie7 (Dec 27, 2019)

@Jim Wilde,
I find your results quite telling...
Your machine has the same amount of RAM as mine, and a lower-spec CPU, but it achieves almost double the overall performance of my laptop — except for Auto settings on D850 raw files, where I’m still the champion

That made me have a look and realise my CPU has been running at 1.9 GHz (or that is the reported speed) instead of 4.0 GHz... I wonder if this has changed, and explains the sudden loss of performance? Overheating would cause the CPU to throttle down... Is there a way to easily track the actual CPU speed during the test (short of running the benchmark inside and outside the fridge)?
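Short of installing a monitoring tool that reads the clock directly (psutil's cpu_freq(), HWiNFO, and the like), one rough, dependency-free way to spot throttling is to time a fixed CPU-bound chunk of work at intervals while the benchmark runs: if throughput falls as the machine heats up, the clock has probably dropped. A sketch, assuming Python is available on the machine:

```python
# Rough proxy for detecting thermal throttling: time the same fixed
# amount of arithmetic repeatedly; a falling rate suggests the CPU
# is down-clocking. This does NOT read the actual clock frequency.
import time

def work_rate(iterations: int = 200_000) -> float:
    """Do a fixed amount of arithmetic and return iterations per second."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return iterations / (time.perf_counter() - start)

baseline = work_rate()
# In a real session you would sample every few seconds alongside the benchmark.
for _ in range(3):
    print(f"relative speed: {work_rate() / baseline:.2f}")
```

Ratios drifting well below 1.0 over a long run would point at throttling; steady numbers would suggest the problem lies elsewhere.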


----------



## Danielx64 (Dec 27, 2019)

CloudedGenie7 said:


> @Jim Wilde,
> I find your results quite telling...
> Your machine has the same amount of RAM as mine, and a lower-spec CPU, but it achieves almost double the overall performance of my laptop — except for Auto settings on D850 raw files, where I’m still the champion
> 
> That made me have a look and realise my CPU has been running at 1.9 GHz (or that is the reported speed) instead of 4.0 GHz... I wonder if this has changed, and explains the sudden loss of performance? Overheating would cause the CPU to throttle down... Is there a way to easily track the actual CPU speed during the test (short of running the benchmark inside and outside the fridge)?


I don't know if this will work, but what about having Task Manager open with Lightroom not in full screen?


----------



## CloudedGenie7 (Dec 27, 2019)

Danielx64 said:


> I don't know if this will work, but what about having Task Manager open with Lightroom not in full screen?


@Danielx64,
I’ve been running Lightroom that way — which is how I know the CPU utilisation runs between 50-70%, and that it only accesses 4-6GB of RAM (while telling me 80% of RAM is used). I’ve been focusing on trying to find other applications/processes/services running, and hadn't thought about watching the performance monitor screen while running the benchmark.

As long as Lightroom remains in focus, the script should keep running; it may just be a bit slower...

I will give that a go — unfortunately, running the benchmark takes a bit longer than the quoted 30 minutes on my machine

Christelle


----------



## Linwood Ferguson (Dec 27, 2019)

Adobe made some changes a while back that allowed more parallelism, but they only kicked in somewhere above 4 cores; I'm not sure if the threshold was 6 or 8 (8 definitely, but I couldn't test 6).

This is relevant to those with Intel CPUs that have Hyper-Threading available. Hyper-Threading is a technique that makes some Intel CPUs appear to have twice the core count. If I turned it off (giving me apparently 4 cores instead of 8), a lot of things changed, notably during import and preview build, making them much slower.

Where people say "something changed", it is worth checking whether that switch got flipped in your BIOS/UEFI.

Note that I didn't look up any of the CPUs mentioned to see if they have Hyper-Threading.

Note: Hyper-Threading, not Hyper-V -- different things.


----------



## Paul_DS256 (Dec 27, 2019)

Ferguson said:


> Adobe made some changes a while back that allowed more parallelism,


You may already know this, Ferguson, but sometimes parallelism is implemented in the software and not the hardware. One of the challenges for software that runs on different CPUs/GPUs is how much custom code needs to be written per platform. One company I worked for started, and continued, developing software on the Windows platform. When they needed to create an offering for Unix, they used a Unix emulation library for C++ that translated Windows calls to Unix equivalents. Optimum? Not likely, but fortunately good enough. I have no idea how Adobe decides on common versus per-platform software development.

Benchmarks are not generally used for diagnostics but in decision-making for a new acquisition. If you are interested in performance diagnostics, follow some of the links for LR performance already listed in this thread, or look at the Windows Performance Analyzer (WPA). Full disclosure: I have not used WPA.


----------



## Linwood Ferguson (Dec 27, 2019)

@Paul_DS256, the change I am talking about was specifically that Adobe implemented different algorithms with different core counts.  I do not mean just that it worked better.  For example (and I have not looked at this since 7.mumble so not sure if it changed) if you import and build previews with 4 cores, it would import, finish importing, then start building previews.  With 8 cores it would kick off the preview build before it finished importing; indeed it would kick it off several times if the preview build finished (the set it had at the time) before import finished.

I.e. LR actually did different things, not just different speeds. 

The issue with Windows performance diagnostics is that it is very hard to reliably reproduce certain behaviour in LR. Brush strokes, or scrolling between shots in Develop, are good examples. They occur briefly when done by hand, and it is hard to see which resources in Windows were constrained while they were done. It is also hard to change something and then reproduce the same scenario in LR to see if it got better, because of the human mouse movements and typing involved.

What I find interesting about this tool is less its measurements than the fact that they have scripted all of that, so you can run things over and over, changing some setting or resource, and see the impact. What would be REALLY nice is if I could run just one sub-test over and over. (Hint, hint, if Puget is reading.)

Trust me, I've been bitten by more benchmarking and performance-measurement issues than I can even remember; computers were my day job, photography just a hobby. And this is not an ideal tool. But it is a tool I didn't have before.


----------



## Paul_DS256 (Dec 27, 2019)

Ferguson said:


> The issue with windows performance diagnostics is that it is very hard to reliably reproduce certain behavior in LR. Brush strokes, or scrolling between shots in develop are good examples


Completely agree, Ferguson. WPA is not instrumented into LR, so you can only get results for LR as a whole.

However, what you can do is plan your test. For example:

1. Start LR and warm it up, e.g. switch between the modules you will be testing. Leave it in the module you will test, e.g. Develop.
2. Start WPA tracing.
3. Note the time and allow a one-minute quiet period. This quiet period should be visible in WPA.
4. Perform a specific test in LR. For example:
    - Increase Highlights five times.
    - Decrease Highlights five times.
    - Stop.
5. Note the time and allow another one-minute quiet period.
6. Perform the WPA analysis.
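If you drive the test manually, the plan above can be wrapped in a small helper that logs labeled wall-clock markers for each phase, so the quiet periods and the test window can later be lined up against the WPA trace. This is a hypothetical sketch; the marker names and the one-minute quiet interval simply mirror the steps above.

```python
import time

def run_marked_session(perform_test, quiet_seconds=60):
    """Log labeled wall-clock markers around a test action so the
    phases can be lined up with a WPA trace afterwards.

    perform_test: a callable that carries out the LR actions
    (e.g. increasing and decreasing Highlights five times by hand).
    """
    markers = []

    def mark(label):
        markers.append((label, time.time()))

    mark("quiet-start")
    time.sleep(quiet_seconds)   # leading quiet period, visible in WPA
    mark("test-start")
    perform_test()              # the manual (or scripted) LR test
    mark("test-end")
    time.sleep(quiet_seconds)   # trailing quiet period
    mark("quiet-end")
    return markers
```

The returned timestamps give you the exact windows to zoom into when you open the trace for analysis.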

Not ideal, but it comes down to what you are trying to accomplish and whether the Puget benchmark gives you anything beyond what you can already get. My concern is red herrings.


----------



## CloudedGenie7 (Dec 27, 2019)

Good morning everyone!

My second set of results (generated while Resource Monitor was active) looks quite a bit different (and better).
I had to restart the benchmark after it died before exporting the D850 files (it completed the Canon and Sony files, but got stuck and stopped running).

The following conditions were different during the two tests:
     -- The Task Manager / Resource Monitor window was open during the second test, with the CPU performance graph active.
     -- The laptop was not connected to the internet during the second test. I didn't disable WiFi; I just didn't connect to the phone's hotspot (we're travelling). The first test was done while the laptop was connected to the internet (NBN) through the home WiFi router. In both cases Windows virus protection / Windows Defender was disabled.
     -- It was about 10 degrees C cooler in the room while running the second test.

Some observations during the benchmark (I went to sleep after restarting it):
     -- During most of the run, performance seemed to be processor limited.
     -- The maximum memory used was 9.3 GB, and it stayed relatively constant.
     -- While building Smart Previews, the hard disk was the apparent limitation. During this time, the system used about 70% CPU at 1.5 - 1.7 GHz while hard disk use was around 30% of capacity.
     -- During the CPU-intensive tasks, some strange, saw-tooth behaviour occurred, as if something was limiting or throttling the CPU, without an apparent uptick in disk activity.





I am not sure (yet) how useful the tool will be in comparing different systems (or selecting hardware for purchase), but it is an extremely useful way for me to run a set of standardised actions and measure the outcomes to try and diagnose whether I'm having hardware or software issues, and whether I can make a significant impact by changing some settings.

I need to investigate the cores / logical processors issue further. I also need to find out if Microsoft did something else to throttle the CPU in the latest updates (like they did earlier in the year).

And I seem to get almost 50% more performance out of my laptop if I work during the night...

Christelle


----------



## CloudedGenie7 (Dec 27, 2019)

CloudedGenie7 said:


> During the CPU-intensive tasks, some strange, saw-tooth behaviour occurred, as if something was limiting or throttling the CPU, without an apparent uptick in disk activity.



The two "screenshots" show the CPU graph I mentioned. The top (first) one shows it at 37% at 3.27 GHz, while the bottom one shows it at 66% at 2.03 GHz.








During the Auto White Balance and Tone, the CPU utilisation went down to 15% — with nothing else taking up the slack...


----------



## Danielx64 (Dec 27, 2019)

Ferguson said:


> This is relevant to those with Intel that have Hyperthreading available. Hyperthreading is a tool to make some Intel's appear to have twice the core count. If I turned it off (giving me apparently 4 instead of 8) a lot of things changed, notably during import and preview build, making them much slower.


AMD has its own version of Hyperthreading (they call it SMT) as well.

Also, some newer Intel CPUs don't have Hyperthreading, so you may be losing quite a bit of performance compared to older models.


----------



## Linwood Ferguson (Dec 27, 2019)

Danielx64 said:


> Also some newer Intel CPUs don't have  Hyperthreading, therefore you are losing quite a bit of performance compared to older models.


Well, maybe. TANSTAAFL. If you have X real cores, but your workload generally uses <= X, turning HT off is usually better, as it avoids the scheduling overhead of Hyperthreading.

But with Adobe predicating enabling certain performance features on having above a certain number of cores, turning it on is probably a good thing for LR.

Incidentally, some of the esoteric side-channel and speculative execution bugs (Spectre, ZombieLoad, etc.) had fixes that manufacturers put out that likely slowed down Lightroom for some users. Hyperthreading was for a while on the chopping block over ZombieLoad, though I doubt any home users noticed (it may or may not have changed some OEMs' shipping defaults in BIOS/UEFI). So some people who think their system got slower over the last year -- well, you may be right. More secure, maybe (frankly, these were very low-risk items for home users), but slower. Welcome to the world where hackers are winning the war.


----------



## PhilBurton (Dec 28, 2019)

Ferguson said:


> While I understand that sentiment in some ways, almost any consistent benchmark can be used as an investigative tool.
> 
> [snipped]
> 
> After all, we almost quite literally have nothing at present.


Agreed. I think it's only a matter of time (and the end of beta status) until Puget publishes benchmark results for the systems they sell, and modifies their custom order page to give you an approximation of the benchmark score for the configuration you have selected. If they don't, I for one will be surprised.


----------



## Jimmsp (Dec 29, 2019)

Jimmsp said:


> I am. But I am traveling right now, and the results are on my desktop.
> As I recall, my scores (4 runs) were about 730, with a std of 3%.
> This is on a desktop with specs similar to what Puget Systems recommended about 2 years ago.
> I'll post my results on this Sunday after I return.
> ...


For what it is worth, I am posting my results from a desktop.
I had misremembered the numbers I wrote earlier.
The average of 4 runs, two with the Resource Monitor running and two without, was 719.5 with a std of 1.4%.
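For anyone reproducing this kind of summary, the mean and relative standard deviation of a handful of runs are easy to compute with the standard library. The four scores below are hypothetical examples, not Jim's actual results.

```python
import statistics

def summarize_runs(scores):
    """Return (mean, relative standard deviation in %) for a list of
    benchmark overall scores."""
    mean = statistics.mean(scores)
    rel_std = statistics.stdev(scores) / mean * 100  # sample stdev as % of mean
    return round(mean, 1), round(rel_std, 1)

print(summarize_runs([728, 715, 722, 713]))  # → (719.5, 1.0)
```

A relative std of a percent or two is a useful baseline: any configuration change whose effect is smaller than that is indistinguishable from run-to-run noise.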

The first run was 



and was the highest of the four; no resource monitor.
The run closest to the average is




with the resource monitor active, though I can't see any meaningful differences.
I shoot Canon, a 90D with 32 MP and CR3 files, though I shot CR2s previously.
It isn't clear to me why the creation of HDRs and panos has such a low average score.
Is this my GPU? Possibly.

Jim


----------






## Jimmsp (Dec 30, 2019)

For the heck of it, I ran the test again.
Fresh download and a fresh unzip. But this time I put the Puget files on a second SSD that I have dedicated to my usual LR catalogs and my PS scratch drive. The first SSD has the OS and software apps.
The beta crashed once, but when it ran, it gave me the best result so far.





I was trying to see if I could tell if and where my GPU maxed out - but I missed it. The poor HDR results are still a puzzle.


----------



## Roelof Moorlag (Feb 16, 2020)

Ferguson said:


> So is anyone interested in sharing the beta results?


Yes, I am! But the test crashed on me.
However, Puget has published the V0.85 beta, so I will try again one of these days.


----------



## HobbyJohn (Feb 17, 2020)

I have run the benchmark quite a few times, making slight tweaks to my system. I haven't had any issues getting it to run properly on my new machine, but didn't have a single success on my old (2014-ish) machine. When I'm back at my PC I'll post my results and observations.
Somewhere it says that you need roughly 150 GB+ of free space on the drive or it won't complete properly. Also, shutting down any miscellaneous programs helps, and of course make sure you aren't using the computer for other things while it is running.


----------



## HobbyJohn (Feb 17, 2020)

Ok, so here's my current PC build. It is roughly based on Puget's recommendation for LR.

Gigabyte X570 Aorus Ultra
AMD Ryzen 9 3900X (stock cooler)
Crucial Ballistix Sport DDR4-2666 (2x16 GB)
RTX 2060 6 GB GPU
Samsung 970 Evo Plus 500 GB M.2 NVMe SSD
Samsung 860 Evo 500 GB SATA SSD
Seagate Barracuda 4 TB HDD
Benchmarks run while using one 27" monitor at 4K resolution
First, note that there will be some variance from run to run even with identical specs, so one result being slightly higher or lower than another is probably due to that run-to-run variance, and not necessarily to the hardware/software change. Large differences are what we are looking for.

Overall score is the average of total Active and Passive task scores, times 10.
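Taking that description at face value, the scoring arithmetic is just the following (the example inputs are hypothetical, not actual PugetBench task totals):

```python
def overall_score(active_total, passive_total):
    """Overall score = average of the total Active and Passive task
    scores, times 10 (per the description above)."""
    return (active_total + passive_total) / 2 * 10

print(overall_score(70, 74))  # → 720.0
```

So a 10-point change in the overall score corresponds to only a one-point change in the average of the two task totals.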

First I ran the benchmark with a 2 GB GT 730 GPU while I waited for the RTX 2060 to be delivered. Catalog and files were on the SATA SSD. LR GPU acceleration set to Auto.




Then I swapped out the GPU and ran again. LR GPU Accel  set to Auto.




Then I moved the catalog and files to the m.2 ssd and ran again. LR GPU Accel  set to Auto.




Then someone reminded me of the LR GPU acceleration option. There are four settings: OFF, Auto, ON (Display only), and ON (Display and image processing).
OFF:




ON (Display Only)




ON (Display and Image Processing)




Then I added another two 16 GB sticks of RAM to see if that made any difference. GPU acceleration set to Auto.




Thoughts:

- GPU does not make a significant difference from 2 GB to 6 GB (even with the faster speeds on the 6 GB card). It is documented that LR has primarily been CPU-heavy rather than GPU-heavy, but there have been some recent changes to take better advantage of the GPU.
- M.2 vs SATA SSD does not yield a significant change in results. As noted with using M.2 as a system drive, on paper it should be much faster, but in practice it does not "seem" much faster.
- 32 GB of RAM is plenty, and is not a bottleneck.
- The AUTO setting for GPU acceleration does a good job.
At any rate, my new machine is much nicer to work with in LR than my old system. I am very pleased with the build. I reused my case and power supply, but bought everything else new on Amazon. Total cost of new parts was around $1500.


----------



## PhilBurton (Feb 17, 2020)

HobbyJohn said:


> Ok, so here's my current PC build. It is roughly based on Puget's recommendation for LR.
> 
> Gigabyte X570 Aorus Ultra
> AMD Ryzen 9 3900x (stock cooler)
> ...



If I understand your post, it would seem that replacing the GT 730 with the RTX 2060 lowered the total score?  Did I misread something?

What did you do with the old parts?


----------



## HobbyJohn (Feb 17, 2020)

PhilBurton said:


> If I understand your post, it would seem that replacing the GT 730 with the RTX 2060 lowered the total score?  Did I misread something?
> 
> What did you do with the old parts?



Since it's such a small change, I think that is more a normal variance from one test to another, and not so much that the new card is "worse". The first test with the new GPU was 28 pts less, but the second test with the new card was 18 pts higher, and the test with GPU acceleration partially on was 58 pts higher. I'd say that upgrading from the GT 730 to the RTX 2060 did not meaningfully improve the benchmark score, and that the GPU is not a major bottleneck in LR.
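One way to make "normal variance" concrete: with several baseline runs in hand, a score change smaller than a couple of standard deviations of those runs is hard to distinguish from noise. A rough sketch (the two-sigma threshold and the sample scores below are my own assumptions, not Puget's methodology):

```python
import statistics

def looks_significant(baseline_scores, new_score, n_sigma=2.0):
    """True if new_score differs from the baseline mean by more than
    n_sigma baseline standard deviations (a crude noise threshold)."""
    mean = statistics.mean(baseline_scores)
    sigma = statistics.stdev(baseline_scores)
    return abs(new_score - mean) > n_sigma * sigma
```

With only a single run per configuration, as here, there is no spread to estimate, which is exactly why repeated runs per configuration would be needed before calling a small delta real.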

Old parts are in my PC parts bin. Reusing ssd and hdd as backup drives. Build was - M5A99FX Pro R2, FX6350 cpu with Hyper 212 cooler, GT 730 2gb GPU, 16gb ddr3, Samsung 850 Evo 250gb SSD, Seagate 2tb hdd.


----------



## Paul_DS256 (Feb 18, 2020)

PhilBurton said:


> lowered the total score


You have to decide whether the score means anything to begin with.


----------



## HobbyJohn (Feb 18, 2020)

Paul_DS256 said:


> You have to decide if the score means anything to begin with



Yes, and the score is really just a comparison of your system to a “control” system that Puget built. And as with all benchmark tests, they may be a good representation of what you will experience, or they may be way off.  Everyone has a different workflow and uses different functions. 

I ran these purely out of curiosity, and to see what effect a few different variables that I was already changing might have. Obviously far more benchmark runs would be needed for it to even begin to be valuable data.

For me, it has reassured me that I don’t have a bottleneck in LR that would be cheap and easy to fix. In practice, my new machine is far more efficient and enjoyable to use for all of the software I use for business and for pleasure. Photography is only a hobby for me so the LR performance was not top priority, but happened to align with my other needs.


----------



## Paul_DS256 (Feb 18, 2020)

HobbyJohn said:


> For me, it has reassured me that I don’t have a bottleneck in LR


Sorry, I don't read that benchmark as being able to show you don't have a bottleneck, as it has limited tests and, as you indicate, is not related to your own workflow.

If you want to find bottlenecks, then record your resource consumption over several workflow sessions and see where demand exceeded capacity.

I worked through numerous benchmarks in the past in other software areas. The only ones that provided real insight were those developed by non-hardware organizations, preferably a consortium of experts who debate the individual benchmarks.


----------



## Linwood Ferguson (Feb 18, 2020)

HobbyJohn said:


> I'd say that upgrading from the GT 730 to the RTX 2060 did not improve the system, and that the GPU is not a major bottleneck in LR .


Did you spend some time with both using it manually and does your LR experience mirror the benchmark?


----------



## HobbyJohn (Feb 18, 2020)

Paul_DS256 said:


> Sorry, I don't read that bench mark as being able to show you don't have a bottleneck as it has limited tests and, as you indicate, is not related to your own workflow.
> 
> If you want to find bottlenecks, then record your resource consumption over several workflow sessions and see where demand exceeded capacity.
> 
> I worked through numerous benchmarks in the past for other software areas. The only ones that provided some insight were those developed by non-hardware organizations and preferably a consortium of experts who debate the individual benchmarks


I see where you are coming from. And yes, the overall score does not relate to my own workflow. However, I can also look at the processing times for the individual tasks that ARE part of my workflow and see how those are affected. I'm not at all saying this is a definitive test and that everyone should use it to judge their systems. I ran these for my own personal interest and posted them so that other people can see the results and use them however they please. I have recorded my Resource Monitor activity (and used other similar tools) and compared that to these results, but those are of even less use to the public than the benchmark results.



Ferguson said:


> Did you spend some time with both using it manually and does your LR experience mirror the benchmark?


I did not use LR for any meaningful amount of time between the different runs, since I wanted to get it fully up and running quickly. 


I'm not claiming to be, nor do I want to become, an expert on this topic. I've seen many people run the benchmark on their system and post the results (here and on other websites and forums), but have not seen many cases where people make incremental changes to a system and run the benchmark at each stage.


----------



## Linwood Ferguson (Feb 18, 2020)

HobbyJohn said:


> I'm not claiming to be, nor do I want to become, an expert on this topic. I've seen many people run the benchmark on their system and post the results (here and on other websites and forums), but have not seen many cases where people make incremental changes to a system and run the benchmark at each stage.


My question really related less to the benchmark in general, than just to the question whether you as a human (vs the benchmark) could tell the difference in the moderate vs higher end GPU.


----------



## HobbyJohn (Feb 18, 2020)

Ferguson said:


> My question really related less to the benchmark in general, than just to the question whether you as a human (vs the benchmark) could tell the difference in the moderate vs higher end GPU.


Ah, sorry. In the little time I used it between the GPU changes, I did not experience any noticeable difference in performance. The work I did in LR with the old GPU was limited to importing a couple dozen images, renaming, and keywording, so I don't think that counts as a good point of reference. I also did not notice any significant difference between m.2 and sata ssds, nor did I notice any difference going from 32 to 64gb ram. I did the most work in LR around the time of the ram change so I can say that for my workflow, that made zero noticeable change to performance.  So yes, I'd say the benchmark results suggesting little difference (especially when focused on the individual tasks I perform in my workflow) reflects my experience in using LR.


----------



## Linwood Ferguson (Feb 18, 2020)

HobbyJohn said:


> Ah, sorry. In the little time I used it between the GPU changes, I did not experience any noticeable difference in performance. The work I did in LR with the old GPU was limited to importing a couple dozen images, renaming, and keywording, so I don't think that counts as a good point of reference. I also did not notice any significant difference between m.2 and sata ssds, nor did I notice any difference going from 32 to 64gb ram. I did the most work in LR around the time of the ram change so I can say that for my workflow, that made zero noticeable change to performance.  So yes, I'd say the benchmark results suggesting little difference (especially when focused on the individual tasks I perform in my workflow) reflects my experience in using LR.


That's fairly consistent with some older testing I did: LR seems to rarely get disk limited (there are exceptions, of course), but my only GPU testing was from a very slow one to a moderately slow one, and Adobe has done a lot since then in terms of GPU usage. I was curious what a human would see today, as a new GPU is a pretty cheap upgrade.

I did find that while memory did not help a lot above, say, 16 GB (with some exceptions in big merges), if you overclock memory you get a mild boost; that's fairly consistent with the idea that most things remain CPU limited.


----------



## Roelof Moorlag (May 15, 2020)

Roelof Moorlag said:


> But the test crashed on me.
> However Puget published V0.85 beta so i wil try again one of these days.


Version 0.85 also crashed, but I tried V0.9 today with success. Here are the results from my 4-year-old system:


----------

