# Motherboard suggestions



## Linwood Ferguson (Aug 15, 2016)

I think it's time for a new computer. I'm thinking i7-6700K based (LGA1151) and Z170 chipset, and I'm still undecided whether I'll spring for all SSD or keep a few large rotating drives, but definitely at least one M.2 card and SSD for work/scratch/etc.

So... most of the choices are fairly easy, or really don't matter, but the motherboard is not so clear cut.

All I've found are primarily gaming boards.  And the priorities of gamers (e.g. multi GPU) are not the same as for photography use.  What I want is stability, stability, performance, and stability in that order.  Along with flexibility in slots.  

I've used ASUS lately, and while their boards have been stable, the BIOS and documentation kind of stink. In fact all the common brands seem to have a bad reputation for support, documentation, etc. ASRock seems more value-oriented than reliability-oriented, though that's a vague impression; Gigabyte seems similar to ASUS.

So I was leaning toward the Gigabyte GA-Z170X-Gaming 7, or the ASUS Maximus VIII Hero.

But it's really hard to compare, as most reviews, as mentioned, are all about gaming. Both have lots of slots and ports, but all sorts of weird "if you plug in X, you disable two Y" caveats. And why in the world someone puts on two NIC ports and then won't let you team them, I do not know (teaming may not be that important to me, but still).

If I search for "Workstation" type setups, I tend to get thrown into the really heavy duty graphic styles for video editing and the like (again, multiple super high end video cards). 

What I want is almost more server-like: something that will just sit there and run, hold a lot of storage, etc. But server class is really aimed differently; for one thing they tend to be Xeon-based (too many slow cores for Lightroom, etc.) and are REALLY noisy to have in my office.

So... anyone have a recommendation for a good solid reliable motherboard for a new build for a photography editing system?  Or opinions on the above two?


----------



## clee01l (Aug 15, 2016)

I built my first computer about 1992. I can no longer justify the cost benefit (if there is any). I would go for a brand-name computer that meets most of my specifications and add memory, swap out the GPU and other components until I got the level of performance that I sought. That said, I now run OS X exclusively and recently upgraded my iMac to a new spec.

```
Processor Name:    Intel Core i5
  Processor Speed:    3.2 GHz
  Number of Processors:    1
  Total Number of Cores:    4
  L2 Cache (per Core):    256 KB
  L3 Cache:    6 MB
  Memory:    32 GB
```
With this LR flies.  Why would you need anything with a higher spec?


----------



## Gnits (Aug 15, 2016)

While reviewing motherboard specs it is well worth checking the external port options, so that you can maximise performance when importing from cards, etc., or when using them for external backup. It's very useful to make sure that you have USB-C, which looks like it may become the new de facto USB hardware standard (it can be inserted either way and can deliver more power). Currently, even the latest boards may not have USB-C, or may have only a single USB-C port. There are normally plenty of USB 3 ports (but check the throughput in the specs) using the current USB hardware form factor.

Just a thought.  I was thinking of doing the same as yourself, but will defer for a few months.


----------



## Linwood Ferguson (Aug 15, 2016)

clee01l said:


> I built my first computer about 1992. I can no longer justify the cost benefit (if there is any). I would go for a brand-name computer that meets most of my specifications and add memory, swap out the GPU and other components until I got the level of performance that I sought. That said, I now run OS X exclusively and recently upgraded my iMac to a new spec.
> 
> ```
> Processor Name:    Intel Core i5
> ...



Well, three aspects.

I don't do Mac, so not sure how it compares.  I would hope performance of LR is largely OS irrelevant, but not sure.

I like building my own not for cost/value, but reliability and control.   But I do understand where you are coming from.  When I was doing it for scale in a company I would certainly never think to have in-house builds.

But... maybe we have different expectations. You didn't say which i5, but let's speculate it is an i5-6500? That's actually about the same speed as an i7-3770K (not over-clocked). The 6500 has less L2 cache, and the 6600 is 3.3GHz, so I may be guessing the wrong model.

I'm not at all sure where my limiting factor is. I'm sure it is not disk, or amount of memory (32GB, and it only uses about 8GB). My CPU speed is fairly high, and most people say overclocking memory in that system is not all that helpful, but it is one of the slower components at 1600MHz. I've just upgraded the GPU substantially without impact. So all I can try is to get something a few generations newer and see what happens.


----------



## Linwood Ferguson (Aug 15, 2016)

Gnits said:


> While reviewing motherboard specs it is well worth checking the external port options, so that you can maximise performance when importing from cards, etc., or when using them for external backup. It's very useful to make sure that you have USB-C, which looks like it may become the new de facto USB hardware standard (it can be inserted either way and can deliver more power). Currently, even the latest boards may not have USB-C, or may have only a single USB-C port. There are normally plenty of USB 3 ports (but check the throughput in the specs) using the current USB hardware form factor.
> 
> Just a thought.  I was thinking of doing the same as yourself, but will defer for a few months.



Thanks, yes, both have USB 3.1 and I think both have at least one USB-C port, though I need to go check.

I actually need to spend some time one day understanding my current board (a Z77): it has multiple USB 3.0 ports, but one is MUCH faster than the others, and I have no clue why. That's one problem with these gaming/consumer boards: they have all sorts of quirks on the side; their focus is on overclocking, cooling, and GPUs. I'm not sure gamers use USB for much beyond a mouse, keyboard, and the occasional backup.


----------



## tspear (Aug 15, 2016)

I do not really follow hardware in detail anymore. But often the difference in USB speeds was caused by how each port was connected. Many older motherboards and chipsets ran additional USB ports off old PCI "slots" that were no longer needed; plenty of performance for keyboard/mouse...

As for discussions on HW for photography: here are two links I have been following. Both have been responsive this year to questions/comments.
Building a Photo and Video Editing PC on a Budget 2016

https://photographylife.com/the-ultimate-pc-build-for-photography-needs-skylake-edition

Let us know what you decide to do. The CFO in the house has decided I need to finish paying for my kids currently in college before I buy a new toy. That means no new computers for a year unless I win the lottery :(


----------



## Linwood Ferguson (Aug 15, 2016)

tspear said:


> I do not really follow hardware in detail anymore. But often the difference in USB speeds was caused by how each port was connected. Many older motherboards and chipsets ran additional USB ports off old PCI "slots" that were no longer needed; plenty of performance for keyboard/mouse...
> 
> As for discussions on HW for photography: here are two links I have been following. Both have been responsive this year to questions/comments.
> Building a Photo and Video Editing PC on a Budget 2016
> ...



Thank you. I think I've seen the latter, though it's down at present; the former I had not seen. I find it very confusing: for example, they feature the i7-6700K (which is what I planned), but then in the motherboard section they don't show a compatible board for it at all (it's an LGA1151 socket). It's like a mixture of "bests" that don't actually go together.


----------



## tspear (Aug 15, 2016)

They mention your CPU but do not really discuss it.
Here is a site I used to help a few friends pick parts for custom machines: Pick Parts, Build Your PC, Compare and Share - PCPartPicker
Works really well, covers the major pieces.


----------



## Linwood Ferguson (Aug 15, 2016)

tspear said:


> They mention your CPU but do not really discuss it.
> Here is a site I used to help a few friends pick parts for custom machines: Pick Parts, Build Your PC, Compare and Share - PCPartPicker
> Works really well, covers the major pieces.



Thanks, yes, I use that generally for a working copy.  They sometimes don't have everything listed especially new stuff, but their interaction/compatibility checks are very nice, and it makes a good scratchpad.  It's saved me frequently from forgetting something.


----------



## Gnits (Aug 15, 2016)

USB C Ports.    
I am amazed that top-spec motherboards often have only a single USB-C port. I think it would be useful to have a minimum of two: one for the fastest card reader possible, and the second for an external disk for backup purposes. Having the remainder as 3.1 is good.


----------



## PhilBurton (Aug 15, 2016)

As you go through different options, pay attention to the Intel chipset, since a lot of motherboard features and limitations are based on those of the chipset. I've always used ASUS boards, with excellent results.  I like the fact that they "extend" the chipset with extra ports, etc.  I have had to do two warranty repairs. One was quick and satisfactory. The other required me to send the board back a second time, but it was fixed in the end.

My current desktop is based on an ASUS P9X79PRO board.  This board has been absolutely rock-solid over four years of use.  If I were getting a board today I would consider the X99 version.

You haven't commented on where you would buy this board.  Since we are both in the USA, I would suggest Newegg as the starting point for shopping.  Much easier than Amazon for comparing specs.


----------



## Linwood Ferguson (Aug 16, 2016)

I spent a lot of time today reading and decided to try Gigabyte (specifically the GIGABYTE GA-Z170X-Designare). I haven't used them before, but they added U.2 support to their latest series and included it in an "Ultra Durable" board that isn't so gaming-oriented (though honestly their listings are confusing, as most gaming boards are similarly labeled).

In this case it includes dual Intel gigabit NICs (I use one for a Hyper-V switch and one for regular use), can handle both U.2 and M.2 SSDs, and has two USB 3.1 Type-C ports on the back panel plus six other USB 3 ports, plus a built-in DisplayPort, which I may not use but is handier than (just) HDMI, which a lot of boards have now.

I've never used Gigabyte before, so I feel like I'm going out on a bit of a limb, but the U.2 support was nice, as I found a very nice, fast Intel U.2 1.2TB SSD relatively inexpensively ($875). That leaves the M.2 slot available if needed later (I might add that as a scratch disk or something, but it removes two of the SATA ports), so for right now I'm going to stick with 4x HDD at 2TB each for actual image storage.

I'm a bit nervous at only one "drive" for OS, scratch, sort, cache, etc., but the U.2 speed really ought to be such that adding another (and losing two SATA ports) is not worth it. The Intel 750 is supposed to really shine at high queue depths as well, more so than the Samsung 950 Pro, plus the U.2 form factor lets me separate it from the motherboard for better cooling.

I did order from Newegg. I usually use Amazon or B&H, but Newegg was substantially less expensive on most items; only the case and a mouse were cheaper on Amazon. Newegg (if they told the truth) also had things in stock that Amazon didn't. I signed up for some "ShopRunner" promo and got free 2-day shipping (I'm a bit worried about what I signed up for, but it kept saying "free" and "free"... will see).

I also ordered 64GB of G.Skill 3000MHz memory (vs. 2133, which is the standard speed) in case I want to crank it up a bit; I'll see how stable things are and start at the base speed first. I don't need that for photos, but I often run a couple of Unix VMs in the background when I'm doing non-photography work (no, not when Lightroom is running, that's not why it is slow!). And a Cooler Master Trooper case and a 212 EVO cooler (more than fine for non-overclocked use, I think). I've got a couple of power supplies floating around (including one fanless unit I intend to try, though at 500W it may be a bit weak).

Now Lightroom better run awful fast.

And I have to figure out what to do with the old system.  My wife always complains when old systems are sitting around behind stuff "just in case I need it".  :(

But... you never know when you may need it!


----------



## PhilBurton (Aug 16, 2016)

Ferguson said:


> I spent a lot of time today reading and decided to try Gigabyte (specifically the GIGABYTE GA-Z170X-Designare). I haven't used them before, but they added U.2 support to their latest series and included it in an "Ultra Durable" board that isn't so gaming-oriented (though honestly their listings are confusing, as most gaming boards are similarly labeled).
> 
> In this case it includes dual Intel gigabit NICs (I use one for a Hyper-V switch and one for regular use), can handle both U.2 and M.2 SSDs, and has two USB 3.1 Type-C ports on the back panel plus six other USB 3 ports, plus a built-in DisplayPort, which I may not use but is handier than (just) HDMI, which a lot of boards have now.
> 
> ...


Congrats on making the decision.

Have you decided yet on a graphics card?  Or decided "how to decide" on a graphics card? 

Phil


----------



## Linwood Ferguson (Aug 16, 2016)

I have the GTX970 I bought to try to make LR run faster, I'll just be using it.  

I never found any reliable guidance on which cards work best for LR acceleration.


----------



## PhilBurton (Aug 16, 2016)

Ferguson said:


> I have the GTX970 I bought to try to make LR run faster, I'll just be using it.
> 
> I never found any reliable guidance on which cards work best for LR acceleration.


I was just thinking. Adobe could make things easier for so many of us if they published specifications for "reference builds" for both PC and Mac.


----------



## Gnits (Aug 16, 2016)

Congrats.... looks good. Thanks for the detail.  I would welcome any experiences / lessons  as you build...as I may start a similar project in a few months.


Ferguson said:


> I'm a bit nervous at only one "drive" for OS, scratch, sort, cache, etc., but the U.2 speed really ought to



Speed might not be the only criterion for splitting the OS from other items. I deliberately keep the C drive for OS and apps only, and deliberately keep this footprint as small as reasonably possible. The reason is that I take an automated backup of the C drive every morning at 6 am, and the smaller the footprint the better. I do not want to be taking repeated backups of lots of cache folders.

I am not sure what size your C drive will be. Maybe it would be worth splitting it into two partitions (i.e. [OS & apps] plus [catalog and Adobe cache-related stuff]). Someone more expert than I may comment on any concerns with this. My C drive is 256GB, and after 6 months I still have 100GB free.

Finally... I am not sure what consideration you have given to your backup strategy. 

I use Macrium Reflect to run at 6 am every morning, which:
1. Backs up my C drive to my backup drive
2. Backs up my Lr catalog from my cache SSD to my backup drive
3. Calls a script which uses Beyond Compare** to sync my Photos drive to an external backup drive.

The whole process takes no more than 10 mins and I get an email to confirm successful completion.
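The mirroring logic behind a one-way "sync to the right" (as in step 3 of the routine above) is simple enough to sketch. This is only an illustrative toy, not Beyond Compare's or ViceVersa's actual behavior, and `plan_one_way_sync` is an invented helper name:

```python
def plan_one_way_sync(source, backup):
    """Given {relative_path: mtime} maps for the source (Photos drive)
    and the backup drive, return (to_copy, to_delete) such that after
    copying and deleting, the backup mirrors the source exactly.

    Real sync tools also compare sizes or checksums and handle I/O
    errors; this shows only the planning step."""
    to_copy = sorted(
        path for path, mtime in source.items()
        if path not in backup or backup[path] < mtime  # new or stale
    )
    to_delete = sorted(path for path in backup if path not in source)
    return to_copy, to_delete
```

For example, a file present only on the backup would land in `to_delete`, while a file whose source copy is newer would land in `to_copy`.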

Best wishes with your build.

** I am thinking of switching from Beyond Compare to ViceVersa to sync (to the right) my Photos drive to my backup drive. The GUI on Beyond Compare is superb, but ViceVersa has a great scheduling option which lets me replace the Beyond Compare script with a scheduled task.


----------



## Linwood Ferguson (Aug 16, 2016)

PhilBurton said:


> I was just thinking. Adobe could make things easier for so many of us if they published specifications for "reference builds" for both PC and Mac.



Indeed.



Gnits said:


> Speed might not be the only criterion for splitting the OS from other items. I deliberately keep the C drive for OS and apps only, and deliberately keep this footprint as small as reasonably possible.
> 
> The reason is that I take an automated backup of the C drive every morning at 6 am, and the smaller the footprint the better. I do not want to be taking repeated backups of lots of cache folders.


Well, at the moment I'm thinking I will just have C on the fast U.2 drive, and photos and catalog on spinning media, but after I went to bed this started bothering me, as C was not going to be RAID, and the catalog in particular would be quite a bit slower than now (on RAID-1 SSD).

Which means either using a non-RAID SSD for the catalog, or something slower. Not good, so I'm now thinking of going the whole way and putting in something like 3x 2TB SSD for photos, with the catalogs there too, on a parity Storage Spaces pool. Though I need to do some experimenting to make sure that's now expandable (without adding 3 at a time, as used to be required).

Anyway, to your point... I actually tend to feel the opposite. I don't really trust a system disk restored from backup. Even with VSS it's still a moving target when backed up, and perhaps more to the point, doing a clean install takes very little time and almost always yields a much cleaner, peppier system. I'd rather back up the folders I need, take the hit of time to rebuild, then restore individual folders.

I also have seen way too many systems with partitioned drives run out of space for the OS over time, and then fight a space issue that exists only because of where the partition point was set.

Also, why can't you just mark folders such as caches as not backed up (or back up the folder structure only)?


Gnits said:


> Finally... I am not sure what consideration you have given to your backup strategy.


Very long subject, but at present I back up daily to two cloud services (ACD and S3/Glacier) and periodically to two different EHDs (whenever I remember to plug them in -- I NEVER leave them plugged in, and highly recommend against it with so many "locker" malware incidents). I'm not happy with any backup program, though; they all seem to lack some key component (e.g. Cloudberry can't do ACD right, GoodSync can't do Glacier right, ARC just didn't work correctly... I'll look at Macrium, as I hadn't heard of it).


----------



## Gnits (Aug 16, 2016)

From your responses I know that you understand and have a good grasp of the Windows environment (HW & OS, etc.), so I am not trying to labour any of these points.



Ferguson said:


> doing a clean install takes very little time and almost always yields a much cleaner, peppier system.



I agree. A clean Windows install is the best starting point... but...

What kills me (and has done several times in the past) is all the other critical stuff that apps install in hidden folders on the C drive, in the registry, and in lots of app config files.

I have tried keeping apps on a different drive (but vendors still put stuff on the C drive)... so this was a disaster.

Twice I have been seriously screwed up by Microsoft Office updates, which corrupted my normal.dot file and my toolbar configuration, and made reconfiguring the VB scripts I evolved over years an absolute disaster. [Despite having backups of all my template files.]

The reason I keep my C drive backed up is that I not only recover my C drive, but all my apps get recovered exactly as I left them the night before.

I may have a lot more apps installed than you, and I do not use VMware or anything else to separate environments, so your needs, driven by the apps you use, will differ.

Irrespective of the C drive, a tool like ViceVersa is extremely useful to sync your production files with your EHD backups.

Again, I look forward to getting an update in due course on your experiences.  

Thanks for posting.


----------



## Linwood Ferguson (Aug 16, 2016)

Gnits said:


> From your responses I know that you understand and have a good grasp of the Windows environment (hw & o/s etc), so I am not trying to labour any of these points.


I'm not sure "understand windows" is possible.  


Gnits said:


> I may have a lot more apps installed than you, and I do not use VMware or anything else to separate environments, so your needs, driven by the apps you use, will differ.
> 
> Irrespective of the C drive, a tool like ViceVersa is extremely useful to sync your production files with your EHD backups.


I doubt you have more installed; I think it may just be the reverse that leads me here -- over time, trying new products, stuff I have to install for work (the myriad screen-sharing apps; everyone insists on a different one), a flurry of VPN tools (same reason)... They all leave garbage behind: drivers, virtual devices. I've been using Hyper-V VMs for Linux, but to date have not been using them for Windows, and I think I need to start -- having a VM for each client, perhaps, with their VPN, their screen-sharing tools, etc. I just hate paying license fees for each, but I have been thinking of just doing a snapshot for each on the same base install; it's just highly inefficient in disk usage.

But that's a bit off subject....  I'm off to experiment with parity storage spaces (right now I am using single mirrors).


----------



## tspear (Aug 16, 2016)

Ferguson,

Have you looked into Intel's answer to the Apple Fusion Drive? Intel® Smart Response Technology.
This would allow you to use a fast SSD to cache your spinning disks. I did this a few years ago on a few database servers and also on a friend's gaming machine. It worked very well (especially when SSDs were super expensive); I wonder how it would do for Lr.

As for Gigabyte: my wife's sons have been running Gigabyte for a few years without issue, and they and ASUS were my go-to mobos a long time ago. (I switched to Apple around 2004; now I am mid-switch back to Windows as primary.)


----------



## Linwood Ferguson (Aug 16, 2016)

tspear said:


> Ferguson,
> 
> Have you looked into Intel's answer to the Apple Fusion Drive? Intel® Smart Response Technology.
> This would allow you to use a fast SSD to cache your spinning disks. I did this a few years ago on a few database servers and also on a friend's gaming machine. It worked very well (especially when SSDs were super expensive); I wonder how it would do for Lr.
> ...



Yes and no. "Looked at" is about it. I tried last night as well, and it wasn't immediately obvious why I couldn't see my single non-RAID drive in the program. I probably need to dismount it first (but I still expected it to show -- the Storage Spaces volumes show up and I can't use them either, and the identical SSDs in a RAID-1 obviously show, as does the RAID-1 itself).

If I keep spinning disks, I might go that way. My (loose) understanding is that it is particularly helpful at improving write speeds, as it acts as a non-volatile write-back cache, right? It is more attractive than using write-back cache in Windows; even though I have a UPS, I worry about BSOD-type events (I've had one in the last 2 years or so) taking out dirty cache data. There's a similar feature in Storage Spaces for tiered storage (though my impression is that it's more about relocating hot data to the SSD than using it as a write-back cache), but I need to figure out if it's in Windows 10 -- it's so hard to get good documentation from Microsoft on which editions and products have which features (and in some, like W10, they are simply hidden and you have to use PowerShell to configure them).

Glad to hear Gigabyte worked for you.

Off now to experiment with parity Storage Spaces. Incidentally, if you haven't looked at them, Microsoft has done some really interesting stuff; it feels very similar to mid-range SAN features like EMC's CLARiiON: thin-provisioned hierarchical storage pools.


----------



## tspear (Aug 16, 2016)

Intel SRT effectively does both read and write caching. At the time they hid the details, and I have never looked again, but the system somehow adjusted the read cache based on frequency of reads for a specific block vs. most recent read, and active task versus background. However, write cache was always the priority. For the database servers, a 512GB SSD allowed us to keep the logs in memory and also a significant amount of the primary indexes for a multi-terabyte transactional system.

Never played with MS storage spaces; sounds interesting.


----------



## Linwood Ferguson (Aug 21, 2016)

In case anyone is interested, an update.

I decided to get rid of spinning disks entirely, so I went with the Intel 750 as the system disk (better deep-queue performance, 1.2TB), not in a RAID configuration.

For photos I put in two 2TB 850 EVO SSDs in a mirror storage space. That way I get redundancy, and I did some tests: mirrored performance is right up there with single-drive performance, even on writes (a real surprise). I planned for catalog and photos on that.
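For anyone wanting to repeat that kind of test, here is a crude way to sanity-check sequential write speed from code. This is a minimal sketch only; real benchmarking tools also measure random I/O and queue depth, and `write_throughput_mb_s` is an invented helper, not any standard tool:

```python
import os
import time

def write_throughput_mb_s(path, total_mb=64, chunk_mb=4):
    """Sequentially write total_mb of zeros to path and return MB/s.
    Crude single-stream test: no random I/O, no queue-depth variation."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the timing is honest
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return total_mb / elapsed
```

Running it once against a file on the mirror and once against a single drive gives a rough comparison of the two; the `os.fsync` call matters, since otherwise you mostly measure the OS write cache.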

Then I put a 2TB 850 EVO SSD alone for scratch, previews, temp.

And... it didn't work. The U.2 connection for the 750 takes up ports 2 and 3 of the 6 SATA ports, but ports 0 and 1 are not working fully. I can't get a drive to run on either one, leaving only ports 4 and 5, one short for this configuration (and two short for what I eventually suspect I'll need, another drive in the mirror).

Lots of experimenting, checking conflicts, a new BIOS, making sure it was not the drives or cables -- it's the ports.

So a new board is on the way. My concern is whether it's a design failure or an actual hardware failure. The board is new, not many people are using it, and most seem to be gamers who don't use a lot of drives, or people putting an M.2 and U.2 in a RAID-0 set for really high-speed disk. So who knows if they really tested the configuration I am using.

Anyway... now the floor of my office is a wreck and I'm sitting half-assembled waiting for a new board.

While waiting, I am spending a lot of time debating ordering another 850 EVO and just building a 2x2 mirror storage space. It will run a lot faster on writes (the stripe aspect), and seeing that the stand-alone 850 didn't really run faster for scratch/temp, this is a more flexible configuration (if more expensive).

Having spare time is always costly.  

The bad news is I haven't gotten far enough along to do any Lightroom testing. I'm building with the old system still working, so I will try to get some comparative runs for repeatable things like preview builds or exports while both are running.


----------



## Gnits (Aug 21, 2016)

Thanks for the update. 
Best wishes for completing the build and hope you get the performance.


----------



## PhilBurton (Aug 21, 2016)

Ferguson said:


> In case anyone is interested, an update.
> 
> I decided to get rid of spinning disks entirely, so went with the Intel 750 as a system disk (better deep-queue performance, 1.2TB), not in a raid configuration.
> 
> ...


Ferguson,

Which version of Win 10?  Or are you going to transfer the Win 10 "digital entitlement" from your old system?

Can I ask you how much you have spent on this dream rig?  Was this more than you planned?

Phil


----------



## tspear (Aug 22, 2016)

Ferguson said:


> In case anyone is interested, an update.
> 
> I decided to get rid of spinning disks entirely, so went with the Intel 750 as a system disk (better deep-queue performance, 1.2TB), not in a raid configuration.
> 
> ...



Thank you for the update. Sorry about the delay. Have you considered getting a RAID controller on the PCI Express bus instead?
This would eliminate the port issues/conflicts and give you the speed you need. In addition, some of the PCI Express cards now are 16 channels and reasonably priced.


----------



## Linwood Ferguson (Aug 22, 2016)

PhilBurton said:


> Ferguson,
> 
> Which version of Win 10?  Or are you going to transfer the Win 10 "digital entitlement" from your old system?
> 
> ...



I am going to try to transfer my W10 digital entitlement. I installed the Anniversary edition, which allows you to do activation based on a Microsoft ID; whether it will work I do not know. Microsoft has written completely contradictory indications on that. Initially they said retail versions would transfer; now they say any upgraded version will not. So who knows.

It was MUCH more than I planned. I haven't added it all up, but I ended up really blowing any idea of a budget when I decided to go all SSD, including photos. I've now got a 1.2TB Intel plus 4x 2TB SSDs, so that's over $3000 in storage alone (ouch... I bought those in pieces; now you made me add it up). But I tend to keep systems for 4-5 years, and I just did not want to start with spinning disks. No good rational reason, just trying to rush the future along, I think.

The actual processor and MB were not bad, something like $500 to get the fastest Z170-chipset processor and MB; I don't recall what the memory cost, but 64GB was not a lot. It was the SSDs that really blew the bank.

The real killer was getting the MB working. All that downtime gave me time to think and think, and I just kept hitting the "buy" button.

Don't tell my wife.


----------



## Linwood Ferguson (Aug 22, 2016)

tspear said:


> Thank you for the update. Sorry about the delay. Have you considered getting a RAID controller on the PCI Express bus instead?
> This would eliminate the port issues/conflicts and give you the speed you need. In addition, some of the PCI Express cards now are 16 channels and reasonably priced.


Actually it doesn't eliminate it exactly. I've considered it, but there are a few issues. Low-end RAID controllers seem to get mixed reviews; high-end ones are REALLY expensive. So I'm a bit nervous about getting a good one. But in addition to that, I can't add an x4, much less an x8, RAID controller without running the GPU at x8 instead of x16. I have no idea if that matters -- the whole "will the GPU help LR speed" question is so nebulous that all I can do is put in a decent one, cross my fingers, and hope. Cutting its transfer rate in half made me nervous.

What I considered a bit more strongly was shifting from the Z170 chipset to X99, which adds a lot more unencumbered PCIe lanes, but its CPUs get more speed from more cores, not faster cores. The fastest processor for it is slower, per core, than in the Z170 line. I still think much (most?) of LR's Develop performance is spent in very few threads, and I'm not sure more cores (than 4) are any help. Certainly tests on long-running renders in Photoshop show diminishing returns as you get to 3, 4, 5, etc. cores, though it is not clear if that transfers to LR.

Really, what I have SHOULD work. The U.2 adapter should take 2 SATA ports off, leaving me 4 to use. The new board comes tomorrow, so I'll know more then.


----------



## Linwood Ferguson (Aug 26, 2016)

Been doing some timing tests if anyone is interested.  These are timings of repeatable things.  I did this:

1) Took a folder of 76 images (D4/D5 size mixed, already processed).
2) Select all, change exposure
3) Immediately do a "Build 1:1 previews" and time it
4) Export them all (quality 92, full size, sharpen for screen) and time it

I did this with the old computer with and without the GPU, and as expected there was no difference (it only affects Develop, but I wanted to confirm, so I did not repeat that test on the new one).

I then did it with the new system with faster everything, in order: 

a) With hyperthreading enabled (default)
b) With hyperthreading disabled
c) With memory overclocked to 3000 (vs 2133 as standard)






As you can see, the new system is faster, but not by much.  The (several years old) i7-3770K is 3.2GHz; the brand-new i7-6700K is 4GHz.  Neither is overclocked.

The difference in speed is similar to the processor clock-speed difference; a bit better, but not hugely so, despite about four years and several processor generations between them.

Memory speed makes very little difference -- a bit, but nowhere near proportional.

Hyperthreading seems to hurt a bit, though apparently only in preview builds, not export.  I am not sure quite why; I speculate that previews are more highly parallel than exports, but not so parallel that 8 threads would help.  In other words, the virtual thread switching did more harm than the additional virtual cores did good.   Many people just leave hyperthreading on all the time, but there are a lot of workloads where it ranges from moot to harmful (I always turned it off on big database servers, for example).

But... this is just about as fast a desktop as you can build today without overclocking.  More cores will not help (e.g. the X99 chipset).  And Lightroom is still slow.

By the way, this system has 64GB of memory and extremely fast disks (an Intel 750 SSD for the system on x4 PCIe, and 4 x Samsung 850 EVO drives in a 2x2 mirrored storage space -- both much faster than regular SSDs), with the catalog (plus cache and temp) on the former and the photos on the latter.  So this was NOT disk limited.

I did experiment a bit with develop, and the combination of faster base system, faster GPU, and (probably) faster PCIe transfers seemed to make it much less laggy.  But that's a vague subjective impression quite possibly due to wishful thinking.  I still think we need a way to benchmark Develop stuff.

Anyway... FWIW.


----------



## PhilBurton (Aug 26, 2016)

Ferguson said:


> Been doing some timing tests if anyone is interested.  These are timings of repeatable things.  I did this:
> 
> 1) Took a folder of 76 images (D4/D5 size mixed, already processed).
> 2) Select all, change exposure
> ...


And if you can develop scripts, I'll be happy to run benchmarks on my 4 year old X79-based system with 32 GB, an old video card and an SSD only for scratch.  Regular SATA II drive @ 7200 rpm for bulk image file storage.  I might even use these benchmarks as an incentive to finally overclock both memory and CPU.


----------



## Jim Wilde (Aug 26, 2016)

Linwood, I was very surprised at those timings, specifically the huge difference between Preview build and Export. Given that building 1:1 previews and exporting are pretty similar, it would make sense that the timings would be similar as well. That's certainly been my experience in all the performance timing tests that I've been running since LR3.....in LR3, 4 and 5 there was hardly any difference, though with LR6's parallel exports it was then noticeable that multiple exports were always quicker than building 1:1 previews for the same files. Hence my surprise at your timings.....either the preview building is remarkably quick, or the exports are significantly slow (or a bit of both!). Worth further investigation, I would think.


----------



## Gnits (Aug 26, 2016)

If I read these numbers correctly you are getting between 40% and 48% improvement.

What were you expecting?


I suggest you try a simple timed o/s copy of the files and compare the old to the new.  This leaves Lr out of the equation.

The improvement could be down to processing speed rather than disk i/o, as the image processing has to take place before the copy-to-disk operation ..... which begs the question ..... what improvement would you have got by just improving the CPU?
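The timed o/s copy suggested above can be sketched in a few lines, run identically on both machines (the paths in the commented example are hypothetical):

```python
import shutil
import time
from pathlib import Path

def timed_copy(src_dir: Path, dst_dir: Path) -> tuple[float, float]:
    """Copy every file in src_dir to dst_dir; return (seconds, MB/s)."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    files = [p for p in src_dir.iterdir() if p.is_file()]
    total_bytes = sum(p.stat().st_size for p in files)
    start = time.perf_counter()
    for p in files:
        shutil.copy2(p, dst_dir / p.name)
    elapsed = time.perf_counter() - start
    return elapsed, total_bytes / elapsed / 1e6

# Example (hypothetical paths) -- run the same folder on old and new systems:
# secs, mb_s = timed_copy(Path(r"D:\photos\testset"), Path(r"E:\scratch"))
```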


----------



## tspear (Aug 26, 2016)

Ferguson,
    Nice system. When you look at Lr, it is mostly the CPU processing serially, with limited benefit from multiple cores; I have seen this documented by multiple blogs doing performance tests on Lr. Depending on the task, Lr seems to peak around 4 cores, with additional cores getting very limited usage. 
    The Core i7 generational changes mostly deal with cache, max clock speed, memory speed, bus speed, graphics, heat, energy usage and available number of cores -- basically everything around the CPU core itself.
    The result: going from a gen 4 Core i7 to a gen 6 Core i7 will, for Lr, in most cases only be faster based on clock speed. So for Lr, counter to what many people look for when shopping for a CPU, you want pure clock speed (which you seem to have picked), while most systems today place the emphasis on number of cores.
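The ~4-core plateau described above is what Amdahl's law predicts when only part of a workload parallelizes.  A quick worked sketch (the 85% parallel fraction is purely an assumed illustration, not a measured Lightroom figure):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume (for illustration only) that 85% of the work parallelizes:
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(0.85, n):.2f}x")
```

Even with that generous parallel fraction, going from 4 to 16 cores adds well under another 2x, which is consistent with clock speed mattering more than core count for this kind of workload.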

   So I am stuck waiting for another generation or two (this is why I have not bothered to upgrade from a gen 4 Core i7).  

Tim


----------



## Linwood Ferguson (Aug 26, 2016)

Jim Wilde said:


> Linwood, I was very surprised at those timings, specifically the huge difference between Preview build and Export. Given that building 1:1 previews and exporting are pretty similar, it would make sense that the timings would be similar as well. That's certainly been my experience in all the performance timing tests that I've been running since LR3.....in LR3, 4 and 5 there was hardly any difference, though with LR6's parallel exports it was then noticeable that multiple exports were always quicker than building 1:1 previews for the same files. Hence my surprise at your timings.....either the preview building is remarkably quick, or the exports are significantly slow (or a bit of both!). Worth further investigation, I would think.


I just report the news, I don't explain it.  

In watching it run, I did notice that the progress bar for previews jumped; my guess is that it was doing several in parallel.  The progress bar, and watching the target directory fill up, seemed to indicate exports were one at a time.

Try it.

Didn't they withdraw the parallel export aspect?


----------



## Linwood Ferguson (Aug 26, 2016)

Gnits said:


> If I read these numbers correctly you are getting between 40% and 48% improvement.
> 
> What were you expecting?


Fair question.  I think there's "expecting" and "hoping". 

I was hoping for magic -- that the collection of additive improvements (cache, memory, bus, disk) would, you know... add.  

I am an engineer, so I was expecting more or less what you see.

:(

I think what led to this, at least in part, is the continual discussions here and elsewhere that go more or less like this: 

"Lightroom is awfully slow".

"It's fast for me, must be your computer"

"My computer is X, yours is Y, should be similar". 

"Well, LR runs just fine for me". 

I'm always on the slow end.  Clearly expectations play a role here, but there was always the chance that somewhere within my aging computer was a hidden bottleneck strangling performance, even if I could not see it.

So I set out to build the fastest non-overclocked computer I could (with respect to Lightroom), and sure enough Lightroom got faster, but not by much.

So fundamentally, Lightroom was the issue all along in that it did not meet my expectations, there was no hidden bottleneck, and my hopes for a "throw hardware at it" solution are dashed.  It will be better than it was, but any real solution lies with Adobe.


----------



## Linwood Ferguson (Aug 26, 2016)

tspear said:


> Nice system. When you look at Lr, it is mostly the CPU processing serially, with limited benefit from multiple cores; I have seen this documented by multiple blogs doing performance tests on Lr. Depending on the task, Lr seems to peak around 4 cores, with additional cores getting very limited usage.
> The Core i7 generational changes mostly deal with cache, max clock speed, memory speed, bus speed, graphics, heat, energy usage and available number of cores -- basically everything around the CPU core itself.
> The result: going from a gen 4 Core i7 to a gen 6 Core i7 will, for Lr, in most cases only be faster based on clock speed. So for Lr, counter to what many people look for when shopping for a CPU, you want pure clock speed (which you seem to have picked), while most systems today place the emphasis on number of cores.


Right.  I've pretty consistently said that (and frankly that is where I think one fault lies with Adobe -- they have not done well at creating parallelism opportunities).  You would think in image processing, with those huge bitmaps, that massively parallel operations would be possible (indeed, that's where the GPU comes in -- it's not some supercomputer, it just has hundreds or thousands of processors, of sorts). 

While I took a small jump in CPU speed, the GPU was more like an order of magnitude jump in capability (GT640 to GTX970), and I could see negligible benefit when I swapped just that.  If I were running, say, a video rendering program like Resolve, I bet I would see a huge jump.  Adobe does not seem to have figured out how to get a lot of bang for the buck (literally) out of GPUs for LR.



tspear said:


> So I am stuck waiting for the another generation or two (this why I have not bothered to upgrade from iCore Gen 4).


Honestly, I probably should not have.   This all started with a memory failure, which got me thinking the machine was aging and maybe it was time to replace it.  So I started with just a motherboard and CPU; of course memory followed; that made me realize NVMe was out there, so a new OS drive; oops, no RAID, can't keep a catalog on that, so I need RAID SSDs, and I don't want to use the old ones... it doesn't take very long pulling at this sort of thread to run up a huge bill.

But at least I can put to bed "it must be something in your computer".   Maybe it will save you and others from thinking there's a hardware fix to what is fundamentally a software problem.

Linwood


----------



## Jim Wilde (Aug 26, 2016)

Ferguson said:


> I just report the news, I don't explain it.
> 
> In watching it run, I did notice that the progress bar for previews jumped; my guess is that it was doing several in parallel.  The progress bar, and watching the target directory fill up, seemed to indicate exports were one at a time.
> 
> ...



Try it? I already have, very many times, and as far as LR6 is concerned exports are *always* faster than previews. I can't explain what you think you are seeing; previews seem to be built sequentially, and exports are (from LR6.6.1, IIRC) now once again run in parallel (3 at a time, it looks like from my progress bar).


----------



## tspear (Aug 26, 2016)

Jim,

Could it be a difference in export settings and plugin?


----------



## Gnits (Aug 26, 2016)

Ferguson said:


> Fair question. I think there's "expecting" and "hoping".



Great reply.


My compliments for the job that you have done and sharing this with the community. 

I hope someone at Adobe also reads this and is encouraged to do something.  My view is that Adobe develops / optimises to the minimum they have to (e.g. Books Module, Library usability, etc.), preferring to work on widgets and new apps and techie leading-edge stuff.  I am not limiting this view to Lr; it could include InDesign and PS.

I recall a serious o/s techie mainframe performance expert telling me (at 4 am, when I was the client manager dealing with a client performance issue) that performance can only be gained via lots and lots of little steps.  We had a 100k h/w upgrade in the car which we were doing our best to avoid using.  Those words have stayed in my mind since.

At least you know that you have done all that you can and ..... hopefully ... are in a good position to take advantage of any performance improvements which come down the line.  Also, you have given yourself a platform for the next 5 years or so and an update to your own knowledge of the current level of technology available.

Amazingly ... I did what you have just done approx 5 years ago..... and felt it must be time now to upgrade again.  Having looked at your results .... I may wait a little longer.

I would still think there is value to compare the time it takes to copy a bunch of files on both systems, just to see some indicator of performance gain of the upgrade... independently of Lr.

Thanks for sharing.


----------



## Jim Wilde (Aug 26, 2016)

tspear said:


> Jim,
> 
> Could it be a difference in export settings and plugin?


It could be anything, Tim. I'm simply exporting at 100% quality, no resizing, using the standard LR export function. 

So I really have no idea what could be causing that disparity, it'll be up to Linwood to decide if he wants to go digging around to try to find a reason for it.


----------



## Linwood Ferguson (Aug 26, 2016)

Jim Wilde said:


> I can't explain what you think you are seeing...



Now that's a comment that is sure to encourage me.  

The nice thing about science is that it doesn't depend on what you think I think you think... By stating what I did, others can try the same thing and either see something similar or different and from observation, not opinion, a consensus can be formed.

Maybe it's Mac vs Windows, maybe a version difference (2015.6), maybe being closer to Stonehenge distorts time a bit there.

If someone is so inclined, maybe they can experiment.

I "think" I'll move on to other things.


----------



## Linwood Ferguson (Aug 26, 2016)

Gnits said:


> I would still think there is value to compare the time it takes to copy a bunch of files on both systems, just to see some indicator of performance gain of the upgrade... independently of Lr.



I did some benchmarks of the disks, though I was convinced that on the old computer disk I/O was already out of the picture, with catalogs on SSD, cache/temp on another SSD, and images on a RAID 0+1.

If you're curious, here are what the raw disks look like:






The top-left is the NVMe drive (Intel 750); the others are identical 850 EVOs.  One is broken and is going to be replaced; the bottom-right will only run at SATA II speeds regardless of port or cable.  But it runs.

For the actual run, I combined the four drives into a 2x2 mirrored storage space, and its performance is: 





This is the fastest drive on the old system (I use it for cache/temp); the system and image disks are slower.   So depending on the type of I/O, all the new drives range from a lot faster to a little faster.  I did decide to forgo RAID for the system drive, mostly so I didn't need to buy two 750s, and because I expect the Intel drive to be pretty reliable.
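For anyone wanting a quick cross-check of a volume without a benchmark tool, a crude sequential write/read test can be sketched (nothing like a proper benchmark's queue-depth or random-I/O tests, just a sanity check; the path is whatever volume you want to probe):

```python
import os
import time

def seq_write_read_mb_s(path: str, size_mb: int = 256) -> tuple[float, float]:
    """Crude sequential write then read throughput, in MB/s."""
    block = os.urandom(1024 * 1024)  # 1 MB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hit the disk
    write_mb_s = size_mb / (time.perf_counter() - start)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    # Note: the read pass may be served from the OS cache, inflating the number.
    read_mb_s = size_mb / (time.perf_counter() - start)
    os.remove(path)
    return write_mb_s, read_mb_s
```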


----------



## Linwood Ferguson (Aug 26, 2016)

While I'm putting all this stuff together, in case you're curious -- do not buy a new computer to save electricity.

I did some measurements of power usage (I'm waiting for a replacement SSD before I actually load everything and move over, so I'm bored).  These are for the computer only, power into the power supply, and do not take into account the monitors or external peripherals like an EHD (which wouldn't be plugged in long). 

I was interested to see the old computer drew 14 watts while turned off (not asleep -- off; the PSU is providing power for various things).  The new one draws 7 watts.

Both computers, under my normal workloads, did not vary much between idle (but awake) and running something (I used a disk benchmark, and was surprised the old HDD did not really register a difference).   The old was about 147 watts, the new about 89.  I would expect most of that is SSD vs HDD, maybe a bit a better PSU (though that's probably more at idle than running).

Turn that into annual 24x7 cost and it's about $157/year for the old, $95 for the new, at $0.13/kWh.

*Power-saving payoff time, not including inflation, is about 80-100 years.*

One small help: the new computer so far seems to sleep and wake up fine; the old would never sleep properly (more precisely, it would not wake up properly) -- I never found out if it was software, hardware, BIOS, etc.  So I may save another $30 or so if I get it to sleep right, pulling the payoff period down to only a couple of (human) generations.
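The arithmetic behind these running costs is plain kilowatt-hour math.  The sketch below uses the measured wattages and the $0.13/kWh rate, and lands within a few dollars of the figures quoted above (the small difference presumably reflects rounding or slightly different inputs); the $5,000 build cost in the payoff line is a hypothetical, not the actual spend:

```python
def annual_cost_usd(watts: float, usd_per_kwh: float = 0.13) -> float:
    """24x7 running cost: watts -> kWh per year -> dollars."""
    kwh_per_year = watts * 24 * 365 / 1000  # 8,760 hours per year
    return kwh_per_year * usd_per_kwh

old_cost = annual_cost_usd(147)  # close to the ~$157/yr quoted above
new_cost = annual_cost_usd(89)   # close to the ~$95/yr quoted above
savings = old_cost - new_cost

# Payoff period for a hypothetical $5,000 build funded purely by the savings:
print(f"old ${old_cost:.0f}/yr, new ${new_cost:.0f}/yr, "
      f"payoff {5000 / savings:.0f} years")
```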


----------



## Gnits (Aug 26, 2016)

Thanks for the info.  Useful numbers to reference downstream.



Ferguson said:


> One small help: the new computer so far seems to sleep and wake up fine; the old would never sleep properly (more precisely, it would not wake up properly) -- I never found out if it was software, hardware, BIOS, etc.



I ran into sleep/wake issues (mostly it would not wake) when trying to get my Macrium Reflect software to wake the machine and complete unattended backups.  I traced the problem down to issues between the motherboard and the then-current version of Windows.  I could not upgrade the motherboard drivers because there were no updated drivers for my version of the o/s.  I was up a creek without a paddle.  This was a factor pushing me towards an upgrade. 

However, Windows 10 seems to have solved the issue for me as it wakes up and sleeps just fine now.


----------



## Linwood Ferguson (Aug 31, 2016)

Just to keep this current: I finally got a new motherboard that worked (ASUS Z170-WS); the Gigabyte had some kind of compatibility issue with the Samsung 850 EVO 2TBs. 

With all four disks finally working correctly, the speed of the 2x2 storage space is now fast -- not as fast as the 750, but darn close, and redundant.






So now I'm starting to exercise everything and finally going to load it up. 

I swear I've had data centers install faster than this PC build.  Darn Gigabyte.


----------



## Robert Reiser (Aug 31, 2016)

Thank you for sharing your findings. I just wanted to add that they are fully in line with my personal experience:

I have recently upgraded from an X79-based system with spinning hard drives and 16 GB RAM to a new X99-based system with an NVMe PCIe drive plus SSD RAID 0 and 64 GB RAM, and upgraded from Windows 7 to Windows 10 at the same time. While the speed improvement in Lightroom was noticeable, it was nowhere near what I had expected. I even tried RAM disks for the previews and Camera Raw cache folders, but it made no noticeable difference in browsing or in the Develop module. Not measured scientifically, just a personal impression.


----------



## Gnits (Aug 31, 2016)

Robert Reiser said:


> I have recently upgraded from an X79-based system



Thanks for sharing.... what motherboard did you opt for ( and any key reasons for choice would be great...)


----------



## Linwood Ferguson (Aug 31, 2016)

Gnits said:


> Thanks for sharing.... what motherboard did you opt for ( and any key reasons for choice would be great...)



Frustrated with Gigabyte (though recognizing that none of these vendors have good customer service), I went back to ASUS and hunted for native U.2 support, without an add-on card.  The add-on cards are probably OK, but they suggest the board was retrofitted for U.2 rather than designed with it in mind.

I stumbled on the Z170-WS.  It's labeled a "Workstation" board.  It is a strange thing for ASUS, as it is hidden away.  If you come into their site through the selection tools, e.g. picking Intel, then the Z170 chipset, etc., it does not appear -- it's labeled "Commercial".  But if you start from the commercial side and search, you end up only with other chipsets; you can't find that one there either. 

It's an orphan, lost between the commercial space (usually X99 and Xeon) and the gamers.  But in reading about it, it appeared aimed at extreme durability and peripheral use (it's the only one I found that could run 4 full-size graphics cards, due to a built-in PCIe bridge).  I don't need four, but that also reportedly brought with it more durable components and better (certainly beefier) power regulation.  It also supports two M.2 cards and one U.2 (which takes the place of one M.2) out of the box.

It also has six different fan headers, all PWM/voltage controllable, and a dedicated water-pump header (not that I'm using it, but it still shows they are serious about cooling control).

It's definitely not gamer oriented: no flashy paint job, no LEDs (other than functional ones).   But it is also built for overclocking in a serious way.

I've always had the hypothesis, with no real data to support it (well, other than 5 or so builds of my own), that buying overclocking-capable boards and unlocked Intel processors -- parts intended for high workloads at high frequencies -- yields a more stable and reliable system at standard usage.  So in that sense this looked doubly suitable: aimed at even beefier components.

Unfortunately it was quite a bit more expensive ($320) than originally planned, but I'm already deep into this build, and it won't hurt me to skip a few meals.


----------



## Robert Reiser (Aug 31, 2016)

Gnits said:


> Thanks for sharing.... what motherboard did you opt for ( and any key reasons for choice would be great...)



Sure... I did choose the ASRock X99 Extreme 6, based on these requirements:

- Support for ECC RAM (just to be future-proof, in case I want to upgrade to 128 GB at some point. 8 unbuffered DIMMs are usually not supported, but registered ECC is less of a burden for the controller). Currently, I have 4 x 16 GB ECC modules installed.
- Support for 8 DIMM sockets with a maximum of 128 GB (see above)
- Support for Xeon CPUs (they are required for ECC RAM, and anyway I don't like the idea of overclocking my equipment. Keeping everything stable is hard enough even without overclocking). I went for the 6-core E5-1650 v4 (Broadwell-EP). For me, that is the price/performance sweet spot.
- Support for an M.2 slot with 4 x PCIe lanes for the NVMe boot disk
- ATX format (I don't like these huge computer cases)
- Socket 2011-3 (because of the total # of PCIe lanes and quad-channel RAM)

There are not a lot of desktop boards on the market supporting all my requirements. You can buy a workstation board, but they have their price and their own challenges. In hindsight, the ASRock X99 Extreme 4 would have worked as well for me. Let me know if you need additional information.


----------



## Linwood Ferguson (Aug 31, 2016)

My apologies, I now realize Gnits was asking Robert.  Sorry to have jumped in.


----------



## Gnits (Aug 31, 2016)

Ferguson said:


> My apologies, I now realize Gnits was asking Robert. Sorry to have jumped in.



I already knew the board you selected... but the extra info was great.  Appreciated ... and no apologies required.



Robert Reiser said:


> Sure... I did choose the ASRock X99 Extreme 6, based on these requirements:



Thank you for the detailed answer.  The options will probably have changed by the time I think about a rebuild, but it is great to have these answers as a reference point.


----------



## Robert Reiser (Aug 31, 2016)

Ferguson said:


> My apologies, I now realize Gnits was asking Robert.  Sorry to have jumped in.



Well I don't believe we can have too many opinions...


----------

