# Catalog on NAS?



## gfinlayson

OK, this has probably been asked before, but.......

I've just upgraded my network and storage setup  - I have a Windows 10 desktop PC in my office (dedicated concrete 'man-shed' in the back garden). I have Cat 6 GbE for internet and the like, plus a dedicated 20,000 Mb/s link (2 x 10Gb-SR in LACP over fibre optic) to a Synology RS3617xs with 12 x 4 TB HDDs in RAID10. NAS is in the house for security reasons and the fact that it's FAR TOO LOUD to tolerate in my office.

Rather than running my catalog on the local PC and having to back it (and the backups) up to the NAS, and then having the NAS sync the backup to local storage/cloud, is there a really good reason why I couldn't/shouldn't run the catalog from the NAS on an iSCSI target volume?

My plan going forward is to keep photo folders and catalogs together in one place if possible and keep the catalog sizes relatively small.

I've read lots of arguments about network speed limitations being a reason to not have the catalog networked, but my network transfer speeds are significantly faster than even a local SATA SSD would be. Are there other good technical reasons why it's not a good idea?


----------



## Tony Jay

The most up-to-date information I have is that this is impossible.
The catalog needs to be on a drive local to the computer, and a NAS does not count in this regard.

A couple of versions back I think it was possible (Lr4.x and Lr5.x).

Welcome to Lightroom Forums BTW!

Tony Jay


----------



## tboydva

I don't have a speed-demon network, but I have a similar setup to yours. I put the current year's photos and the catalog on an SSD local to my computer, then sync to the NAS. Older photos are stored physically on the NAS (but their previews are on the local SSD). I put the catalog and the preview folder within my Dropbox (which is on the SSD). In this fashion, my "current" photos load quickly when need be (and the previews for older ones are "local"). The catalog syncs with revision history to Dropbox (which has come in handy), but the whole arrangement uses less than 60 GB on my local SSD. Like you, my NAS syncs to the cloud so that there are numerous backups distributed geographically (and that's good, as our house was struck by lightning last year and caught fire!). My sense is that the catalog and the files you're working on (zooming in, etc.) should be on the fastest SSD you can manage locally... Others may refute this, but I hit few speed bottlenecks with this arrangement.


----------



## gfinlayson

Tony Jay said:


> The most up to date information that I have is that this is impossible.
> The catalog needs to be on a drive local to the computer - a NAS does not count in this regard.
> 
> A couple of versions back I think it was possible (Lr4.x and Lr5.x).
> 
> Welcome to Lightroom Forums BTW!
> 
> Tony Jay



Thanks Tony. I know Lightroom won't allow the catalog to run on a network drive per se; however, an iSCSI target on a NAS isn't a 'network' drive as far as the OS is concerned. It's block-level storage which can be formatted to various file systems and appears to the OS as a local hard drive. Hence my question.


----------



## Tony Jay

gfinlayson said:


> Thanks Tony. I know Lightroom won't allow the catalog to run on a network drive per se,  however an iSCSI target NAS isn't a 'network' drive as far as the OS is concerned. It's block level storage which can be formatted to various file systems and appears to the OS as a local hard drive. Hence my question.


If that is the case then it might work.
Why not give it a go!
Just remember to back everything up in case of disaster.

Let us know how you go.

Tony Jay


----------



## clee01l

gfinlayson said:


> ...an iSCSI target NAS isn't a 'network' drive as far as the OS is concerned. It's block level storage which can be formatted to various file systems and appears to the OS as a local hard drive...


As long as it is not connected through an ethernet port, it is not a network connection. But if you access it through the ethernet connection, LR won't recognize the drive as a local drive.


----------



## Wernfried

clee01l said:


> As long as it is not connected through an ethernet port, it is not a network connection. But if you access it through the ethernet connection, LR won't recognize the drive as a local drive.



I think this is a very simplified statement. Nowadays you can virtualise almost everything, and you can easily hide a network share behind a symbolic link.
A few weeks ago I even ran a test where I put my catalog on a virtual hard disk that exists only in system memory.

Best Regards
Wernfried
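Wernfried's symlink trick can be sketched as follows. This is an illustration only: the "share" directory here is a local stand-in for a mounted network share (the real target would be a UNC path or mount point), and on Windows you would typically use `mklink /D` rather than Python's `os.symlink`.

```python
import os
import tempfile

# Illustration: a symbolic link makes one path masquerade as another.
# "share" stands in for a mounted network share; in a real setup the
# link target would be the share's actual path (hypothetical here).
base = tempfile.mkdtemp()
share = os.path.join(base, "share")         # pretend this is the network share
os.makedirs(share)

link = os.path.join(base, "LocalCatalogs")  # the "local-looking" path
os.symlink(share, link, target_is_directory=True)

# Anything written through the link lands on the hidden target.
with open(os.path.join(link, "test.lrcat"), "w") as f:
    f.write("catalog placeholder")

print(os.path.exists(os.path.join(share, "test.lrcat")))  # → True
```

Note that this only changes what the path looks like; the I/O still travels over the network, with all the reliability caveats discussed in this thread.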


----------



## Johan Elzenga

I don't think the question of whether Lightroom sees the disk as a local disk is very relevant. What is relevant is whether it is safe to use such a setup. As far as I know, it is always _technically possible_ to start Lightroom from a catalog on a shared disk, but you run a great risk of catalog corruption. For that reason I would never trust my production catalog to such a setup, even if the system presents the disk as a virtual local disk and even if initial tests seem to indicate that Lightroom can be fooled into thinking it is local.


----------



## gfinlayson

This is exactly what I'm trying to understand. What is the risk of running the catalog on a non-local drive? Why is there a 'great risk of catalog corruption'? I understand that once upon a time network connection speed could have been an issue. In these days of 10Gb and 40Gb network connections over fibre to high performance NAS, speed is no longer a factor.


----------



## Roelof Moorlag

A network is the most likely cause of dropped connections (which is what causes corruption in the Lightroom database). Old network cables, overheating and/or overworked switches/hubs, creaky old NICs and badly written device drivers are the most common problem areas. Bandwidth alone is not enough.


----------



## gfinlayson

Roelof Moorlag said:


> A network is the most likely cause of dropped connections (which is what causes corruption in the Lightroom database). Old network cables, overheating and/or overworked switches/hubs, creaky old NICs and badly written device drivers are the most common problem areas. Bandwidth alone is not enough.



Roelof, thanks for your reply, but I'm really looking for hard facts, rather than conjecture.

My network cables aren't ordinary cables; they're custom-length, steel-wire-armoured, pre-terminated, fully tested OM3 fibre optics, made by a network company that supplies the UK military. There's no switch involved; the connection is direct. The NICs are new, made by Intel, running Intel's latest driver set, which underwent extensive design and testing to allow use of VLANs and teaming in non-server Windows SKUs - long-running saga here: I211/I217-V Windows 10 LACP teaming fails

Dual connections are bonded via LACP, which provides redundancy and seamless failover: Understanding Link Aggregation Control Protocol - Technical Documentation - Support - Juniper Networks

A flaky network is something I don't have.

If no one can really explain why I shouldn't do it, I'll have to do as Tony suggested - try it out and report my findings....


----------



## Johan Elzenga

I think it's more than that. Apparently it has something to do with the fact that the Lightroom catalog is based on SQLite, but I'm no expert in this field so I can't tell you exactly what the problem is.


----------



## Jim Wilde

What I know is that the Lightroom catalog uses SQLite, and that either LR or SQLite (or both) does not allow the catalog to be placed on a network volume. Attempting to open a catalog on a volume which Lightroom detects as a network volume will fail, and the error message will explain why.

Of course there are ways around this restriction, and over the years some users have reported success. However, one of the Adobe engineers (Dan Tull) did some experimentation in hoodwinking Lightroom into using a network-based catalog, and he reported that he consistently managed to irretrievably corrupt the catalog (and at the time he was the acknowledged expert in repairing corrupted catalogs). Since then, Adobe's position has, I believe, remained unchanged....i.e. network-based catalogs are not supported, period.

Whilst you may have success in getting it to work, we would prefer that the actual technical details are not posted in an open forum post.....not everyone has the same level of technical competence, so blindly trying to replicate what may be complex setup instructions could easily lead the less savvy user into catalog disaster, which we would rather not have happen.


----------



## gfinlayson

Jim Wilde said:


> What I know is that the Lightroom catalog uses SQlite, and that either LR or SQlite (or both) does not allow the catalog to be placed on a network volume. Attempting to open a catalog on a volume which Lightroom detects as being a network volume will fail, and the error message will explain why.
> 
> Of course there are ways around this restriction, and over the years some users have reported success. However, one of the Adobe engineers (Dan Tull) did some experimentation in hoodwinking Lightroom into using a network-based catalog, and he reported that he consistently managed to irretrievably corrupt the catalog (and at the time he was the acknowledged expert in repairing corrupted catalogs). Since then, Adobe's position has, I believe, remained unchanged....i.e. network-based catalogs are not supported, period.
> 
> Whilst you may have success in getting it to work, we would prefer that the actual technical details are not posted in an open forum post.....not everyone has the same level of technical competence, so blindly trying to replicate what may be complex setup instructions could easily lead the less savvy user into catalog disaster, which we would rather not have happen.



Thanks Jim. I appreciate your viewpoint. 

Even if I get it to work successfully, it's not something I would advocate that others attempt. I've experienced corrupted catalogs in the past but having a robust backup strategy always prevented any significant loss. 

I'll experiment at my own risk and won't share my experiences openly. 

If any of the more tech savvy are curious to learn about my experience, feel free to message me. 

I'll leave the discussion here. 

Thank you everyone for your contributions.

Graeme


----------



## Jim Wilde

Graeme, no harm in posting back a "yes I did" or "no I did not" get it to work message, I'm sure some folks would like to know the outcome of your test. But thanks for agreeing not to openly post the detailed steps that you use.


----------



## Gnits

I suspect that the underlying protocols are different when retrieving data over a network than when retrieving data from a local storage device. Networks, by their nature, have their own means of handling errors, and the standards for reacting to network errors may have different response-time requirements, regardless of the speed of the physical/logical infrastructure.

It is an interesting topic which I am sure will continue to evolve, but I agree, caution required.


----------



## clee01l

Johan Elzenga said:


> I think it's more than that. Apparently it has something to do with the fact that the Lightroom catalog is based on SQLite, but I'm no expert in this field so I can't tell you exactly what the problem is.


Yes, SQLite is a single-user database, and this database makes up the LR catalog file. It detects the location of the catalog file, and if that file is located on a network resource, protections for referential integrity are invoked. When the file is on a network resource, there is no way for the database engine to prevent the file from being opened and edited simultaneously by multiple clients. While you can assure yourself that the file will only be opened by one instance of LR, SQLite cannot, and there is no database user security being enforced to lock out multiple instances of the file being opened. There is no quicker way to corrupt the catalog file than to have two users update the same record at the same time. In a relational database, records in one table are related to records in other tables and indexes, so making one change ripples out into several tables and indexes. Integrity is maintained by the database engine, and it cannot be maintained if several database engines are operating on the same file.
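The single-writer protection described above can be demonstrated with SQLite directly. A minimal sketch (a throwaway schema, not Lightroom's actual one): on a local filesystem the second writer is cleanly refused, and it is exactly this file locking that is unreliable on many network filesystems.

```python
import os
import sqlite3
import tempfile

# Stand-in for a catalog file on a *local* disk, where SQLite's
# locking works as designed.
path = os.path.join(tempfile.mkdtemp(), "catalog.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, rating INTEGER)")
writer.execute("BEGIN IMMEDIATE")            # take the write lock and hold it
writer.execute("INSERT INTO images VALUES (1, 5)")

# A second "client" tries to write while the first holds the lock.
intruder = sqlite3.connect(path, timeout=0)
locked_out = False
try:
    intruder.execute("BEGIN IMMEDIATE")      # refused: database is locked
except sqlite3.OperationalError:
    locked_out = True

print("second writer locked out:", locked_out)  # → True
writer.commit()
```

On a network share where the locking primitives misbehave, the second writer may *not* be refused, and two engines then update the same file concurrently, which is the corruption scenario described above.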


----------



## gstrek

Graeme and anyone else interested: I am a technology professional and agree this is NOT for everyone, nor will I make this my production methodology. This is a proof of concept and an intellectual/tech exercise only.

I have a Synology NAS with a single 1 GbE network connection to a switch (also using VLAN, but not LACP). I created an iSCSI LUN and target on the NAS, formatted it, and mounted it to Windows 7 as drive E:. I was able to create a new LR catalog, import 20 photos and do some edits. It was slow, but it does work. I should add that my Synology NAS is a very low-end device.

SQLite does NOT throw an error as it does when you try to create a new catalog on a network share. iSCSI operates at a much lower level; as Graeme mentioned, it presents as a raw, unformatted disk and appears to the OS as a direct-attached storage device.

The LUN is set up so that only a single machine can map it, which forces a single user/machine to access the catalog.


----------



## acquacow

If you want a really slow performing catalog, feel free to put it on a NAS. 

Sent from my XT1650 using Tapatalk


----------



## Linwood Ferguson

Gnits said:


> I suspect that the underlying protocols are different when retrieving data from a network than retrieving data via a local  storage device.  Networks, by their nature have their own means of handling errors and the standards required to react to networked errors may have different response time standards, regardless of the speed of the physical / logical infrastructure.



That's true of almost all storage; the underlying protocols are different. IDE is brain-dead compared to SCSI in some ways. CIFS and SMB (both used for accessing NAS data) are different from each other but are treated almost identically. Almost all the protocols have varying degrees of redundancy and robustness. And "network storage" can run over numerous protocols, some with lots of error checking and redundancy, some with very little.

To me there's one big difference, and it actually applies to the (permitted) use of EHD/USB drives: when you have an internal drive in your computer, absent hardware failure, the drive is up and available all the time the computer is. When you have a drive outside the computer - a NAS for sure, an EHD also - it is extremely easy for the operator (which any statistic you check will show is the least reliable component) to accidentally disconnect it. Unplug the card reader and... oops, that wasn't the card reader, it was my EHD. You get the idea.

Also, "network" to many people means Wifi, subject to all sorts of issues of interference and capacity.

Also, and I am sure there are exceptions, external systems are just not as reliable in most cases. When's the last time an IDE, SCSI, SAS or SATA cable failed inside a computer? Yet you hear of external cables (and USB hubs) failing all the time.

Add in you will often have a UPS or battery on your computer, but maybe not on your EHD/NAS.  Or a different UPS that might go down separately.

Honestly, Lightroom's technical restrictions aside (to coin a phrase) -- you are just plain better off keeping your images close, and your catalog closer (to your computer).


----------



## gfinlayson

acquacow said:


> If you want a really slow performing catalog, feel free to put it on a NAS.
> 
> Sent from my XT1650 using Tapatalk



That may be true for 1000Base-T on a low end NAS.  On a high end NAS however with 20Gb network, I expect the results will be rather different.....


----------



## acquacow

You are still adding about 120ms of latency for every database call... I run bonded 10gigE to my storage server and I still won't put my catalog on there. I even have the storage to saturate it.

Still not worth the added latency vs just keeping the catalog and caches on local NVMe storage.

Sent from my XT1650 using Tapatalk


----------



## Linwood Ferguson

Just bear in mind that what you guys are discussing is pretty darn far away from what the average Lightroom user is dealing with. Adobe is trying to keep guys with aging laptops from shooting themselves in the foot. It's kind of like someone with a 10" polar-mount telescope saying to the guy with a point-and-shoot, "I don't see why you can't get a good shot of that nebula, mine works fine".


----------



## Wernfried

Jim Wilde said:


> What I know is that the Lightroom catalog uses SQlite, and that either LR or SQlite (or both) does not allow the catalog to be placed on a network volume. Attempting to open a catalog on a volume which Lightroom detects as being a network volume will fail, and the error message will explain why.



In general you can create an SQLite database on a network share; see the SQLite homepage: _SQLite database files may be shared across a network using a network filesystem. This is never a particularly efficient method and may have problems (depending on the filesystem), or may simply not be available. There are alternative techniques for remote access to SQLite databases._

The issue is not the network drive as such; the problem is that on a SHARED drive, other processes may access the same database file at the same time as you do, which can corrupt the database.
As long as you can ensure that only one process (i.e. one single Lightroom instance) accesses the catalog, you should not run into any issues.

If someone would like to read some technical details about how an SQLite database gets corrupted (and where it does not), here you go: How to Corrupt an SQLite Database. However, it may be difficult for non-engineers to follow.

Wernfried
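Since an .lrcat is an SQLite file, its health can also be checked with SQLite's own `PRAGMA integrity_check`. A minimal sketch, using a throwaway database; for a real catalog you would point `path` at the .lrcat file (with Lightroom closed):

```python
import os
import sqlite3
import tempfile

# Throwaway database standing in for a catalog file.
path = os.path.join(tempfile.mkdtemp(), "throwaway.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

# Returns the single row "ok" for a healthy database,
# or a list of problems for a corrupted one.
result = conn.execute("PRAGMA integrity_check").fetchone()[0]
print(result)  # → ok
```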


----------



## gfinlayson

acquacow said:


> You are still adding about 120ms of latency for every database call... I run bonded 10gigE to my storage server and I still won't put my catalog on there. I even have the storage to saturate it.
> Still not worth the added latency vs just keeping the catalog and caches on local nvme storage.
> 
> Sent from my XT1650 using Tapatalk


I'm curious about such high latency. What setup are you using that gives 120ms latency per database call?


----------






## acquacow

gfinlayson said:


> I'm curious about such high latency. What setup are you using that gives 120ms latency per database call?


Sorry, that's supposed to be microseconds (µs).


----------



## kindersnap

I know this thread is over a year old now but I was wondering about the NAS corruption. Does it corrupt the catalogue only if you attempt to use it? If I were to use NAS to archive catalogues but move them to local disc when I needed to access them, would they still be at high risk of corruption?


----------



## Wernfried

The SQLite database gets corrupted when the drive is disconnected while your application is writing data to it and the write has not completed.

Some technical detail: How To Corrupt An SQLite Database File 

Moving a catalog offline from one drive to another (while LR is not open) is no problem at all.
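The archive-then-restore workflow asked about above can be sketched as follows, with stand-in local and NAS paths: once there is no open connection (i.e. Lightroom is closed), the catalog is just a file, and the copy on the archive side is a complete, healthy database.

```python
import os
import shutil
import sqlite3
import tempfile

work = tempfile.mkdtemp()
local = os.path.join(work, "catalog.lrcat")  # stand-in for the local catalog
nas = os.path.join(work, "nas")              # stand-in for the NAS archive folder
os.makedirs(nas)

# Build a tiny stand-in "catalog" and close it ("Lightroom is not open").
conn = sqlite3.connect(local)
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY)")
conn.commit()
conn.close()

# Archiving is an ordinary file copy; copy2 preserves timestamps.
archived = shutil.copy2(local, os.path.join(nas, "catalog.lrcat"))

# The archived copy passes SQLite's integrity check.
check = sqlite3.connect(archived).execute("PRAGMA integrity_check").fetchone()[0]
print(check)  # → ok
```

The risky scenario is the opposite one: live writes to an open catalog over a connection that can drop mid-transaction.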


----------



## kindersnap

Wernfried said:


> The SQLite database gets corrupted when the drive is disconnected while your application writes data to it which is not completed.
> 
> Some technical detail: How To Corrupt An SQLite Database File
> 
> Moving a catalog offline from one drive to another (while LR is not open) is no problem at all.



Great!! I'm having storage issues with my previews chewing through my disk space, and I hate working with a portable hard drive connected to my laptop. I generally only use a catalogue twice, months apart, and then I can archive the images, so this solves a big issue with them eating all my laptop storage.

Thank you!


----------



## Gnits

And be careful with external USB drives if powered by the USB cable or a hub. Other demands may compromise the power delivered, and such drives may go to sleep, leaving incomplete database transactions.


----------



## kindersnap

Gnits said:


> And be careful with external usb drives if powered by usb cable or hub. Other demands may compromise the power delivered and such drives may go to sleep, leaving incomplete database transactions.




Noted. It is another reason I don't like working off tethered storage. I made a rookie mistake two years ago and confused solid state and HDD. Having used only solid state for so long, I didn't think twice when I accidentally knocked the cable out and lost a whole day's work because the transfer wasn't finalised.


----------



## Gnits

I stopped using NAS some time ago because copy speed is too slow over a typical network. There are faster interfaces now, but beware of the slowest link in the chain, and fast NAS units can be expensive. I have two internal drives - one for production, the second for backup - plus a third, external backup, which can be a NAS.


----------



## kindersnap

Unfortunately, finances don't stretch to internal drives, and the amount of work I produce makes it prohibitive (nearly 2 TB in contracting work alone last year, then my personal and private work on top of that). And it compounds every year, so after 3.5 years of contracting for this company I have nearly 6 TB that they want me to store indefinitely at my cost (don't get me started on how shit that is. I don't like it. I don't agree with it. But for now, it is what it is). My desktop is too old and won't let me use Lightroom anymore since the update, so I am stuck using a laptop that was only ever meant to be my secondary machine.


----------



## Focus

I have been grappling with this same issue and wanted to know what you think of my solution. I have not implemented this yet, I'm just planning for a 10gbps home network upgrade soon and wanted to know if anyone could see any problems with the following:

When I rebuild our home server rack I figured I'd install a beefy photo editing machine in a 4RU box. High end graphics card, CPU with lots of cores, onboard M.2, 10gbps NIC, maybe a larger SATA SSD for preview storage and not much else. The OS, Lightroom and its catalogue files will all be stored locally on the M.2 SSD and any preview files can go on the SATA drive. All DNG files would be stored on a large NAS in the same server rack and accessed over the 10gbps network connection. 

I figure then I can just have a little all-in-one desktop computer in the home office and remote into the powerful computer to use lightroom over the network. I could even do this from a laptop in the living room this way. Really any computer on the home network could remote in and take advantage of the editing machine in the server room. None of the other computers would need to hold a copy of the catalogue or be very powerful at all.

Obviously two users can't use the catalogue simultaneously but I see this as a good work around to the catalogue syncing issue between computers. 

What do you think?


----------



## Linwood Ferguson

@Focus that sounds like a decent approach. Some comments though: really high-end graphics cards are unlikely to help much, and will not help at all if you are accessing the machine remotely; so far as I know, Lightroom (and I think Photoshop) cannot use a graphics card other than the one driving the display (unlike various video editors, where you can have cards dedicated to them). I may be wrong - anyone?

The 10gbps is probably wasted, but it's not expensive and can't hurt; I doubt you can saturate even a 1gbps card. But maybe - it depends a lot on the NAS. Obviously over wifi to a laptop (if you are not wired) it depends on the wifi.

I have basically the same storage setup: an M.2 drive for much of it, then a SATA SSD for photos. I did some experimentation moving stuff around and testing, and did not see much difference with the catalog on slower vs faster disks for most operations; the previews on a fast disk made a much bigger difference, as did the ACR cache and temp storage. So I went back to putting the catalog on RAID SSD drives just for a tiny bit of extra reliability. I found it virtually impossible to saturate an M.2 drive (vs. a SATA SSD, which you can), by the way. Yes, there are some catalog operations that are slow, but the vast majority of the time I am waiting for the computer it's in Develop mode, where the catalog is not terribly involved.

Be sure you find a good approach to backing up the result, and some mechanism for making sure you never confuse who has the master catalog if you are moving it around as you describe.  I worry a lot in such cases about human error (perhaps because this human is so unreliable). 

Also, for a NAS box, try to find one that has a file system aimed at detecting and correcting bit rot, e.g. one using zfs, btrfs, etc.  Most don't. 

Finally the one caveat I'll offer in that setup is the LAN connection.  CIFS/SMB is not as robust generally as direct disk access.  It's pretty robust, widely used, but if there is a weak point in your setup where corruption could creep in I would say it's that connection (assuming a quality NAS box).  You might also look at iSCSI as an alternative that may be both a bit faster and more reliable, if you can get both clients and NAS that support it.


----------



## Focus

Linwood Ferguson said:


> Be sure you find a good approach to backing up the result, and some mechanism for making sure you never confuse who has the master catalog if you are moving it around as you describe.  I worry a lot in such cases about human error (perhaps because this human is so unreliable).

So with what I was describing you would never need to have a master copy of a catalogue. There would only be one copy and that would stay on the server machine. The other computers would just remote in and use this copy. Then when they log out, the catalogue would stay where it was, locally stored on the server machine.


----------



## Linwood Ferguson

Focus said:


> So with what I was describing you would never need to have a master copy of a catalogue. There would only be one copy and that would stay on the server machine. The other computers would just remote in and use this copy. Then when they log out, the catalogue would stay where it was, locally stored on the server machine.


Yes, of course, sorry, just so much in the habit of people trying to use multiple copies.


----------



## Focus

Ferguson said:


> Yes, of course, sorry, just so much in the habit of people trying to use multiple copies.



Have you had any experience running creative cloud on Windows server OS? Does it work? Is it stable? Any downsides?


----------



## Linwood Ferguson

Focus said:


> Have you had any experience running creative cloud on Windows server OS? Does it work? Is it stable? Any downsides?


I have not. I have found that most programs work the same, though Microsoft of course says one is "tuned" for interactive use and one for background tasks. Some programs explicitly check and won't install on desktop, or on server, if they think their home is the opposite. No idea on CC. Someone here may chime in.

There's very little any more, on modest hardware (i.e. not data-center type), that you can't do on Windows Pro that you can on Server. One thing that is changing is rfs file systems: I think beginning soon (now?) you can't use them on regular Pro, but need a special version of Windows 10 or Server. I think rfs is immature, though, personally (at least it was when I last looked). It had the design to be a zfs-like file system, but they had not finished its support - no obvious way to scan for problems or to fix them; it was all "set it and forget it and hope it worked, because we won't tell you anything" type of Microsoft fluff when I last looked. Storage pools with NTFS, on the other hand, I use on Pro and am quite happy with (though you need some PowerShell magic to get much use out of them).


----------



## PhilBurton

Ferguson said:


> I have not.  I have found most programs though work the same, though Microsoft of course says one is "tuned" for interactive use, one for background tasks.  Some programs explicitly check and won't install on desktop, or on server, if they think their home is the opposite.  No idea on CC.  Someone here may chime in.
> 
> There's very little any more that, on modest hardware (i.e. not-data-center type) that you can't do on Windows Pro that you can on Server.  One though that is changing is rfs file systems, I think beginning soon (now?) you can't use them on regular pro, but need a special version of windows 10 or server.  I think rfs is immature though, personally (at least it was when I last looked).  It had the design to be a zfs-like file system, but they had not finished its support -- no obvious way to scan for problems, to fix them, it was all "set it and forget it and hope it worked because we won't tell you anything" type of microsoft fluff when I last looked.  Storage pulls with ntfs, on the other hand, I use on pro and am quite happy with (though you need some powershell magic to get much use of them).


RFS = ???

Phil Burton


----------



## Linwood Ferguson

PhilBurton said:


> RFS = ???



Sorry, I mis-remembered; it's ReFS, for Resilient File System.

Resilient File System (ReFS) overview

btrfs and zfs are two better-known (Linux-y) competitors. The object is a design that ensures (a) interruptions (e.g. an unexpected power outage) cannot corrupt data, only cause loss of what was being written; (b) built-in checksums or similar for validating that file contents are not changing and are read correctly; and (c) optionally, redundancy for automated recovery of data if corruption is detected. It's ideal for long-term storage of large amounts of data.

Rather than adopt a current standard, Microsoft had to reinvent the wheel of course.

When I built my last PC I was going to use it and experimented heavily; I even posted for Microsoft's comment the observation that there were no management tools exposed (they agreed, and said "future"). Lately they announced they are pulling support for it from Windows 10 Pro.


----------



## gabrieljorby

Hi,

I've tried not using an iSCSI system; instead I have used a sync solution which keeps the files on the computer that runs Lightroom.
Synology NAS offers sync capabilities, with client and server apps. It's named Synology Cloud Station Drive or Synology Drive.

Lightroom accesses the catalog from a local path (not network); then, as soon as I change something in the catalog, it syncs back to the NAS. I usually need to access previews and make slight changes to the metadata. Because the referenced files are not available, the metadata is saved in the catalog until the referenced files are available again.

When I'm on the same local network as the NAS, I can also access the referenced media and save the metadata in the file or in the sidecar.

The only issue with this is that you must be the only Lightroom user accessing the catalog at any one time. Otherwise, things screw up and you start getting dupes and .conflicts files.
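One way to guard against that: while Lightroom has a catalog open it keeps a .lrcat.lock file next to the .lrcat. If your sync tool also replicates the lock file, a second machine can at least check for it before opening the catalog. A minimal sketch with hypothetical stand-in paths (not a real catalog):

```shell
# Hypothetical demo paths; a real catalog lives wherever Lightroom put it.
CAT="/tmp/lr_demo/MyCatalog.lrcat"
mkdir -p /tmp/lr_demo && touch "$CAT"     # stand-in catalog for the demo

# Lightroom creates "<catalog>.lrcat.lock" while the catalog is open.
if [ -e "${CAT}.lock" ]; then
    echo "catalog appears to be open on another machine"
else
    echo "safe to open"
fi
```

This is only a best-effort check, of course; sync latency means the lock file can arrive late, so discipline (one user at a time) is still required.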

hope this helps.


----------



## cdubea

I've got a similar setup, but my catalog file is stored locally on whichever machine I'm using. I use Resilio Sync to synchronize the catalog to my Synology NAS, which in turn synchronizes the catalog to my other computer.

This works just fine, and I've had no catalog issues in some three years of doing it this way.

I tried using Google Drive to store the catalog, but it was always getting corrupted.

Resilio Sync is free to use, and there is a native app for Synology NAS boxes.

Highly recommended.


----------



## miktyur

Hi, I'm planning to go NAS for my files. I've just started researching.
So -- will Lightroom work with a Synology NAS set up with the catalog on my laptop and the files on the NAS drive?
Anything we need to do? I'm interested in not doing iSCSI; I have a Mac.

thanks


----------



## Victoria Bampton

Yes, that'll work miktyur. It can be a bit slower accessing the image files across a network, so you might want to build smart previews for the files you're actually working on (Library menu > Previews > Build Smart Previews) and check the Prefer smart previews instead of originals when editing checkbox in Preferences > Performance to make that a bit quicker.


----------



## miktyur

Victoria Bampton said:


> Yes, that'll work miktyur. It can be a bit slower accessing the image files across a network, so you might want to build smart previews for the files you're actually working on (Library menu > Previews > Build Smart Previews) and check the Prefer smart previews instead of originals when editing checkbox in Preferences > Performance to make that a bit quicker.


thanks victoria! now off to save up for a nas drive


----------



## clee01l

miktyur said:


> thanks victoria! now off to save up for a nas drive


Some pointers:  A Gigabit Ethernet cable from the router to your computer and to the (little) computer on the network that is the NAS will give you the best possible data-transfer speeds.    I have been down the NAS route and eventually abandoned the project due to performance issues moving data through a network.   For your Mac, a non-network solution might be best.  If your Mac supports Thunderbolt 2 or Thunderbolt 3, a TB3 external disk drive will give you transfer speeds equivalent to the internal disk drive.    I moved from a NAS to several TB3 EHDs to hold the data that does not fit on my internal Mac drive.
If you have multiple computers on the network, you can share data disks on one computer with any other computer (this is what a NAS does, in essence).  I have an iMac and a MBP.  I can get to any shared file on my iMac from my MBP as long as both are on the same network.  Of course, any time you move data across an Ethernet network, you are limited to the available throughput of the Ethernet.
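To put rough numbers on that throughput limit (back-of-envelope nominal peaks; real-world rates are lower due to protocol overhead):

```python
# Nominal peak transfer rates, ignoring protocol overhead.
gbe_mb_s = 1_000_000_000 / 8 / 1_000_000    # 1 GbE  -> 125 MB/s
ten_gbe_mb_s = 10 * gbe_mb_s                # 10 GbE -> 1250 MB/s
sata3_ssd_mb_s = 550                        # typical SATA III SSD ceiling
tb3_mb_s = 40_000_000_000 / 8 / 1_000_000   # Thunderbolt 3 -> 5000 MB/s
print(gbe_mb_s, ten_gbe_mb_s, sata3_ssd_mb_s, tb3_mb_s)
```

By these figures, plain Gigabit Ethernet sits well below a single SATA SSD, while 10 GbE and Thunderbolt 3 sit above it; at that point latency and small-I/O overhead, not raw bandwidth, tend to dominate.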


----------



## miktyur

clee01l said:


> Some pointers:  A Gigabit Ethernet cable from the router to your computer and to the (little) computer on the network that is the NAS will give you the best possible data-transfer speeds.    I have been down the NAS route and eventually abandoned the project due to performance issues moving data through a network.   For your Mac, a non-network solution might be best.  If your Mac supports Thunderbolt 2 or Thunderbolt 3, a TB3 external disk drive will give you transfer speeds equivalent to the internal disk drive.    I moved from a NAS to several TB3 EHDs to hold the data that does not fit on my internal Mac drive.
> If you have multiple computers on the network, you can share data disks on one computer with any other computer (this is what a NAS does, in essence).  I have an iMac and a MBP.  I can get to any shared file on my iMac from my MBP as long as both are on the same network.  Of course, any time you move data across an Ethernet network, you are limited to the available throughput of the Ethernet.


Thanks for that info. I will use it for storage only, and every time I work on a project I'll move the files onto my computer. I was after the redundancy of a drive. I have ended up with 20 or more 4 TB externals lol. I was looking into a Synology 8-bay to add drives slowly.


----------



## Michael Bateman

Okay. Here’s a dopey idea:

What if you had a really tricked out NAS with LOTS of RAM and cores and a fast NIC and network and built a virtual machine on it. All the storage would be “local” to the (virtual) machine running Lightroom. You could have a huge expandable volume for storing images, no?

I know this is an expensive option but so long as you had a good console to access the virtual machine you’d be set. This would NOT solve any of the multiuser issues. 

Just an idea. Thanks for this thread. Lots of fine information here. Including a lot of things that WON’T work. Maybe I have added to the second list!

Michael



Sent from my iPhone using Tapatalk


----------



## Linwood Ferguson

If it's a VM running "on" the NAS and local to it, then it's not a NAS at all (from the perspective of the VM).  Well, unless rather than using the storage locally you access it via a CIFS/SMB/NFS/etc. share, which would be a bit silly.

NAS more or less by definition means attached over the network.  What's implemented inside of it is a whole different matter.  I keep a NAS system that I built from my prior desktop and use it for backup.  It didn't stop being the same desktop hardware when I turned it into a NAS.  Indeed, it actually DOES have local programs running on it, including a network video recorder.   The NVR accesses the drives locally, not as NAS.  My nightly backup of my current desktop accesses the drives over the network, i.e. as a NAS.  Same drives; it's about access, not device.

I could (and used to) run Lightroom locally on it with the catalog stored locally.  I should not run Lightroom on my current desktop with the same catalog in the same place, accessing it over the network.


----------



## clee01l

If LR is installed locally on the NAS (actually on another computer on the network, running an OS that can install LR), then everything is local to that machine.  You would need to log in remotely to that machine to run LR; your computer would then behave as a dumb terminal.  I have not attempted this remote login recently, but the screen refresh on the terminal is glacial.


----------



## Michael Bateman

clee01l said:


> I have not attempted this remote login recently, but the screen refresh on the terminal is glacial.



I hear you! This truly is a half-baked idea. Thinking outside the box actually requires that you know where and what the box is. For sure. But I knew some computer animators working for a large movie studio you have heard of in Southern California. All their jobs got moved to Canada, and during and after the transition, stragglers would work from terminals down south, with super-fast connections to their data (large amounts) from a keyboard and monitor. What I am suggesting would be local, over a 10GbE connection.

I am still researching this idea. I’ll start a new thread (maybe elsewhere) if I get us too far off topic, but at the heart of this thread is a storage issue. Many of us are pushing Lightroom Classic’s limitations on the desktop and have workflows not yet well supported by Lightroom CC (just Lightroom now!).

To the other comment: yes, the NAS stops being a NAS when you use it as I have discussed, but within one’s own NAS wouldn’t I have all the other advantages of what people normally get a NAS for? Backup automation, expandability, etc. I mean, I would have all the tools necessary to create another virtual machine in support of a local workgroup, provided I have good, fast local cables and switches supporting 10GbE?  I confess I have limited experience with VMware. I have built machines on AWS, but that’s a lot different from running VMware on my own hardware, I would imagine.

It’s just an idea. I am new to the class. Send me a private message if you want me to take this elsewhere; I get it. This is a Lightroom forum. And trust me, if I had synchronous bi-directional gigabit internet, many of my current issues with Lightroom CC and Lightroom Web would probably not be issues. I am just trying to play to my advantages as they are now, which are a super-fast local network (10GbE) and storage, and 300/30 internet speed. Time is money and I shoot a LOT.

Thanks all. 

Michael



Sent from my iPhone using Tapatalk


----------



## PhilBurton

Michael Bateman said:


> I hear you! This truly is a half baked idea. Thinking outside the box actually requires that you know where and what the box is. For sure. But I knew some computer animators working for a large movie studio you have heard of in Southern California. All their jobs got moved to canada and during and after the transition stragglers would work from terminals down south with super fast connections to their data (large amounts) from a keyboard and monitor. What I am suggesting would be local over a 10gbe connection.
> 
> I am still researching this idea. I’ll start a new thread (maybe elsewhere) if I get us too off topic but at the heart of this thread is a storage issue. Many of us are pushing Lightroom Classic’s limitations on the desktop and have workflows not yet well supported by Lightroom CC (just LightRoom now!)
> 
> To the other comment yes, the NAS stops being a NAS when you use it as I have discussed but within one’s own NAS wouldn’t I have the all the other advantages of what people normally get a NAS to do? Backup automation, expandability, etc. I mean I would have all the tools necessary to create another virtual machine in support of a local workgroup provided I have good fast local cables and switches in support of 10gbe?  I confess I have limited experience with VMware. I have built machines on AWS but that’s a lot different than running VMWare on my own hardware I would imagine.
> 
> It’s just an idea. I am new to the class. Send me a private message if you want me to take this elsewhere, I get it. This is a LightRoom forum. And trust me if I had synchronous bi-directional gigabit INTERnet many of my current issues with LightRoom CC and LightRoom Web would probably not be issues. I am just trying to play to my advantages as they are now which are super fast local network (10gbe) and storage and 300/30 internet speed. Time is money and I shoot a LOT.
> 
> Thanks all.
> 
> Michael
> 
> 
> 
> Sent from my iPhone using Tapatalk


Michael,

If you have 10 GbE and have, or are willing to put up, a NAS, then you should have the funds for a reasonably fast workstation with a big enough SSD.  Having the LR catalog on a NAS is a real performance boost.  If you shoot a lot, then keep your actual image files on a fast (7200 rpm) HDD, also local.  SSDs have dropped a lot in price in the past year.

There is also a more fundamental issue.  The Lightroom catalog is based on SQLite, which is designed for local operation only, not networked operation.  In Windows (not sure about macOS), the file-access commands are different between local access and network access.  With network access, you risk subtle but real catalog corruption.

Phil


----------



## Michael Bateman

PhilBurton said:


> you should have the funds for a reasonably fast workstation with a big enough SSD. Having the LR catalog on a NAS is a real performance boost. If you shoot a lot, then keep your actual images files on a fast (7200 rpm) HDD, also local. SSDs have dropped a lot in price in the past year.



Huge thank you brother! The price drop in SSDs in the past year has kinda snuck up on me. You are correct I might just have a look and rethink this. I do think you meant to say that having the catalog on an SSD is a huge performance boost. Yeah. 

At the end of the day, my issues with Lightroom are that it’s not an enterprise tool nor a workgroup tool, and that I like it too much! I am addicted to the ease with which I am able to edit massive amounts of photos while chilling out on the couch! I am such a self-indulgent lazy old guy, dang it! I have this love/like relationship with Lightroom mobile. I wish I could have Adobe’s cloud in my own server rack. I want to have a bunch of friends over to take wildlife pictures in the woods out back and have them waiting for us back at the house, ready to edit, when we go back. But I digress.

Thanks for indulging my (probably futile) brainstorming. This is a great group. Thanks very much. 

Michael


Sent from my iPhone using Tapatalk


----------



## PhilBurton

Michael Bateman said:


> Huge thank you brother! The price drop in SSDs in the past year has kinda snuck up on me. You are correct I might just have a look and rethink this. I do think you meant to say that having the catalog on an SSD is a huge performance boost. Yeah.
> 
> At the end of the day my issues with Lightroom are that it’s not an enterprise tool nor a workgroup tool and that I like it too much! I am addicted to the ease in which I am able to edit massive amounts of photos while chilling out on the couch! I am such a self indulgent lazy old guy dang it! I have this love/like relationship with Lightroom mobile. I wish I could have adobe’s cloud in my own server rack. I want to have a bunch of friends over to take wildlife pictures in the woods out back and have them waiting for us back at the house ready to edit when we go back. But I digress.
> 
> Thanks for indulging my (probably futile) brainstorming. This is a great group. Thanks very much.
> 
> Michael
> 
> 
> Sent from my iPhone using Tapatalk


Michael,

Yes, I meant SSD when I wrote NAS.  And yes, this is a truly great group of people. 

I am somewhat surprised that Adobe hasn't tried to build a workgroup/enterprise version of Lightroom, because the rest of their Creative Suite products are all workgroup/enterprise-ready and have corporate pricing.  It very well may be (and I am completely speculating here) that the cloudy version will eventually evolve to have all of the features of the desktop version and be inherently workgroup/enterprise-ready.  I think I know the technical obstacles to converting the current desktop version into a workgroup tool, but it's not worth any discussion here.

If you have your own server rack, then (1) I am bright green with envy, and (2) may I suggest the following components for a killer homebuilt Lightroom workstation:


AMD 2nd Gen RYZEN Threadripper 2990WX (32-core, 64-thread, 4.2 GHz max boost, 3.0 GHz base, socket sTR4, 250 W) or an Intel Core i9 Extreme Edition
A motherboard with 128 GB of RAM, and at least 2 M.2 slots for the fastest NVMe SSDs.
An nVidia 2080 Super GPU card.
2 TB SSD
etc.


----------



## johnrellis

Since this thread appears at the top of Google results for "lightroom catalog NAS", I'd like to clarify the technical issues of why Lightroom restricts catalogs from being placed on network volumes.  (Note that Storage Area Network (SAN) systems using SCSI or iSCSI over ethernet are completely different technical beasts and there is no evidence indicating that placing a catalog on a SAN volume would lead to corruption.)  Here is what I wrote in the Adobe Lightroom forums:

------------------------------------------------
Re: Operating Lightroom CC Classic via network drive? 

The issue with catalogs stored on network drives is that SQLite relies on the filesystem to provide correctly working file locking, and when the LR team was testing this 12 years ago, many implementations of network file systems didn't implement file locking correctly. The SQLite documentation warned about it:


> "One should note that POSIX advisory locking is known to be buggy or even unimplemented on many NFS implementations (including recent versions of Mac OS X) and that there are reports of locking problems for network filesystems under Windows. Your best defense is to not use SQLite for files on a network filesystem."


Senior Adobe engineer Dan Tull did some quickie experiments in 2007 [using NFS, not SMB] and found that by simply disconnecting a network cable, he could corrupt a LR catalog stored on a file server.  Given that there were many potential network-attached storage (NAS) products with this problem, Adobe felt, not unreasonably, that the risk and cost of users corrupting their catalogs was too high. There used to be a hidden config.lua switch to override LR's prohibition, but that stopped working a long time ago.

See this long discussion topic starting here, including Dan Tull's contributions: Lightroom Classic and CC: Allow Catalog to be stored on a networked drive. | Photoshop Family Custom...

Twelve years later, the landscape has changed considerably -- Windows SMB implementations are much more mature, and Samba has become the standard non-Microsoft implementation of the SMB protocol.  LR has switched to using the write-ahead logging (WAL) mode of SQLite, which has different issues with network files.   I'm not aware of anyone who has done recent tests with network-stored LR catalogs or written authoritatively about SQLite and newer file-server implementations.
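The WAL behavior is easy to observe with any SQLite database. A minimal sketch using Python's bundled sqlite3 module (nothing Lightroom-specific): in WAL mode SQLite maintains a -wal log and a -shm shared-memory index alongside the database, and it is that memory-mapped -shm file, coherent only between processes on a single host, that leads SQLite's documentation to call WAL unsafe on network filesystems.

```python
import os
import sqlite3
import tempfile

# Create a throwaway database and switch it to write-ahead logging.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# A write makes SQLite materialize the companion files.
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()

wal_exists = os.path.exists(path + "-wal")  # the write-ahead log
shm_exists = os.path.exists(path + "-shm")  # the shared-memory index
print(mode, wal_exists, shm_exists)
con.close()  # a clean close checkpoints and removes the companion files
```

On a local disk the pragma reports 'wal' and both companion files appear; on many network mounts the shared-memory mapping cannot work correctly, which is the crux of the problem.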

Adobe has shown no inclination to re-examine the issue. Last year they changed LR to use the higher-performance write-ahead logging mode of SQLite and broke the longstanding ability to import from catalogs stored on network servers. Even though the fix for that is trivial, Adobe hasn't implemented it.  I'm not sure the LR team has the requisite engineering skills to re-examine competently network storage of catalogs.

Finally, note that there are two somewhat distinct issues: Allowing users one-user-at-time access to catalogs stored on network servers, and allowing multiple users to access server-stored catalogs concurrently.  The former is perhaps solvable with a small amount of re-engineering of how LR uses SQLite or perhaps doing a relatively low-level port to another database engine. The latter would require significant re-engineering of LR, introducing an application-level server architecture (and probably a database more amenable to that architecture, e.g. MySQL), and introducing new concepts and features into the user interface.


----------



## johnrellis

Correction: the bracketed note in my previous post should have read "[Mac server, Windows XP client, SMB]" rather than "[using NFS, not SMB]".


----------



## nevrsmer

gfinlayson said:


> OK, this has probably been asked before, but.......
> 
> I've just upgraded my network and storage setup  - I have a Windows 10 desktop PC in my office (dedicated concrete 'man-shed' in the back garden). I have Cat 6 GbE for internet and the like, plus a dedicated 20,000 Mb/s link (2 x 10Gb-SR in LACP over fibre optic) to a Synology RS3617xs with 12 x 4 TB HDDs in RAID10. NAS is in the house for security reasons and the fact that it's FAR TOO LOUD to tolerate in my office.
> 
> Rather than running my catalog on the local PC and having to back it and the backups   up to the NAS, and then having the NAS sync the backup to the local storage/cloud, is there a really good reason why I couldn't/shouldn't run the catalog from the NAS on an iSCSI target volume?
> 
> My plan going forward is to keep photo folders and catalogs together in one place if possible and keep the catalog sizes relatively small.
> 
> I've read lots of arguments about network speed limitations being a reason to not have the catalog networked, but my network transfer speeds are significantly faster than even a local SATA SSD would be. Are there other good technical reasons why it's not a good idea?



Hello gfinlayson,

I was wondering what the outcome was if you attempted to run a Lightroom catalog on an iSCSI target volume or if you found some other way to do so.

Please let me know when time permits.  I find myself in a similar situation, and would love to be able to have everything - the catalog and image files - on a centralized storage location.

Thank you!


----------



## Hal P Anderson

You couldn't do it in 2012, and you still can't.


----------



## johnrellis

Hal, nevrsmer is asking about using iSCSI, which is a SAN (Storage Area Network) technology for accessing disks over the network.  SAN is much different than NAS (Network Attached Storage), which is what most people want to use with LR.  To the client operating system, the remote disks appear to be locally attached. The file-locking issues that LR and SQLite have with NAS server implementations don't apply to SAN volumes, since the client operating system is doing all locking exactly the same as with locally attached disks.

A quick web search shows a number of reports of people successfully placing LR catalogs on SAN volumes residing on Synology, QNAP, and other such devices:

https://www.youtube.com/watch?v=JaCKA1YaOoM 
https://forum.qnap.com/viewtopic.php?p=453677&sid=5b34dec9be3591cca1c7fc0d54b2c656
https://community.spiceworks.com/topic/192537-lightroom-on-nas-san


----------



## Hal P Anderson

John,
I stand corrected. Thanks.


----------



## gorlen

iSCSI doesn't support locking, so the shared volume's filesystem can be corrupted if two or more client machines attempt to create or write files at the same time, unless they are all running a clustered file system specifically designed for this type of configuration (Clustered file system - Wikipedia).

I've been accessing catalogs on a NAS from Windows PCs since 2011.  See kgorlen/lightroom for details on why this works, the risks involved, and how to set it up.


----------



## johnrellis

Excellent detail, thanks for sharing that.

You say, "iSCSI doesn't support locking".  If one client machine is accessing an iSCSI drive, and multiple processes on that single machine place OS locks on the same file, will those local locks work properly?


----------



## gorlen

Yes, that should work.  It's equivalent to accessing a directly attached (unshared) drive, except for performance.


----------



## johnrellis

That's good to know. As you've read, SQLite depends on file locking to operate correctly, and a multi-threaded program that accesses a database concurrently depends on correct locking. I believe that LR has multiple threads accessing the database concurrently, which is why Adobe engineer Dan Tull's NAS tests failed in 2007 (his environment was testing an NFS file server with broken locking).
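A minimal sketch of what correct locking buys you, using Python's bundled sqlite3 module; two connections to one local file stand in for two concurrent writers. On a local (or iSCSI-backed) volume the OS enforces the lock and the second writer is refused, whereas a file server with broken advisory locking could let both proceed and corrupt the database:

```python
import os
import sqlite3
import tempfile

# isolation_level=None leaves transaction control to explicit SQL.
path = os.path.join(tempfile.mkdtemp(), "lock_demo.db")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE t (x)")

a.execute("BEGIN IMMEDIATE")        # writer a takes the write lock
try:
    b.execute("BEGIN IMMEDIATE")    # writer b retries for 0.1 s, then fails
    second_writer_blocked = False
except sqlite3.OperationalError:    # "database is locked"
    second_writer_blocked = True
a.execute("ROLLBACK")
print(second_writer_blocked)
```

When the locks work, `second_writer_blocked` comes out True; a filesystem that silently grants both locks would print False, and that is exactly the failure mode the 2007 tests exposed.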


----------



## gorlen

Yes.  The README at the link I gave above (kgorlen/lightroom) has references to Dan Tull's tests and the relevant SQLite/SMB documentation.  The situation is similar to that of GPUs: Lightroom *should* work with any GPU that *correctly* implements a supported interface (DirectX 12, Metal), but Adobe has actually tested some to assure compatibility and provides a list.  Adobe could do the same for popular NAS and file-server products and remove the network-drive restriction.


----------



## johnrellis

The difference between GPUs and file servers, of course, is that a failing GPU is non-destructive of the catalog. While certifying popular NAS products makes eminent sense, I doubt that after all these years Adobe would do it. (I doubt whether the LR team has the requisite systems-engineering skills -- the  engineers who built LR are all long gone.)


----------



## atj777

I have the reverse problem... sort of...

In my quest to run Lightroom 6.14 on my MacBook Pro with Catalina, I have installed a Virtual Machine running Mojave and installed Lightroom 6.14 on that.  It works perfectly.  So far so good.

My main catalog and all my image files are on my Mac Mini.  I just want to use the MacBook Pro if I'm travelling (which won't be happening for a long time) and for being able to review some images while I'm sitting in the backyard or even in front of TV (and when I'm back travelling to work by train).  For the reviewing, I would just export as catalog (with smart previews) from my main catalog and copy to the MacBook Pro.  I used to do this with my old MacBook and it worked well.

I would rather copy them to the local drive of the MacBook and then access them via the virtual machine, so I don't have to make too large a virtual drive on the VM.  The problem is that shared folders appear as network folders on the VM, and so Lightroom won't let me open them.

Does anyone know of a workaround?  They aren't strictly on the network, and there's no way I can access them from Lightroom on Catalina.


----------



## atj777

I should have mentioned I'm using VMware Fusion 11.5.


----------



## johnrellis

I think the best you can do is to put the catalog folder on the virtual machine's drive and put the photos on the physical drive. But that's mildly inconvenient, since after exporting the catalog from your Mini, you'll have to separate the photos from the catalog.


----------



## atj777

johnrellis said:


> I think the best you can do is to put the catalog folder on the virtual machine's drive and put the photos on the physical drive. But that's mildly inconvenient, since after exporting the catalog from your Mini, you'll have to separate the photos from the catalog.


I don't need the photo files as I'm exporting with Previews and Smart Previews.  I end up with three files per export: Catalog, Smart Previews, and Previews.  They all have to stay together.


----------



## johnrellis

Some people have used symbolic links to place the previews and smart-previews subfolders in different locations than inside the catalog folder, e.g. 
Is it possible to move the file location for Lightroom Previews?
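As a sketch of that approach on macOS or Linux (hypothetical stand-in paths; on Windows `mklink /D` plays the same role): move the previews folder to another volume and leave a symbolic link behind so Lightroom still finds it beside the catalog.

```shell
# Hypothetical stand-in paths for the demo (not a real catalog).
rm -rf /tmp/fastdisk /tmp/lr_cat
mkdir -p /tmp/fastdisk "/tmp/lr_cat/MyCatalog Previews.lrdata"

# Relocate the previews folder, then symlink it back into place.
mv "/tmp/lr_cat/MyCatalog Previews.lrdata" /tmp/fastdisk/
ln -s "/tmp/fastdisk/MyCatalog Previews.lrdata" "/tmp/lr_cat/MyCatalog Previews.lrdata"
ls -ld "/tmp/lr_cat/MyCatalog Previews.lrdata"   # now a symlink
```

Do this with Lightroom closed, and note it only helps for the previews folders; as discussed below for the catalog itself, Lightroom checks where the .lrcat actually resides.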


----------



## Linwood Ferguson

I don't speak mac, but in windows you can mount a virtual drive in the hypervisor as well.  So for example if my machine X has a VM Y, and Y has a virtual disk Z, when Y is not running I can mount Z in X and use it.

Could something like that help?  A separate virtual drive that is mounted either in the VM or in the main machine, just not both? 

(Note if it's different OS versions that might complicate things if the drive formats changed). 

Again, I don't quite know how to spell Mac.  Mc.  Something like that.


----------



## atj777

johnrellis said:


> Some people have used symbolic links to place the previews and smart-previews subfolders in different locations than inside the catalog folder, e.g.
> Is it possible to move the file location for Lightroom Previews?


Yeah, I tried symbolic links.  Lightroom still knows the catalog is on a "network" drive.


----------



## atj777

Ferguson said:


> I don't speak mac, but in windows you can mount a virtual drive in the hypervisor as well.  So for example if my machine X has a VM Y, and Y has a virtual disk Z, when Y is not running I can mount Z in X and use it.
> 
> Could something like that help?  A separate virtual drive that is mounted either in the VM or in the main machine, just not both?
> 
> (Note if it's different OS versions that might complicate things if the drive formats changed).
> 
> Again, I don't quite know how to spell Mac.  Mc.  Something like that.


It is easy enough on a Mac to create a disk image that can be mounted to either the host or the guest - in fact, that's exactly how Mojave was installed on the guest.

The problem with this approach is that it carves off a chunk of the host's hard drive, and so it really doesn't provide any benefit over just making the guest's drive larger.


----------



## Linwood Ferguson

atj777 said:


> Yeah, I tried symbolic links.  Lightroom still knows the catalog is on a "network" drive.


Does it have a concept of mount points, where the remote drive becomes a folder in another drive?


----------



## atj777

Ferguson said:


> Does it have a concept of mount points, where the remote drive becomes a folder in another drive?


It does.  It is very much Unix under the covers.  Lightroom still thinks it is a folder on a network drive.


----------



## PhilBurton

Ferguson said:


> I don't speak mac, but in windows you can mount a virtual drive in the hypervisor as well.  So for example if my machine X has a VM Y, and Y has a virtual disk Z, when Y is not running I can mount Z in X and use it.
> 
> Could something like that help?  A separate virtual drive that is mounted either in the VM or in the main machine, just not both?
> 
> (Note if it's different OS versions that might complicate things if the drive formats changed).
> 
> Again, I don't quite know how to spell Mac.  Mc.  Something like that.


Ferguson,

I rarely use VMs, but the ability to mount that virtual disk Z sounds interesting.  How do you do that?  I'm using MS's VM feature.

Phil


----------



## Linwood Ferguson

If it's a Hyper-V disk, i.e. a .vhd or .vhdx, just make sure no VM is using it, then right-click and choose "Mount"; it's usually the top option in the context menu.

Do NOT do this if there's a checkpoint outstanding, as it will break the chain between parent and deltas and you will not be able to start the VM.  But if it's just a regular drive you can mount it, even if it's the system disk for a VM.

And do NOT let two things mount it at once (e.g. you and the running VM).  I think Windows checks and prevents this, but I'm not sure.


----------



## gorlen

gorlen said:


> iSCSI doesn't support locking, so the shared volume's filesystem can be corrupted if two or more client machines attempt to create or write files at the same time, unless they are all running a clustered file system specifically designed for this type of configuration (Clustered file system - Wikipedia).
> 
> I've been accessing catalogs on a NAS from Windows PCs since 2011.  See kgorlen/lightroom for details on why this works, the risks involved, and how to set it up.



I recently discovered and am now trying out a workaround on Windows to allow a catalog on a NAS with previews on a local drive to improve performance: a folder on a local SSD with symlinks to the .lrcat file and the Backups directory on the SAMBA network share.

While LrC displays the "Lightroom cannot open the catalog named ... located on network volume ..." message when you double-click or directly open the .lrcat symlink, clicking "OK" and then choosing the same symlink from the Select Catalog dialog box will succeed, and it can be set as the default startup catalog.  Also, LrC locks the catalog on the network share, preventing use/corruption by other LrC instances on the same or other machines; i.e., the .lrcat.lock file LrC creates need not be on the network share in order to lock out other instances.
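
The layout described above can be sketched as follows.  This is an illustrative Python sketch with made-up paths, using POSIX-style symlinks; on Windows the equivalent would be `mklink` (or `os.symlink`, which may require elevated privileges or Developer Mode).  It is not the exact setup, just the shape of it:

```python
import os
import tempfile

# Stand-ins for the real locations (hypothetical paths): 'share' plays
# the role of the NAS/Samba share holding the catalog, 'local' plays
# the role of the folder on the local SSD.
root = tempfile.mkdtemp()
share = os.path.join(root, "nas_share")
local = os.path.join(root, "local_catalog")
os.makedirs(os.path.join(share, "Backups"))
os.makedirs(local)
with open(os.path.join(share, "Catalog.lrcat"), "w") as f:
    f.write("catalog-bytes")  # placeholder for the real SQLite file

# The workaround: the local folder contains symlinks to the .lrcat
# file and the Backups directory on the share.
os.symlink(os.path.join(share, "Catalog.lrcat"),
           os.path.join(local, "Catalog.lrcat"))
os.symlink(os.path.join(share, "Backups"),
           os.path.join(local, "Backups"), target_is_directory=True)

# Reads through the symlink reach the file on the "share".
with open(os.path.join(local, "Catalog.lrcat")) as f:
    print(f.read())
```

Lightroom is then pointed at the symlink in the local folder rather than at the share directly.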

I'll update the details at kgorlen/lightroom in a few weeks when I'm satisfied that this works without issues.


----------



## Linwood Ferguson

I'm sure one can work around Adobe's restrictions.  It's worth asking if it is a good idea.  It's not like Adobe did that to make more money selling disk drives.


----------



## gorlen

Ferguson said:


> I'm sure one can work around Adobe's restrictions.  It's worth asking if it is a good idea.  It's not like Adobe did that to make more money selling disk drives.


See kgorlen/lightroom for a description of why it's technically sound and for references to background on how Adobe came to impose this restriction.


----------



## Linwood Ferguson

I've read it in the past.  I've done a fair amount of SQLite programming including over networks.   

But two considerations: 

NAS connectivity is far more problematic than local drives.  Especially since so many people use wifi.  The frequency with which you may have interrupted I/O operations is likely to be far higher on NAS than a local drive.  Every time you have an interruption of I/O on a stupid little SQLite database you risk corruption. It's an odds game.

It's a reason I always recommend people not put catalogs on EHDs.  They are more reliable than NAS connectivity but less reliable than local, because there's always a dumb human who might hit the cable.  Unfortunately they are awfully convenient for people using multiple computers.

Also, and a moot point for many perhaps: If you ever wanted Adobe to help with a corrupted catalog, telling them it is on a NAS is not motivational.


----------



## gorlen

Ferguson said:


> I've read it in the past.  I've done a fair amount of SQLite programming including over networks.
> 
> But two considerations:
> 
> NAS connectivity is far more problematic than local drives.  Especially since so many people use wifi.  The frequency with which you may have interrupted I/O operations is likely to be far higher on NAS than a local drive.  Every time you have an interruption of I/O on a stupid little SQLite database you risk corruption. It's an odds game.
> 
> It's a reason I always recommend people not put catalogs on EHD's.  They are more reliable than NAS connectivity but less reliable than local, because there's always a dumb human that might hit the cable.  Unfortunately they are awfully convenient for people using multiple computers.
> 
> Also, and a moot point for many perhaps: If you ever wanted Adobe to help with a corrupted catalog, telling them it is on a NAS is not motivational.


All true, and I'd definitely not do this over wifi.

Other considerations in favor of NAS are that it is easier and less expensive than client PCs to set up (1) with RAID to protect against single-drive failures, (2) with power from a UPS to allow graceful shutdown when power fails, and (3) to be "always up" to run nightly backups, when catalogs are less likely to be in use.

Since you've worked with SQLite over networks, perhaps you know the details of how Windows handles database files.  From what I've read, Windows will automatically set oplocks on network files and cache them locally.  Since only a single Lightroom instance at a time can be accessing the database files, the server won't receive any requests from other clients to break the oplocks, resulting in all/most database operations happening locally until the files are closed.  Is this actually what happens?


----------



## Linwood Ferguson

What I've done in the past is access SQLite on Windows (e.g. Calibre as a repository) from embedded Linux systems (e.g. a player piano system using Windows as a library).  I recall needing to disable byte-range locking to get it to work with smb/cifs (nobrl in the mount options).  If you didn't disable byte-range locking, write operations would fail (I do not recall if all, or some).  In my case it was very well controlled, and only for me (not a client), as I only updated on Windows when adding music, so all access was over cifs, and I did not pursue further once I found a workaround.  In that case caching on or off made no difference.  And that was 2017-ish.  Haven't gone back to it (though oddly enough that system failed last week and I'm about to rebuild it, with fresher Linux as what I had is no longer updatable, so I might end up refreshing my memory).

Your access is Windows to Windows, I take it.  Frankly I've done little of that with databases; my Windows-to-Windows work was almost always MS SQL and a bit of Oracle, which isn't filesystem oriented.

Sorry.  Long ramble for a short subject: we each must decide our tolerance for risk.  I tend to be conservative for things hard to fix, like lightroom's catalog.


----------



## PhilBurton

gorlen said:


> I recently discovered and am now trying out a workaround on Windows to allow a catalog on a NAS with previews on a local drive to improve performance: a folder on a local SSD with symlinks to the .lrcat file and the Backups directory on the SAMBA network share.
> 
> While LrC will display the "Lightroom cannot open the catalog named ... located on network volume ..." when double-clicking or directly opening the .lrcat symlink, clicking "OK", then choosing the same symlink from the Select Catalog dialog box will succeed, and it can be set as the default startup catalog.  Also, LrC locks the catalog on the network share, preventing use/corruption by other LrC instances on the same or other machines; i.e., the .lrcat.lock file LrC creates need not be on the network share in order to lock out other instances.
> 
> I'll update the details at kgorlen/lightroom in a few weeks when I'm satisfied that this works without issues.


@gorlen,
In your particular use case and with your technical skills, perhaps you have a design that prevents catalog corruption.  But in general, as it has been said many, many times in this forum, Adobe engineered the catalog in such a way that storing the catalog on any sort of network drive creates a risk of data corruption.  

_How much is your time worth?  How important is it to absolutely minimize the risk of catalog corruption?_  Pay The Man and get an additional or larger internal or USB external drive.

And don't forget backups.  On this forum, there are many tales of woe posted by people whose catalog got corrupted and didn't have a backup.  Back up every single time you exit Lightroom, not just once a week or once a month.


----------



## gorlen

Ferguson said:


> Your access is windows to windows, I take it.  Frankly I've done little of that with databases; my windows to windows was almost always MS SQL and a bit of Oracle, which isn't filesystem oriented.


Configuration is Windows clients to a NAS running a SAMBA server on some Linux derivative.


----------



## gorlen

PhilBurton said:


> @gorlen,
> In your particular use case and with your technical skills, perhaps you have a design that prevents catalog corruption.  But in general, as it has been said many, many times in this forum, Adobe engineered the catalog in such a way that storing the catalog on any sort of network drive creates a risk of data corruption.



Right, I'd not recommend this for the totally technically unskilled.  The main problem has been that every couple of years, either QNAP or Netgear or CrashPlan or whoever releases a software upgrade that breaks something critical, and diagnosing the problem and dealing with product tech support is a pain.  I'm always looking for more reliable NAS and cloud backup vendors.

From what I've read, it doesn't appear that "Adobe engineered the catalog ...".  The Lightroom catalog is an SQLite database.



PhilBurton said:


> _How much is your time worth?  How important is it to absolutely minimize the risk of catalog corruption?_  Pay The Man and get an additional or larger internal or USB external drive.



Space isn't the issue.  I'm not convinced that simply placing a catalog on a local drive provides the best protection.  In my experience, power, software, hard disk drives, and manual backup procedures fail much more often than hard-wired LAN components or NAS enclosures.



PhilBurton said:


> And don't forget backups.  On this forum, there are many tales of woe posted by people whose catalog got corrupted and didn't have a backup.  Back up every single time you exit Lightroom, not just once a week or once a month.



Good advice.  Reliable backup is my primary motivation for this configuration.  Accessing catalogs from multiple PCs is a bonus.  Unlike a typical desktop PC, the NAS is set up with RAID to protect against single-drive failures, connected to a UPS so cached writes can be flushed to drives when power fails, and is "always up" to automatically run nightly backups -- when Lightroom catalogs are not likely in use -- to the cloud and to two local archives.  The result is five automatic copies of everything, one off-site. The archives, particularly the one off-site, are important protection against ransomware.

And I regularly back up when exiting Lightroom -- to the NAS, of course.


----------



## PhilBurton

gorlen said:


> Right, I'd not recommend this for the totally technically unskilled.  The main problem has been that every couple of years, either QNAP or Netgear or CrashPlan or whoever releases a software upgrade that breaks something critical, and diagnosing the problem and dealing with product tech support is a pain.  I'm always looking for more reliable NAS and cloud backup vendors.
> 
> From what I've read, it doesn't appear that "Adobe engineered the catalog ...".  The Lightroom catalog is an SQLite database.
> 
> 
> 
> Space isn't the issue.  I'm not convinced that simply placing a catalog on a local drive provides the best protection.  In my experience, power, software, hard disk drives, and manual backup procedures fail much more often than hard-wired LAN components or NAS enclosures.
> 
> 
> 
> Good advice.  Reliable backup is my primary motivation for this configuration.  Accessing catalogs from multiple PCs is a bonus.  Unlike a typical desktop PC, the NAS is set up with RAID to protect against single-drive failures, connected to a UPS so cached writes can be flushed to drives when power fails, and is "always up" to automatically run nightly backups -- when Lightroom catalogs are not likely in use -- to the cloud and to two local archives.  The result is five automatic copies of everything, one off-site. The archives, particularly the one off-site, are important protection against ransomware.
> 
> And I regularly back up when exiting Lightroom -- to the NAS, of course.


@gorlen,

I'm sure you realize that you are probably in the top 10% of technical skills of the members of this forum.  The "other 90%" need a total Lightroom hardware/software configuration that is "set it and forget it" with reasonable cost and excellent reliability.


----------



## gorlen

PhilBurton said:


> @gorlen,
> 
> I'm sure you realize that you are probably in the top 10% of technical skills of the members of this forum.  The "other 90%" need a total Lightroom hardware/software configuration that is "set it and forget it" with reasonable cost and excellent reliability.



That was exactly my goal when I decided on this configuration nearly 10 years ago.  Sadly, it's not worked out as well as I'd hoped.  Equipment and storage costs are reasonable, I've not lost data, performance is good, availability is high, and it normally requires only an hour or so monthly to install software updates (for security).  But configuration is complicated and error-prone, and I've experienced serious outages every couple of years when software updates are defective or make unexpected changes to functionality.  When this happens, my computer engineering experience is needed.

But I wonder if *any* consumer electronics product is "set and forget".  My Canon cameras have come closest by far to being trouble free.  Among everything else -- Windows PCs, NAS, Android phones/tablets, video/audio devices, etc. -- there's always something malfunctioning, nearly always due to software/firmware defects or incompatibilities.

I've posted because "how to share a Lightroom catalog among PCs?" is a frequently-asked question on this and many other forums.  The discussions are filled with questionable responses, e.g. "can't be done", "manually switch external drives", "sync with Dropbox/Google Drive", "use iSCSI" (horrors!).  So I've contributed my $.02 worth.


----------



## PhilBurton


gorlen said:


> That was exactly my goal when I decided on this configuration nearly 10 years ago.  Sadly, it's not worked out as well as I'd hoped.  Equipment and storage costs are reasonable, I've not lost data, performance is good, availability is high, and it normally requires only an hour or so monthly to install software updates (for security).  But configuration is complicated and error-prone, and I've experienced serious outages every couple of years when software updates are defective or make unexpected changes to functionality.  When this happens, my computer engineering experience is needed.
> 
> But I wonder if *any* consumer electronics product is "set and forget".  My Canon cameras have come closest by far to being trouble free.  Among everything else -- Windows PCs, NAS, Android phones/tablets, video/audio devices, etc. -- there's always something malfunctioning, nearly always due to software/firmware defects or incompatibilities.
> 
> I've posted because "how to share a Lightroom catalog among PCs?" is a frequently-asked question on this and many other forums.  The discussions are filled with questionable responses, e.g. "can't be done", "manually switch external drives", "sync with Dropbox/Google Drive", "use iSCSI" (horrors!).  So I've contributed my $.02 worth.


I hardly have your computer engineering experience, but I have learned a lot by some reading and a lot of trial-and-error.  I am also the de facto IT support person.  And I agree that there is "always something" at issue, even as Microsoft and other software companies have made big strides in reducing the need for user expertise.  [Do you remember the days when you had to manually set IRQs on your PC, for the parallel and serial ports?  I do, and I don't, mixing my meanings here.]

For most people, I think that the less complex their setup, the happier they will be.  That probably means using an external hard drive from Western Digital (not Seagate!!!), and staying away from questionable websites.

Even my Nikon D3 is not totally problem free.  When the backup battery in the D3 body fell out, or ran low, I got weird dates for my photos.  That's when I learned about EXIFTool and EXIFToolGUI.

I do also remember SCSI, since I worked at the company that produced SASI, but those days are lost in the mists of time.

Phil Burton


----------



## Linwood Ferguson

gorlen said:


> From what I've read, it doesn't appear that "Adobe engineered the catalog ...".  The Lightroom catalog is an SQLite database.


Yes and no.  They use SQLite, but they decided how everything inside hangs together, how normalized (or not) the data structures are.  To a very real degree, how you design the schema determines how fragile the database is.  But more to the current point, how you ACCESS the database also impacts that.  Lightroom is very chatty -- even when you are idle (maybe especially when you are idle) it can be making massive updates to the database and doing so in multiple threads.  Interruption of these idle process updates may cause corruption.  Keeping the database open when not updating, depending on how carefully you manage caching and write logs can cause vulnerabilities in actual idle time.

I do not mean to imply Adobe did this badly -- or well for that matter -- but there is a LOT one can do to make a SQLite database more, or less, vulnerable to corruption.  Not all designs are created equal.


----------



## gorlen

PhilBurton said:


> [Do you remember the days when you had to manually set IRQs on your PC, for the parallel and serial ports?  I do, and I don't, mixing my meanings here.]



I do too.  That was *much* easier than programming the Peripheral Processors (PPs) on the CDC 6000 series.  They didn't have interrupts -- had to poll external devices for status and data.  There were 10 PPs, each with 4096 12-bit words of memory.  Programmed them with punch cards in the wee hours.  Only debugging tool was to force a core* dump of the entire memory, which left the location of the last instruction zeroed out as a hint.  *Tiny ferromagnetic donuts strung on wires.  (Apologies for the off-topic war story.)


----------



## Linwood Ferguson

gorlen said:


> I do too.  That was *much* easier than programming the Peripheral Processors (PPs) on the CDC 6000 series.


" This _book_ is _dedicated_ to _A6_ & _A7_, _without_ which none of the results in this _book_ could _have been saved_ "   [Who said I don't remember anything from college]



[Yes, I realize that was the main processor not the PP's; I can't recall any specifics about PP processing though I actually wrote one of my thesis projects using them.]


----------



## gorlen

Ferguson said:


> " This _book_ is _dedicated_ to _A6_ & _A7_, _without_ which none of the results in this _book_ could _have been saved_ "   [Who said I don't remember anything from college]
> 
> 
> 
> [Yes, I realize that was the main processor not the PP's; I can't recall any specifics about PP processing though I actually wrote one of my thesis projects using them.]


Good one!  Took me a few seconds to get it!  From what book?
Recently read The Friendly Orange Glow -- fun trip for CDC fans.


----------



## gorlen

Ferguson said:


> Yes and no.  They use SQLite, but they decided how everything inside hangs together, how normalized (or not) the data structures are.  To a very real degree, how you design the schema determines how fragile the database is.  But more to the current point, how you ACCESS the database also impacts that.  Lightroom is very chatty -- even when you are idle (maybe especially when you are idle) it can be making massive updates to the database and doing so in multiple threads.  Interruption of these idle process updates may cause corruption.  Keeping the database open when not updating, depending on how carefully you manage caching and write logs can cause vulnerabilities in actual idle time.
> 
> I do not mean to imply Adobe did this badly -- or well for that matter -- but there is a LOT one can do to make a SQLite database more, or less, vulnerable to corruption.  Not all designs are created equal.


The question is how much risk is added by accessing a catalog on a network share?  Here's what makes me suspect that the considerations you mention don't matter: [Chapter 5] 5.5 Locks and Oplocks.   Oplocks are enabled by default, and according to Opportunistic Locks - Win32 apps, are transparent to applications.  
My understanding is that (1) Lightroom calls the SQLite library to open a catalog, (2) Windows sends the open request to the SAMBA server and sets an oplock on the database file, (3) the SAMBA server sees a bunch of read requests while the catalog is in use, but no lock or write requests, (4) On exit, Lightroom calls the SQLite library to close the database, (5) Windows sends all the modified cached data blocks back to the SAMBA server and closes the database.  In this scenario, should Lightroom or the client PC crash while the catalog is open, all the modifications are lost, but the database file hasn't been modified and is left in its initial, consistent state.

Other factors are: (1) Lightroom now accesses the catalog in SQLite WAL (Write-Ahead Log) mode: changes are written to a .wal file and later applied to the .lrcat catalog file.  With a catalog folder on a local drive containing a symlink to the .lrcat file on a network share, the .wal file will reside on the local drive; and (2) SQLite provides a function to flush changes to storage.  I don't know if Lightroom uses it, and if it actually causes the changes to be written all the way back to the server.
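
The WAL behavior described above is easy to observe with Python's built-in sqlite3 module.  This is a generic sketch (a toy table, not Lightroom's actual catalog schema): while a connection in WAL mode is open, committed changes accumulate in a `-wal` side file next to the database file.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.lrcat")
con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL")  # switch the database to WAL mode
con.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, rating INTEGER)")
con.execute("INSERT INTO photos (rating) VALUES (5)")
con.commit()

# While the connection is open, committed changes live in the -wal file;
# a later checkpoint folds them back into the main database file.
print(os.path.exists(db + "-wal"))  # prints True
```

SQLite derives the `-wal` and `-shm` names from the database path it was given, which is why the location of the .lrcat file (real vs. symlinked) determines where those side files land.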

While I'm curious about how this works, Lightroom has been running without catalog issues, I have backups, and photos to process.


----------



## Linwood Ferguson

gorlen said:


> Good one!  Took me a few seconds to get it!     From what book?
> Recently read The Friendly Orange Glow -- fun trip for CDC fans.


It was a teal paperback on CDC architecture, though I cannot recall the book and no longer have it.  Though in poking around I actually found it online: http://www.bitsavers.org/pdf/cdc/cyber/books/Grishman_CDC6000AsmLangPgmg.pdf

To your other: 



gorlen said:


> In this scenario, should Lightroom or the client PC crash while the catalog is open, all the modifications are lost, but the database file hasn't been modified and is left in its initial, consistent state.


I find it unlikely that in a typical session of editing all (literally all) updates are cached.

What happens if half the data is flushed and half not at the point of a crash?  The half written is unlikely to be transactionally consistent, but likely random, the least recently used perhaps.

Also, 



gorlen said:


> SQLite provides a function to flush changes to storage. I don't know if Lightroom uses it, and if it actually causes the changes to be written all the way back to the server.



You can probably dig through the code to find out, but often (as you apparently hint) these kinds of things are limited to calling the next layer's "flush", which may or may not connect all the way down depending on the file system involved.  Especially since many well-meaning, performance-enhancing developers break such chains, like various disk write caches you can turn on and off.

But... I also agree with your concept -- the risk is low, you are aware of the risk.


----------



## gorlen

Ferguson said:


> It was a teal paperback on CDC architecture, though I cannot recall the book and no longer have it.  Though in poking around I actually found it online: http://www.bitsavers.org/pdf/cdc/cyber/books/Grishman_CDC6000AsmLangPgmg.pdf



Cool -- thanks!  It probably post-dates my CDC days since it refers to the Cyber series, which came out in the 70's.  Fond memories.



Ferguson said:


> To your other:
> 
> I find it unlikely that in a typical session of editing all (literally all) updates are cached.
> 
> What happens if half the data is flushed and half not at the point of a crash?  The half written is unlikely to be transactionally consistent, but likely random, the least recently used perhaps.
> 
> Also,
> 
> You can probably dig through the code to find out, but often (as you apparently hint) these kind of things are limited to calling the next layer's "flush', which may or may not connect all the way down depending on file system involved.  Especially since many performance-enhancing well-meaning developers break such chains, like various disk write caches you can turn on and off.
> 
> But... I also agree with your concept -- the risk is low, you are aware of the risk.



This discussion has been useful.  It's gotten me to realize that separating the .lrcat and .wal files probably impairs SQLite's ability to recover databases from power outages/crashes (How To Corrupt An SQLite Database File), so I'm switching to using a symlink to the entire catalog folder on the network share, especially since I've not noticed a significant performance improvement with the previews folder on a local SSD.
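
After any crash or interrupted copy, a catalog can be sanity-checked directly, since it is a plain SQLite file.  A minimal sketch using Python's stdlib sqlite3 (the catalog path here is hypothetical; "ok" is what a healthy database reports):

```python
import os
import sqlite3
import tempfile

def check_catalog(path):
    """Run SQLite's built-in integrity check; returns 'ok' if healthy."""
    con = sqlite3.connect(path)
    try:
        return con.execute("PRAGMA integrity_check").fetchone()[0]
    finally:
        con.close()

# Demonstrate on a freshly created database standing in for a .lrcat file.
db = os.path.join(tempfile.mkdtemp(), "Catalog.lrcat")
con = sqlite3.connect(db)
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()
print(check_catalog(db))  # prints: ok
```

Run this only while Lightroom is closed, so the check doesn't contend with an open catalog.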

OK -- back to actually using LrC to process photos.


----------



## PhilBurton

Ferguson said:


> It was a teal paperback on CDC architecture, though I cannot recall the book and no longer have it.  Though in poking around I actually found it online: http://www.bitsavers.org/pdf/cdc/cyber/books/Grishman_CDC6000AsmLangPgmg.pdf
> 
> To your other:
> 
> 
> I find it unlikely that in a typical session of editing all (literally all) updates are cached.
> 
> What happens if half the data is flushed and half not at the point of a crash.  The half written is unlikely to be transactionally consistent, but likely random, the least recently used perhaps.
> 
> Also,
> 
> 
> 
> You can probably dig through the code to find out, but often (as you apparently hint) these kind of things are limited to calling the next layer's "flush', which may or may not connect all the way down depending on file system involved.  Especially since many performance-enhancing well-meaning developers break such chains, like various disk write caches you can turn on and off.
> 
> But... I also agree with your concept -- the risk is low, you are aware of the risk.


Given all the issues, or the potential for corrupting the SQLite database, I think it would be wise for the 90% of us, and perhaps even the top 10%, to use the database only in configurations that have been tested (adequately???) by Adobe.  If that means having the previews on the same drive and folder as the main catalog file, then consider that the "Adobe uncreative configuration performance penalty" and move on.

I wonder just how many forum members made it this far into this thread.


----------



## atj777

gorlen said:


> While LrC will display the "Lightroom cannot open the catalog named ... located on network volume ..." when double-clicking or directly opening the .lrcat symlink, clicking "OK", then choosing the same symlink from the Select Catalog dialog box will succeed,


I wish it did this on a Mac.  I'm trying to load from a symlinked folder, but I get the same message no matter how I try to access the catalog.


----------



## gorlen

gorlen said:


> Other factors are: (1) Lightroom now accesses the catalog in SQLite WAL (Write-Ahead Log) mode: changes are written to a .wal file and later applied to the .lrcat catalog file.  ... SQLite provides a function to flush changes to storage.  I don't know if Lightroom uses it, and if it actually causes the changes to be written all the way back to the server.



I used several tools (watch, inotifywait, wireshark) to observe catalog operations while having LR sync develop settings across >3K photos:

- Windows does set oplocks on the .lrcat* files, enabling client caching.
- Data is frequently flushed to the .lrcat-wal file on the server.
- Occasionally, SQLite checkpoint operations apply the WAL data to the .lrcat file on the server.
- Other than the initial oplocks, no other lock requests are sent to the server.  (According to the documentation, WAL mode limits database access to the processes (Lightroom, in this case) on a single PC, and these processes coordinate through shared memory.)
- SQLite also creates a shared memory-mapped file (.lrcat-shm) in the catalog folder.  According to the documentation, this file contains no persistent content and is not used for recovery.
- SQLite creates temporary files, but none were observed in the catalog folder, so I presume that they're in a temp folder on a local drive.

This is all good behavior.  Since "one test is worth a thousand expert opinions", it would be interesting to force Windows to crash (see notmyfault) while LR is performing various catalog modifications to measure the relative risk of corruption with the catalog on a network drive vs. a local drive.  I suspect it's negligible.
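
The checkpoint operation observed above can be reproduced with Python's sqlite3 module (again a generic toy table, not Lightroom's schema): `PRAGMA wal_checkpoint(TRUNCATE)` applies the WAL contents to the main database file and truncates the `-wal` file to zero bytes.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.lrcat")
con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t (x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.commit()

wal = db + "-wal"
print(os.path.getsize(wal) > 0)  # prints True: changes pending in the WAL

# Fold the WAL back into the main database file and truncate it,
# mirroring the periodic checkpoints observed on the server.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
print(os.path.getsize(wal))  # prints 0
```

With a single connection and no concurrent readers the checkpoint always completes; in Lightroom's case the timing of these checkpoints is up to SQLite and the application.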


----------



## Linwood Ferguson

Glad to see the detail; you have the energy many of us lack to dig into it. 

While you are set up, I'd be curious to hear how much activity you see while lightroom is otherwise idle (but with a good sized catalog).   I've always worried more about people who might just leave it sitting, especially since Cloudy came along and people want it up as a receiver for sync'd photos from mobile.   Then the power goes out (or whatever). 

Of course, as to testing, as they say -- it's hard to prove a negative.   You can just draw some statistics around it.


----------



## gorlen

Ferguson said:


> While you are set up, I'd be curious to hear how much activity you see while lightroom is otherwise idle (but with a good sized catalog).   I've always worried more about people who might just leave it sitting, especially since Cloudy came along and people want it up as a receiver for sync'd photos from mobile.   Then the power goes out (or whatever).



Opening a catalog with >30K photos and then letting it idle for 30 min. resulted in 1670 SMB2 ops total on the lrcat* files on the server.  No Write ops after open.  LR just does a Find op on the catalog folder every 10s while it's idling.  My LR isn't syncing with cloud, though.


----------



## Linwood Ferguson

gorlen said:


> Opening a catalog with >30K photos and then letting it idle for 30 min. resulted in 1670 SMB2 ops total on the lrcat* files on the server.  No Write ops after open.  LR just does a Find op on the catalog folder every 10s while it's idling.  My LR isn't syncing with cloud, though.


That's interesting.  I thought it was always doing stuff in the background, but maybe only if there are things to do, like purging 1:1 previews that have expired (I realize that's not in the catalog, not sure if it updates anything there), address lookups, or such.  

Thank you for checking.


----------



## gorlen

Ferguson said:


> That's interesting.  I thought it was always doing stuff in the background, but maybe only if there are things to do, like purging 1:1 previews that have expired (I realize that's not in the catalog, not sure if it updates anything there), address lookups, or such.
> 
> Thank you for checking.


Since I thought your concern was about the catalog, I only checked ops on the three lrcat* files.  Running


		Code:
	

inotifywait -r -m --timefmt "%H:%M:%S" --format "%T %w %e" -o events.txt Master


on the server shows that LR searches the Backups, Helper, and Previews folder trees once/hour while idle.


----------



## Linwood Ferguson

gorlen said:


> Since I thought your concern was about the catalog, I only checked ops on the three lrcat* files.


It is really, that was the only example of a known idle time thing I could think of.


----------



## PhilBurton

I would like again to make the point that @gorlen is easily in the top 10%, if not the top 1%, of technical skills in this forum.  Almost everyone else probably doesn't know how to cope with some Windows issue created by using a NAS that @gorlen just fixes without even thinking about it.  SQLite was not designed to work over a network, including a NAS.  I have personally experienced problems with the personal finance manager Quicken when keeping the Quicken data files on a network share.  Quicken also uses SQLite.

We all know that catalog corruption is very real, and sometimes even Adobe can't recover a corrupted catalog.  Get a larger HDD, get an external HDD that plugs in via USB, but please don't use a NAS.  Your time and effort are more than worth the expense.


----------



## gorlen

PhilBurton said:


> SQLite was not designed to work over a network, including a NAS.


From SQLite CVSTrac:



> *Using SQLite on a Network*
> SQLite database files may be shared across a network using a network filesystem. This is never a particularly efficient method and may have problems (depending on the filesystem), or may simply not be available. These are alternative techniques for remote access to SQLite databases.



It appears that accessing a Lightroom catalog on a network filesystem has gotten a bad reputation because, years ago, catalog corruption was common due to  software bugs which are now rare -- the software involved has matured.

Furthermore, a Lightroom catalog presents a less demanding situation than what SQLite's design supports.  First, the catalog is locked when in use, restricting access to a single Lightroom instance, so the catalog is not being *shared* by two or more machines; sharing is limited to the Lightroom processes running on the same machine.

Second, catalog corruption was usually attributed to faulty network file-locking software.  In LrC 7.3/7.3.1, released in April 2018, the catalog was converted to WAL mode, which does not use network file locking.

Third, performance is no longer such a major issue.  Since the catalog is not shared across machines, it can be cached in local memory, which is now plentiful.  Gigabit Ethernet is common and inexpensive, and network equipment providing higher speeds (2.5 - 5 Gbps) over the same wiring (CAT5e) is becoming available as well (though still expensive).

My point is that using a catalog on a network file system is no longer an inherently bad idea, unlike some of the other methods proposed in various forums for sharing catalogs.  It would be useful to repeat the tests done by Adobe in 2007 with catalogs on both network and local drives to measure the relative risk of corruption.  And as I posted previously, Adobe could maintain a compatibility list of NAS/fileserver products they've tested, as they already do for GPUs.


----------



## clee01l

gorlen said:


> It appears that accessing a Lightroom catalog on a network filesystem has gotten a bad reputation because, years ago, catalog corruption was common due to software bugs which are now rare -- the software involved has matured.


I don't think this is relevant. 
SQLite is a single-user database.  It has no login security to prevent multiple users from opening the same database file (a Lightroom catalog, for instance).  With the file on the network, two or more users are able to make changes.  These concurrent changes violate data integrity: user 1 can be changing the same records that user 2 is accessing.  The result is a corrupted database.  In addition to data tables, there are indexes that get rebuilt by the database engine; with two database engines updating the same index tables, you quickly trash the database.
A client/server database like Oracle manages users and prevents different users from accessing the same table rows by locking at the row level. This is not possible in a single-user database, as row-level locking is not even implemented.


----------



## gorlen

clee01l said:


> I don't think this is relevant.
> SQLite is a single-user database.


From the SQLite documentation at Appropriate Uses For SQLite (emphasis mine):


> *Situations Where A Client/Server RDBMS May Work Better*
> 
> *Client/Server Applications*
> 
> If there are many client programs sending SQL to the same database over a network, then use a client/server database engine instead of SQLite. *SQLite will work over a network filesystem*, but because of the latency associated with most network filesystems, performance will not be great. Also, file locking logic is buggy in many network filesystem implementations (on both Unix and Windows). If file locking does not work correctly, two or more clients might try to modify the same part of the same database at the same time, resulting in corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
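The writer coordination the docs rely on is easy to see with two connections to one database file. In a minimal local sketch (standard library only; file and table names are arbitrary), the second writer is refused with "database is locked" rather than being allowed to modify the same pages; this refusal is exactly the guarantee that evaporates when a network filesystem's locking is buggy:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "shared.db")

# isolation_level=None: autocommit mode, transactions managed explicitly.
w1 = sqlite3.connect(db, timeout=0.1, isolation_level=None)
w1.execute("CREATE TABLE t (x INTEGER)")

w2 = sqlite3.connect(db, timeout=0.1, isolation_level=None)

w1.execute("BEGIN IMMEDIATE")        # first writer takes the write lock
w1.execute("INSERT INTO t VALUES (1)")

# The second writer is stopped by file locking instead of scribbling
# over the same pages -- provided the locks actually work.
try:
    w2.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:     # "database is locked"
    blocked = True

w1.execute("COMMIT")
```

Once the first writer commits, the second connection can take the write lock normally; the point is that on a correctly-locking filesystem the failure mode is a busy error, not corruption.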





clee01l said:


> It has no  login security to prevent multiple uses from opening the same database file (a Lightroom Catalog for instance) ...



From Lightroom catalog cannot be opened (emphasis mine):


> When Lightroom Classic is running with a catalog open, it creates a *[yourcatalogname].lrcat.lock* file next to the *[yourcatalogname].lrcat* file. *This file ensures that there is no other access to the catalog in use.* When you exit Lightroom Classic, the lock file is deleted automatically.


----------



## PhilBurton

clee01l said:


> I don't think this is relevant.
> SQLLite is a single user database.   It has no  login security to prevent multiple uses from opening the same database file (a Lightroom Catalog for instance). With the file on the network,  two or more users are able to make changes.  These concurrent changes violate data integrity. User 1 can be changing the same records that user 2 is accessing.   The result is a corrupted database.   In addition to data tables, there are indexes that get rebuilt by the database engine. Then you have two database engines updating on the same index tables and you quickly trash the database.
> A Client/server database like Oracle manages users and prevents different users from accessing the same table rows by locking at the row level. This is not possible in the single user database as row level locking is not even implemented


To Clee's point.  While the original poster may understand the risk of multiple users accessing (or trying to access) the Lightroom catalog, I daresay that  some number of forum participants, as well as the vast majority of Lightroom users in the world, simply do not.  Inevitably, there will be "Adobe bugs" reported when two or more family members try to do Lightroom work from different systems "to save time," or some such.

I'm enough of a tech-geek to appreciate the points in this thread.  But if there is one thing I learned in my "day job" career as a software product manager, it's that you can't expect your users to be technically proficient about your product.  Otherwise, a company's support team would consist of just one, very unhurried person.

Phil Burton


----------



## gorlen

PhilBurton said:


> To Clee's point.  While the original poster may understand the risk of multiple users accessing (or trying to access) the Lightroom catalog, I daresay that  some number of forum participants, as well as the vast majority of Lightroom users in the world, simply do not.  Inevitably, there will be "Adobe bugs" reported when two or more family members try to do Lightroom work from different systems "to save time," or some such.
> 
> I'm enough of a tech-geek to appreciate the points in this thread.  But if there is one thing I learned in my "day job" career as a software product manager, it's that you can't expect your users to be technically proficient about your product.  Otherwise, a company's support team would consist of just one, very unhurried person.
> 
> Phil Burton


I generally agree with you, but this thread is about _Catalog on a NAS?_  It's the NAS part that has required far more expertise and maintenance than I anticipated.  Storing and using LrC catalogs on one is a relatively minor addition, with significant benefits, for those who've managed to set up a NAS together with a backup strategy.  I've had a good experience with accessing catalogs on a NAS from Windows PCs since 2011.  My recent investigations into the underlying software (LrC on Windows 10, SQLite, SMB, SAMBA) have uncovered no technical issues, but a *lot* of dated and/or incorrect information in various forums.  Nor have I found any actual tests/measurements of the relative risk of catalogs on NAS vs. local drives.  I suspect it to be negligible.

I keep a repository of documentation and configuration files for my setup here: kgorlen/lightroom


----------

