# Best Practices for a NAS-Centric Workflow?



## Michael Bateman (Jul 10, 2016)

Lightroom catalogs have to be on a local volume, of course, but referenced images can go on a network drive, which is perfect when you have a lot of RAW images. For my part, I sometimes run into trouble because the volume always gets mounted as "home" or "home-1", etc., which can be a problem if you ever connect to more than one NAS, as I often do.

I've discovered a great trick on my Mac with something called autofs. Here is a good article on how to keep network volumes mounted using autofs.  This works great with my existing catalog images. I can give the network volume a unique name and it appears in a folder called "servers" in my home directory. Lightroom never has trouble finding it.  

...except if I try to import from a folder on the share!
....or if I try to use a folder on the share as a destination for an import!
.....or if I try to browse a folder in the share using Bridge!

Yeah, for some reason my Adobe products just don't like it when I mount network folders this way - and I would love to be wrong about that! Anyone?  
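For anyone who wants to try the autofs route anyway, here is roughly what my setup looks like (the share names, addresses and credentials below are made up for illustration; substitute your own). You add one line to /etc/auto_master pointing at a custom map, then give each NAS share its own unique key so nothing ever collides with "home":

```
# /etc/auto_master -- add a direct map entry
/-    auto_nas    -nosuid

# /etc/auto_nas -- one line per share; the key is the mount path,
# so each NAS gets a stable, unique name instead of "home"/"home-1"
/Users/michael/servers/synology   -fstype=smbfs   ://user:pass@synology.local/home
/Users/michael/servers/qnap       -fstype=smbfs   ://user:pass@qnap.local/home
```

Then reload the automounter with `sudo automount -vc` and the shares appear under the unique paths on first access.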

*Most of you probably don't have a "NAS-centric" workflow like me and rely more on externally mounted USB 3 and Thunderbolt drives* for hot and warm projects, maybe using a NAS for cold storage. (Just remember, kids: *NAS is not backup*, but it can be used as part of an overall backup strategy. But I digress!)

I should probably re-think my workflow.  But I just love my Canon 7D with the Wireless File Transmitter WFT-E7A.  It does a great job of putting all the files from the camera straight onto the NAS where I can use shell scripts to automate my workflow, renaming files, backing them up, etc. 

But in the meantime I thought I would reach out here and see what everyone else does. I am new to the forum; this is my first post. Anyone else here use a NAS with Lightroom? Any ideas for me? I have two: a Synology DS1513+ and a QNAP TVS-871T.

In any event, thanks to anyone who has made it this far for reading my post!

-Michael


----------



## Victoria Bampton (Jul 10, 2016)

Welcome to the forum Michael!

When you say that Adobe products don't like it when you mount the drives this way, what happens?  They just don't show up?


----------



## Michael Bateman (Jul 11, 2016)

Hello! Thanks for the reply. Basically yes: you see the folder where the mounted shares should be, but in Bridge you see a "lock", and in Lightroom you see just the folder and no subdirectories. Strangely, in the case of importing from the share into Lightroom, you can specify "Show Subdirectories" and it will show you the photos you want to import AND EVERYTHING ELSE ON THAT SHARE, so yeah, not very practical. Do you use a NAS? Which one? How do you mount its shares for use with Lightroom?


----------



## Michael Bateman (Jul 11, 2016)

And by the way, if you think I should call tech support and open a ticket, sure, but I was mostly curious what anyone else does in this situation. If you have never played with one of these machines, they are amazing: they are always on and available for cloud backups and any workflow chore you can imagine. I would blog about it myself, except I am JUST NOT quite sure I am doing it right myself, so why confuse anyone else!!! : )


----------



## Michael Bateman (Jul 11, 2016)

One more reply! Lightroom is COMPLETELY fine with the folders once the photos are in there. It's the import process that cannot navigate around the shared folder, either to specify a FROM directory or a destination directory. If I map the drives the way one normally would in OS X, with "Go to Server..." etc. and mounting through the Finder, I can import from the share and specify any destination folder on the share without a problem. The issue is that Lightroom subsequently can't find the images, because it remembers them being in a shared folder called "Home-1", but now something else has co-opted the "Home-1" designation and the images are in a folder designated "Home".

-Michael


----------



## rob211 (Jul 11, 2016)

A WAG would be that it has to do with the mount point. Lr does sometimes get confused about that with /volumes, so perhaps something similar is going on with the shares. But I don't use a NAS with it much so I'm not sure where it looks for 'em.


----------



## Michael Bateman (Jul 12, 2016)

Yes, /Volumes was where home and home-1 were mounted, and truly the only problem was probably (my WAG now!) that the first one to be mounted was home, the next home-1. Then, if for whatever reason one was disconnected, it would be re-mounted as home-2; if I could trick Lr into finding the photos again, that path got written into the link to the image in the database, and it would all start over again on the subsequent re-mount.

Look, the main issue is that I want to use the home folder service on each NAS but give Lr a unique handle for each, right? I will play with this some more and report back. I may be on my own here in the world of Lightroom users with a NAS, but I find that surprising. If you knew what these machines can do when you have the workflow set up properly, you'd wonder how you ever managed without one. : )

-Michael


----------



## rob211 (Jul 12, 2016)

In the article on autofs you cite it specifically warns against using /volumes as the mount point. Having the same name doesn't help, since the home-1 and home-2 would change arbitrarily.


----------



## Michael Bateman (Jul 12, 2016)

rob211 said:


> In the article on autofs you cite it specifically warns against using /volumes as the mount point.


YES, don't try this at home, kids, thanks. I had misgivings about that soon after I posted it. The home folder concept is important and many systems use it. The share mounts as "home" when the physical folder on the system is in fact "homes/username"; if they allowed you to mount it as "username" you might have a chance at avoiding these issues, so long as you had a different username on each system, I suppose. But thanks, yeah, no: don't mount in /Volumes. Bad idea.


----------






## Michael Bateman (Jan 12, 2017)

Hello. I just wanted to post a quick update and cautionary note about Lightroom and network drives and NAS systems. 

Don't use network drives with Lightroom catalogs or images. 

I am a huge fan of Lightroom. I am a huge fan of both Synology and QNAP. But the only way to use these cool devices with Lightroom is to mount a network drive on your computer; Lightroom will let you reference images on it, and it will work well until it doesn't. It's a really bad idea.

I now have a Thunderbolt LaCie external drive large enough to accommodate my main catalog and library. It's backed up religiously to my NAS, which in turn is backed up regularly.

Most of you seem to know this. 

It's a shame. I am not gonna vent about it here; like you, I have work to do!! But man, I long for a product like Lightroom that works as a workgroup tool and lets people collaborate within the same catalog of images.

But if it's just me and my Canon 7D, Lightroom is the bomb, and I look forward to browsing this forum for tips on how to make it work for me when I leave my desk!! I am eyeing several threads about a travel catalog, etc.

Thanks all. 





----------



## Victoria Bampton (Jan 12, 2017)

Hi Michael

Absolutely right that the catalog can't be on the NAS. Images are OK, as long as you know how to reconnect them if they show up as missing because the mount point's changed.

If you're eyeing the travel catalog threads, you might find this one useful: How do I use my Lightroom catalog on multiple computers?


----------



## clee01l (Jan 12, 2017)

Michael Bateman said:


> It's a shame. I am not gonna vent about it here, like you I have work to do!! But man I long for a product like Lightroom that is a workgroup tool that allows people to collaborate within the same catalog of images.


The LR catalog file is a single-user database and uses a DB engine called SQLite. A multi-user database requires user access control and referential integrity checks to prevent two users from modifying the same database records at the same time. To get that kind of industrial-strength database, you need to put your data on a heavy-duty database server and run a database engine like Oracle, which can cost thousands of $$ and require a full-time staff of dedicated database analysts. Then an app like LR would need to be rewritten as a front end to this database. Adobe (I'm sure) has determined that there is not enough demand for this type of multi-user database app to justify the cost of development. And if it were developed, the LR client certainly could not be sold for a $10/mo subscription.


----------



## Michael Bateman (Jan 12, 2017)

Victoria Bampton said:


> Hi Michael
> 
> Absolutely right that the catalog can't be on the NAS. Images is ok, as long as you know how to reconnect them if they show up as missing because the mount point's...



Just to be clear, though, in regard to workflow, which is presumably about a repeatable, efficient process: I strongly advise against having a catalog reference an image on a network drive. Lightroom will let you, but it's a bad, bad idea. Or maybe having 120,000 images totaling 2.2 TB is a bad idea. But I will never have a Lightroom catalog reference an image on a network drive again. It's just too much work to deal with a corrupted catalog and too big of a mess to undo. If anyone has a workflow that relies upon a NAS and works more than 90% of the time, let me know.

But yeah, getting it to work is not the problem. It's getting it to not fail that I have an issue with!!

Thank you! I will check that out. : )


----------



## Victoria Bampton (Jan 12, 2017)

Perhaps I'm misunderstanding Michael - exactly what kind of problem were you having when the images were on the NAS? That wouldn't have corrupted a catalog, although it can be a little slow for it to load.


----------



## Wernfried (Jan 31, 2017)

clee01l said:


> The LR catalog file is a single user database and uses a DB engine called SQLite.  A multi-user database requires User Access Control and referential integrity checks to prevent two users from accessing the same database records at the same time.  To get that kind of industrial strength database, you need to put your data on these heavy duty database servers and run database engines like Oracle which can cost thousands of $$ and require a full time staff of dedicated Database Analysts. Then an app like LR needs to be rewritten as a front end to this database.  Adobe (I'm sure) has determined that there is not enough demand for this type of multi user database app to justify the cost of development.  And if developed, the LR client certainly could not be sold for $10/mo subscription.



An Oracle database would certainly be overkill. For example, supporting MySQL would be fairly simple; the SQL commands are almost the same. There are state-of-the-art technologies to support multiple database engines with the same piece of code.

Best Regards
Wernfried


----------



## clee01l (Jan 31, 2017)

Wernfried said:


> An Oracle database would be certainly an overkill. For example supporting MySQL would be fairly simple, the SQL commands are almost the same. There are state-of-the-art technologies to support multiple database engines with the same pice of code.
> 
> Best Regards
> Wernfried


Welcome to the forum.
"Industrial strength" databases usually require two components on two different computers: a client (the LR part) and a server (the database part). A major rewrite would be required to turn LR into a client component for a MySQL- or Oracle-type database. (FWIW, I used Oracle as an example since it is a name most non-technical users can identify; MySQL, Microsoft SQL Server, PostgreSQL and IBM DB2 are less well known.)


----------



## Wernfried (Jan 31, 2017)

clee01l said:


> Welcome to the forum.
> "Industrial Strength" databases require two components usually on two different computers. a client (The LR part) and the Server (The database part)  A major re-write would be required to turn LR into a client component to a MySQL or Oracle type database.  (FWIW, I used Oracle as an example since it is a term that most non technical users ace identify MySQL, Microsoft SQL Server, PostgreSQL and IBM DB2 are less well known).



No, that's wrong. For a typical database connector it does not matter whether you connect to a local SQLite DB file or to an Oracle or MySQL database, no matter whether it is hosted on the local computer or somewhere else.

See this example written in Perl; it would work for any database *with the same* UPDATE command:


```
use strict;
use warnings;
use DBI;    # generic database API; the driver is chosen by the connect string

my ( $db, $username, $password, $hostname ) = @ARGV;    # supplied by the caller

my $dbh;
if ( $db eq "SQLite" ) {
    $dbh = DBI->connect("DBI:SQLite:dbname=C:/Catalogs/Lightroom-2.lrcat", "", "");
} elsif ( $db eq "Oracle" ) {
    $dbh = DBI->connect("dbi:Oracle:$db", $username, $password);
} elsif ( $db eq "MySQL" ) {
    $dbh = DBI->connect("DBI:mysql:$db:$hostname", $username, $password);
}

$dbh->do("UPDATE Adobe_images SET fileFormat = 'JPG' WHERE id_local = 76543");
```

Sure, when you really write such a program you have to consider many details, but in general it works transparently.

And of course, for a multi-user environment it makes sense to have the database running on one central remote server, but this is not a must.

Best Regards


----------



## Gnits (Jan 31, 2017)

I have a Synology NAS, but do not use it for Lightroom- or Photoshop-related files. Occasionally, I will use it as an extra external backup for my images.

My preference is to have my catalog and cache folders on local SSD drives and images on a fast local drive. In due course I will test 10GbE cards from my workstation to a 10GbE NAS to see what impact that makes, but not any time soon.


----------



## LRnewbie736 (Jul 28, 2017)

Gnits said:


> I have a Synology Nas, but do not use it for Lightroom or Photoshop related files.   Occasionally, I will use it as an extra external backup for my images.
> 
> My preference is to have my Catalog and Cache folders on local SSD drives and images on a fast local drive.  In due course I will test 10GbE cards from my workstation to a 10Gbe Nas to see what impact that makes, but not any time soon.


***
Hi Gnits, thank you for your post. Long story short, I am interested in possibly acquiring a Synology NAS: the DiskStation DS1817+ | Synology Inc.
I use LR on a Mac. Could you please elaborate on WHY you do NOT choose to use it for Lightroom- or Photoshop-related files? When you "import" your photos into LR, I thought you mentioned that you were storing your photos "...on a fast local drive..."?
Surely that local storage would fill up quickly?
How do you back up those photos?
And how do you back up those photos so that Lightroom can keep track of them?
Thank you in advance for any comments.
LRnewbie736


----------



## rob211 (Jul 29, 2017)

LRnewbie736 said:


> ***
> Could you please elaborate concerning WHY you do NOT chose to use it for Lightroom or Photoshop related files?  When you "import" your photos into LR, I thought you mentioned that you were storing your photos "...on a fast local drive...."?
> Surely, that local storage would fill-up quickly?
> How do you back up those photos?
> ...



I dunno about Gnits, but I also use an external SSD and HDD for images, not a NAS. They're faster, and I perhaps move my images around more than some people do. Local storage can be huge: not only can you have numerous just regular ol' HDDs, but RAID as well. A NAS is not necessarily bigger; it's just the way it's connected: network (slow) vs. USB or Thunderbolt (faster). "Local" doesn't necessarily mean "internal".

You can also back up to externals, and/or to the cloud. Gnits, I see, is on Windows. But I use a Mac and use Time Machine to back everything up to alternating HDDs that I rotate off site, plus one permanently attached HDD. And to the cloud as well. Lr doesn't need to keep track of where those are as long as you keep the same folder structure; you just reconnect the images to the backed-up catalog and you're back in biz.

Hope that helps; if not, sorry for butting in.


----------



## Gnits (Jul 29, 2017)

My setup. Windows.
C drive. SSD, internal. OS and apps. Kept below 100GB used, to keep backups small and fast.
G drive. SSD, internal. Lr catalog.
P drive. HD, internal. Photos and personal data.
Q drive. HD, internal. Backup of P drive, Lr cat and C drive.
T drive. HD, external. External backup of P drive, Lr cat and C drive.
NAS drive. Occasional backup of P drive, Lr cat and C drive.

Backup routine.
A. At 6am every morning, Macrium Reflect automatically backs up my C drive and Lr cat to the Q drive.
B. At 6:10am every morning, GoodSync copies all new and changed files from the P drive to the Q drive.
By approximately 6:20 my PC shuts down.


Totally automated.  When I wake up I have an email summary confirming all backups successful.

If I import a photo shoot, then after the import process I will manually trigger step B above.
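For anyone curious about the shape of that "copy new and changed files" step without a commercial tool, here is a minimal Python sketch (GoodSync and Macrium obviously do far more: deletions, locking, logging, email reports; the drive letters in the docstring are just Gnits's labels):

```python
import filecmp
import os
import shutil

def mirror_new_and_changed(src, dst):
    """One-way sync in the spirit of step B: copy files from src (the P drive)
    to dst (the Q drive) when they are new or have changed since the last run."""
    for dirpath, _, names in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        out_dir = os.path.normpath(os.path.join(dst, rel))
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            s = os.path.join(dirpath, name)
            d = os.path.join(out_dir, name)
            # Shallow compare checks type, size and mtime; copy2 preserves
            # the mtime, so unchanged files are skipped on subsequent runs.
            if not os.path.exists(d) or not filecmp.cmp(s, d, shallow=True):
                shutil.copy2(s, d)
```

A real backup tool also mirrors deletions and verifies copies; this only ever adds or refreshes files on the destination, which is the safer default for a sketch.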

The reason I do not use the NAS is that when I bought a NAS drive, all the network components, including the network card in my PC, the network interface on the NAS, my managed switch and the NAS itself, were max 1Gb connections. I could not believe how long it took to complete routine tasks.

The situation is different now, in that NAS devices can be bought with much faster network connections and fast internal write speeds. But you still need to make sure there are no bottlenecks, such as slow switches or routers, faulty cables, etc. Your NAS may have a 10GbE network interface card, but do your workstation, switch and all the other bits in the pipeline have the same specs, and do they all work together?



My P, Q and T drives started off at 1 TB; they are currently 3 TB and half full, and in a few years' time will probably be 8 TB. I have approx 85,000 images in my catalog. At my volumes I am not worried about internal drives being too small. When that happens there will be alternative solutions.


----------



## Linwood Ferguson (Jul 29, 2017)

Wernfried said:


> No, that's wrong. For a typical database connector it does not matter whether you connect to a local SQLite DB file or an Oracle or MySQL database, no matter if this is hosted on local computer or somewhere else.
> 
> See this example written in Perl, it would work for any database *with the same* UPDATE command:
> .....
> Sure, when you really write such program you have to consider many details but in general it works transparent.


Is it possible to write SOME code that will work on MOST databases? Sure. But "in general it works transparent[ly]"? Really?

What you say is simply not true. Not all databases support the same syntax, or the same semantics. Even something as simple as the ISNULL function may become IFNULL or even COALESCE. But the syntax issues are minor compared to the semantics of statements: case sensitivity, null handling, error handling, transaction handling and scopes, collation sequences, support for nested, embedded, recursive and other advanced queries.

Sorry... it's just not that easy. There is a HUGE price a software developer pays for a decision to support lots of back-end databases, both in development cost and, unless you seriously inflate that development cost, even more so in performance and reliability.

Remember... performance and reliability are by far the two most frequent complaints most of us hear about Adobe.



Wernfried said:


> And of course, for a "Multi-User Environment" it makes sense to have database running on one central remote server but this is not must.



Just for the record, SQLite does support a significant amount of concurrency, and Adobe COULD have decided to use it for multiple users.  There are architectural issues in a high update environment but LR usually isn't in a high update mode, and having 2-3 users updating at once is certainly within SQLite's capabilities. There are also concurrency issues in some NAS applications that make locking hard (notably NFS).
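To illustrate that point with a toy sketch (this is generic SQLite, not Lightroom's actual schema; the table and column names below are invented): in WAL mode, a second connection can keep reading the last committed data while another connection holds an open write transaction.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer connection; isolation_level=None means we manage transactions ourselves.
w = sqlite3.connect(path, isolation_level=None)
w.execute("PRAGMA journal_mode=WAL")  # allow readers alongside one writer
w.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, keyword TEXT)")
w.execute("INSERT INTO images VALUES (1, 'festival')")

# The writer opens a transaction and changes a row without committing.
w.execute("BEGIN")
w.execute("UPDATE images SET keyword = 'wedding' WHERE id = 1")

# A second "user" can still read the last committed state.
r = sqlite3.connect(path)
print(r.execute("SELECT keyword FROM images WHERE id = 1").fetchone()[0])  # festival

w.execute("COMMIT")
print(r.execute("SELECT keyword FROM images WHERE id = 1").fetchone()[0])  # wedding
```

Python's bundled sqlite3 driver is used here purely because it ships with the interpreter; the concurrency behavior shown is SQLite's, not Python's.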

Had they wanted to do so, there are of course much better AND FREE options: MySQL and Postgresql are two that are quite compatible.  When I install Resolve (Video editing software) it installs Postgresql as a shared working environment automatically.

The decision not to be multi-user is one Adobe made, not BECAUSE of SQLite, but having made it, it made SQLite a more viable alternative. The decision impacts their code in a huge variety of ways: database access is the least of them; coordinated access and caching are much larger. In multi-user applications, you cannot depend on anything in (non-shared) memory, so memory caching of information becomes a non-starter. Think for a moment about two people editing the same image at the same time. OK, that may be too obvious; how about changing keywords while someone else is assigning them?

The bigger deal is: *Multi-user applications are inherently slower than single user applications.*

So be careful what you ask for if you are wanting Adobe to make Lightroom multi-user.

PS. Just for the irony of it: I spent most of today migrating a bunch of MySQL stored procedures to PostgreSQL functions for Zabbix, a product that DOES support both databases, and the migration was pretty much a rewrite. And if you look inside their code (it's open source) you see massive amounts of parallel code for the different back ends. It's NOT "transparent".


----------



## HansT (Jul 29, 2017)

I recently moved my originals to a DS415+. This only works well if you tell LR to prefer smart previews for all work. If you do this, then you'll only touch the originals when you import or export them, which generally is a "get a cup of coffee" operation regardless of where they sit.

My sources were ~100GB and growing, which was starting to threaten my SSD's capacity.  The smart previews are around 20GB.


Around the same time I increased the Synology's RAM to 8GB, of which typically 7GB is being used as a file cache. I really think this has eliminated any speed concerns accessing the originals. My home LAN is all wired 1Gb.

I run LR from one Windows and one Mac computer.  Moving the sources to the NAS means one level of  backup (replicated photos on the two desktops) is gone, so I've added a backup from the NAS to an internet backup (Amazon Drive).

(On second thought .. I suppose the .xmp's are also being written to the NAS .. so far I haven't noticed any irritating delays with this.)


----------



## Wernfried (Jul 30, 2017)

SQLite locks the entire database when you have an open transaction. It also does not support any kind of user management or privileges. Thus, using SQLite for multi-user applications has many limitations.
Regarding "transparency", it depends: the more database-specific functions you use, the less transparency you will have. But when you use the database just as a "stupid" data store, it becomes easier.

A Lightroom catalog SQLite database uses tables, indexes and some simple triggers; that's all. Given such a tiny scope, it should be possible to support other RDBMSs too. But of course, this is a decision Adobe has to take.


----------






## Linwood Ferguson (Jul 30, 2017)

Wernfried said:


> SQLite locks the entire database when you have an open transaction. It also does not support any kind of user management or privileges. Thus using SQLite for Multi-User applications have many limitations.
> Regarding "transparency" it depends: the more database specific functions you use the less transparency you will have. But when you use the database just as a "stupid" data store it becomes easier.
> 
> A Lightroom catalog SQLite database uses tables, indexes and a some simple triggers - that's all. Based on such tiny scope it should be possible to support also other RDBMS. But of course, this is a decision Adobe has to take.


Yes, though a well-designed system, especially a multi-user system, keeps transactions as short as possible. The vast majority of access in most systems (including Lightroom) is read-only. I did not say SQLite was the best choice, simply that it was not a non-viable choice, as had been suggested.

I think we are mostly off the subject, though.  

The issue for a NAS-centric workflow is not about the catalog (unless one cheats and forces the catalog onto the NAS). The issue is about the images, folder updates, and image updates. A NAS is reasonably safe for those operations; however, it suffers from some of the same issues as an external hard drive: you are more likely to be disconnected from your data store while updates are occurring than you are with internal storage. For a USB device, cable movement/failure, cheap controllers and human error play a role. For NAS, especially Wi-Fi NAS, poor-quality home gear, RF interference, and human error play a role. But if one takes care to get good-quality gear and use it properly, both are quite usable. In fact, I would argue that having the catalog on a NAS is not materially, inherently more problematic than on an EHD today.

I use a NAS for backup, but that NAS is one heck of a lot more reliable than my primary storage (albeit slower), because I built it that way. That's not necessarily the case for a randomly selected NAS system; often quite the reverse.

I think the insidious nature of any storage mechanism for photographic use is not hard failure.  The "my disk drive failed" scenarios have one of two answers generally -- you have a backup, or you are screwed.

The bigger issue with more... disconnected (not the right word, but close)... storage solutions is that they increase the number of vulnerable components and the complexity of getting your data from what you see to where you save it. And for photography, for the most part, there is zero check that what you saved is correct. Images do not have built-in checksums or redundancy (mostly; DNG is a partial exception): if you write 40MB of image data to a disk, you have no way to know whether what you read next time is what you wrote. The more complex the pathway from computer to storage, the more things can screw up, either from interruption (as above) or just from bad hardware or software in the components. We tend to treat digital systems digitally: they are working, or they are not. On or off. That's usually close enough, but strictly incorrect. Almost every digital component has some level of undetected error rate. The more components and complexity you introduce, especially in series, the higher the overall error rate.
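For what it's worth, one partial mitigation is to verify your own copies: hash what you wrote, read the copy back from the NAS, and compare digests. A minimal sketch (the file name and temp directories are stand-ins for a real local drive and NAS mount):

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path, chunk=1 << 20):
    """Stream a file through SHA-256 so large RAW files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Stand-ins for a local drive and a NAS mount (real paths would differ).
src_dir = tempfile.mkdtemp()
nas_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "IMG_0001.CR2")
with open(src, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # fake 4MB "RAW file"

copied = shutil.copy2(src, nas_dir)

# Read the copy back and compare digests; a mismatch means the pathway corrupted it.
assert sha256(src) == sha256(copied), "copy verification failed"
```

This only catches corruption introduced in transit or at rest between the two hashes; it can't tell you the original was written correctly by the camera in the first place.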

A NAS is "safe". EHDs are "safe". They are used for industrial-strength systems. I'm not trying to say the sky will fall.

But with home-type systems, often bought from low-bid suppliers, as a general statement for any random implementation you will find on a photographer's desk: they are not as safe as in-system drives.

I realize the push toward mobile/laptop/tablet use pushes all of us toward such solutions, and some situations leave one with no choice.

I just offer this ramble (and sorry for the diversion into multi-user database space) as a suggestion: if you are highly IT-literate, ignore this advice, as you already know enough to decide for yourself. If you are not, and you have the choice of in-system drives versus a NAS or EHDs for primary storage (at the same redundancy levels), use in-system drives. The KISS principle still applies.


----------



## Michael Bateman (Sep 19, 2017)

Hello! Just checking back to see if anyone has a solid NAS-centric workflow they can tell us about. I love Lr and I love both my QNAP and my Synology DiskStation, but I have not yet been able to wean myself off my fast local external drive as my primary workspace. It's a shame, because the QNAP has a Thunderbolt connection and an SSD cache!

I really appreciate the discussion about the SQL back end, and I get why Adobe might not yet (or ever) support a large multi-user back end for Lightroom. But it would be really cool to have a workgroup share photos and editing chores on a project, either locally across an ad-hoc LAN at something like a music festival or wedding, or over a WAN, with someone at the back office getting a head start on photo editing while some of the team is still shooting.

Take care,
-Michael


----------



## HansT (Sep 19, 2017)

Not sure what you're looking for.  Victoria's list of ways to use a catalog on multiple computers pretty much exhausts the available options.   If one of those doesn't suit your workflow, you'll need another tool.

I'm not familiar with e.g. Photoshop -- but suspect it saves edits to the image rather than keeping them in the (single-user) catalog.  That might be an alternative approach for your multiuser needs?

The extreme level of dependence on the catalog requires Lr user(s) to enforce a protocol that precludes multiple simultaneous (or "stale") edits to the catalog.  Maybe you invent a token -- a stuffed doll -- that a user must physically hold before she can open the catalog on her computer.  (Personally, I use myself as a token -- I share a catalog across multiple computers, sync'd via a NAS, but I'm the only person who uses it -- thus eliminating the possibility of simultaneous multiuser access.)


----------



## Michael Bateman (Sep 20, 2017)

Which NAS do you have, and which sync tool? That sounds interesting. How big is your catalog as you use it? What kind and size of local storage do you use? Do you ever have to sync to your NAS on the road (i.e., not on your own LAN)?

I might be better served with something like Photo Mechanic, but I do so adore Lightroom and how it lets you batch process, and I have gotten so comfortable with it. Your approach intrigues me. I used to keep all my photos in Dropbox and just selectively sync whichever photos I was working on from wherever I was, but I hit the upper limit of Dropbox for Teams storage. Perhaps I could do this on my NAS. Perhaps that's what you are describing.

As these cameras get bigger and faster, Lightroom is going to have to evolve to accommodate larger shared storage and workgroups, methinks.

Thanks very much for sharing your thoughts. 

Michael


----------



## tspear (Sep 20, 2017)

Michael,

I had a NAS set up with my Mac a while back. It worked fine; I custom-built the file server using Linux with a Samba (SMB) network service.
In any case, the critical aspect is how the NAS is named. The name needs to be unique for the Mac to mount it under /Volumes in a consistent manner.
I no longer have the Mac or the file server (I switched to Windows and local storage), so I cannot look up any details.

Tim


----------



## Michael Bateman (Sep 27, 2017)

Does anyone know how to tap the Adobe metadata written to disk outside of Lightroom, Photoshop, or Bridge?

(Marking a file with a color label does not seem to translate to macOS tags, nor do the pick flags, etc.)

What I want to do is work off a fast local drive that's backed up frequently to a NAS. One of the problems I have is deletion. I want to first mark a file with the rejected flag, then wait for that to sync to the NAS before actually telling Lightroom to "delete the files marked for rejection."

Then I just need a shell script to run across the NAS volume where the photos are kept and physically delete the files with the rejected flag set. (Otherwise, when I go back to an older project, the deleted files keep coming back to haunt me.)

I shoot a lot of birds you see. Usually in RAW and sometimes also with bracketing and very often shooting hundreds of frames trying to catch that “fight shot!”  

So I need fast local storage and either an unlimited amount of archive space or some ability to manage it! ; )

Michael


----------



## Gnits (Sep 27, 2017)

Can you write your own scripts?


----------



## Linwood Ferguson (Sep 27, 2017)

Michael Bateman said:


> I want to first mark a file with the rejected flag then wait for that to sync to the NAS before actually telling LightRoom to “delete the files marked for rejection.”
> 
> Then I just need a shell script to run across the NAS volume where the photos are kept and physically delete files with the rejected flag set.



OK, I'll bite -- why wait for it to sync if your plan is to delete what it just sync'd?



Michael Bateman said:


> Does anyone know how to tap the Adobe Metadata written to disk outside of LightRoom, Photoshop, or Bridge?



First, the metadata is only written if you have the option turned on (which slows down Lightroom), or if you do a "write metadata" explicitly.  It's fine to do either; just mentioning it.

Secondly, WHERE it is written depends on file type; e.g. it may be different in TIFF, PSD, JPG, and regular raw.  I can't recall for DNG.  While it's doubtful you use all of those, you might use some.  In regular raw it's in an XMP file, and scattered hither and yon.  The simplest way to find out, for any field of interest, is this:


1. Without the file marked, copy the XMP somewhere else.
2. Mark the file.
3. Do a text file comparison between the XMP file before and after.

If it's not an XMP file, then unless you love pretty technical editing, you can't "see" the metadata directly.  You might look at a tool like ExifTool to extract it separately and then search the extraction, but now it's getting even more complicated.
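The before/after comparison can be sketched from the shell. The two stand-in sidecars below are tiny fabrications (real XMP files are much larger), just to show how a changed field surfaces in the diff:

```shell
#!/bin/sh
# Stand-in "sidecars": a copy saved before marking, and the file after a
# one-star rating was applied and metadata written. The xmp:Rating
# attribute name matches what Lightroom writes; everything else is fake.
printf '<x:xmpmeta xmp:MetadataDate="2017-09-27T15:05:19-04:00"/>\n' > before.xmp
printf '<x:xmpmeta xmp:MetadataDate="2017-09-27T15:10:19-04:00" xmp:Rating="1"/>\n' > after.xmp

# diff exits 1 when the files differ, so guard it if running under set -e.
diff before.xmp after.xmp || true
```

With real files the procedure is the same: `cp IMG_1234.xmp IMG_1234.before.xmp`, mark the image in Lightroom, write metadata, then diff the two copies.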

On the Mac there are a bunch of comparison programs available, though I never use any (here is a list I ran across; I know nothing about them, just a starting point).

Third, finding them on the NAS depends on the tools you have.  macOS does have grep, like regular Unix; that's very powerful and can find and delete at the same time.
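For fields that do make it into the sidecar (ratings, labels, keywords), plain grep is enough to find candidates across a NAS tree. A runnable sketch with fabricated sidecars (the directory and file names are made up; `xmp:Rating` is the real attribute):

```shell
#!/bin/sh
# Fabricate a small tree of sidecars to search.
mkdir -p demo_nas
printf '<x:xmpmeta xmp:Rating="1"/>\n' > demo_nas/bird_001.xmp
printf '<x:xmpmeta xmp:Rating="5"/>\n' > demo_nas/bird_002.xmp

# -r recurses, -l prints only the names of matching files. Pipe the
# list into a delete step (for both the sidecar and its raw file)
# once you trust the output.
grep -rl 'xmp:Rating="1"' demo_nas
```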

Fourth, and most important (and a bit of a surprise): the Reject flag is not written to disk in the XMP file.  It's apparently only stored in the catalog.  I just did a trial run and a comparison before and after, and I don't see it there.  So you'd have to select all the rejected files, mark some other metadata that is written, and then go by that.

But I'll come back to the first question: why back it up before you cull, if your intent is to delete the backup?  I would get it if the backup were a safety copy you kept, but why go to the trouble and then delete it?  Cull first?

If the reason is that the backup runs automatically and would catch the files before you cull (and if it's OK not to back up that fast), do your initial imports into a folder on the same fast disk that is set not to back up, e.g. two parent trees like \photos and \photosNoBack, with the same folder names underneath.  Then, as soon as you are done culling, delete the rejects and use Lightroom to move the subfolder (e.g. \photosNoBack\20170917) to the backed-up folder (\photos\20170917).  If they are on the same disk it's instant -- no photos are actually copied; it just moves the subfolder and updates the catalog.


----------



## Michael Bateman (Sep 27, 2017)

Gnits said:


> Can you write your own scripts?


Yes. What I can’t do is ask a shell script to take an action based on an Adobe Metadata Tag. 

That’s the question I am asking. 

Thanks!!

Michael


----------



## tspear (Sep 27, 2017)

Michael Bateman said:


> Yes. What I can’t do is ask a shell script to take an action based on an Adobe Metadata Tag.
> 
> That’s the question I am asking.
> 
> ...



No. Best bet: write a shell script to look for missing files. Look at all the files in the backup and compare against the master. If a file is not found on the master, delete it.
I have written such scripts before, in Java and in Bash, so it can be done. I do not have any handy examples.
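A minimal sketch of that compare-and-delete idea, using `find`, `sort`, and `comm`. The directory names are hypothetical; the demo fabricates a master and a backup tree, where the backup still holds one file already culled from the master:

```shell
#!/bin/sh
# Hypothetical trees: set these to your real master and backup paths.
MASTER=master_demo
BACKUP=backup_demo
mkdir -p "$MASTER" "$BACKUP"
touch "$MASTER/keep.cr2" "$BACKUP/keep.cr2" "$BACKUP/culled.cr2"

# Sorted relative file lists for each tree.
( cd "$MASTER" && find . -type f | sort ) > master.lst
( cd "$BACKUP" && find . -type f | sort ) > backup.lst

# comm -13 keeps only lines unique to the second list, i.e. files that
# exist in the backup but no longer in the master. (Plain `read` is fine
# for a sketch; paths with embedded newlines would need find -print0.)
comm -13 master.lst backup.lst | while read -r f; do
  rm "$BACKUP/$f"
done
```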

Tim


----------



## tspear (Sep 27, 2017)

Actually, even better: use rsync. The functionality is built in.

Tim


----------



## Michael Bateman (Sep 27, 2017)


Ferguson said:


> OK, I'll bite -- why wait for it to sync if your plan is to delete what it just sync'd?



My fast local drive is backed up per best practices. 

It’s backed up to the NAS. 

As I work, changes are written to the fast local drive. A cron task syncs any changes to the NAS backup in the background.

Now, in this scenario, running "Delete all photos marked as rejected" only deletes the local copy. Lightroom does not itself know about the backup I have created outside of Lightroom on the NAS. And if the local copy is deleted before the "rejected" tag syncs to the backup, the file won't be deleted off the backup.

Yes there are ways of syncing deletions and I have some long, complicated, boring reasons why those won’t work with my workflow. 

This is entirely tangential to my question, but it's all very much appreciated, and it's entirely possible that you'll help me in a way I was not anticipating.

There seems to be a unifying standard for how metadata is written to disk, or Bridge would not see changes made in Lightroom. I am hoping it's not a proprietary Adobe format that won't allow me to script my workflow outside of Adobe products.

In short: does anyone know of a shell command that acts on a filespec based on an attribute set from Lightroom? Or any framework that would allow me to script a workflow outside of Bridge/Lightroom, regardless of what I am trying to accomplish? (Which is still open for discussion, mind you; I just would like to know the answer to this query if someone has one.)

Thanks!

Michael


----------



## Linwood Ferguson (Sep 27, 2017)

Michael, the reject flag isn't in the file; it's in the catalog. One option is to write a SQL script against the catalog before you delete the files, one that creates a delete command for each filespec marked rejected; then just run those commands (possibly edited to adjust the root folder) on the backup system.
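A sketch of that approach with the sqlite3 command-line tool. The table and column names here (Adobe_images.pick = -1 for rejects, joined through AgLibraryFile, AgLibraryFolder, and AgLibraryRootFolder) are as commonly observed in Lightroom 6/Classic catalogs; verify them against your own .lrcat, always work on a copy, and never touch the catalog while Lightroom has it open. The demo fabricates a toy catalog with just that subset of the schema so the query can be shown end to end:

```shell
#!/bin/sh
CAT=demo.lrcat
rm -f "$CAT"

# Toy catalog: only the tables/columns the query needs, with one
# rejected image (pick = -1) and one normal image.
sqlite3 "$CAT" "
  CREATE TABLE Adobe_images        (id_local INTEGER, rootFile INTEGER, pick REAL);
  CREATE TABLE AgLibraryFile       (id_local INTEGER, folder INTEGER, idx_filename TEXT);
  CREATE TABLE AgLibraryFolder     (id_local INTEGER, rootFolder INTEGER, pathFromRoot TEXT);
  CREATE TABLE AgLibraryRootFolder (id_local INTEGER, absolutePath TEXT);
  INSERT INTO AgLibraryRootFolder VALUES (1, '/Volumes/FastDrive/Photos/');
  INSERT INTO AgLibraryFolder     VALUES (10, 1, '2017/birds/');
  INSERT INTO AgLibraryFile       VALUES (100, 10, 'IMG_0001.CR2');
  INSERT INTO AgLibraryFile       VALUES (101, 10, 'IMG_0002.CR2');
  INSERT INTO Adobe_images        VALUES (1000, 100, -1);
  INSERT INTO Adobe_images        VALUES (1001, 101, 0);
"

# Reassemble the full path of every rejected image. Pipe this list into
# rm (with the root folder rewritten to the NAS mount) once trusted.
sqlite3 "$CAT" "
  SELECT rf.absolutePath || fo.pathFromRoot || fi.idx_filename
  FROM Adobe_images i
  JOIN AgLibraryFile fi        ON i.rootFile    = fi.id_local
  JOIN AgLibraryFolder fo      ON fi.folder     = fo.id_local
  JOIN AgLibraryRootFolder rf  ON fo.rootFolder = rf.id_local
  WHERE i.pick = -1;
"
```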

But... did you consider my idea of not backing up the folder with new images until you have culled?   Or are you culling old items, not just a recent shoot?


----------



## Michael Bateman (Sep 27, 2017)

tspear said:


> Actually, even better. Use rsync. Functionality is built in.
> 
> Tim


Thanks very much. I am totally up for that, but I sometimes delete files from the local fast drive that I still want to be able to recover. Imagine, for instance, that I want to keep all five-star photos from a year ago and all four- and five-star photos from up to six months ago. So I delete a bunch of one-, two-, three-, and four-star photos from the local drive accordingly. I do this NOT wanting to delete them from the backup -- as distinguished from photos I never want to see again for the rest of my natural life. 

Now, I do not ACTUALLY rely on the star ratings for my workflow; I just used that as an example. I know this is a forum for workflow strategies, but I am asking a specific question that, if I had an answer to it, would let me more easily engage with all of you on what I might do differently. 

Am I really the only one who wants to automate a workflow in some fashion outside of Lightroom that uses metadata written by Lightroom? It seems this would be a pretty handy thing to be able to do, yes? Workflow automation that Works, that Flows, and is Automatic?! ; )

Can rsync tap the Adobe metadata in the filespec?! That would do it, I think. 

Thanks everyone for indulging me in this quest!

Michael


----------



## Michael Bateman (Sep 27, 2017)

Ferguson said:


> Michael, the reject flag isn't in the file; it's in the catalog. One option is to write a SQL script against the catalog before you delete the files, one that creates a delete command for each filespec marked rejected; then just run those commands (possibly edited to adjust the root folder) on the backup system.
> 
> But... did you consider my idea of not backing up the folder with new images until you have culled?   Or are you culling old items, not just a recent shoot?


I am pretty sure the reject tag is written out. If you mark a file as rejected in Lightroom and close Lightroom, you can see it in Bridge.

Thanks for the suggestion but 1) I never work on my only copy of something, and 2) yes actually sometimes I shift from one project to another.


----------



## Gnits (Sep 27, 2017)

As you can write scripts, I can suggest the following.

Select within Lr the images you wish to process on your NAS (e.g. all files marked for deletion in Lr).

Then you can use LrTransporter or JB ListView to create a CSV file (one record per image) which contains the fields of your choice available from the metadata.  The CSV file only holds details for your selected records.

Use the CSV file combined with a script or an app to process the files on your NAS as you see fit.

So, most likely you will use the existing filename/folder name to identify the image you wish to process on the NAS, but you will need some rule to identify it there (e.g. change the leading characters in the folder-name string). 

You will need to devise your own workflow and controls to make sure you work in the proper sequence.
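The NAS-side half of that workflow can be sketched in a few lines of shell. The export file here is fabricated (a one-path-per-record CSV; match the columns to whatever you actually export from LrTransporter/ListView), and the path-prefix rewrite is the hypothetical "change the leading characters" rule:

```shell
#!/bin/sh
# Fabricated stand-in for a CSV exported from Lr (one path per record).
cat > export_demo.csv <<'EOF'
/Volumes/FastDrive/Photos/2017/birds/IMG_0001.CR2
/Volumes/FastDrive/Photos/2017/birds/IMG_0002.CR2
EOF

# Rewrite the local prefix to the (hypothetical) NAS prefix, then act.
# The echo makes this a preview; drop it once the paths look right.
while read -r src; do
  nas="/Volumes/NAS/Photos${src#/Volumes/FastDrive/Photos}"
  echo rm "$nas"
done < export_demo.csv
```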

I use LrTransporter or JB ListView to create CSV interfaces between Lr and lots of other applications, including InDesign (to create preformatted high-quality prints using Title and other metadata fields), MS Word (to create A4 PDF documents with an image, title, and caption per page), and Photoshop (to apply a template to a set of images).  

I hate inefficient workflows and am upset that Adobe does not fully understand the word "workflow," especially between their own products (i.e. why can we not create books in InDesign without creating intermediate files from Lr?). 

There are some 'gotchas' which you may bump into if using this with other Adobe apps.  If you have specific queries I will be happy to try to answer.


----------



## PhilBurton (Sep 27, 2017)

Michael Bateman said:


> Does anyone know how to tap the Adobe Metadata written to disk outside of LightRoom, Photoshop, or Bridge?
> 
> (Marking a file with a color does not seem to translate to the OSX tags, etc. Nor do the other pick flags, etc. )
> 
> ...


If you want to preview your RAW files outside Lr and also mark them for deletion, try FastRawViewer.  It's very inexpensive and fast.

Phil Burton


----------



## tspear (Sep 27, 2017)

Michael Bateman said:


> Thanks very much I am totally up for that but I sometimes delete files from the local fast drive that I still want to be able to recover. Imagine for instance I want to keep all five star photos from a year ago and all four and five star photos from up to six months ago. So I delete a bunch of 1,2,3 and 4 star photos from the local drive accordingly. I do this NOT wanting to delete them from the backup. This, as distinguished from photos I never want to see again for the rest of my natural life.
> 
> Now, I do not ACTUALLY rely on the Star ratings for my workflow. I just used that as an example. I know this is a forum for Workflow Strategies but I am asking a specific question that if I had an answer to I could more easily engage with all of you on what or how I might do differently.
> 
> ...



No, rsync cannot look at such metadata. 
Sounds like a complex workflow. In any case, I see a few possible choices:
1. Add a manual step such as Gnits suggested to capture the file list, then process it manually via another script.
2. Write a script that, when Lr is closed, opens the catalog file via SQLite and queries for all files matching the reject flag.
3. Write a Lua script and add it to Lr to do what you want.
4. Change your workflow.

Tim


----------



## Linwood Ferguson (Sep 27, 2017)

Michael Bateman said:


> I am pretty sure the reject tag is written out. If you mark a file as rejected in Lightroom and close Lightroom, you can see it in Bridge.



I did this in 2015.12:

1. Write XMP.
2. Copy/save that file.
3. Reject that image.
4. Write XMP.
5. Diff the two XMPs.
I got this, which doesn't seem to indicate the flag:

```diff
19c19
<    xmp:MetadataDate="2017-09-27T15:05:59-04:00"
---
>    xmp:MetadataDate="2017-09-27T15:05:19-04:00"
77c77
<    xmpMM:InstanceID="xmp.iid:edd95aec-8e62-674d-acf4-1d98cf0bfe2a"
---
>    xmpMM:InstanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
231,232c231,232
<       stEvt:instanceID="xmp.iid:edd95aec-8e62-674d-acf4-1d98cf0bfe2a"
<       stEvt:when="2017-09-27T15:05:59-04:00"
---
>       stEvt:instanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
>       stEvt:when="2017-09-27T15:05:19-04:00"
```

I did it again with a star rating of 1 star and got a clear difference:

```diff
19,20c19
<    xmp:MetadataDate="2017-09-27T15:10:19-04:00"
<    xmp:Rating="1"
---
>    xmp:MetadataDate="2017-09-27T15:05:19-04:00"
78c77
<    xmpMM:InstanceID="xmp.iid:57bb73ad-ed0c-7e49-b458-ba0a0669e2a3"
---
>    xmpMM:InstanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
232,233c231,232
<       stEvt:instanceID="xmp.iid:57bb73ad-ed0c-7e49-b458-ba0a0669e2a3"
<       stEvt:when="2017-09-27T15:10:19-04:00"
---
>       stEvt:instanceID="xmp.iid:48b76329-862c-df48-940a-2bbffb3f24c2"
>       stEvt:when="2017-09-27T15:05:19-04:00"
```
I have no explanation for why you can see it in Bridge. I don't use Bridge, but I downloaded it and tried it, and I cannot see any sign of a reject when Bridge views the file.

Maybe you did it the reverse way -- rejected in Bridge and saw it in Lightroom?


----------

