Well, ain't this a blast from the past!
And thanks for your kind words @VoltVisionFrenchy.
After all these years I would say there is no universal storage solution that satisfies every project's needs. Some projects require very complex storage and retrieval features, others only speed, others just need to be finished tomorrow. So apart from the project's needs, it also depends on what experience the developer has.
That said, for large complex sets with inter-dependencies, not many things beat an old-fashioned relational database. In that field, SQLite is still extremely stable (it's used in almost every cellphone in the world), zero effort to deploy/maintain, and extremely fast, suitable for very large and complex datasets. For small data blobs like thumbnail images they claim it's ~35% faster than your regular filesystem.
If your files are larger than a small thumbnail, you should indeed only store the reference to the file path in your database. SQLite can run in memory or on disk, but it's meant to be used on the same machine, without having to start an external db server.
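To make that concrete, here's a rough sketch using Python's built-in sqlite3 module, storing only the path of each asset; the database file, table and column names are all just examples:

```python
# Rough sketch with Python's built-in sqlite3 module.
# 'assets.db' and the table/column names are example choices.
import sqlite3

con = sqlite3.connect('assets.db')  # creates the file if it doesn't exist yet
con.execute('''CREATE TABLE IF NOT EXISTS assets
               (id INTEGER PRIMARY KEY, name TEXT, path TEXT)''')

# store only a reference to the file on disk, not the file itself
con.execute('INSERT INTO assets (name, path) VALUES (?, ?)',
            ('thumb_01', 'C:/project/thumbs/thumb_01.jpg'))
con.commit()

# look it up again later
row = con.execute('SELECT path FROM assets WHERE name = ?',
                  ('thumb_01',)).fetchone()
print(row[0])
con.close()
```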
If you have multiple machines / processes and need a central database, I would probably use something like a Redis server.
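A very rough sketch of what that could look like with the third-party redis-py package (not bundled with TD, you'd have to install it into TD's Python); host, port and key names are just examples:

```python
# Sketch using the third-party redis-py package; host/port/keys are examples.
import redis

r = redis.Redis(host='localhost', port=6379)
r.set('show:current_scene', 'intro')   # any machine/process can write this
print(r.get('show:current_scene'))     # b'intro' -- and any other can read it
```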
But if you don't need advanced database features (or coding for databases is not your thing), there are many other ways in TD. You can store your information simply in DATs, which makes it easy to edit, and you can save/load them to .dat or .csv files.
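For example, something along these lines with a Table DAT (the operator name 'settings' is just an example from an imaginary network):

```python
# Sketch of keeping settings in a Table DAT; 'settings' is an example op name.
table = op('settings')
table.clear()
table.appendRow(['key', 'value'])
table.appendRow(['brightness', 0.8])
table.appendRow(['last_scene', 'intro'])

print(table[1, 'value'])                       # read a cell back: 0.8

table.save(project.folder + '/settings.csv')   # write the table out as .csv
```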
Very often I use Python objects like dicts, which you can save/load to the JSON file format.
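Something like this, plain Python, nothing TD-specific; the file name and keys are made up:

```python
# Keep project state in a dict and save/load it as JSON.
import json

state = {'brightness': 0.8, 'last_scene': 'intro', 'cues': [1, 4, 7]}

with open('state.json', 'w') as f:
    json.dump(state, f, indent=2)

with open('state.json') as f:
    state = json.load(f)
print(state['last_scene'])   # 'intro'
```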
Whatever format you choose, the management code is best written in a TD Extension.
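As a minimal sketch (class name, file path and method names are all just examples), the extension can simply wrap the save/load logic so the rest of your network only ever calls Save()/Load():

```python
# Minimal extension sketch wrapping JSON save/load; names are examples.
# In practice this class lives in a Text DAT referenced by the component's
# Extension parameter.
import json

class StorageExt:

    def __init__(self, ownerComp):
        self.ownerComp = ownerComp
        self.path = project.folder + '/state.json'

    def Save(self, state):
        with open(self.path, 'w') as f:
            json.dump(state, f, indent=2)

    def Load(self):
        with open(self.path) as f:
            return json.load(f)
```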
Also, these days you don't need temp files on disk anymore to work with TOPs: you can now get bytes from a TOP directly using the TOP class's numpyArray() or saveByteArray() methods. And you can load bytes directly into a Script TOP using copyNumpyArray(), copyCUDAMemory() or loadByteArray().
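For instance, a quick sketch of the numpy route (the operator name 'moviefilein1' and the onCook callback are just what you'd get in a fresh network):

```python
# In the Script TOP's callbacks DAT: copy another TOP's pixels straight
# into the Script TOP's output, no temp file on disk involved.
def onCook(scriptOp):
    pixels = op('moviefilein1').numpyArray()   # (height, width, 4) float32 array
    scriptOp.copyNumpyArray(pixels)
    return
```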
Hope this gives you some pointers and inspiration!
cheers, idzard