May 2, 2011

Read me, write me

Recently I ran into a timeless problem ... imagine what happens if you try to read from and write to a file simultaneously - you get junk. Within a single process space you have the option of synchronizing threads and working around the problem, but with more than one process it gets tricky. From what I can tell there are two options. The first is to devise a scheme with interprocess communication (IPC) to serialize the reads and writes. Unfortunately, if you're aiming for platform independence, that won't work well - not that it's impossible, but it would take quite an effort, since you'd have to implement the IPC for every platform you intend to support.
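
To make the single-process case concrete, here's a minimal sketch in Python - a shared lock serializes file access between threads, so a reader never sees a half-written file. The file path and function names are my own, purely for illustration:

```python
import threading

# One lock shared by all threads; whoever holds it owns the file.
_file_lock = threading.Lock()

def write_record(path, data):
    # Only one thread at a time may touch the file.
    with _file_lock:
        with open(path, "a") as f:
            f.write(data + "\n")

def read_records(path):
    # Readers take the same lock, so they never interleave with a writer.
    with _file_lock:
        with open(path) as f:
            return f.read().splitlines()
```

This works fine until a second process opens the same file, at which point the lock - which lives in this process's memory - protects nothing.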

I've thought about this for some time now, and I've concluded that there is a simpler, easier, and portable way to do it - a file server. It's not a classical FTP-type server per se, but more of a simplistic, network-oriented file management server. One would create a daemon which listens on some port and speaks a simple protocol. The daemon itself would be able to list/create/delete/read/write files on the host machine. Aside from a small overhead, it can work on the local computer (through a local port) as well as on a remote machine, which gives it the flexibility I'm looking for. It's nothing new in the world of programming, but it gives a portable way to implement a file-operation serialization primitive.
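
As a rough sketch of what I mean - not a finished protocol, just an illustration in Python - the daemon below accepts one-line commands (LIST / READ name / WRITE name data / DELETE name, a toy protocol invented for this example) and guards every file operation with a single lock, so concurrent clients are serialized:

```python
import os
import socketserver
import threading

ROOT = "served_files"       # hypothetical directory the daemon serves
LOCK = threading.Lock()     # one lock serializes every file operation

class FileHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode().strip()
        cmd, _, rest = line.partition(" ")
        # All requests funnel through one lock, so a READ can never
        # observe a half-finished WRITE, regardless of which client
        # (local or remote) issued it.
        with LOCK:
            if cmd == "LIST":
                reply = "\n".join(os.listdir(ROOT))
            elif cmd == "READ":
                with open(os.path.join(ROOT, rest)) as f:
                    reply = f.read()
            elif cmd == "WRITE":
                name, _, data = rest.partition(" ")
                with open(os.path.join(ROOT, name), "w") as f:
                    f.write(data)
                reply = "OK"
            elif cmd == "DELETE":
                os.remove(os.path.join(ROOT, rest))
                reply = "OK"
            else:
                reply = "ERR unknown command"
        self.wfile.write(reply.encode() + b"\n")

if __name__ == "__main__":
    os.makedirs(ROOT, exist_ok=True)
    # Binding to 127.0.0.1 covers the "local lock" case; binding to a
    # public interface would cover the remote one.
    with socketserver.ThreadingTCPServer(("127.0.0.1", 9000), FileHandler) as srv:
        srv.serve_forever()
```

The port number, command names, and served directory are all placeholders - the point is only that one lock inside one daemon is enough to serialize file access for any number of processes, on any platform with sockets.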

The purpose of all this is a project I plan to start in the near future. Since it would need both network client-server connectivity and a local file locking primitive, I've decided to merge the two concepts and use serialization for both cases.