As befits my newfound world of finance, there is a certain paranoia around what can and cannot be done, who is allowed to do what, to whom, and by whom. Suffice it to say it all needs to be auditable, verifiable, and secure.

Because it has to be secure, one of the current policy decisions is the use of secure managed file transfer. This system lets you transfer files from one machine to another via a “secure” pipe, using what is essentially a black box to direct the flow of information. Great for security; a nightmare at 3am when you wonder where the files have got to.

Which brings me to the point of this post. The Client mandated the use of secure managed file transfer, especially for sensitive data – a sensible practice, and one that was duly implemented.

In the late hours the monitors duly rang and warned us of the failure of a process dependent on the file that should have been delivered. To spare you the details of the night: the support guy could not be reached in time, and we decided that the file could be applied in the morning.

In the morning we learned that the file had not been produced at all. The method to use – in future, as we were imperiously informed – was to check for the file in a directory listing served by a webserver on the intranet.

WAIT.

A webserver?

Could we reach this from our server? Yes.

Can we use HTTPS? Yes.

HTTP? Um… yes?

So… when we can’t see the file arrive via XFB, we go to the server and look for it in the web directory – a directory accessible with nothing more than Perl and LWP. No audit, no black box, and very little monitoring…
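To show just how little stands between you and that “secure” file, here is a minimal sketch of the sort of check Perl and LWP make possible. The host, directory, and filename below are invented for illustration – the real intranet server and delivery were site-specific – but anything along these lines would do the job.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    # Hypothetical intranet listing and expected delivery (names assumed).
    my $url  = 'https://intranet.example.com/outbound/';
    my $file = 'daily_positions.csv';

    my $ua  = LWP::UserAgent->new( timeout => 30 );
    my $res = $ua->get($url);

    die "Could not fetch directory listing: " . $res->status_line . "\n"
        unless $res->is_success;

    # Crude check: does the auto-generated index page mention the file?
    if ( $res->decoded_content =~ /\Q$file\E/ ) {
        print "$file appears in the listing - delivery looks complete.\n";
    } else {
        print "$file not found - delivery has not happened yet.\n";
    }

Which is rather the point: a plain GET against an index page, readable by anyone with network access to the box, and not a trace of it in the audit trail.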

Some days I wish I were making this up.