Why won’t it matter? Because of technological innovations or the Mayan Apocalypse? Which reminds me, I have to check which date in December we have the Chess League on… Have to make sure the rounds get rated by mid-December. Those flipped poles are going to be murder on magnetic tapes and hard drives.
Technological innovations only work if there is a behind the scenes effort to keep the archives on current media, whatever it may be. And that is going to involve both time and money.
Perhaps in some future year we’ll see complaints about why the USCF is spending X percent of its annual budget maintaining archives.
Perhaps, but current archiving processes are not that complex or expensive. Files for something of this size no longer need to be compressed, they aren’t on tape, they aren’t tied to any particular media. We’ve reached the point where all that matters is the data and the format. The data is the data. Mainstream formats grandfathered. If updates are necessary they will likely be largely machine-performed, one time, and then be good again for a number of years. The issue is only bad if they are left in a static format - like paper.
This is what Oracle has been saying for the last decade. The fact is, though, that most companies refuse to trust any on-line storage medium with confidential data. They want this info under their own lock and key.
It depends on the situation. The University of Nebraska recently rejected one vendor for a new email server because that vendor could not guarantee that emails would be kept in secure sites within the United States, as required by many of their government research grants.
So am I, Kevin, so am I. I know some execs of multi-nationals who are petrified at the idea of on-line storage. There is not a database that cannot be hacked into.
Well, we really don’t have multiple locations. The USCF is a $3 million-revenue organization which is remarkably unsophisticated in its IT systems. We need significant improvement in this area - and we are starting from pretty far back. We have great systems in the office - for about 1997.
Agreed. Crashplan and/or Carbonite or similar things would suffice for most of our business backup.
Allen, one thing I’ve seen in chess - this cult of do-it-yourselfers - is that we will avoid “standard solutions” to save nickels and forget about the soft-dollar costs involved in compatibility and ease of use. So my questions are:
What are the areas of business systems that we need?
I would think that there is little that is spectacular about our needs, save for software devoted to chess-specific tasks.
Do we have (have we ever had) an IT plan that notes that IT hardware/software is a process (not an end-product) that needs to continue to evolve, and PLANNED for that evolution and expense? A simple example might be - to plan on replacing 1/3 of the computers every 3 years.
Prior to our legal/financial problems, we were replacing our main database server on about an 18 month to two year basis (early 2004, mid 2005 and early 2007). We have not replaced it since 2007.
At present our outbound Internet connection is too slow to support full scale offsite backup protocols. (Backing up a 48 GB database takes something like 24 hours, during which time all other IP traffic to/from the USCF office is affected.)
The USCF is in the process of installing a fiber circuit which should significantly upgrade our outbound data speeds, which may make using external backup vendors feasible. (There will likely be some down time later this month or in October when we switch over to the new fiber circuit, since that will involve assigning new IP addresses for all externally visible nodes.)
Our membership structure is probably not one that would easily fit into an existing package for membership organizations, and there are numerous places where the membership system is tied to chess-specific data, such as TLAs, TD certification and all the tournament/ratings data.
Don’t know if Carbonite does this, but one of the things I like about Crashplan is that it handles local and offsite backups both and will do them real-time if we want.
Maybe, maybe not. It depends on what you mean by “business backup”.
The office does not even have an effective network. Files are transferred by e-mail or sneakernet. We need just about everything in the office.
There are the office business systems and then there are the database and web systems. The office systems are unremarkable, the database systems are really mission critical.
There does not seem to have been a rolling 3-5 year plan. When I manage IT for accounting firms, that is what I keep in place and update every 3-6 months to guide investment. Phil Smith is working on such a plan now.
Real-time backup protocols may not work well with a relational database, because you cannot copy the physical database files (and there are several thousand of them) while the database is operational and expect them to be intact, unless you take special steps that can slow the database down while the backup is running (and that’s before taking into account the resources used by the backup process itself).
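For context, the usual way to get a consistent snapshot of a running PostgreSQL database is a logical dump with pg_dump rather than copying the physical files underneath the server. A minimal sketch - the database name and paths below are placeholders, not our actual configuration:

```shell
#!/bin/sh
# Nightly logical backup of a running PostgreSQL database.
# pg_dump produces a transactionally consistent snapshot without
# stopping the server, though it does use I/O and CPU while it runs.
# "uscf" and /var/backups are placeholder assumptions.

STAMP=$(date +%Y%m%d)

# Custom format (-Fc) is compressed and restorable with pg_restore.
pg_dump -Fc uscf > /var/backups/uscf-$STAMP.dump
```

A generic file-level backup agent pointed at the data directory would still see the files mid-write; the dump file it produces, by contrast, is safe to ship offsite with any backup tool.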
Do backup protocols like Carbonite even support Linux servers?
Ideally I would like us to have a three-system cluster with one master server and two slave servers, using the ‘hot standby’ replication procedures available in the latest release of our database platform, PostgreSQL, but now we’re talking about setting up at least TWO new servers. (The current server could become the third server in the cluster, although since it is 5 1/2 years old it might be too old to be reliable over the long haul.)
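For reference, streaming replication with hot standby in that release is driven by a handful of settings. A minimal sketch, with the hostname and user below as placeholder assumptions:

```
# On the master (postgresql.conf):
wal_level = hot_standby        # ship enough WAL detail for standbys
max_wal_senders = 2            # one sender per slave server

# On each standby (postgresql.conf):
hot_standby = on               # allow read-only queries during recovery

# On each standby (recovery.conf):
standby_mode = 'on'
primary_conninfo = 'host=master.example.org port=5432 user=replicator'
```

The standbys stay read-only but queryable, so reporting or ratings lookups could in principle be offloaded to them while the master handles updates.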
I don’t know yet if our new fiber circuit will be fast enough to support asynchronous replication to an off-site server. That’s one of the things we could test, though. (But of course we would need access to an off-site server with sufficient resources to handle serving as a slave server.)
The fiber circuit may also be fast enough to support going to an off-site server setup, but that may raise as many issues as it solves.
You would have to ask Phil Smith about the cost of the fiber circuit, I do not have that information, nor do I know its rated speed.
Tests on our current DSL connection have shown that outbound upload speeds (for compressed data) max out at about 175K bytes/second. That means uploading a large file (like a compressed dump of our entire database) would take over 24 hours.
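The arithmetic behind that estimate can be sketched as follows. The 175 KB/s figure comes from the DSL tests above; the roughly 3:1 compression ratio for the dump is an assumption chosen to match the "over 24 hours" observation, not a measured number:

```python
# Back-of-envelope transfer-time estimate for offsite database backups.
# UPLOAD_RATE is the measured outbound DSL speed quoted above;
# COMPRESSION is an assumed ratio, not a measured figure.

def transfer_hours(size_bytes: float, rate_bytes_per_sec: float) -> float:
    """Hours needed to upload size_bytes at a sustained transfer rate."""
    return size_bytes / rate_bytes_per_sec / 3600

RAW_DB_BYTES = 48 * 10**9   # ~48 GB database
UPLOAD_RATE = 175 * 10**3   # ~175 KB/s sustained outbound speed
COMPRESSION = 3             # assumed 3:1 compression for the dump

compressed = RAW_DB_BYTES / COMPRESSION
print(f"compressed dump: {compressed / 10**9:.1f} GB")
print(f"upload time:     {transfer_hours(compressed, UPLOAD_RATE):.1f} hours")
# upload time comes out to roughly 25 hours, i.e. "over 24 hours"
```

At these speeds even a well-compressed dump monopolizes the outbound link for a full day, which is why the fiber upgrade is a precondition for any offsite backup or replication scheme.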