I was going to use my old laptop Zeus, but instead I’m trying to get the wiki back up and running on EC2. I got signed up, and got an EC2 instance up and running from an image pre-populated with Apache, MySQL, and PHP. I transferred over all my wiki files and the last MySQL dump I had, and restored from the dump. At first it gave some db errors because the database host was hardcoded in one place. I changed that, and also updated the user grants in the db to get around it.
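For anyone untangling the same thing, the fix amounted to pointing the wiki’s config at the new database host and re-granting the wiki’s db user from that host. A rough sketch, assuming a MediaWiki-style LocalSettings.php; the values here are illustrative, not my actual setup:

    // LocalSettings.php (or your wiki's equivalent config file)
    $wgDBserver   = 'localhost';   // was hardcoded to the old machine's hostname
    $wgDBname     = 'wikidb';      // illustrative values
    $wgDBuser     = 'wikiuser';
    $wgDBpassword = 'secret';

On the MySQL side, the matching change is re-creating the wiki user’s grant so the host part matches where the web server now connects from, e.g. 'wikiuser'@'localhost' instead of 'wikiuser'@'the-old-host'.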
Now I have no db errors, but a blank white screen.
More debugging to go.
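The usual suspect behind a blank white page from PHP is a fatal error being swallowed because display_errors is off. A minimal sketch for surfacing it, assuming stock PHP settings and nothing specific to this wiki:

    // Near the top of the wiki's entry script (or LocalSettings.php):
    // temporarily show errors in the browser instead of a blank page.
    // Remove again once the culprit is found.
    error_reporting( E_ALL );
    ini_set( 'display_errors', '1' );

Failing that, the fatal error usually still lands in Apache’s error log even when the browser shows nothing.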
If I get it all up and running, I won’t KEEP running it at EC2 for a variety of reasons. It will just give me the confidence to wipe my current setup on Cronus and rebuild there. But EC2 is proving to be a handy quick way to get another “machine” up and running to try some things out on.
Including the data transfer, and having left the instance running overnight while I slept, it has cost me $2.38 so far to have this EC2 instance up and running.
It should be interesting to try the new, bigger ones. The current EC2 small instance is pretty slow and, worse, inconsistently slow: it’s not just slow, it’s unpredictably so.
However, for certain things, one of my customers may be able to make use of the new large and extra large instances.
For permanent use, it’s definitely not terribly economical, as you seem to allude to. Between instance-hour charges, data transfer, and the lack of a static IP or persistent hostname between reboots, it’s more economical to lease a machine at a commercial colo (and cheaper still to host a customer-owned machine if one has access to cheap colo space).
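(For a rough sense of scale, assuming the small instance’s $0.10 per instance-hour rate: one instance running around the clock is 720 hours × $0.10, or roughly $72 a month, before any data transfer charges.)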
The main advantage over a colo is for applications where you need to scale up and down rapidly on demand, rather than just saying “I need X machines” and sticking with that for a while. But of course only certain applications fit that model.
As for how it will evolve over time… well, we shall see. :-)