Cloud Computing 2009, The Forum, Bryanston

With the usual eats (ok, the brownies are good – chewy, but not gooey) welcoming those who chose to attend, the Cloud Computing Conference 2009 promises three potentially interesting presentations: a case study by iBurst, one by the University of the Witwatersrand, and a presentation about the security risks that cloud computing inherently presents. Updates will follow as presentations are made. Let’s hope it’s not a disappointing display of vendor pitches passed off as actual discourse about the potential and practical applications of the technology. Otherwise, this is nothing more than another “infomercial”. Here’s hoping…


Update: Annalise Olivier, a KPMG IT Advisory Services Manager, kicked off by trying to contextualise the technology and the power savings achievable through virtualisation, raising warning flags around the Protection of Personal Information Bill (soon to be the PPI Act) and King III with regard to security, privacy and data access. Mirek Novotny, from HP, did an HP pitch (“we are the greatest”). Only Clifford Foster, CTO of IBM South Africa, gave a good talk, specifically taking a step back and framing Cloud Computing as a methodological approach that uses current technologies such as virtualisation, Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) to leverage the computing power that’s out there.


Did you know?

synch.cc’s next private cloud deployment will encompass a distributed 10TB environment with HA failover – you heard it here first!

Update: After the presentation by David Ives from Microsoft South Africa (“The Cloud Market”), the case studies should hopefully follow… Cloud spend currently sits at $46bn, with an estimated value of $150bn by 2013 (that’s what Gartner says). BTW – Microsoft Azure is now live for development in the cloud — but you knew that already 🙂


Update: The iBurst presentation only had two good examples: the Obama campaign’s endorsement of the Red Cross, which managed to crash the Red Cross site within 15 minutes of the campaign announcing it, and Animoto’s scaling from 5 000 to 750 000 users with a jump from 50 to 3 500 servers. That’s it… Sigh…

Update: After a good discussion about the future of enterprise cloud computing over lunch with Andre Fredericks from Sanlam and Hannes Lategan from CA Southern Africa, now for the presentation by Scott Hazelhurst (acting Director – Bioinformatics, WITS) on Computing in the Cloud, a case study of Amazon’s S3 combined with EC2, and Hadoop. They worked on datasets ranging from 10MB to 1.5TB, with computationally intensive parameter-sweeping challenges and bursty workloads. WITS is looking at setting up a large private cloud for enterprise and academic computing.

In case you didn’t know, on the Amazon Cloud, S3 has a simple storage model: each user (credit card holder) can have up to 100 buckets, each holding an unlimited number of objects of up to 5GB apiece ($0.15/GB/month), and the objects in each bucket can be accessed remotely via string keys (virtual URLs). EC2 extends this by providing virtual hosts deployed via AMIs (Amazon Machine Images), supplied by Amazon or others, to run a full machine (even Windows 2k3, at a 30% surcharge) in the cloud on a “virtual computer” with a public and a private IP.
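For the curious, here’s roughly what that looks like in practice – a minimal sketch using the boto Python library; the credentials, bucket/key names and AMI ID are placeholders of my own, not anything from the talk:

```python
# Minimal S3/EC2 sketch using the boto library.
# Credentials, bucket/key names and the AMI ID are placeholders, not real values.
import boto
from boto.s3.key import Key

# S3: buckets hold objects (up to 5GB each), addressed by string keys.
s3 = boto.connect_s3('MY_ACCESS_KEY', 'MY_SECRET_KEY')
bucket = s3.create_bucket('my-research-data')
k = Key(bucket)
k.key = 'datasets/run-001/input.dat'
k.set_contents_from_filename('input.dat')        # upload a local file
k.get_contents_to_filename('input-copy.dat')     # ...and fetch it back by the same key

# EC2: launch a virtual host from an AMI; it comes up with a public and a private IP.
ec2 = boto.connect_ec2('MY_ACCESS_KEY', 'MY_SECRET_KEY')
reservation = ec2.run_instances('ami-12345678', instance_type='m1.small')
print(reservation.instances[0].id)
```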

Instance types are rated in EC2 units, where one unit corresponds to an entry-level machine as at 2007, giving uniform performance ratings. Starting at $0.085 per hour for a 1-EC2-unit machine, a 3.25 EC2 machine with 8 processors and 68GB RAM costs $2.40 per hour. With good documentation and access, usability is quite good, with solid command-line and GUI support for both EC2 and S3. You get full control, including full configuration control as root – which is both good and bad 😉
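As a back-of-envelope example (my own made-up job sizes, only the hourly rates come from the talk), a parameter sweep’s cost at those prices works out like this:

```python
# Rough EC2 cost estimate using the hourly rates quoted above.
# The sweep sizes and runtimes below are invented for illustration only.
SMALL_RATE = 0.085   # $/hour, 1 EC2-unit instance
BIG_RATE = 2.40      # $/hour, 8-processor / 68GB RAM instance

def job_cost(hours, rate_per_hour, instances=1):
    """Cost of running `instances` machines for `hours` wall-clock hours each."""
    return hours * rate_per_hour * instances

# e.g. a 48-hour parameter sweep spread over 10 small instances...
print(job_cost(48, SMALL_RATE, instances=10))   # 40.8 dollars
# ...versus 6 hours on one of the big machines.
print(job_cost(6, BIG_RATE))                    # 14.4 dollars
```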

In terms of efficiency, the Meraka and Centre for High Performance Computing clusters were used as comparative baselines, and EC2 rated quite well. Multi-core speed-up on EC2 was rather disappointing, but overall it’s a useful tool, specifically for well-defined computing components. Even with South African bandwidth, access was good.

Hadoop, on the other hand, is a framework for distributed computation and storage inspired by Google’s early work, aimed at petabyte (PB)-level processing (even zettabytes, ZB). Large files are chunked and replicated across multiple machines (i.e. high resilience) with loose consistency; ideally, the files don’t change much. Computation is expressed via functional programming’s map/reduce. Map: break up the data and apply the same function to each piece, e.g. squaring numbers (map(sqr, [1,2,3,4,5])). Reduce: combine the results back, e.g. by addition (reduce(add, [1,2,3,4,5])). ZooKeeper handles coordination of the distributed services, while Pig and its Pig Latin language provide a query/high-level processing layer.
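In plain Python, the map/reduce idea above looks like this – nothing Hadoop-specific, just the functional pattern the talk referred to:

```python
# The map/reduce pattern from the talk, illustrated in plain Python (no Hadoop involved).
from functools import reduce
from operator import add

def sqr(x):
    return x * x

numbers = [1, 2, 3, 4, 5]

# Map: apply the same function to every element independently (trivially parallelisable).
squared = list(map(sqr, numbers))   # [1, 4, 9, 16, 25]

# Reduce: fold partial results back into a single value.
total = reduce(add, numbers)        # 15, matching reduce(add, [1,2,3,4,5]) above

print(squared, total)
```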

Hadoop proved to be challenging but powerful.

Update: VirtualBox’s presentation is just another vendor pitch… sigh…

Update: After the way-overtime VirtualBox presentation, Markus Krauss of Novell described the Novell approach of minimising and bundling the required software around an application into a Workload that can then be deployed and managed using a whole variety of Novell applications, typically (as they are Novell, after all) running on a SUSE stack, though any hypervisor base platform can support it… His presentation, though with the vendor twist quite firmly present, didn’t rely on hype and glory as much as some of the others, and was thus more of a solutions presentation. Now and then he did trip over his words a bit, but that can be put down directly to the difficulty of translating German idioms into English 🙂