Emma Norris from the Institute for Government on What can we learn from London 2012. She's the author of this report.
Scale and complexity were huge.
Serious challenges along the way, including security and finance.
But it was perceived to be very successful.
So, how did we deliver such a complex, risky project so well?
Two main ways of doing it: new ways of working and new ways of engagement.
New ways of working
Politics was dealt with head on, using its advantages and minimising its risks. Real openness between the different parties turned it into an advantage.
People and skills.
World-class recruitment and leadership in finance, HR, IT, project management etc.
Also hired the best people into the teams: mixed, multi-skilled teams.
Stability: personnel stayed the course.
Design and governance
Delivery bodies were built from scratch, with responsibility spread across different government departments. Everyone had clear roles and responsibilities in the different organisations. Many were placed at arm's length from government.
Programme management and delivery
Failing to deliver on time was not an option!
Focused on getting the scope right, and didn't change it
Large investment in project management: £725m spent on it!
Delegated authority to bodies such as TfL and the Olympic Delivery Authority.
Some failures, eg G4S security: they tried to treat it as business as usual, and didn't step up and adopt the new ways of working.
New ways of engagement
Budget. Public sector projects often go over budget, but this process was transparent, with quarterly reporting that drove efficient behaviour.
Vision
LOCOG created a vision that tied everyone together whilst allowing flexibility to meet all agendas, including benefits to London, the country, and sports participation.
New skill sets.
Civil servants developed new expertise in major project management and delivery
Commercial skills and intelligent client role developed in partnership with private sector.
Are these skills being redeployed?
Some overarching lessons:
Project trumps silo
Bring together right people in effective teams
Personnel stability and personal relationships matter
Political cooperation creates space for project success
Change and time discipline are crucial
Limit innovation
Arm's length bodies and the public sector can deliver
Budget transparency matters
Design in safety and sustainability from the start
Beware false economies
Plan, assure, test
Be bold and ambitious
Lots of lessons from this that can be applied to all major projects. Especially when £725m is available ;-)
Excellent talk, and I suspect the report I linked to at the beginning would be an interesting read.
- Posted using BlogPress from my iPad
Thursday, 16 May 2013
How to make sure a 70 year old business model stays relevant
Next up, Mike Dixon from Citizens Advice.
An interesting body: considered to be lovely, and trusted. Everyone has a soft spot for them.
12.6m unique users of the website in the last year, but not a good site, rated about 3/10. So, it's not all about technology!
Now looking to digitally transform their services. Their guiding principles are:
Content tailored to user profiles
Flexible digital publishing
Truly accessible, responsive design
Assisted digital-ready content driving core processes across channels
Easy out of the box solutions for local bureaux
Devolved content creation and management
Integrated social functions, including peer-to-peer sharing and knowledge sharing.
Can't buy the above off the shelf!
Again, taking a digital by design approach with agile development. Also, pruning web pages as they move to a new CMS.
Looking at a personalised social intranet. It has to be more compelling than the Daily Mail sidebar of shame - or, as he put it, the sidebar with a newspaper attached. If your content isn't compelling, people won't read it.
Fast moving digital content and debate is as important as more traditional forms of influence. Need a mixture of:
Slow web, downloads, projects, serious pieces
Integrated team and people blogs
Twitter, Facebook, integrated feeds
Tumblr etc, fast stuff. Snappy, rolling, short cuts
Six months ago they had no social media presence; now they have >300 active Twitter accounts, and sentiment analysis shows a very positive reaction. Huge number of followers, and it was really driven by one person. So, you can change your organisation's presence and reach very quickly.
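As an aside from me rather than the speaker: the kind of sentiment analysis mentioned here can be sketched very simply. Below is a minimal, purely illustrative lexicon-based scorer in Python - the word lists and example tweets are invented, and real monitoring tools use much richer models.

```python
# A minimal, illustrative lexicon-based sentiment scorer.
# The lexicon and example tweets are invented for this sketch.
POSITIVE = {"great", "helpful", "thanks", "love", "brilliant"}
NEGATIVE = {"useless", "slow", "awful", "hate", "confusing"}

def sentiment(text: str) -> int:
    """Positive score > 0, negative < 0, neutral == 0."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "Great advice, thanks so much!",
    "Website is slow and confusing",
]
for t in tweets:
    print(sentiment(t), t)
```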
Great talk, and although from a different sector, much in common.
- Posted using BlogPress from my iPad
Data
Next up 3 short talks about data
1. The technical director of the Open Data Institute on Adapting to an Open Data World
What is open data?
Data for everyone, not limited by funding, who you are, or what you intend to do with it.
Data that is reusable, published with a permissive licence, machine readable in a standard format, reliable and trustworthy.
Has to be good enough quality to base decisions on.
Accountability - citizens expect to know more.
Protection of Freedoms Act 2012: the right to have the data behind an FoI response in a machine-readable form so you can analyse it yourself.
Move to more transparency, eg the Tesco website with detailed information about all of their products. (Can't help thinking this might not be the best example following the horsemeat scandal :-))
Open data can help with efficiencies. Can inform key activities, make better decisions.
Also improves collaboration, eg OpenStreetMap, legislation.gov.uk.
To get the best out of open data, have to engage a community around it.
Use of open data requires tools - publication, analysis, visualisation, interactive guides, questionnaires.
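To make the analysis point concrete, here's a minimal sketch (mine, not the speaker's) of consuming a machine-readable open dataset in Python. The URL and column names are hypothetical stand-ins - any CSV published under a permissive licence would do.

```python
# A minimal sketch of consuming machine-readable open data.
# The URL and column names below are hypothetical.
import pandas as pd

URL = "https://example.data.gov.uk/spend-over-25k.csv"  # hypothetical dataset

df = pd.read_csv(URL)

# Quality checks first - the data has to be good enough to base decisions on.
print(df.isna().sum())                        # missing values per column
print(df.duplicated().sum(), "duplicate rows")

# Simple reuse: total spend by department (hypothetical columns).
summary = df.groupby("department")["amount"].sum().sort_values(ascending=False)
print(summary.head(10))
```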
Better quality data is easier to reuse. Need to focus on quality that makes a difference.
Open Data Institute trying to help organisations who are publishing and consuming data.
They run short courses, lectures, online guides, training and consultancy.
2. Head of public sector consulting from IPL talking about Data Headaches.
The total amount of global data grew to 2.7 zettabytes during 2012, an increase of 48%. It's not just structured data anymore, but mainly unstructured. Digital by default can only mean one thing: more data. It's a double-edged sword: online delivery of services cuts costs, but there is a cost in managing the data produced. And it's not solely a technology issue; it requires people with the right skills.
People need to be skilled in information management, and this requires a culture change - it's not something that "IT can do".
Regulation and legislation provide the stick (CEOs can go to jail). That's OK then, as long as it's not CIOs......
IM basics: housekeeping, metadata, quality. Everyone should be responsible for this with their own data. Think deleting emails. :-)
But, will need specialists to manage specialist data. Need skills in:
Assurance, data quality, master data management
Retention, records management, archiving, digital continuity (maintaining access in the future)
Finding: enterprise search.
To get true value out of data, you need not just to store it but to analyse it: trend analysis, predictive analysis, performance analysis.
Data visualisation with dashboards, heat maps, bubble maps.
Layering data, eg with GIS.
Need people with skills in data analytics, and information designers to exploit the data in a good presentation layer.
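As a small illustration of the visualisation point (again mine, not the speaker's), here's a heat map sketch in Python using matplotlib. The data is randomly generated purely to show the technique.

```python
# An illustrative heat map: e.g. service requests by day and hour.
# The data here is random, generated only to demonstrate the plot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
activity = rng.poisson(lam=20, size=(7, 24))  # 7 days x 24 hours

fig, ax = plt.subplots(figsize=(10, 3))
im = ax.imshow(activity, aspect="auto", cmap="YlOrRd")
ax.set_xlabel("Hour of day")
ax.set_ylabel("Day of week")
ax.set_yticks(range(7))
ax.set_yticklabels(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])
fig.colorbar(im, ax=ax, label="Requests")
plt.show()
```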
Can't ignore it - it's only going to get worse, and we have to do something about it. Asset management is critical!
3. Independent consultant talking about legal aspects of data and cloud services
The legal risks of new technologies are not just technological, but reputational.
All very contextual, and little certainty in this area.
All countries have different laws, but there is a lot of guidance available.
Interesting clarification on whether data has to be kept in the UK (it doesn't).
Her view is that all personal data has to be kept in the European Economic Area. That doesn't fit with our view, and she hasn't mentioned Safe Harbor - though in a question at the end it was acknowledged that it does apply.
- Posted using BlogPress from my iPad
In With The New - 21st century government
In London today for the Eduserv Symposium: In With The New. An interesting agenda, based on delivering customer-centric services in a "digital by default" era. None of the speakers are from the HE sector, which makes it doubly interesting. I'll try and blog the key points, but as usual they may be in note form and you may have to fill in the missing bits yourself.
Opening session is David Cotterill from the Cabinet Office, formerly of DWP. I've blogged before about seeing David talk about Ideas Street, which was the catalyst for us purchasing Ideascale.
Today David is talking about 21st Century Government, the way the public sector is using technology to deliver services. He works in the Government Digital Service, which is very exciting.
Old model for technology in government, multi year sourcing contracts with a limited number of suppliers. Inadequate competition, smaller innovative suppliers locked out. Bad for users, bad for taxpayer, bad for growth.
IT was outsourced to one or more large suppliers. IT was treated as not particularly important, "noncore", so it could be outsourced.
Can't bundle IT up; need to break it down. Mission IT services and digital public services are unique services that meet customer needs - concentrate on these. Back office ERP and infrastructure are more commodity, and can be swapped in and out. Use open standards so you can swap to different providers and ensure decent competition. The legacy big contracts are being unbundled, and the effects are starting to be felt, with big savings being realised.
What is 21st century government? It will involve things like gov.uk. Simpler, clearer, faster. Build a platform and then build services on top, eg licence applications, e-petitions. Make it easier for people to do the things they need to do.
They've developed an iPad app for the PM to use to run the country :-)
Key part of digital strategy is to look at the big services that citizens most often require from government and change them so they are digital by default. Digital teams being created to change the way services are delivered.
Need multidisciplinary teams - developers, designers, product and service managers, policy, comms etc. Must start with user needs.
Can't build websites with tools designed for building bridges. Previously: a long requirements-gathering process, a very detailed spec, then development. Two years later, show it to users. Users don't like it!
Now there's more discovery work up front on what the user needs: produce an alpha, test it with users, then either throw it away and produce another alpha, or go to beta. This method is cheaper and faster, and meets users' needs better.
They have a dashboard for all GDS services, and a Government service design manual - eg before go-live, a minister must be able to complete a transaction on your service. There's also the Cabinet Office standards hub, which is open and people can contribute to. Definitely worth a look.
So, in summary: you can build services that meet user needs and create big savings if you use open standards and open platforms, put user needs first, and use agile, fast development.
Great talk from David, as usual.
- Posted using BlogPress from my iPad
Friday, 13 May 2011
eduserv Symposium Round up
A quick round up of some of the eduserv symposium sessions which I haven't blogged about in detail.
We had a lightning round - 4 very quick talks on different aspects of the UMF money which has been allocated to shared services in cloud computing. I've posted about this many times before, so won't go into much detail, but a couple of interesting extra items came up.
Dan Perry from JANET talked about the brokerage service. Its aim is to facilitate the uptake of off-campus data centres and cloud services. Is it cost saving, or quality and service improvement? A blend of both. It's also about reducing risk and addressing technical and business questions - easier to do as a broker.
The JANET brokerage role is a cross between a dating agency and marriage guidance. Wonderful quote, I look forward to seeing it in action!
Matt from Eduserv gave an outline of the cloud services they are developing for education. These are designed to address the major concerns of HEIs, including ensuring that the data remains in the UK, that they are integrated with JANET, and that they are low cost. They will operate out of Eduserv's new Swindon data centre with JANET connectivity, and by this summer will provide a shared services platform which the UMF services will run on. By the end of the year a sustainable business model will be in place. He confirmed that they are looking to compete with Amazon on price.
Phil Richards from Loughborough University talked about his views of cloud computing, especially after some recent work he's had to do which started off looking at how to rebuild his old data centre.
There's two sorts of IT activities: complex and innovative, and commodity.
We need to differentiate between the two, and outsource or share the latter. This will be a dynamic equilibrium as today's complex will become tomorrow's commodity.
We have a great distribution network for our commodity in JANET, which the private sector has to invent for itself. When HP went to a private cloud and reduced their data centres from 85 to 6, they had to invest hundreds of millions of dollars to create their own network - and still made enormous savings. Industrial scale can give huge savings in power.
Greenpeace has just released a report called How Dirty Is Your Data, which gives the size of corporate data centres, eg Microsoft at 303,000 sq ft, with many others similar. Is this where the critical mass is for power? If so, we're way behind - Eduserv's, for example, is 37,000 sq ft.
What's the killer app for the HE/FE cloud? Is it research tools or admin apps? More likely to be cheap virtual servers via a hybrid cloud, achieved through the JANET brokerage.
There has to be an exit strategy, mitigated by migration back to a local cloud.
Definition of not having an exit strategy. Owning an iPod and iPhone, buying lots of music from iTunes, and then deciding to buy an android device.
Terence Harmer, from the Belfast eScience Centre, talked about their experience of cloud.
The BeSC is entirely self-funding and doesn't use shared resources within the University infrastructure. They have no internal infrastructure for mail, calendars or chat rooms, and all project shared services have migrated to utility resources. They are in the business of turning internal kit off. Users are not interested in kit, but capabilities. They buy capacity and storage on demand, and play the market.
His advice? Don't go into the cloud half-heartedly. Don't pick up your server room and put it with a provider - that's not a cloud, it's a bunch of kit.
Don't adopt a single vendor.
The cost of cloud infrastructure is low. You can punch above your weight if you go cloud properly. They have 10 staff and 300 servers running simultaneously.
An interesting fact about the scale of digital media: the current entire BBC archive is 52PB, but iPlayer pumps out about 7PB every month. That's why ISPs don't like streaming media.
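A quick back-of-envelope check from me on that iPlayer figure, assuming decimal units and a 30-day month, shows why ISPs notice:

```python
# Rough sustained bandwidth implied by 7 PB per month of iPlayer traffic.
PB = 10**15                      # bytes, decimal units assumed
MONTH_SECONDS = 30 * 24 * 3600   # a 30-day month

bits = 7 * PB * 8
gbit_per_s = bits / MONTH_SECONDS / 10**9
print(f"~{gbit_per_s:.0f} Gbit/s sustained")  # roughly 22 Gbit/s
```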
- Posted using BlogPress from my iPad
eduserv Symposium, Above the Clouds
Armando Fox, from UC Berkeley, gave the final keynote at the Eduserv symposium, entitled Above the Clouds: A View from Academia.
The report, Above the Clouds, on which the talk was based is available here.
The talk centred around how his lab has used the cloud over the past few years, and some of the issues associated with that. It was a fast-paced talk, full of good information, and I'm just going to post a few key points; the video of the whole thing will be up on the Eduserv web site soon.
One of the things that had interested him and his lab was whether they could demonstrate that, using cloud-based services, an entrepreneur could prototype a great web app over a long weekend and then deploy it at scale. eBay had supposedly been developed over this timescale, but has had to be re-architected many times since to cope with scale problems.
They moved their services to Amazon's EC2 in 2008, and since then have spent $350,000 on Amazon Web Services. That's about 1/3 of a PhD student a month. It's allowed them to carry out many experiments (100 to 300 nodes most common, 900 max), have large-scale storage and carry out cloud programming.
They have done work that they could not have done without cloud services, and it has acted as a research accelerator, at a cheaper cost.
It has given students an experience they would not otherwise have been able to have. Administering, provisioning, sizing and delivering courses have been much easier on the public cloud than using UC instructional computing.
In terms of costs, capital (hardware, networking and power) is 5 to 7 times cheaper at 100k scale, ie when you have data centres with at least a hundred thousand servers in them.
Cloud operations are heavily automated, with thousands of machines looked after by one FTE admin.
This scale makes availability affordable, with wide-area disaster recovery facilities.
It's hard to compete on cost with cloud providers, and that's even with their margins, which are estimated to be big! However, more competition may bring costs down.
Cloud allows you to smooth out peaks and troughs, and not waiting in a queue accelerates research: you can run several experiments simultaneously, each using 100s of machines for 1 to 2 hours, without queuing up.
On the other hand, a lot of data is generated. For example, the LHC generates 60TB per day, and all of this data needs moving. In the US, long-haul networking is the most expensive cloud resource; UC Berkeley have found that it's easier, cheaper and quicker to ship the drives to Amazon in the post. In the UK we are lucky to have JANET, but we need to combine this with cloud providers, ie get them direct links to JANET.
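A rough calculation (mine) makes the "post the drives" point vivid, assuming fully utilised links and the 60TB-per-day figure from the talk:

```python
# Time to move one LHC day (60 TB) over various links vs a courier.
TB = 10**12  # bytes, decimal units assumed

def transfer_days(size_bytes: float, link_gbit: float) -> float:
    """Days to move size_bytes over a fully utilised link of link_gbit Gbit/s."""
    seconds = size_bytes * 8 / (link_gbit * 10**9)
    return seconds / 86400

data = 60 * TB
for gbit in (1, 10, 100):
    print(f"{gbit:>3} Gbit/s link: {transfer_days(data, gbit):5.2f} days")
print("  courier:      ~1 day, regardless of size")
```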
Does cloud create a single point of failure?
A 30-hour Amazon outage in April 2011, triggered by human error during a network configuration change. A good test case!
Netflix were largely unaffected, even though they are one of Amazon's largest customers, because Netflix had re-architected their software to think about how to deal with failure.
Non-redundant services were screwed, with catastrophic outages. Cloud does not buy you redundancy.
Would more operational expertise have resolved the outage faster? Should they have been able to recover faster? Interesting question!
Keeping up with innovation can be an issue with cloud - AWS has deployed 1 new service every two months.
- Posted using BlogPress from my iPad
eduserv symposium, Research data management
Kenji Takeda from the University of Southampton gave a great presentation on research data management. They are taking part in a JISC-funded project, the Institutional Data Management Blueprint, and the web site is here. There's a lot of information on the site, including a comprehensive report of a survey of researchers they carried out, and it's worth a look.
The survey asked questions such as "Where do you store your data?" Answers ranged from paper, to CDs, to local hard discs, local servers and off-site storage solutions, with only a minority using University-provided storage (with the associated resilience, security, disaster recovery etc).
"How much data do you have?" gave answers from paper records only, to many terabytes.
Not surprisingly, the answers to "How long do you store it for?" ranged from don't know, to forever.
I suspect that these responses would be mirrored in most of our institutions.
Southampton carried out a gap analysis to see where they needed to take action, and drew up a series of recommendations. In the short term these included developing an institutional data repository and a scalable business model (ie how the recommendations are going to be paid for). They have calculated that 1 petabyte of data costs £1m over 5 years.
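For comparison with your own institution's figures, that works out as follows - a trivial calculation of my own, assuming decimal units:

```python
# Southampton's figure: 1 PB costs £1m over 5 years.
cost_gbp = 1_000_000
years = 5
tb_per_pb = 1000  # decimal units assumed

per_tb_per_year = cost_gbp / (years * tb_per_pb)
print(f"£{per_tb_per_year:.0f} per TB per year")  # £200
```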
They also agreed to set up a one stop shop for researchers for data management advice and guidance.
In the medium term they are establishing a comprehensive and affordable back up service for all, and proposals to manage the complete research data life cycle.
Their ultimate long-term aim is that good research data management is embedded in all policies and procedures, and does not need to be considered separately. An institutional data management policy is needed to help researchers, by providing guidance on what is expected.
It has to be about building trust. If you want a quick win, give them hundreds of terabytes for free and back it up!
They have done some case studies, outlined in their report (accessed via the previous web link), especially in Archaeology, which lends itself to this sort of study. They collect a lot of data in many forms - laser scans, geophysics, CAD etc - and they understand the need for context and metadata. They have developed a SharePoint 2010 site for Archaeology data management, using Pivot to slice and dice the data based on the metadata.
A good talk, and good to get an academic perspective.
- Posted using BlogPress from my iPad
Thursday, 12 May 2011
Shared Services in IT Management
The next session was Chris Cobb talking about Shared Services in IT Management. A bit of déjà vu here, as the more observant of you will have noticed that he was speaking on a similar topic yesterday.
Some confusion of terms: shared services can mean many things - sharing within the same university, ie centralising and standardising; sharing between institutions; or outsourcing completely to a different provider.
Economies of scale, or critical mass both drivers for shared services.
Economies of scale such as JANET, UCAS, USS, SLC.
Critical mass: where an institution can't afford a large team, but a single person is a risk - things like out-of-hours IT support, internal audit, procurement.
We're being pushed down the economies-of-scale route, so how do we create that within our institutions? Need to identify those elements which lend themselves to economies of scale, ie transaction-based ones, and those that are decision support and strategic in nature, then concentrate on the transaction-based processes. We're guilty of looking at things at too high a level, ie the HR or finance function. Need to dig beneath the surface and drill down.
Need to get beyond the barriers. VAT is often quoted, but we already pay VAT on many things, eg non-staff costs, and our staff costs are higher than the private sector's. Service sharing between ourselves will be VAT exempt eventually.
Don't outsource a problem. Need to get the process sorted out first, but we're all going through efficiency programmes at the moment and looking at business process review, so this might be a good time to outsource.
Devil is in the detail, things are often more complex than initially perceived to be. Need to break things down to a functional level, and enterprise architecture will help with this.
Drivers for shared services used to be resilience and quality, but moving more towards cost savings and efficiency now.
Let's not think about outsourcing or sharing a whole student record system, but look at which processes we could. We're already doing some elements of it, eg for payments.
Four step approach to sharing services:
Disaggregate what we have, distinguish strategic from transactional.
Start transferring systems and services into the cloud, possibly with another institution to get joint procurement
Then look at sharing the architecture, eg a finance system with multiple operating entities
Then share the services, eg finance transactional processes.
Then Chris talked about the University Modernisation Fund; he sits on the Steering Group, as do I. I've blogged about it before, but to recap, funding is going into these areas:
JANET brokerage service for Cloud Infrastructure
National Research Data Curation Centre
Systems and services procurement service (first project will be a research management system)
Electronic resource management service
Secure document management service
Eduserv Symposium. Situation normal, everything must change
Am at the Eduserv symposium on Virtualisation and the Cloud. Will try and blog the sessions again, but not sure if I'll be able to keep up with the pace of yesterday - someone just described me as a blogger on speed! Anyway, here goes.
Opening keynote from Simon Wardley from Leading Edge Forum talking about Situation Normal, Everything Must Change.
We don't know what cloud computing is yet, but lots of definitions of it, including some given by the kittens which keep popping up in this talk!
There is a path by which innovations and business activities eventually become a commodity: commoditisation. Because of competition there's a constant drive for improvement, and demand and improvement drive commoditisation. It's happening now to computing: cloud simply reflects the path from product to utility in computing.
Why now? Some of the past barriers have gone. To be successful you need the concept of utility, suitability, the technology and, most importantly, a change in attitude, ie a willingness to adopt new models. As activities evolve and become ubiquitous, they lose their competitive advantage. Then they become suitable to be provided as a standard service, eg HR, payroll, finance.
So, you need: Concept, Attitude, Technology, Suitability. Conveniently, that spells CATS. There have been a lot of them so far in the slides. Guess you have to be here!
So, it's all down to risks and benefits.
Benefits are economies of scale, ability to focus on core activities, pay for use. Increased efficiencies, which could reduce costs.
Also it will increase agility. Eg time it takes to get a server up and running.
Also increases opportunities for use.
Commoditisation increases the rate of innovation. It enables it, and accelerates it. It provides a stable infrastructure base for innovation in higher-order systems.
All of these increase consumption. So, get more use, more innovation.....
The risks are mainly associated with transitioning from one model to another, and are around confusion, governance, trust, security and transparency.
Outsourcing risks are mainly around competition, lock in, control, suitability.
These are standard to the commoditisation of any utility, not unique to cloud.
To mitigate these risks you need many providers, open APIs and data interchange so you can get your data out.
Also, all providers have to run the same system for maximum interoperability.
This will reduce some of the outsourcing risks. There's some interesting stuff on open source going on in this space.
A hybrid model will reduce some of the transitional risks, but it comes at a cost, including lost economies of scale. Building a private cloud is expensive: you won't get the benefits, and you'll have to continually evolve to keep up.
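My own trivial illustration of the data interchange point above: whatever system holds your data, keep an export path in standard, machine-readable formats. The record structure below is invented purely for the sketch.

```python
# A small illustration of data interchange for avoiding lock-in:
# keep an export path in standard, machine-readable formats.
# The record structure here is invented for the sketch.
import csv
import json

records = [
    {"id": 1, "name": "alpha", "size_gb": 120},
    {"id": 2, "name": "beta", "size_gb": 45},
]

# JSON export - widely supported interchange format.
with open("export.json", "w") as f:
    json.dump(records, f, indent=2)

# CSV export - lowest common denominator for tabular data.
with open("export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "size_gb"])
    writer.writeheader()
    writer.writerows(records)
```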
Acceleration of the rate of innovation is the most exciting benefit of cloud.
Commodity services are repeatable, standard, linear in nature.
At other extreme there are chaotic, innovative, dynamic services.
Because things change as they evolve from one to the other, you need different techniques, eg for project management and organisational structure. One size does not fit all. The same is true for outsourcing and cloud.
Innovate, leverage, commoditise.
Pattern used by companies like Google and Amazon.
Cloud is more about new models of management than anything else.
When something moves from product to commodity it's almost always disruptive. Need to manage, and more importantly, leverage this.
Excellent opening session.
- Posted using BlogPress from my iPad