CARMEN is a $9M project building a scalable science cloud. Its focus is on supporting neuroscientists, who will use it to store, share and analyse hundreds of terabytes of data.
Understanding how the brain works is a major scientific challenge whose solution will benefit medicine, biology and computer science. Globally, over 100,000 neuroscientists are working on this problem, yet the data that forms the basis of their work is rarely shared, even though it is difficult and expensive to produce.
The CARMEN project (www.carmen.org.uk) is addressing these challenges by developing a scalable cloud architecture that enables metadata-supported data sharing, integration and analysis. An expandable range of services is provided in the cloud to extract value from raw and transformed data. This promotes the sharing of analysis services as well as data, and allows services to execute close to the data on which they operate. This is essential to avoid shipping vast quantities (terabytes) of data out of the cloud to the user's machine for analysis.
Internally, the CARMEN cloud is built as a set of Web Services. Through experience with a wide variety of e-science projects over the past eight years, we have identified a core set of generic services that we believe are needed to support science: a data repository for file and structured data, a metadata repository that allows users to locate and interpret data, a service repository with dynamic deployment onto compute resources, a workflow enactment engine, and a security infrastructure.
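To make the roles of the first two core services concrete, here is a minimal, purely illustrative sketch of how a data repository and a metadata repository might cooperate. None of the class or method names below come from CARMEN itself; they are hypothetical stand-ins for the interfaces such services expose.

```python
from abc import ABC, abstractmethod

class DataRepository(ABC):
    """Hypothetical interface: stores raw files and structured data in the cloud."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class MetadataRepository(ABC):
    """Hypothetical interface: lets users locate and interpret stored data."""
    @abstractmethod
    def annotate(self, key: str, metadata: dict) -> None: ...
    @abstractmethod
    def search(self, **criteria) -> list: ...

# Toy in-memory implementations, standing in for the real cloud-hosted services:
class InMemoryData(DataRepository):
    def __init__(self):
        self._store = {}
    def put(self, key, data):
        self._store[key] = data
    def get(self, key):
        return self._store[key]

class InMemoryMetadata(MetadataRepository):
    def __init__(self):
        self._meta = {}
    def annotate(self, key, metadata):
        self._meta[key] = metadata
    def search(self, **criteria):
        # Return keys of every record whose metadata matches all criteria.
        return [k for k, m in self._meta.items()
                if all(m.get(f) == v for f, v in criteria.items())]

data, meta = InMemoryData(), InMemoryMetadata()
data.put("rec-001", b"\x00\x01spike-train-bytes")
meta.annotate("rec-001", {"species": "rat", "signal": "spikes"})
print(meta.search(species="rat"))  # -> ['rec-001']
```

In the real system these would be Web Services, and analysis services would run alongside the data repository rather than pulling bytes to a client, which is the point of keeping computation close to the data.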
The talk will describe the design of the CARMEN system, explaining how it is designed to support thousands of users analysing terabytes of data. We will describe a typical neuroscience scenario and show how it is supported by the CARMEN prototype.