Sand Hill: How Enterprises Can Bend the Programmer Learning Curve
5 June, 2012
Chris Carter (VP Business Development, Cloudsoft) writes for Sand Hill:
“Anyone who works in a corporate environment will tell you that trying to hire developers is probably just as stressful as raising venture capital, if not more so. Why? First, there is the challenge of finding the right kind of developers. Second, once you find the best people, even the most qualified developer can cost a fortune to train; so they seek out environments that can educate them faster and focus on their skill sets and interests.
Why is this an issue? Much more is needed for an enterprise to flourish than just sufficient capital. Enterprises need access to talent in order to execute their vision. Once the talent is found and hired, the new employees can rarely hit the ground running. Often it takes months to find and hire the right developers, and then even more time to “bend their learning curve” so they can successfully contribute to the business.
In order for enterprises to take advantage of the skill sets of new developers … ”
Continue reading the full article.
[ Sand Hill ]
CloudTweaks: Cloudy Apps – New Challenges And Complexities
29 May, 2012
Alex Heneveld (CTO, Cloudsoft) writes for CloudTweaks:
“New technologies often simplify some aspect of life, an aspect which was previously painful. But then, as soon as it is adopted, a technology presents new challenges and new complexities.
With cloud computing, you can get a new machine in minutes—less than a minute, in fact, with some of the leading systems. Alternatively, you can provision a new virtual datacenter with secure VLAN and as much storage and “core-age” as you need. Once this is done, the problem of manually sourcing an environment for your application goes away, as it becomes increasingly easy to source one automatically.
With applications, however, there is a new challenge: their … ”
Continue reading the full article.
[ CloudTweaks ]
Big Data for Finance: Clearing the Big Data Hurdle: The OSS Advantage
29 May, 2012
Chris Carter (VP Business Development, Cloudsoft) writes for Big Data for Finance:
“In today’s world there is a new understanding, the emergence of a new “reality” that is much, much different than what we had even a decade ago. This new reality of big data that exists within today’s enterprises cannot be underestimated. Big data is becoming more important in all industries, but none more so than in the finance arena, both in enterprises and big finance in Wall Street firms. Most businesses aren’t ready to manage this flood of data, much less … ”
Continue reading the full article.
JAXenter: CloudSoft open sources DevOps PaaS, Brooklyn
4 April, 2012
Chris Mayer has covered Brooklyn on JAXenter.
“Unlike the normal infrastructure-centric alternatives, Brooklyn is tearing up the rulebook with a new application-focused approach to managing resources and workloads, rather than working with the underlying foundations.”
“The flexibility within Brooklyn really sets it apart as there’s so much potential here. By breaking down the boundaries that often inhibit the managerial aspect of multi-cloud enterprise platforms and bringing in a dash of DevOps, Brooklyn could usher in a completely new approach altogether. It will certainly increase adoption enterprise-wide of PaaS, for sure.”
Read the full post on JAXenter.
[ via: JAXenter ]
CloudAve.com: Cloudsoft Makes DevOps Approach To PaaS Seamless
3 April, 2012
Krishnan Subramanian has covered Brooklyn on CloudAve.com
“What is Brooklyn? …In short, it is DevOps nirvana without getting the hands dirty.”
“Essentially, this announcement is solid development in the platform services market and this is going to accelerate enterprise adoption of PaaS, especially for building next generation of applications around big data.”
Read the full post on cloudave.com.
[ via: cloudave.com ]
451 Research: PaaSification – use Brooklyn to create your own Force.com
1 April, 2012
William Fellows (VP Research) from 451 Group has covered Brooklyn and Cloudsoft’s related activities.
PaaSification – use Cloudsoft’s Brooklyn to create your own Force.com?
The 451 Take
By adding the Brooklyn control plane, Cloudsoft is effectively providing a toolkit for users to create their own Force.com-like PaaS platform for internal and/or external use – but with multi-cloud deployment options. It certainly appears to have found its calling with Brooklyn and jclouds. End-user references are forthcoming, on the back of which it will seek VC funding to take its game to the broader market. It feels distinctly Enigmatec 'V2' – but open source.
Download the full report. [ PDF ]
[ via: 451 Research ]
451 Research: Could jclouds complete Cloudsoft?
13 January, 2012
We are flattered to have had our jclouds activities covered by 451 Research. The 451 Take very succinctly articulates how Monterey and jclouds complement each other.
Could jclouds be the abstraction layer that completes Cloudsoft?
The pioneering middleware company specializes in cloud-enabling large-scale, distributed, transactional apps. Its latest mission is to support the jclouds library.
Download the report. [ PDF ]
[ via: 451 Research ]
Datanami: Beyond Big Data – Addressing the Challenge of Big Applications
3 January, 2012
We are delighted that Datanami headlined the New Year with Alex’s contribution on the difference between Big Data and Big Apps. [ via: Datanami ]
Beyond Big Data – Addressing the Challenge of Big Applications
The challenge of handling big data has spawned a wealth of new data stores and file systems, addressing the challenges of scale and distribution, wide-area operation and availability, and consistency and latency trade-offs. If the data isn’t accessed too frequently, and if the infrastructure doesn’t have to grow, these technologies can mean “job done”.
More often, however, solving the big data challenge is only part of the solution: most of the time, if you have big data, you’ll have one or more “big apps”, and sooner or later (better sooner than later) you’ll have to address three more questions:
The first question is how you will compute over and deliver this data; the second, how you will operate in or across different infrastructure environments; and the third, how you will monitor system behavior and effect topology changes.
Computing Big Data
The first question follows from the fact that every read and write against data involves compute. In some systems, this computation adds up and quickly becomes the bottleneck; so even if the storage can scale to petabytes, the number of concurrent requests might max out at an unacceptably low level.
The solution is to design for processing scalability, and broadly speaking there are two popular approaches. In grid computing, “jobs” or “tasks” are given to “worker” nodes, and scalability becomes a question of scaling the number of such worker nodes. Hadoop is a powerful extension of this approach, where each job is decomposed into multiple tasks, each of which runs as near to its target data segment as possible.
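As a concrete illustration of this decomposition, here is the canonical word-count job sketched against Hadoop’s Java MapReduce API (Hadoop 2.x-style; input and output paths are supplied as arguments). Each map task runs near its input split, and scaling means adding worker capacity, not redesigning the job:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Each map task runs near its input split, emitting (word, 1) pairs.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  // Reduce tasks aggregate the per-word counts produced by the mappers.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "wordcount");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```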
The actor model is a more general approach to scalable processing which has lately seen a resurgence in popularity, in part because it can address grid-style compute as well as more sophisticated cases where a unique serialization point is required or shared memory / datastore lookup is too expensive. In the actor model, application code is decomposed into individual actors, and at runtime messages are passed to relevant actors or chain of actors. In some ways this is similar to SOA, although often on a much more finely-grained scale. Actor model systems, such as Erlang, Smalltalk, and Akka, are frequently championed for simplifying the design of applications with good scalability and robustness characteristics — particularly in the face of concurrency and distribution — because of how they force code to be structured.
When working with big data, the important aspect is that actors can be situated near the most relevant data; and with some frameworks, these actors can be moved at will, locally or wide area, with negligible overhead. This capability allows processing to be optimized in real time for scale, latency, or even cost of compute resource.
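To show the structure the actor model imposes, here is a minimal sketch of an actor in plain Java: a mailbox drained by a single thread, so the actor’s state is never shared. A production system would use a framework such as Akka rather than hand-rolled threads; the classes here are illustrative only.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal actor: all state is confined to one thread, and the only way
// to interact with it is by sending messages to its mailbox.
abstract class Actor<M> implements Runnable {
  private final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();

  public final void send(M message) {
    mailbox.add(message);
  }

  // Process messages one at a time; no locks are needed because the
  // actor's state is never touched by more than one thread.
  @Override
  public final void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        receive(mailbox.take());
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // shut down cleanly
    }
  }

  protected abstract void receive(M message);
}

// Example: an actor that keeps a running count of observed events.
class CounterActor extends Actor<String> {
  private long count = 0; // thread-confined state

  @Override
  protected void receive(String event) {
    count++;
    System.out.println(event + " -> total " + count);
  }
}

public class ActorDemo {
  public static void main(String[] args) throws InterruptedException {
    CounterActor counter = new CounterActor();
    Thread runner = new Thread(counter);
    runner.start();
    counter.send("page-view");
    counter.send("page-view");
    Thread.sleep(100);   // give the actor time to drain its mailbox
    runner.interrupt();  // then shut it down
  }
}
```

Because the only coupling between actors is message passing, an actor like this can in principle be relocated to whichever node holds the relevant data, which is the property the preceding paragraph relies on.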
At a high level, the thrust of this question is to ensure that, whatever big data solution is being used, the processing fabric is also sufficiently elastic and resilient to handle the corresponding compute load. Some data systems, such as Hadoop, include some of the above approaches, but a crucial part of the architect’s job is to make sure that the compute strategy suits not just the data but also the consumers.
Operating in Hybrid and Multi-Cloud Environments
The second question is a pragmatic one, recognizing that deploying and operating at scale often means using heterogeneous resources, from local hardware or virtualization through to off-site private and public clouds. In order to use these resources, the application — at least for its deployment — has to interact with the providers and navigate the various models of compute, storage, and networking. When designing for resilience, or operating at scale, or optimizing for performance or cost, some of the subtleties in these implementations can be quite dramatic.
One way of tackling this is to standardize at the virtualization level across an organization, including external suppliers. CloudStack, OpenStack, and vCloud are some of the leading choices in this area. When working in such an environment, the application need only be designed around that provider’s model and built against its API.
This solution brings its own difficulties, however: these virtual infrastructure platforms are still evolving rapidly, and their APIs change from release to release. Worse, these changes are not always backwards compatible. It also requires a heavyweight dev/test environment, when developers might prefer a lightweight local implementation, such as VirtualBox, to test against.
A complementary approach which can deliver the best of both worlds is to use a cloud abstraction API, such as jclouds (Java), libcloud (Python), or Deltacloud (REST). These projects present a uniform concept model and API for use in the application, with implementations that allow the code to work against a wide range of cloud providers and platforms. This means the choice of infrastructure becomes a runtime choice, not a design-time choice; and writing big apps for portability or spanning across multiple clouds becomes natural and consistent. Furthermore, in many cases, the provider-specific implementations include additional robustness and performance consideration which makes the application developer’s life even better.
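For instance, provisioning a node through the jclouds ComputeService looks roughly like the sketch below (jclouds 1.5-era API; the provider name, credentials, and group name are placeholders). Retargeting it at another supported cloud means changing only the provider string:

```java
import java.util.Set;

import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.compute.RunNodesException;
import org.jclouds.compute.domain.NodeMetadata;
import org.jclouds.compute.domain.Template;

public class PortableProvisioning {
  public static void main(String[] args) throws RunNodesException {
    // The provider is a runtime string; the rest of the code is unchanged
    // whether it targets EC2, Rackspace, or a local test stub.
    ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
        .credentials("identity", "credential") // placeholders
        .buildView(ComputeServiceContext.class);
    try {
      ComputeService compute = context.getComputeService();
      // Describe what the application needs, not how the provider names it.
      Template template = compute.templateBuilder()
          .minRam(2048)
          .build();
      Set<? extends NodeMetadata> nodes =
          compute.createNodesInGroup("bigapp", 1, template);
      for (NodeMetadata node : nodes) {
        System.out.println("Provisioned: " + node.getId());
      }
    } finally {
      context.close();
    }
  }
}
```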
The rise of Platform-as-a-Service is another reason to consider an abstraction API. It is true that writing for a PaaS can greatly accelerate development and insulate an application from the subtleties of different physical and virtual infrastructure layers (in many cases they do this by using jclouds or libcloud). However, it can cause lock-in at a different level: an application designed for a specific PaaS can be very tricky to port to a different PaaS. Cloud abstraction APIs can protect against this by making applications portable from the outset, whether targeting infrastructure or PaaS. Some PaaS entities, such as load balancers and blobstores, are already available in both jclouds and libcloud, and because these projects are open source, new abstractions can and almost certainly will emerge.
Effecting Topology Change
One of the most exciting facets of cloud is the ability to have new IT infrastructure “on tap”. Unfortunately, however, simply having this tap doesn’t mean that applications will automatically benefit. Designing apps so that they can take advantage of flexible infrastructure — and ultimately communicate how much infrastructure they want — is one of the biggest unsung challenges of cloud.
The naïve answer is not to think outside the box, but to make the box bigger (whether a VM or VLAN or virtual disk). But with Big Data and Big Apps, this scaling up hits its limit far too early: the only viable option for scaling is to scale out, that is to get more boxes and to be able to use them.
Recognizing the need for more capacity is not difficult: the application will respond slowly or not at all, and will report errors. What is substantially more difficult is to anticipate this need, and more difficult still, to get new capacity online and ready in time.
NoSQL data fabrics, such as Gemfire and Terracotta, can simplify part of this issue, making it easier for applications to incorporate new compute instances, but they tend not to pass judgment on when or how to request these instances (or give them back). Equally, PaaS offerings can be a good answer where an application’s shape fits a common pattern, such as at the presentation tier. However in the realm of Big Apps, with large data volumes and multiple locations, the shape and the constraints tend to be unique and the problem remains with the application designers.
The answer in practice is almost always some combination of a CMDB, one or more monitoring frameworks, and scripts to detect and resolve pre-defined situations. CFEngine, Puppet, Chef and Whirr are noteworthy emerging players tackling various parts of this motley strategy, but even with these tools, writing good scalability and management logic for an application is no small undertaking, even when the management policies are relatively straightforward.
That said, it is an unavoidable part of writing a modern application. The following collection of suggested best practices is the most that can be said:
- Design so that anything can be changed, with as little disruption as possible
- Be consistent across how the application is initially rolled-out and how changes are subsequently effected
- Decentralize the monitoring and management, pushing logic out to be near the relevant application components
- Consider “infrastructure-as-code”, so that the deployment topology can be tracked and, ideally, replayed
- Consider “policy-as-code”, as the logic which drives an application’s elasticity, fault tolerance, and run-time optimization is an important part of an application, especially with big applications (a sketch of this idea follows this list)
- Treat the above “code” like any other code, with version control and testing
- Keep these pieces small and hierarchical, modular and substitutable
- Watch out for new developments in this space, as the current level of difficulty is not sustainable!
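To make “policy-as-code” concrete, here is a minimal, hypothetical sketch in plain Java: an elasticity rule written as an ordinary class, so it can live in version control and be unit-tested like any other code. The Resizable and MetricSource interfaces are invented for illustration; they are not any particular framework’s API.

```java
// Hypothetical interfaces, invented for illustration only.
interface Resizable {
  int getCurrentSize();
  void resize(int desiredSize); // provision or release nodes
}

interface MetricSource {
  double requestsPerSecondPerNode();
}

/**
 * Policy-as-code: the elasticity rule is an ordinary class, so it can be
 * version-controlled, unit-tested, and deployed alongside the application.
 */
class ResizerPolicy {
  private final Resizable cluster;
  private final MetricSource metrics;
  private final double targetRatePerNode; // desired load per node
  private final int minSize;
  private final int maxSize;

  ResizerPolicy(Resizable cluster, MetricSource metrics,
                double targetRatePerNode, int minSize, int maxSize) {
    this.cluster = cluster;
    this.metrics = metrics;
    this.targetRatePerNode = targetRatePerNode;
    this.minSize = minSize;
    this.maxSize = maxSize;
  }

  /** Called periodically by whatever scheduler the management plane uses. */
  void evaluate() {
    double rate = metrics.requestsPerSecondPerNode();
    int current = cluster.getCurrentSize();
    // Size the cluster so each node carries roughly the target rate.
    int desired = (int) Math.ceil(current * rate / targetRatePerNode);
    desired = Math.max(minSize, Math.min(maxSize, desired));
    if (desired != current) {
      cluster.resize(desired);
    }
  }
}

public class PolicyDemo {
  public static void main(String[] args) {
    // Toy in-memory cluster standing in for real provisioning.
    final int[] size = {2};
    Resizable cluster = new Resizable() {
      public int getCurrentSize() { return size[0]; }
      public void resize(int desired) {
        System.out.println("resize " + size[0] + " -> " + desired);
        size[0] = desired;
      }
    };
    MetricSource metrics = new MetricSource() {
      public double requestsPerSecondPerNode() { return 300.0; } // overloaded
    };
    new ResizerPolicy(cluster, metrics, 100.0, 1, 10).evaluate(); // grows 2 -> 6
  }
}
```

Because the policy is ordinary code, it also satisfies the neighbouring suggestions: it is small, modular, substitutable, and can be rolled out and tested the same way as the rest of the application.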
In this article we’ve looked at the key design questions facing Big Apps, the flip side of big data: put simply, how do you ensure that, having made the right choice at the data tier, the system doesn’t fall down at the processing tier or the infrastructure layer? One way of addressing this can be summarized as a triangle, a virtuous cycle of provisioning, middleware and management. Getting this right delivers a robust, powerful runtime environment, where Big Apps can get the most out of big data.
About the Author
Alex Heneveld, CTO and co-founder of Cloudsoft, brings twenty years’ experience designing software solutions in the enterprise, start-up, and academic sectors. Most recently Alex was with Enigmatec Corporation, where he led the development of what is now the Monterey® Middleware Platform™. Before that, he founded PocketWatch Systems, commercialising results from his doctoral research.
Alex holds a PhD (Informatics) and an MSc (Cognitive Science) from the University of Edinburgh and an AB (Mathematics) from Princeton University. Alex was both a USA Today Academic All-Star and a Marshall Scholar.
How cloud computing will change application platforms
3 May, 2011
Cloud computing will bring demand for elastic application platforms.
Promises that cloud computing can save money and reduce time-to-market by automatically scaling applications (either up or down) oversimplify what it takes to develop application architectures to achieve these benefits of elastic scaling. Few of today’s business applications are designed for elastic scaling, and most of those few involve complex coding unfamiliar to most enterprise developers.
A new generation of application platforms for elastic applications is arriving to help remove this barrier to realising cloud’s benefits. Elastic application platforms (EAPs) will reduce the art of elastic architectures to the science of a platform.
EAPs provide tools, frameworks, and services that automate many of the more complex aspects of elasticity. These include all the runtime services needed to manage elastic applications, full instrumentation for monitoring workloads and maintaining agreed-upon service levels, cloud provisioning, and, as appropriate, metering and billing systems.
EAPs will make it normal for enterprise developers to deliver elastic applications — something that is decidedly not the norm today.
Forrester defines an elastic application platform as:
An application platform that automates elasticity of application transactions, services, and data, delivering high availability and performance using elastic resources.
We see organisations moving toward EAPs by extending their current web architectures, following one or more of four paths:
- Extend web architectures with elastic caching. We find widespread adoption of elastic caching platforms across the economy — not just in consumer-facing web products and properties. Application development and delivery shops that adopt elastic caching products introduce EAPs’ computing-services/data-services combination into their architectures — and add a certain degree of elasticity to their applications.
- Add NoSQL for “big data” applications. Most application development and delivery teams seem to adopt NoSQL products to create so-called “big data” applications. These applications typically analyze large and/or fast-changing pools of data that would be too expensive to manage using conventional relational database products. Some organisations use NoSQL products to improve their ability to deliver data and/or content to mobile devices. Shops that adopt NoSQL products introduce an elastic data service into their architecture.
- Adopt EAP distributed computing layers to virtualise applications. Some application development and delivery teams introduce new middleware to help them adapt existing applications to take advantage of elastic scaling. Application development and delivery shops that implement specialist platforms such as Appistry, Cloudsoft, CloudSwitch, and Paremus introduce EAPs’ computing services and deployment services layers to their architectures.
- Adopt PaaS products that provide EAP concepts. Lastly, platform-as-a-service products will introduce many application development and delivery teams to the benefits of EAPs.
We are fast approaching the “crossing the chasm” moment for EAPs. Only a handful of vendors are offering comprehensive EAPs today, but Microsoft is in the game with its Azure platform. So are salesforce.com with its Force.com portfolio and GigaSpaces with XAP.
We expect IBM to push its WebSphere platform into the EAP arena this year. And Oracle, as well. Bottom line: The big enterprise vendors will soon be offering EAPs too.
Mike Gualtieri and John R. Rymer collaborated on this research.
See the full report here.
Overcoming Traditional Roadblocks to Scaling Enterprise Applications in the Cloud
24 March, 2011
To realize the full benefits of cloud computing, application services must be built in a way that gives cloud providers the freedom to deploy them in the most efficient manner, while respecting any business constraints. Any technical restrictions that introduce rigidity into the application are sure to impede the cloud provider’s ability to do so.
Transactional applications are a particular case in point – the need for transactional integrity imposes complex constraints that impede effective scalability and distribution in the cloud. Removing these impediments necessitates a new approach that revisits what is fundamentally required. Companies must now look at a new set of requirements, including:
1. Finer-grained scalability and distribution
Decomposing an application into coarse-grained services, so that services can be individually distributed across multiple machines, is a well-established pattern for scaling a transactional application. Effectively, the service is the unit of scalability.
The number of services into which an application can be decomposed is limited; therefore the scalability achievable in this way is equally limited. However, a service contains a potentially unlimited number of segments, which can, in turn, be distributed across multiple machines, as the sketch below illustrates.
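As a hypothetical sketch of what segment-level scaling implies (none of these class names come from any particular product), the following plain-Java router spreads an effectively unbounded keyspace of segments across however many machines are available:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/**
 * Hypothetical sketch: a service's state is split into many small
 * segments, each identified by a key, and segments (not whole services)
 * are spread across machines. Adding a machine raises capacity without
 * redesigning the service.
 */
class SegmentRouter {
  private final List<String> machines = new CopyOnWriteArrayList<>();

  void addMachine(String machine) {
    machines.add(machine);
  }

  /** Many segments map onto few machines; the segment is the unit of scale. */
  String machineFor(String segmentKey) {
    // Mask the sign bit so the modulo is always non-negative.
    int index = (segmentKey.hashCode() & 0x7fffffff) % machines.size();
    return machines.get(index);
  }

  public static void main(String[] args) {
    SegmentRouter router = new SegmentRouter();
    router.addMachine("node-1");
    router.addMachine("node-2");
    // Two segments of the same service may land on different machines.
    System.out.println(router.machineFor("account:1234"));
    System.out.println(router.machineFor("account:5678"));
  }
}
```

A real system would use consistent hashing or directory-based placement so that adding a machine relocates only a fraction of the segments; that relocation is exactly the mobility requirement the next point addresses.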
2. Segment mobility
The potential of finer-grained scalability can only be fully realized if segments are mobile, i.e. can be dynamically migrated across multiple resources. Without mobility, the way you initially deploy the segments is the way that they stay deployed.
The importance of mobility is well established at the level of virtual machines. What’s needed now is mobility for very fine-grained segments, allowing dynamic configuration.
3. Mobility must not interrupt or degrade service
Continual resource optimization is only feasible if frequent reconfiguration is possible. That requires the ability to move transactional segments around with zero interruption or degradation of service; in other words, segments must be movable while they are still running, without being paused.
4. Near-instantaneous mobility
Moving segments is orders of magnitude faster than moving entire virtual machines or LPARs; moving even large numbers of segments typically takes milliseconds. You then have the potential to be near-instantly responsive to changes in workload and to precisely match resource usage to rapidly fluctuating workloads.
5. Full mobility over the wide area network
Segment-level mobility makes it very fast to relocate some or all of your applications across geographies and clouds. Coupled with zero interruption or service degradation, global resource optimization and precise scalability of transactional applications become possible.
6. Automatic governance and control
These capabilities create the potential for transactional applications to fully exploit the benefits of cloud computing via highly mobile application segments that can dynamically and continuously reconfigure themselves in response to changing workloads, resource availability, user demand, performance criteria and costs.
With so many factors in play, it is only possible to transform the potential benefits into reality by fully automating capability management.
Using a policy-based framework to automate dynamic application management is essential to drive down management costs, increase the elasticity benefits of cloud computing and ensure enforcement of good governance. A policy framework that is geo-location aware can ensure compliance with industry-specific regulations even as data and processes dynamically move around a cloud, as the sketch below illustrates.
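As a hypothetical illustration of such a geo-location-aware policy (all names invented for this sketch), the check below vetoes any proposed segment migration to a region outside the permitted set:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Hypothetical: a placement constraint evaluated before every migration. */
class GeoCompliancePolicy {
  private final Set<String> permittedRegions;

  GeoCompliancePolicy(Set<String> permittedRegions) {
    this.permittedRegions = permittedRegions;
  }

  /** The management plane consults this before moving a segment. */
  boolean mayMigrate(String segmentId, String targetRegion) {
    boolean allowed = permittedRegions.contains(targetRegion);
    if (!allowed) {
      System.out.println("Vetoed: " + segmentId + " -> " + targetRegion
          + " (outside permitted regions)");
    }
    return allowed;
  }

  public static void main(String[] args) {
    GeoCompliancePolicy policy = new GeoCompliancePolicy(
        new HashSet<>(Arrays.asList("eu-west-1", "eu-central-1")));
    policy.mayMigrate("ledger-segment-42", "us-east-1"); // vetoed
  }
}
```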
The Solution: Intelligent Application Mobility
In order to liberate transactional applications from the constraints of traditional scaling and distribution, a new approach is needed – one that is based on the high-speed mobility of very fine-grained application components, which are automatically managed in real time by user-defined policies that ensure continual optimization and compliance. Collectively, these capabilities are referred to as intelligent application mobility.
Intelligent application mobility is essential if we’re to make the full elasticity and cost-model benefits of cloud computing available to transactional applications.