WHAT IS SERVERLESS COMPUTING?
Serverless is an approach to computing that offloads responsibility for common infrastructure management tasks (e.g., scaling, scheduling, patching, and provisioning) to cloud providers and tools, allowing engineers to focus their time and effort on the business logic specific to their applications or processes.
The most useful way to define and understand serverless is to focus on the handful of core attributes that distinguish serverless computing from other compute models, namely:
- The serverless model requires no management and operation of infrastructure, enabling developers to focus more narrowly on code/custom business logic.
- Serverless computing runs code only on-demand on a per-request basis, scaling transparently with the number of requests being served.
- Serverless computing enables end users to pay only for resources being used, never paying for idle capacity.
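The pay-only-for-use attribute can be made concrete with a rough cost comparison. The prices and workload figures below are illustrative assumptions, not any provider's actual rates:

```python
# Illustrative cost comparison: pay-per-request (serverless) vs. an always-on VM.
# All prices below are assumptions for illustration only.

def serverless_cost(requests: int, ms_per_request: int,
                    memory_gb: float = 0.128,
                    price_per_gb_second: float = 0.0000166667) -> float:
    """Cost billed only for compute actually consumed, per request."""
    gb_seconds = requests * (ms_per_request / 1000) * memory_gb
    return gb_seconds * price_per_gb_second

def vm_cost(hours: float, price_per_hour: float = 0.05) -> float:
    """Cost of an always-on instance, billed whether or not it serves traffic."""
    return hours * price_per_hour

# A low-traffic service: 100,000 requests per month, 200 ms each.
monthly_serverless = serverless_cost(100_000, 200)
monthly_vm = vm_cost(24 * 30)

print(f"serverless:   ${monthly_serverless:.2f}/month")  # fractions of a dollar
print(f"always-on VM: ${monthly_vm:.2f}/month")          # paid even when idle
```

The exact numbers vary by provider, but the shape of the result holds: with per-request billing, a workload that is idle most of the time costs close to nothing, while a provisioned instance bills for every idle hour.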
Serverless is fundamentally about spending more time on code, less on infrastructure.
UNDERSTANDING THE SERVERLESS STACK
Defining serverless computing as a set of common attributes, instead of an explicit technology, makes it easier to understand how the serverless approach can manifest in other core areas of the stack.
Functions as a Service (FaaS)
FaaS is widely understood as the originating technology in the serverless category. It represents the core compute/processing engine in serverless and sits in the center of most serverless architectures.
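A FaaS deployment reduces to a single stateless function the platform invokes once per request. The sketch below uses the `(event, context)` signature common to several FaaS platforms; the event shape and field names are assumptions for illustration:

```python
# Minimal sketch of a FaaS-style handler. The platform provisions, scales, and
# tears down execution environments; the developer supplies only this function.

def handler(event: dict, context: object = None) -> dict:
    """Stateless function invoked once per incoming request."""
    name = event.get("name", "world")          # read input from the event
    return {"statusCode": 200,                 # return a response the platform
            "body": f"Hello, {name}!"}         # maps back to the caller

# Local invocation, simulating what the platform does on each request:
print(handler({"name": "serverless"}))
```

Because the function holds no state between invocations, the platform can run zero, one, or thousands of copies concurrently, which is what makes per-request scaling transparent to the developer.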
Serverless databases and storage
Databases and storage are the foundation of the data layer. A “serverless” approach to these technologies (with object storage being the prime example within the storage category) involves transitioning away from provisioning “instances” with defined capacity, connection, and query limits and moving toward models that scale linearly with demand, in both infrastructure and pricing.
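The object-storage access pattern illustrates the shift: a flat key-to-bytes namespace with no instance to size up front. In this sketch an in-memory dict stands in for a real service (for example, an S3-compatible API); the class and method names are illustrative assumptions:

```python
# Sketch of the serverless object-storage model: nothing is provisioned ahead
# of time; capacity and billing scale with what is actually stored/requested.

class ObjectStore:
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}   # no capacity declared up front

    def put_object(self, key: str, data: bytes) -> None:
        self._objects[key] = data              # billed per request / byte stored

    def get_object(self, key: str) -> bytes:
        return self._objects[key]              # billed per request / byte read

store = ObjectStore()
store.put_object("reports/2024/q1.csv", b"region,revenue\nEMEA,100\n")
print(store.get_object("reports/2024/q1.csv"))
```

The contrast with a provisioned database instance is that there is no connection pool, storage quota, or query limit chosen in advance; usage and cost grow together.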
Event streaming and messaging
Serverless architectures are well-suited for event-driven and stream-processing workloads, which involve integrating with message queues, most notably Apache Kafka.
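The event-driven pattern this describes is a handler invoked once per message pulled from a stream. In the sketch below a standard-library queue stands in for a broker such as Kafka; the topic and payload names are illustrative assumptions:

```python
# Minimal sketch of event-driven processing: one handler invocation per message.
import json
from queue import Queue

def on_order_created(payload: dict) -> str:
    """Business logic triggered per event; the platform scales it per message."""
    return f"processed order {payload['order_id']}"

events: Queue = Queue()   # stand-in for a broker topic/queue
events.put(json.dumps({"topic": "orders.created", "order_id": 42}))

results = []
while not events.empty():
    message = json.loads(events.get())   # deserialize each message
    results.append(on_order_created(message))

print(results)
```

In a serverless deployment, the loop belongs to the platform: each message on the queue triggers a fresh, independent invocation, so throughput scales with message volume rather than with pre-provisioned consumers.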
API gateways
API gateways act as proxies to web actions and provide HTTP method routing, client authentication via IDs and secrets, rate limiting, CORS support, visibility into API usage and response logs, and API sharing policies.
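Two of those responsibilities, method routing and per-client rate limiting, can be sketched in a few lines. The routes, limits, and client IDs here are illustrative assumptions, not any real gateway's configuration:

```python
# Sketch of gateway behavior: route (method, path) to a backing action and
# enforce a per-client request limit before proxying.
from collections import defaultdict

routes = {("GET", "/orders"): lambda: ("200 OK", "order list")}
RATE_LIMIT = 2                                  # max requests per client (assumed)
request_counts: defaultdict[str, int] = defaultdict(int)

def gateway(method: str, path: str, client_id: str):
    request_counts[client_id] += 1
    if request_counts[client_id] > RATE_LIMIT:
        return ("429 Too Many Requests", None)  # rate limit enforced
    handler = routes.get((method, path))
    if handler is None:
        return ("404 Not Found", None)          # no route for this method/path
    return handler()                            # proxy to the backing action

print(gateway("GET", "/orders", "client-a"))
print(gateway("GET", "/orders", "client-a"))
print(gateway("GET", "/orders", "client-a"))    # third call exceeds the limit
```

In a serverless architecture the gateway is the piece that turns an HTTP request into a function invocation, which is why it sits in front of the FaaS layer in most reference diagrams.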
WHAT ARE THE ADVANTAGES OF SERVERLESS COMPUTING WITH KNATIVE ON THE IBM CLOUD?
In this webinar, discover how serverless with Knative is a simpler, more cost-effective way of building and operating applications in the cloud.
This webinar will showcase the main advantages of choosing to run your serverless applications on the IBM Cloud. You will hear experts from IBM and DNA IT outline how the IBM Cloud brings unique advantages to developers and IT teams that are testing and deploying applications in the cloud.
There is a full Q&A session at the end of the webinar.