A Dive into Encoder-Decoder Models for Image Captioning

In artificial intelligence (AI), the capability to merge the visual world with language opens innovative applications, from enhancing accessibility with automatic text generation to powering intelligent photo management tools. Google Advanced Solutions Lab’s machine learning engineer, Takumi, recently took a deep dive into image captioning, revealing how a simple generative model can bridge the gap between images and text. Here’s a closer look at the process and insights from Takumi’s session, emphasizing the encoder-decoder model’s pivotal role in achieving this feat.

The content featured in this blog post is inspired by the “Create Image Captioning Models: Lab Walkthrough” session.
You can find the implementation on GitHub.

 The Journey Begins with a Model

At the heart of the image captioning process lies the encoder-decoder model, a framework designed to translate the complex visual information in images into descriptive, natural language captions. This model is not just about understanding what’s in a picture but about narrating a story that the picture holds, making AI seem almost human in its perception.

 The Encoder: Extracting Visual Cues

The encoder’s job is to dive into the image and bring out rich features that describe it. Takumi illustrates this using the Inception ResNet v2, a convolutional neural network known for its deep understanding of images. By freezing the pre-trained layers of this model, the encoder efficiently extracts a high-dimensional feature map that distills the essence of the image. This process mirrors how humans observe an image and discern its significant elements before describing it.
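As a rough sketch of that idea (not the lab’s exact code), a frozen Inception-ResNet-v2 backbone in Keras can play the encoder role; the input size and the 8×8×1536 output grid below are the standard Keras defaults.

```python
import tensorflow as tf

# Minimal encoder sketch: a frozen Inception-ResNet-v2 backbone that maps an
# image to a grid of feature vectors (64 regions of 1536 dimensions).
backbone = tf.keras.applications.InceptionResNetV2(include_top=False, weights="imagenet")
backbone.trainable = False  # freeze the pre-trained layers

image = tf.keras.Input(shape=(299, 299, 3))
feature_map = backbone(image)                                   # (batch, 8, 8, 1536)
regions = tf.keras.layers.Reshape((-1, 1536))(feature_map)      # (batch, 64, 1536)

encoder = tf.keras.Model(inputs=image, outputs=regions)
```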

 The Decoder: Weaving Words into Captions

With the visual context encoded into a digestible format, the decoder takes the stage. It’s here that the magic of language generation unfolds. The decoder, equipped with GRU (Gated Recurrent Unit) layers and an attention mechanism, focuses on different parts of the image as it generates each caption word. This attention to detail ensures that the generated captions are not just generic descriptions but are contextually relevant narratives of the image content.
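A minimal sketch of how such a decoder might be wired together in Keras follows; the vocabulary size, embedding dimension, and GRU width are illustrative assumptions rather than values from the lab, and a built-in dot-product attention layer stands in for whichever attention variant the walkthrough uses.

```python
import tensorflow as tf

# Hypothetical sizes, chosen for illustration only
VOCAB_SIZE, EMBED_DIM, UNITS = 5000, 256, 512

tokens = tf.keras.Input(shape=(None,), dtype="int32")   # caption tokens generated so far
regions = tf.keras.Input(shape=(64, 1536))              # flattened image feature grid

x = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
gru_out = tf.keras.layers.GRU(UNITS, return_sequences=True)(x)

# Project the image regions into the GRU's space and attend over them at every step
keys = tf.keras.layers.Dense(UNITS)(regions)
context = tf.keras.layers.Attention()([gru_out, keys])  # (batch, seq_len, UNITS)

merged = tf.keras.layers.Concatenate()([gru_out, context])
logits = tf.keras.layers.Dense(VOCAB_SIZE)(merged)      # next-word scores at each step

decoder = tf.keras.Model(inputs=[tokens, regions], outputs=logits)
```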

 The Catalyst: Attention Mechanism

The attention mechanism is the MVP of this process. It allows the decoder to concentrate on specific image segments during each step of the caption generation. This focused approach mimics human cognition, enabling the model to produce captions that are insightful and detailed. Whether identifying a photo’s main subject or noting subtle background details, the attention mechanism ensures that no critical element is overlooked.
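For concreteness, here is a sketch of an additive (Bahdanau-style) attention layer, one common way to implement this focusing behavior; the class and its unit size are assumptions for illustration, not necessarily the exact mechanism used in the lab.

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Scores each image region against the current decoder state."""

    def __init__(self, units=512):
        super().__init__()
        self.w_query = tf.keras.layers.Dense(units)
        self.w_keys = tf.keras.layers.Dense(units)
        self.score = tf.keras.layers.Dense(1)

    def call(self, query, regions):
        # query: (batch, units) decoder state; regions: (batch, 64, dims) image features
        q = tf.expand_dims(self.w_query(query), 1)            # (batch, 1, units)
        k = self.w_keys(regions)                              # (batch, 64, units)
        scores = self.score(tf.nn.tanh(q + k))                # (batch, 64, 1)
        weights = tf.nn.softmax(scores, axis=1)               # how much each region matters now
        context = tf.reduce_sum(weights * regions, axis=1)    # weighted summary of the image
        return context, weights
```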

 Training on the COCO Dataset

The choice of the dataset for training the model is crucial. Takumi opts for the COCO dataset, renowned for its diversity and depth in image captions. This rich dataset enables the model to learn various visual contexts and corresponding linguistic expressions, making the model robust and versatile in generating captions across various scenes and subjects.
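As one possible way to get at this data (the lab itself may load it differently), the coco_captions dataset in TensorFlow Datasets pairs each image with several human-written reference captions:

```python
import tensorflow_datasets as tfds

# Sketch: load COCO image-caption pairs via TensorFlow Datasets
ds = tfds.load("coco_captions", split="train")
for example in ds.take(1):
    image = example["image"]                 # the raw image tensor
    captions = example["captions"]["text"]   # several reference captions for that image
```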

 The Promise of Image Captioning

The simplicity of Takumi’s generative model, as demonstrated in the session, belies its profound implications. From enhancing user experiences on social media platforms to creating more accessible content for visually impaired users, the applications of image captioning are vast and varied. Furthermore, as AI research progresses, integrating more sophisticated models promises even greater advancements in how machines understand and describe the visual world.

 Embracing the Future

Takumi’s walkthrough not only demystifies the technical complexities behind image captioning but also showcases the potential of AI to enrich our interaction with digital content. As we stand on the brink of this technological evolution, it’s exciting to envision a future where AI can seamlessly translate the visual into the verbal, making digital spaces more intuitive and inclusive for everyone.

This exploration into image captioning is just a glimpse of what’s possible when we harness the power of AI. For those eager to dive deeper, the ASL GitHub repository offers a treasure trove of notebooks and resources, inviting us to explore the frontiers of machine learning and beyond.

Data Security | Privacy Standards for Fintech Solution Architect


ISO27001, SOC2, PDPA, and GDPR: these acronyms refer to standards, regulations, and guidelines related to information security and data privacy.

  1. ISO27001: ISO/IEC 27001 is an internationally recognized information security management system (ISMS) standard. It systematically manages sensitive information by applying risk management processes, ensuring the data remains secure. Organizations can get certified to demonstrate their commitment to information security and compliance with the standard.
  2. SOC2: Service Organization Control 2 (SOC 2) is a set of criteria for managing customer data based on five “trust service principles”: security, availability, processing integrity, confidentiality, and privacy. It is an audit framework designed by the American Institute of Certified Public Accountants (AICPA) to help service providers assure their clients that their information systems meet the necessary requirements in these areas.
  3. PDPA: The Personal Data Protection Act (PDPA) is a collection of laws or regulations governing personal data protection in various jurisdictions. For example, Singapore’s Personal Data Protection Act establishes a data protection law to govern private data collection, use, disclosure, and care. The specifics of PDPA may vary between countries, but the general goal is to protect individuals’ privacy and personal information.
  4. GDPR: The General Data Protection Regulation (GDPR) is a comprehensive data protection and privacy regulation that applies to organizations operating within the European Union (EU) and European Economic Area (EEA). It also affects organizations outside these regions if they process the personal data of individuals within the EU/EEA. GDPR aims to give individuals more control over their personal data and to ensure that organizations handle it responsibly, securely, and transparently. Non-compliance can result in hefty fines.

Fintech organizations, like any other organizations handling sensitive data, can implement the ISO27001, SOC2, PDPA, and GDPR standards and guidelines by following a series of steps that help ensure compliance and enhance their overall security posture. Here’s how they can approach each standard:

  1. ISO27001:
  • Perform a risk assessment to identify information assets, threats, vulnerabilities, and potential impacts.
  • Establish an Information Security Management System (ISMS) based on the ISO27001 framework, including information security policies and procedures.
  • Implement security controls per ISO27001’s Annex A, which outlines 114 controls across 14 domains, tailored to the organization’s risk profile.
  • Train employees to understand their roles and responsibilities in maintaining information security.
  • Conduct regular internal audits and management reviews of the ISMS to ensure ongoing effectiveness and compliance.
  • Obtain certification from an accredited certification body to validate the organization’s adherence to the ISO27001 standard.
  2. SOC2:
  • Identify the applicable trust service principles (security, availability, processing integrity, confidentiality, and privacy) based on the organization’s specific services and client requirements.
  • Develop and document policies, procedures, and controls addressing each relevant trust service principle.
  • Train employees on the organization’s SOC2-related policies and procedures.
  • Conduct a gap analysis to identify areas where the organization’s controls may not meet SOC2 requirements.
  • Address identified gaps and implement any necessary improvements.
  • Engage an independent auditor to perform a SOC2 audit and provide a SOC2 report, which can be shared with clients to demonstrate compliance.
  3. PDPA:
  • Appoint a Data Protection Officer (DPO) to ensure compliance with the applicable PDPA.
  • Identify the types of personal data collected, processed, and stored by the organization.
  • Develop and implement data protection policies, procedures, and practices that comply with the PDPA’s requirements (e.g., consent, data access and correction, and data retention).
  • Implement security measures to protect personal data against unauthorized access, disclosure, and other risks.
  • Train employees on data protection principles and their responsibilities under the PDPA.
  • Establish processes for handling data breaches, including notification and reporting requirements.
  4. GDPR:
  • Identify whether the organization is subject to GDPR and whether it acts as a data controller or data processor.
  • Appoint a Data Protection Officer (DPO) if required by the GDPR.
  • Perform a data mapping exercise to understand the flow of personal data within the organization.
  • Implement GDPR-compliant policies and procedures, including obtaining consent, handling data subject requests, data breach reporting, and data retention.
  • Incorporate privacy by design and privacy by default principles in the organization’s product and service development.
  • Conduct Data Protection Impact Assessments (DPIAs) for high-risk processing activities.
  • Establish data processing agreements with third-party data processors to ensure GDPR compliance throughout the data processing chain.

Various tools and solutions are available to help organizations implement the ISO27001, SOC2, PDPA, and GDPR standards and guidelines. Some tools may be specific to a particular standard, while others can be used across multiple standards. Here’s a list of some tools and solutions that can help:

  1. ISO27001:
  • Risk assessment tools: Tools like vsRisk, Eramba, and RiskWatch can help organizations conduct risk assessments, identify vulnerabilities, and manage risks.
  • ISMS tools: Solutions like ISMS.online, StandardFusion, and MyEasyISO can help in the development, implementation, and management of an Information Security Management System.
  2. SOC2:
  • Compliance management tools: Platforms such as ZenGRC, StandardFusion, and Hyperproof can help organizations manage their compliance efforts, track controls, and streamline the audit process.
  • Continuous monitoring tools: Solutions like Datadog, Splunk, and Sumo Logic can help organizations monitor their infrastructure, detect anomalies, and meet SOC2’s continuous monitoring requirements.
  3. PDPA and GDPR:
  • Data mapping and inventory tools: OneTrust, TrustArc, and BigID can help organizations discover, classify, and manage personal data, ensuring compliance with data protection regulations.
  • Privacy management platforms: Comprehensive privacy management solutions like OneTrust and TrustArc provide a range of tools for GDPR and PDPA compliance, including consent management, data subject request management, and data breach notification.
  • Data protection impact assessment (DPIA) tools: Solutions like OneTrust and TrustArc also offer built-in DPIA templates and workflows to help organizations conduct DPIAs as required by the GDPR.

Architecting Microsoft Azure Solutions: Key Concepts and Integration Landscape

As businesses adopt cloud-based solutions, Microsoft Azure has emerged as a leading platform for building and deploying various applications. This blog post will explore some essential concepts related to architecting Microsoft Azure solutions and discuss the integration landscape and differences between Azure API Apps and Web Apps. By the end of this post, you will better understand the tools and techniques required to design and implement effective Azure solutions.

Key Concepts in Architecting Microsoft Azure Solutions:

  1. Designing and Implementing Azure App Services: Azure App Services allow developers to create, deploy, and manage various applications, including web, mobile, and API applications. When designing and implementing these services, it’s crucial to consider cost, scalability, performance, and security factors to ensure your app runs smoothly and meets business requirements.
  2. Creating and Managing Azure Resources: Azure offers various resources, such as virtual machines, storage accounts, and databases. Understanding how to create, configure, and manage these resources is essential to build a robust and efficient infrastructure that supports your applications.
  3. Securing Azure Solutions: Security is a top priority when architecting Azure solutions. You must understand Azure’s built-in security features, such as Azure Active Directory, role-based access control (RBAC), and Azure Security Center, to ensure your applications and data are protected from threats.

Azure Integration Landscape:

Azure provides a rich set of integration services to help you connect various applications, data sources, and services. Some of the key integration services include:

  1. Azure Logic Apps: Logic Apps enable you to create serverless workflows that integrate with various services and APIs. Logic Apps can automate processes, orchestrate data workflows, and integrate with external systems.
  2. Azure API Management: API Management allows you to create, publish, and manage APIs in a secure and scalable manner. You can use API Management to control access, monitor usage, and analyze the performance of your APIs.
  3. Azure Service Bus: Service Bus is a reliable messaging service that enables the communication between decoupled applications and services. It supports various messaging patterns, including publish-subscribe, point-to-point, and request-response.
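As a hedged illustration of the Service Bus messaging described above, the sketch below uses the azure-servicebus Python SDK for a simple point-to-point exchange; the connection string and queue name are placeholders.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"   # placeholder
QUEUE = "orders"                                    # placeholder queue name

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Producer: send a message to the queue
    with client.get_queue_sender(QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("order created: 12345"))

    # Consumer: receive and settle messages from the same queue
    with client.get_queue_receiver(QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print("received:", str(msg))
            receiver.complete_message(msg)
```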

Azure API App vs. Web App:

Azure API Apps and Web Apps are part of the Azure App Service platform. However, there are some key differences between the two:

  1. Purpose: API Apps are designed specifically for creating, hosting, and managing APIs, while Web Apps focus on hosting web applications and websites.
  2. Features: API Apps provide built-in features like API documentation (Swagger), authentication, and CORS (Cross-Origin Resource Sharing) support, making it easier to develop and manage APIs. On the other hand, Web Apps offer features tailored to web application hosting, such as custom domains, SSL support, and deployment slots.
  3. Integration: API Apps can be easily integrated with other Azure services, like Logic Apps and API Management, to create powerful workflows and manage APIs. Web Apps primarily integrate with other web-focused services, like Azure Content Delivery Network (CDN) and Azure Traffic Manager.

Mastering AI-Driven API Development: Key Insights, Best Practices, and Challenges

APIs (Application Programming Interfaces) are crucial in modern software development, enabling applications to communicate and exchange information seamlessly. As AI and machine learning applications become more prevalent, well-designed APIs tailored to these technologies are paramount. This blog post will explore critical insights and best practices in building and maintaining APIs for AI-driven applications while addressing their unique challenges.

Building and Managing APIs

Developers must realize that building an API is only half the battle; they must also focus on managing and securing their APIs. Effective API management includes:

  1. Ensuring high performance and scalability
  2. Implementing proper authentication and authorization
  3. Handling API versioning and backward compatibility
  4. Monitoring, logging, and tracing API usage

API security is also of utmost importance. Developers must consider various attack vectors, such as DDoS attacks, injection attacks, and man-in-the-middle attacks. To address these threats, they should adopt security best practices like enforcing TLS encryption, implementing API gateways and using rate limiting and throttling mechanisms.
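To make the rate-limiting idea concrete, here is a minimal token-bucket sketch in Python; the capacity and refill rate are illustrative and not tied to any particular API gateway product.

```python
import time

class TokenBucket:
    """Allows short bursts while capping the sustained request rate."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, refill_per_sec=5)   # bursts of 10, ~5 requests/second sustained
if not bucket.allow():
    print("429 Too Many Requests")
```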

Building APIs for AI

When building APIs for artificial intelligence (AI) and machine learning (ML) applications, developers face unique challenges that require additional considerations, such as:

1. Managing data privacy and security: Protect sensitive data by implementing robust encryption, access controls, and privacy policies.
Example: Use end-to-end encryption to secure data during transmission and at rest, implement role-based access control (RBAC) to restrict data access to authorized users, and adopt privacy-enhancing technologies (PETs) like differential privacy to protect individual user information while allowing for data analysis.

2. Handling large volumes of data efficiently: Optimize data processing and storage by using techniques such as data compression, streaming, and parallel processing.
Example: Utilize Apache Kafka for real-time data streaming and processing, implement data compression algorithms like GZIP or Snappy to reduce storage requirements, and leverage parallel processing techniques, such as MapReduce or Apache Spark, to speed up data processing tasks.
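As a small illustration of combining streaming with compression, a kafka-python producer might look like the sketch below; the broker address, topic name, and payload are placeholders.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                       # placeholder broker
    compression_type="gzip",                                  # shrink payloads on the wire
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("inference-events", {"model": "captioner-v1", "latency_ms": 42})
producer.flush()
```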

3. Ensuring model accuracy and interpretability: Monitor model performance and explain model predictions clearly to enhance transparency and trust.
Example: Use performance metrics like precision, recall, and F1-score to evaluate model accuracy and employ explainable AI techniques, such as Local Interpretable Model-agnostic Explanations (LIME) or Shapley Additive Explanations (SHAP), to provide interpretable insights into model predictions.
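A minimal sketch of computing those accuracy metrics with scikit-learn, using made-up labels purely for illustration:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]   # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1]   # hypothetical model predictions

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```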

4. Managing the lifecycle of AI models: Establish processes for updating, retiring, and replacing models to maintain optimal performance and relevance.
Example: Implement continuous integration and continuous deployment (CI/CD) pipelines for AI models, monitor model performance metrics regularly to identify performance degradation or drift, and utilize model versioning tools, such as MLflow or DVC, to track model changes and easily revert to previous versions if necessary.
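As a hedged sketch of tracking a model version and watching for drift with MLflow (the run parameters and metric names are invented for illustration):

```python
import mlflow

with mlflow.start_run():
    mlflow.log_param("model_name", "caption-ranker")   # hypothetical model
    mlflow.log_metric("f1", 0.87)
    mlflow.log_metric("drift_score", 0.02)             # compare across runs to spot drift
```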

5. Creating an easy-to-use API for developers: Emphasize clear documentation, consistent API design, and responsive support to ensure a positive developer experience.
Example: Use tools like Swagger or OpenAPI to create comprehensive, interactive documentation, follow established API design principles and conventions, such as RESTful architecture or GraphQL, and provide dedicated support channels, such as forums, chat, or email, to address developer questions and concerns promptly.
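One way to get interactive OpenAPI (Swagger) documentation with little effort is a framework such as FastAPI; the endpoint below is a hypothetical, stubbed-out inference API rather than a real service.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Caption API", version="1.0")   # interactive docs served at /docs

class CaptionRequest(BaseModel):
    image_url: str

class CaptionResponse(BaseModel):
    caption: str

@app.post("/caption", response_model=CaptionResponse)
def caption(req: CaptionRequest) -> CaptionResponse:
    # A real service would call the underlying ML model here
    return CaptionResponse(caption=f"a placeholder caption for {req.image_url}")
```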

Conclusion

Building and managing robust APIs is crucial in modern software development, especially for AI-driven applications. By understanding the challenges and best practices associated with API development, developers can tackle the complexities of managing performance, security, and usability. By designing user-friendly APIs that cater specifically to AI and ML applications, they can build and maintain thriving API ecosystems that adapt to various use cases and drive innovation.

References

  1. Helmig, B. (n.d.). Building APIs for AI: An Interview with Zapier’s Bryan Helmig. Y Combinator Blog. Retrieved from https://www.ycombinator.com/blog/building-apis-for-ai-an-interview-with-zapiers-bryan-helmig
  2. Palladino, M. (2023, March 13). Building an API is Half the Battle: Q&A with Marco Palladino from Kong. Stack Overflow Blog. Retrieved from https://stackoverflow.blog/2023/03/13/building-an-api-is-half-the-battle-qa-with-marco-palladino-from-kong

The Future of Streamlined Service Integration

Middleware connects disparate systems, applications, and services within an IT environment. In today’s fast-paced world, businesses continuously seek ways to automate processes and streamline integration. This blog post delves into two resources that showcase the value of middleware automation collections and how they shape the future of middleware as a service.

Middleware Automation Collections:

Red Hat recently published an insightful tutorial on middleware automation collections (source 1). These collections enable users to easily deploy, configure, and manage middleware components. The tutorial demonstrates how Ansible Automation Platform, a popular open-source automation tool, can manage middleware components through automation collections. Key takeaways from the tutorial include:

  1. Introduction to Ansible Automation Platform: The tutorial highlights the benefits of using Ansible, such as its agentless nature, ease of use, and extensive library of pre-built modules.
  2. Middleware Automation Collections: The tutorial introduces the Red Hat JBoss Enterprise Middleware Automation Collection, which simplifies the deployment, configuration, and management of middleware components.
  3. Hands-on Example: The tutorial provides a step-by-step guide for using the Red Hat JBoss Enterprise Middleware Automation Collection to deploy a WildFly application server.

Middleware as a Service:

Middleware as a Service (MaaS) has gained traction recently (source 2). MaaS offers a range of benefits, including reduced time and cost associated with middleware management, improved scalability, and enhanced security. Key insights from the article include:

  1. MaaS Overview: Middleware as a Service is a cloud-based approach to managing middleware components, providing users with on-demand access to resources and seamless integration capabilities.
  2. Advantages of MaaS: The article highlights the benefits of MaaS, including cost savings, resource optimization, and simplified management.
  3. Implementation Considerations: Organizations looking to adopt MaaS must consider factors such as infrastructure compatibility, security, and vendor selection.

Conclusion:

Middleware automation collections and Middleware as a Service transform how businesses manage and integrate their systems and applications. By leveraging these technologies, organizations can simplify middleware management, reduce operational costs, and improve scalability. As businesses adopt cloud-based solutions and automation tools, middleware automation collections and MaaS will play an increasingly important role in streamlining service integration.

1. https://developers.redhat.com/articles/2023/03/14/tutorial-middleware-automation-collections

2. https://www.makeitfuture.com/automation/middleware-as-a-service

The Importance of Technology Roadmaps for Fintech Organizations

A technology roadmap is a strategic plan that outlines the technology initiatives, goals, and milestones an organization aims to achieve over a specific time frame. For fintech organizations, technology roadmaps are crucial in staying competitive, fostering innovation, and ensuring seamless integration of technology to meet business objectives. Here are several reasons why technology roadmaps are essential for fintech organizations:

  1. Align technology initiatives with business goals: A well-defined technology roadmap helps fintech organizations prioritize and align their technology investments with their overall business objectives. This alignment ensures that technology initiatives contribute to the organization’s growth and success.
  2. Foster innovation: Fintech organizations thrive on innovation to stay ahead in a rapidly evolving industry. A technology roadmap helps identify opportunities for innovation, such as exploring new technologies, improving existing products, or developing new services.
  3. Improve decision-making: Technology roadmaps provide a clear view of the organization’s technology landscape, enabling stakeholders to make informed decisions regarding investments, resource allocation, and prioritization of initiatives.
  4. Enhance collaboration and communication: A technology roadmap serves as a communication tool that fosters collaboration between different departments and stakeholders, ensuring a shared understanding of the organization’s technology goals and initiatives.
  5. Manage risks and uncertainties: Fintech organizations face various risks, including rapid technological advancements, regulatory changes, and shifting customer demands. A technology roadmap can help identify and mitigate these risks by outlining potential scenarios and providing a plan to address them.

Guidance on Creating a Technology Roadmap for Fintech Organizations

To create a technology roadmap that aligns with your fintech organization’s goals, follow these steps:

  1. Define your business objectives: Start by identifying your organization’s short-term and long-term business goals. These objectives will serve as the foundation for your technology roadmap.
  2. Assess your current technology landscape: Evaluate your existing technology stack, infrastructure, and processes. Identify any gaps, inefficiencies, or areas that require improvement to achieve your business objectives.
  3. Identify opportunities and challenges: Consider emerging technologies, industry trends, and potential challenges that may impact your organization. This analysis will help you identify areas for innovation and potential risks that need to be addressed.
  4. Prioritize initiatives: Based on your assessment, prioritize technology initiatives that align with your business goals and offer the most significant benefits. Consider factors such as feasibility, the potential return on investment (ROI), and resource availability when prioritizing projects.
  5. Develop a timeline: Create a realistic timeline for implementing your technology initiatives, including milestones and dependencies. This timeline will help you track progress and ensure your projects stay on schedule.
  6. Allocate resources: Assign the necessary resources, including personnel, budget, and tools, to support your technology initiatives.
  7. Communicate the roadmap: Share your technology roadmap with stakeholders, including employees, investors, and partners. This communication fosters collaboration and ensures everyone is on the same page regarding your organization’s technology goals and initiatives.
  8. Monitor progress and adapt: Regularly review your technology roadmap to track progress, identify any obstacles, and adjust your plans as needed. This ongoing monitoring ensures that your roadmap remains relevant and aligned with your organization’s evolving needs and goals.

By following these steps, your fintech organization can create a technology roadmap that effectively aligns with its business objectives, promotes innovation, and ensures a competitive edge in the rapidly evolving fintech landscape.

Message-driven vs. Event-driven approaches to application integration

Application integration is a crucial component of modern software development. With the increasing complexity of applications and the need for real-time data processing, businesses need efficient ways to integrate different applications and systems. Message-driven and event-driven approaches are popular integration strategies to help organizations achieve this goal. In this blog post, we summarize the discussion on StackOverflow on the benefits and drawbacks of these approaches.

What is Message-Driven Integration?

Message-driven integration is a process where messages are sent between different applications or systems. The sender application creates a message, which is then transmitted to the receiver application. The receiver application processes the message and returns a response message to the sender. The key advantage of message-driven integration is that it allows applications to communicate with each other in a loosely coupled way. This means that applications can evolve independently without affecting the integration between them.
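A toy in-process sketch of the pattern, with Python queues standing in for a real message broker:

```python
import queue

requests, responses = queue.Queue(), queue.Queue()

# Sender application puts a message on the request queue
requests.put({"action": "get_balance", "account": "ACC-123"})

# Receiver application (normally a separate service) processes it and replies
msg = requests.get()
responses.put({"account": msg["account"], "balance": 1042.50})

# Sender reads the response message
print(responses.get())
```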

What is Event-Driven Integration?

Event-driven integration is a process where events trigger actions in other applications or systems. In this approach, applications generate events that are consumed by other applications. The key advantage of event-driven integration is that it enables real-time data processing, as events are generated and processed as they occur. This approach is particularly useful in scenarios where real-time data processing is essential, such as financial transactions or IoT applications.
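A toy in-process sketch of the publish-subscribe idea; a production system would use a broker or event bus rather than this simple dictionary:

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # Every subscriber reacts to the event as it occurs
    for handler in subscribers[event_type]:
        handler(payload)

subscribe("payment.completed", lambda e: print("notify customer:", e["order_id"]))
subscribe("payment.completed", lambda e: print("update ledger:", e["amount"]))

publish("payment.completed", {"order_id": "ORD-42", "amount": 99.90})
```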

Benefits of Message-Driven Integration

  • Loosely coupled architecture: Applications can evolve independently without affecting their integration.
  • Reliability: Messages can be stored and resent if they fail to reach the receiver application.
  • Scalability: Message-driven integration can handle large volumes of messages and be scaled horizontally by adding more instances.

Drawbacks of Message-Driven Integration

  • Complexity: Setting up message queues and configuring applications to use them can be complex.
  • Latency: Message-driven integration can introduce latency as messages are sent and processed.
  • Overhead: The message header can introduce additional overhead and increase the size of messages.

Benefits of Event-Driven Integration

  • Real-time processing: Events are processed as they occur, enabling real-time data processing.
  • Scalability: Event-driven integration can handle large volumes of events and can be scaled horizontally by adding more instances.
  • Agility: Event-driven integration enables applications to respond quickly to environmental changes.

Drawbacks of Event-Driven Integration

  • Complexity: Event-driven integration requires a complex event-processing infrastructure to be set up.
  • Debugging: Debugging can be challenging, as events are generated asynchronously and may not be related to specific user actions.
  • Coupling: Applications may become tightly coupled if they share events, making it challenging to evolve them independently.

Conclusion

  • Message-driven integration suits scenarios where reliable communication between applications is essential and latency is acceptable. 
  • Event-driven integration is ideal for scenarios where real-time data processing is essential and applications need to respond quickly to changes in the environment. 

Fintech Solution Architects Hard Skills

Fintech solution architects are critical in designing, implementing, and maintaining complex systems that support the financial technology industry. They must possess diverse technical skills and knowledge to build and maintain reliable, scalable, and secure fintech solutions. The following 7 key areas are among the most critical for fintech solution architects, encompassing a range of technical expertise, including architecture design and implementation, cloud computing, payments and financial systems, security and compliance, data architecture and analytics, API design and integration, and emerging technologies.

By mastering these areas, fintech solution architects can design and deliver high-quality, cutting-edge solutions that meet the ever-evolving needs of the financial technology industry.

My last blog post was evaluated by ChatGPT

can you evaluate a blog post?

Sure, I can try to evaluate a blog post for you. Please provide me with the blog post you would like me to evaluate, and let me know what aspects of the post you would like me to consider.

https://devamanthrils.wordpress.com/2018/12/31/types-of-microservices-which-are-best-implemented-the-serverless-way-or-container/

The blog post you shared discusses the different types of microservices and evaluates the benefits of implementing them using serverless architecture or containers. Here are some aspects of the post that I think are noteworthy:

  1. Clarity of writing: The post is written in a clear and concise manner, making it easy to understand and follow the author’s arguments.
  2. Well-organized content: The post is well-structured, with clear headings and subheadings that make it easy to navigate and locate specific information.
  3. Technical expertise: The author demonstrates a good understanding of the technical aspects of microservices, serverless architecture, and containers, and presents their ideas in a well-reasoned and informed manner.
  4. Inclusion of examples: The author provides relevant examples to support their arguments and make the post more engaging and informative.
  5. Lack of sources: While the author presents their ideas in a clear and logical manner, there are no external sources cited to support the arguments presented in the post.

Types of microservices which are best implemented the Serverless way or Container


The use case for microservices arises when you need less collaboration between services; the important part is to determine where the boundary lies in terms of collaboration. Within a microservice, you might end up using a serverless architecture, resulting in cost savings as well as inherent decoupling. As far as containers are concerned, they are the tools for achieving true isolation for microservices.

Serverless architectures have several trade-offs. For example, it is easy to become strongly attached to a provider like AWS, which has its own caveats such as timeout restrictions. Authentication is also still a big issue with serverless architectures, with most of them relying on SPAs to work around this limitation.

If a website needs to keep some connections with external providers active, a serverless architecture is not an option; in that case, some microservices are kept up and running in containers just to keep those connections alive.

When building microservices, the primary considerations, IMO, should be things like “small” teams (one per microservice if possible), independent scaling and deployment, and loosely coupled services each providing one piece of functionality. Serverless vs. container is just the technical implementation of the service.

The choice will depend on one or more of:

1. Compute requirements,

2. The service running times,

3. Service SLAs if any,

4. Cost of computing,

5. Agility

Most of the use cases for serverless are event-based invocations of a service: “fairly quick” compute that returns a response. The interaction model could be synchronous request-response, event-based fire-and-forget, or (within AWS) working on streams of data.

Respective examples:

1. Get account opening form,

2. Notify banking officials on the new account creation request,

3. Saving and processing multiple such concurrent customers’ requests in the backend “asynchronously”.
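As a hedged sketch of example 2 above, an AWS Lambda-style handler might look like this; the event fields and the notification step are hypothetical:

```python
import json

def lambda_handler(event, context):
    """Reacts to a new-account-request event and returns quickly."""
    account_request = json.loads(event["body"])
    # e.g. push a notification to banking officials (stubbed out for illustration)
    print("notify officials: new account request from", account_request.get("customer_id"))
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```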

Use cases for (self-managed) containers could be long-running processes, back-end connection pooling, and any kind of state management within the containers.

A Dockerized application can be seen as an immutable piece of code that can run on any platform, OS, and environment. Portability and separation of concerns are the biggest pluses of a Dockerized application, but you still need a server to run the containers, and you have to pay for that hosting. Serverless applications, by contrast, are mostly managed by external vendors such as cloud service providers, who charge based on the resources (memory, CPU, time) you use to execute a piece of code. Serverless is a good fit if you don’t have hard SLAs to meet, since a serverless application may take time to boot up from a cold state; you also don’t need to worry about HA, patching, network configuration, and so on, as these are all managed by the cloud vendor.


Serverless:

– You have very limited control over the infrastructure; in exchange, the cloud provider takes on more responsibility for the infrastructure and the non-functional aspects of your microservice.

– Cloud native (locked in with a specific cloud provider).

– Focus on PaaS, with limited visibility into IaaS.

– Seamless integration with other services in the same cloud platform. In summary, you focus only on business logic and do not have to think about infrastructure setup.


Container approach:

– In this approach, you have more control over, and responsibility for, the infrastructure (VMs, containers, etc.); the cloud provider plays a much smaller role.

– Cloud agnostic: it can run on premises as well as on any cloud (including AWS and Azure).

– Focus on IaaS: in addition to business logic, you need to think about infrastructure design aspects as well.


You can apply some of the same principles that help decide when to use object-oriented vs. functional programming when developing an application. Objects are ideal for domain entities with state and behavior. Functions are ideal for event-driven behavior (logic) where state is passed in or stored elsewhere, e.g. callbacks. Similarly, FaaS/serverless is ideal for event-driven logic, while microservices packaged in containers work best for applications with both behavior and state.