Run Llama model on Raspberry Pi!

The Raspberry Pi, a small yet powerful device, has become increasingly popular for various computing projects, including running large language models (LLMs) like Llama. This blog post provides a detailed guide on setting up your Raspberry Pi, installing Llama, configuring it, and troubleshooting common issues.

Setting Up Your Raspberry Pi
  1. Flashing the OS: Begin by downloading the Raspberry Pi Imager from the official Raspberry Pi website. Select the appropriate OS. I used Raspbian OS since it's pretty tiny and neat. Flash the OS to a micro SD card.
  2. Initial Configuration: Insert the microSD card into your Raspberry Pi and boot up the device. You might want to connect an external monitor, keyboard, and mouse for the initial setup.

Bringing Llama to your Pi

Llama (Large Language Model Meta AI) is a family of autoregressive large language models (LLMs), released by Meta AI starting in February 2023.

Install Git: Open a terminal and ensure that git is installed:

sudo apt update && sudo apt install git

Install Python modules that will work with the model to create a chatbot:

pip install torch numpy sentencepiece

Ensure that you have g++ and build-essential installed, as these are needed to build C applications:

sudo apt install g++ build-essential

Cloning the Repository - LLama.cpp

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

git clone https://github.com/ggerganov/llama.cpp

Build the project

cd llama.cpp
make

While the build completes, you can download a model in parallel.

Since the Raspberry Pi is a small device, it's wise to pick a tiny model, so we will go with TinyLlama.

Download any one of the versions from here

I used tinyllama-1.1b-chat-v1.0.Q6_K.gguf

Go to the models folder inside llama.cpp and run the command below in a terminal to download the model

wget https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q6_K.gguf

Running your model

By now, the make command should have the server built. All you need to do is run it -

./server -m models/tinyllama-1.1b-chat-v1.0.Q6_K.gguf -t 3

You should now see the llama.cpp server up and running.

Go to localhost:8080 and you should see the chat UI.


Let's try playing with the model now.

Ask a quick question and see it respond 😃


Hey! It just suggested some great places to visit in India!
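You can also skip the browser and query the server programmatically. Here's a minimal Python sketch, assuming the default port and llama.cpp's /completion endpoint (the endpoint shape may differ across llama.cpp versions):

import requests

# llama.cpp's bundled server exposes a JSON completion endpoint;
# n_predict caps how many tokens are generated
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "Suggest three places to visit in India.", "n_predict": 128},
)
print(resp.json()["content"])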

Do give it a try. It's amazing !! 🥂

Serverless Fast API

This is the second part of a two-part series on creating Serverless FAST API.

If you haven't checked the first part, do take a look here



In the previous part, we created a FastAPI application that does CRUD operations on todos.

Now let us modify this a bit to support AWS serverless.

For this application, we will use AWS SAM.

I have already talked about serverless here for you to explore.


Updating the setup

We need to add a SAM template so we can deploy FastAPI to AWS Lambda. We also need an adapter that translates AWS serverless events into requests our FastAPI server understands.

Enter Mangum!

We will use Mangum, an adapter for running ASGI applications in AWS Lambda that handles Function URL, API Gateway, ALB, and Lambda@Edge events.

This is how we configure the adapter

from mangum import Mangum

from app.main import app

handler = Mangum(app=app)

Now let's add the SAM Template

We will create a ProxyApi of type Serverless API.

The runtime selected is Python 3.8 and the architecture is arm64.
The proxy setup allows a pass-through of all API calls to our FastAPI server using Mangum.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  fast-api

Globals:
  Function:
    Timeout: 30
    MemorySize: 128

Resources:
  ProxyApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      BinaryMediaTypes: [ '*/*' ]

  HandlerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.handler
      Runtime: python3.8
      Architectures:
        - arm64
      Events:
        FunctionProxy:
          Type: Api
          Properties:
            RestApiId: !Ref ProxyApi
            Path: "/{proxy+}"
            Method: ANY

Outputs:
  HandlerFunctionApi:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ProxyApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
  HandlerFunction:
    Description: "Lambda Function ARN"
    Value: !GetAtt HandlerFunction.Arn
  HandlerFunctionIamRole:
    Description: "Implicit IAM Role created"
    Value: !GetAtt HandlerFunctionRole.Arn

Deployment

Now all you need to do is run the below commands

sam build
sam deploy --guided

Once the deployment completes, we can check CloudFormation, where a new stack entry should appear.


Now let us try the endpoints using Postman or any other REST client.
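If you prefer scripting over Postman, a few lines of Python can exercise the same endpoints; a quick sketch, assuming you replace the hypothetical base URL below with the HandlerFunctionApi value from your sam deploy output:

import requests

# Hypothetical URL - copy yours from the stack outputs
BASE = "https://abc123.execute-api.us-east-1.amazonaws.com/Prod"

# Create a todo, list all todos, then delete the one we created
created = requests.post(f"{BASE}/todos/", json={"todo": "write blog"}).json()
print(created)
print(requests.get(f"{BASE}/todos/").json())
requests.delete(f"{BASE}/todos/{created['id']}")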

Creating Todo


Fetching Todos


Deleting Todo


That's it! This completes our serverless Fast API! Hope you like it 🍻

Fast API

This is the first part of a two-part series on creating a serverless API using Fast API

Fast API

FastAPI is a modern, high-performance web framework for building APIs with Python based on standard type hints. It has the following key features:

  • Fast - Very high performance, on par with NodeJS and Go (thanks to Starlette and Pydantic)
  • Fast to Code - Increases developer productivity to ship code faster
  • Fewer Bugs - Reduces human-induced errors
  • Intuitive - Great editor support
  • Easy - Designed to be easy to learn and code
  • Short - Minimize code duplication
  • Robust - It provides production-ready code with automatic interactive documentation.
  • Standards-based - It’s based on the open standards for APIs, Open API, and JSON Schema.

Let's create a small to-do app that uses an in-memory data store.

Structuring the codebase

Let's structure the app like below

  1. The app folder contains everything related to the app
  2. The model contains all the models
  3. The router contains a route for each use case
  4. The service contains the business logic

The Model

For the model, we use the code below -

from pydantic import BaseModel
from typing import Union


class Todo(BaseModel):
    id: Union[str, None] = None
    todo: str


The Router

The router has the below routes -

from fastapi import APIRouter, HTTPException

import app.service.todo as todo_service
from app.model.todo import Todo

router = APIRouter(
    prefix="/todos",
    tags=["todos"],
    responses={404: {"description": "Not found"}},
)


@router.get("/")
async def read_todos():
    return await todo_service.read_todos()


@router.post("/")
async def create_todo(todo: Todo):
    return await todo_service.create_todo(todo)


@router.get("/{todo_id}")
async def read_todo(todo_id: str):
    todo = await todo_service.read_todo(todo_id)
    if todo is None:
        raise HTTPException(status_code=404, detail="Todo not found")
    return todo


@router.delete("/{todo_id}")
async def delete_todo(todo_id: str):
    todo = await todo_service.read_todo(todo_id)
    if todo is None:
        raise HTTPException(status_code=404, detail="Todo not found")

    return await todo_service.delete_todo(todo_id)

We use the @router decorators for the various HTTP verbs

The router calls the service which has the business logic


The Service

Service details -

import uuid
from typing import List

from app.model.todo import Todo

todos_db: List[Todo] = []


async def read_todos():
    return todos_db


async def create_todo(todo: Todo):
    todo.id = str(uuid.uuid4())
    todos_db.append(todo)
    return todo


async def read_todo(todo_id: str):
    for todo in todos_db:
        if todo.id == todo_id:
            return todo
    return None


async def delete_todo(todo_id: str):
    t = None
    for todo in todos_db:
        if todo.id == todo_id:
            t = todo
    if t is not None:
        todos_db.remove(t)
        return t
    return None

The service handles the generic CRUD operations with an in-memory data store called todos_db.


Putting it all together

The main app needs to be configured this way -

from fastapi import FastAPI

from .router import todo

app = FastAPI(
    title="Todos API",
    description="Todos API with in memory Database",
)

app.include_router(todo.router)


@app.get("/", tags=["health"])
async def root():
    return {"status": "Success", "message": "App is running!"}

The title and description will be used for the out-of-the-box OpenAPI specification.


Final Step

FastAPI can be run with the uvicorn server. Let's run it with the below configuration

import uvicorn


host="0.0.0.0"
port=8002
app_name="app.main:app"


if __name__ == '__main__':
    uvicorn.run(app_name, host=host, port=port)

Okay, now let's run the code.

python .\main.py

We get amazing open API specs out of the box 😍

Go to http://localhost:8002/docs and we get the open API specs


We can now run the API using the docs or any REST API client
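Beyond the interactive docs, FastAPI also ships a TestClient that exercises the routes in-process, which is handy for automated checks. A small sketch (recent FastAPI versions need the httpx package installed for this):

from fastapi.testclient import TestClient

from app.main import app

client = TestClient(app)

# Create a todo through the API, then read it back
created = client.post("/todos/", json={"todo": "learn fastapi"}).json()
assert client.get(f"/todos/{created['id']}").status_code == 200
print(created)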


Hope you like it! Do give it a try.

Oh! Here's the GitHub repository to play with the code

Serverless Chat GPT API in Go
This blog shares how to create your own serverless ChatGPT API in Go.

What is Chat GPT ?

A natural language processing (NLP) model called GPT-3, also known as the "Generative Pre-training Transformer 3," was developed using text that humans produced. It can create its own human-like text using a range of languages and writing styles given some text input.

More details on Chat GPT and the excellent work can be found here.

What is Go?

Go, also referred to as Golang, is an open-source, statically typed, compiled computer language created by Google. It is designed to be straightforward, powerful, readable, and effective.

Prerequisites

The following prerequisites are needed -

  • OpenAI Account - can be created here
  • SAM CLI Installed - can be found here
  • AWS Account(if you plan on deploying it to AWS)
Create Serverless API
  • Run sam init
  • Select AWS Quick Start Templates
  • Edit the structure to create a folder called pkg and add a main.go
  • Create the functionality to call the Chat GPT API and return the response
  • Create a struct TextCompletionRequest like the one below, which matches the request format the Chat GPT API accepts
type TextCompletionRequest struct {
	Model            string  `json:"model"`
	Prompt           string  `json:"prompt"`
	Temperature      float64 `json:"temperature"`
	MaxTokens        int     `json:"max_tokens"`
	TopP             float64 `json:"top_p"`
	FrequencyPenalty float64 `json:"frequency_penalty"`
	PresencePenalty  float64 `json:"presence_penalty"`
}
  • Create a struct TextCompletionResponse like the one below, which matches the response format the Chat GPT API returns
type Choice struct {
	Text         string      `json:"text"`
	Index        int         `json:"index"`
	Logprobs     interface{} `json:"logprobs"`
	FinishReason string      `json:"finish_reason"`
}

type TextCompletionResponse struct {
	ID      string   `json:"id"`
	Object  string   `json:"object"`
	Created int      `json:"created"`
	Model   string   `json:"model"`
	Choices []Choice `json:"choices"`
	Usage   struct {
		PromptTokens     int `json:"prompt_tokens"`
		CompletionTokens int `json:"completion_tokens"`
		TotalTokens      int `json:"total_tokens"`
	} `json:"usage"`
}
  • Create the ConverseWithGPT function below
func ConverseWithGPT(prompt string) (TextCompletionResponse, error) {
	httpReq := TextCompletionRequest{
		Model:            "text-davinci-003",
		Prompt:           prompt,
		Temperature:      0.7,
		MaxTokens:        100,
		TopP:             1.0,
		FrequencyPenalty: 0.0,
		PresencePenalty:  0.0,
	}
	jsonValue, _ := json.Marshal(httpReq)

	bearer := "Bearer " + BearerToken

	req, _ := http.NewRequest("POST", ChatGPTHTTPAddress, bytes.NewBuffer(jsonValue))
	req.Header.Set("Authorization", bearer)
	req.Header.Add("Content-Type", "application/json")

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return TextCompletionResponse{}, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		return TextCompletionResponse{}, ErrNon200Response
	}

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return TextCompletionResponse{}, err
	}

	var textCompletionResponse TextCompletionResponse
	if err := json.Unmarshal(body, &textCompletionResponse); err != nil {
		fmt.Println("Can not unmarshal JSON")
	}

	return textCompletionResponse, nil
}

To check the code, check out this repo.

Let's see it in action

Let's now run this locally.

Run the below commands -

sam build
sam local start-api

Now, we can run this using Postman


All right, Chat GPT suggested some Netflix series to watch. Let me go watch them while you try running this yourself.
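If you'd rather script the request than use Postman, sam local start-api listens on port 3000 by default; the route and input shape below are hypothetical, so match them to whatever your SAM template and handler define:

import requests

resp = requests.get(
    "http://127.0.0.1:3000/chat",                  # hypothetical route
    params={"prompt": "Suggest a Netflix series"}, # hypothetical input
)
print(resp.json())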

Cheers 🍻

gRPC using Java

What is gRPC?

As per the official site -

gRPC is a modern open-source high-performance Remote Procedure Call (RPC) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking, and authentication. It is also applicable in the last mile of distributed computing to connect devices, mobile applications, and browsers to backend services.

Some salient features -

  • Simple service definition using Protocol Buffers
  • Easily scalable
  • Works across languages and platforms
  • Supports Bidirectional streaming with HTTP/2-based transport

gRPC Architecture

gRPC is based on the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client requests. On the client side, the client has a stub that provides the same methods as the server.



Protocol Buffers

Protobuf is the most commonly used IDL (Interface Definition Language) for gRPC. It's where you store your data and function contracts in the form of a proto file.

Both Client and Server need to have the same proto file which acts as a contract between them.

Protobuf is popular because it serializes messages to a compact binary format rather than JSON text, making payloads much smaller and faster to process.

Learn more about it here.


Now that we know some theory, let's get our hands dirty! We will use Java to create a gRPC server and call it using Postman.

Code Setup
  • Let's create a new Maven project in IntelliJ/Eclipse
  • Now let's add a few required dependencies

See the detailed pom file here

  • Now we will create the proto file as below -
syntax = "proto3";

option java_package = "in.bitmaskers.grpc";
option java_outer_classname = "Todos";

service TodoStore {
  rpc add (Todo) returns (APIResponse);
  rpc searchTodo(TodoSearch) returns (Todo);
  rpc listTodos(Empty) returns (stream Todo);
}

message Todo {
  string name = 1;
}

message TodoSearch {
  string name = 1;
}

message Empty {
}

message APIResponse{
  string responseMessage = 1;
  int32 responseCode = 2;
}
  • Now let's run 'mvn clean install' from the app directory, and we can see the appropriate Java files get generated
  • Now we will create TodosStoreService which will extend TodoStoreGrpc.TodoStoreImplBase and override the methods.

Creating the Server

Now let's implement the methods created in the previous steps

  • First, we create a transient in-memory todo list to save the todos. We can always hook it up to a database, but let's keep it in memory for this article
  • Method to add Todos
@Override
    public void add(Todos.Todo request, StreamObserver<Todos.APIResponse> responseObserver) {
        String name = request.getName();
        Todos.Todo todo = Todos.Todo.newBuilder().setName(name).buildPartial();
        todoList.add(todo);
        Todos.APIResponse.Builder apiResponse = Todos.APIResponse.newBuilder();
        apiResponse.setResponseCode(0);
        apiResponse.setResponseMessage("Todo added successfully");
        responseObserver.onNext(apiResponse.build());
        responseObserver.onCompleted();
    }

Here, we take the input name from the request, create a new Todo using the Builder pattern, and then add it to the in-memory todoList.

Additionally, we also create an apiResponse which is sent back using responseObserver.

  • Method to searchTodo
@Override
    public void searchTodo(Todos.TodoSearch request, StreamObserver<Todos.Todo> responseObserver) {
        String name = request.getName();
        Optional<Todos.Todo> optionalTodo = todoList.stream().filter(todo -> todo.getName().equals(name)).findFirst();
        if (optionalTodo.isPresent()) {
            responseObserver.onNext(optionalTodo.get());
        } else {
            responseObserver.onNext(Todos.Todo.newBuilder().buildPartial());
        }
        responseObserver.onCompleted();
    }

Here we use the Streams API to search for the todo with the given name and return it if found; otherwise, an empty Todo is returned

  • Method to stream a list of Todos

This is an interesting one. We are not sending the entire Todo List in one go but streaming it using Server Streaming capability.

@Override
    public void listTodos(Todos.Empty request, StreamObserver<Todos.Todo> responseObserver) {
        for (Todos.Todo todo : todoList) {
            responseObserver.onNext(todo);
            try {
                // Replicate time-consuming IO Calls
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        responseObserver.onCompleted();
    }

Here we are using a Thread.sleep of 5000 ms to simulate time-consuming IO calls.

  • The Server

Easiest one of all. We create a main method that can be used to run the code.

public class GRPCServer {
    public static void main(String[] args) throws IOException, InterruptedException {
        Server server = ServerBuilder.forPort(9090).addService(new TodosStoreService()).build();
        server.start();
        System.out.println("Server started at " + server.getPort());
        server.awaitTermination();
    }
}

All right our server is ready. Now let's run it and use Postman to perform some calls.
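(If you'd rather script the calls than click through Postman, a minimal Python client works too. Here's a sketch assuming you generate Python stubs from the same proto with grpcio-tools; the todos_pb2/todos_pb2_grpc module names are hypothetical and depend on your proto file name.)

# pip install grpcio grpcio-tools
# python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. todos.proto
import grpc

import todos_pb2
import todos_pb2_grpc

channel = grpc.insecure_channel("localhost:9090")
stub = todos_pb2_grpc.TodoStoreStub(channel)

# Unary call: add a todo
print(stub.add(todos_pb2.Todo(name="buy milk")).responseMessage)

# Server-streaming call: items arrive one at a time, roughly 5s apart
for todo in stub.listTodos(todos_pb2.Empty()):
    print(todo.name)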

Setting up Postman
  • Create a new gRPC request
  • Import the proto file created before

Import it as an API

  • We should now see the URL and methods appearing

Let's now run all the methods and see them in action

  • Add Todo
  • Search Todo
  • List Todos

See how the response streams in, with a gap between each item!


All right folks! This was a long one, but I enjoyed learning something new.

Hope you like it. Cheers 🍻

AWS Lambda Function URLs

Serverless most often refers to applications that don’t require you to provision or manage any servers. You can focus on your core product and business logic instead of responsibilities like operating system (OS) access control, OS patching, provisioning, right-sizing, scaling, and availability.

For more details on Serverless, check out the earlier AWS Serverless post on this blog.
The Serverless API

Earlier, we had to orchestrate a few services together to get an API up and running.

Several serverless functions implement the business logic in these apps. Using services like Amazon API Gateway and Application Load Balancer, each function is mapped to API endpoints, methods, and resources.

However, at times you need a simple HTTPS endpoint without having to configure or learn additional AWS services.

This is what AWS Lambda Function URLs allow you to do. They remove the dependency on API Gateway for exposing HTTPS endpoints.

The Lambda function URL is globally unique and follows this format:
https://<url-id>.lambda-url.<region>.on.aws
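Once a function URL exists, calling it is a plain HTTPS request. A quick Python sketch with a hypothetical URL:

import requests

# Hypothetical function URL - copy yours from the Lambda console
url = "https://abcdefgh12345.lambda-url.us-east-1.on.aws/"

resp = requests.get(url)
print(resp.status_code, resp.text)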

Creating Lambda Function URLs using AWS console

  • First, we create a new Lambda Function
  • Next, we enable the function URL in Advanced Settings and select Auth Type as NONE
  • The boilerplate code is added and you will also see a new function URL generated which is a unique HTTPS endpoint
  • Now we try calling that URL from Postman and check the response
Features available

Although it doesn't support all the features that API Gateway does, it still has some good features which can be useful

  • Allows AWS IAM Auth type
  • Enable CORS policy
  • Limit the HTTP methods
  • Header control
One of the most amazing features

One feature that makes Lambda Function URLs better than API Gateway is the timeout.

API Gateway has a hard 30-second timeout, which can be a problem in some use cases.

However, Lambda Function URLs don't add any such limit. Lambda itself has a timeout limit of 15 minutes, which is more than enough.

You say — You need more time? — Well consider your architecture selection then 😉

To check this, I updated the code to sleep for 1 minute


Next, we try calling it from Postman again


Hope you liked this!
I will cover lambda function configuration using serverless.com templating in upcoming blogs.

Cheers. 🍻

Integrate Python with MuleSoft

Python 🐍

Python is a programming language that is commonly used to create websites and applications, automate operations, and perform data analysis. Python is a general-purpose programming language, which means it can be used to develop a wide range of applications and isn’t specialized for any particular problem. Because of its versatility and beginner-friendliness, it has become one of the most widely used programming languages today.

Why is Python so popular?

Although Python is known for being slow 🐢, it's very popular. Here are some reasons:

  • Easy to learn and use
    Python is a high-level programming language that reads close to English pseudocode, which makes it easy to pick up
  • Libraries
    "Don't reinvent the wheel" - there are hundreds of useful libraries that get the job done faster
  • Community Support
    It has rich community support hosting meetups and events to clear fellow developers’ doubts.

These are just a few of many other reasons why Python is trending.

Here's a Stack Overflow survey showing the popular languages:
Click here for the survey insights.

Integrating Python with the MuleSoft Scripting Module
  • Create a new app
  • Add Scripting module as below
  • Add Jython dependency

<dependency>
 <groupId>org.python</groupId>
 <artifactId>jython-standalone</artifactId>
 <version>2.7.2</version>
</dependency>

  • Also include the Jython shared library

<configuration>
 <sharedLibraries>
  <sharedLibrary>
   <groupId>org.python</groupId>
   <artifactId>jython-standalone</artifactId>
  </sharedLibrary>
 </sharedLibraries>
</configuration>

  • Now we should be able to add a Scripting Execute operation in our flow
  • Here's the Python code that uses the requests library to call a website and return the status code
import requests

def request_website(site):
    r = requests.get(site, verify=False)
    return r.status_code

result = request_website(site)
  • Alternatively, we can also add a reference to the script file using ${file::app.py}, assuming app.py is your script file present in src/main/resources
  • Files should look like below:
  • Finally, we add a Transform Message to display the content to the calling client
  • Now let’s see it in action using Postman

There we have a working MuleSoft application integrated with Python.

Hope you liked it. Cheers 🍻

Integrating .NET with MuleSoft

Often you will find a lot of legacy business logic residing in .NET code in a project ecosystem.

This code can have solid business logic implemented, and reinventing the wheel in Mule can get messy and tiresome.

Mule has support for handling this scenario as well. We can use the MuleSoft Microsoft .NET Connector to reuse this legacy code in a Mule app.

Here, I will show an end-to-end method of integrating with .NET code.

So, let's get started.


What is a DLL?

A DLL is a library that contains code and data that can be used by more than one program at the same time. This contains code that can be reused by many other programs thus adding a sense of reusability.

For Java folks: Consider this as JARs.👌

Creating the .NET DLL

You will need Visual Studio to create a DLL file. 💬

Let's see the process:

  1. We will create a C# Class Library

2. Next we will name the solution and create the application


3. Finally we will create a simple calculator app and code in C# as below:


4. On building the project we can see that the DLL files get created as \bin\Debug\netstandard2.0\CalculatorApp.dll


This completes the work needed for the .NET code. Now, we will use this in our Mule app.


We will create a new Mule app. I named it netapp (you can name it anything 😄)

Firstly, copy the generated DLL files to the src/main/resources directory of your Mule app.

Copy the DLL and PDB files as below:


This will then be referenced in the connector.

  • Microsoft .NET Connector can be added from Exchange as below:

Once added we can see it in the Mule Palette.

  • Next, we will create the connector config

Select Connection as Resource since we have our DLL present in the resource folder.

The scope can be singleton so that one instance is created for all application calls.

Next, add the file name in the path as below:


On clicking Test Connection, we should get the below response:


This completes our Connector configuration.

  • Now, we will create the flows for our app and add the executed operation

Create a simple HTTP Listener and add Execute from the .NET Connector

  • Now, let's see what the execute operation looks like.

We will add the Connector configuration as the one created earlier and on clicking the refresh button for Method Info Type, we will find the details populated as above.

All the functions we wrote using Visual Studio for the Library are available in the method.

For this sum operation, we will select

int sum(int a, int b) (CalculatorApp.Calculator, CalculatorApp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null | sum(System.Int32 a, System.Int32 b) -> System.Int32)

For input to the method, we can use the Arguments field; I am passing two query parameters as input, namely number1 and number2.


These values will be then passed to our DLL and it will return the response.

  • Finally, a Transform message to get back the response.

Now, let's run the app and pass two query parameters.

  • Now, we will test it using Postman and pass 4 and 5 as input

Thus, we can see the output coming as expected. This shows the capability of talking to a .NET DLL from a Mule app.


Similarly, we can complete all other calculator operations, and the end-to-end application flow should look like the below:


Point to note: the .NET Connector needs the .NET Framework installed in order to execute.

Hope you like it. Cheers!

Hosting React Apps in MuleSoft

MuleSoft is best known for APIs. But we can also use it as a normal web server serving HTML pages. In this tutorial, I will cover the way to host React applications in MuleSoft.


Creating a simple react app

Here, I will create a very simple React app showing an image

  • npx create-react-app simple-page-app
    Use this command to create a simple React app skeleton, which can be modified as you wish
  • Update App.js to show an image
  • And you are done! 😄

Didn't I say it was simple? Yes!


Hosting this app in MuleSoft

Create a Mule app "mulesoft-web-server"

  • Drag an HTTP Listener
  • Drag a Load Static Resource with the below config
  • Your flow should look like below
  • We will now create a folder called public in src/main/resources and put the React app build files there
  • Content should look like below

Now we can start the application and see the output!


Hope you liked it. Cheers 🍻

GitHub link to repo is here!

Docker Cheat Sheet

Docker is a revolution in the IT landscape and has made things easy.

It stops developers from claiming: "It works on my machine" 😄

Here's the cheat sheet I use when working with Docker.

Frequently used commands

  • docker --version

It shows the docker version installed.

  • docker pull <image_name>

It allows you to pull images from a registry.

  • docker images

To get all the pulled images that are available locally.

  • docker ps

It allows you to see all the containers which are up and running.

  • docker ps -a

It allows you to see all containers, both running and stopped.
The difference between docker ps and docker ps -a is that the -a flag also shows containers that are down.

  • docker run -it -d <image_name>

To create a Docker container from a Docker image in detached mode.

  • docker exec -it <container_id> bash

To get an interactive bash shell inside the container.


Tip: you can use part of the container_id too

  • docker start <container_id>

To start a container with mentioned <container_id>.

  • docker restart <container_id>

To restart a container with mentioned <container_id>.

  • docker stop <container_id>

To stop a container with mentioned <container_id>.

  • docker rm <container_id>

To delete a container with mentioned <container_id>.

  • docker rmi <image_id>

To delete a docker image with the mentioned <image_id>

  • docker inspect <container_id>

Gives details about the container. The output is JSON data with all the details of the container.
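If you ever need these operations from code rather than the shell, the Docker SDK for Python mirrors most of them. A small sketch, assuming the docker package is installed and the daemon is running:

# pip install docker
import docker

client = docker.from_env()  # connect to the local Docker daemon

container = client.containers.run("nginx", detach=True)  # docker run -d nginx
print(container.short_id)

print([c.name for c in client.containers.list()])  # docker ps

container.stop()    # docker stop <container_id>
container.remove()  # docker rm <container_id>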


Hope you liked this cheat sheet. 🍻

AMQ vs SQS vs SNS vs Google Pub-Sub

Messaging Queues

Asynchronous communication is enabled via message queue software, which allows machines to communicate remotely and acts as the backbone of any distributed system. A single application cannot be responsible for the entire operation in advanced systems. Rather, numerous apps are linked together to complete their respective sets of tasks and meet the system's overall requirement. These systems need to communicate among themselves, and this is where the need for queues arises!

Currently, the market offers a bunch of options when it comes to queues: Apache Kafka, Apache ActiveMQ, AWS SQS, AWS SNS, Google Pub/Sub, RabbitMQ, etc.

Let's make a comparison which can help us choose the right tool for our project.


I already did a series on Kafka vs RabbitMQ which can be found below:

Kafka vs RabbitMQ

ActiveMQ

Apache ActiveMQ is a prominent Java-based open source messaging server. It acts as a bridge between numerous apps that are hosted on different servers or developed in different languages. It supports numerous messaging protocols, including AMQP and MQTT, and implements JMS (Java Message Service).

Features -

  1. Multiple connection protocols are supported
  2. Along with vertical scaling, built-in functionality for horizontal scaling, called Network of Brokers, is also supported.
  3. Schedule delayed deliveries
  4. Offers an API for custom authentication plug-ins

When to use?

ActiveMQ is preferably used where small amounts of data are involved. Messages can be transmitted as part of a queue or as a topic with ActiveMQ. One or more consumers are connected to the queue through point-to-point messaging, and the broker utilizes a round-robin strategy to direct messages to specific consumers. With subscription-based messaging, brokers send messages to all consumers who have subscribed to the topic.

For enterprises that don't need big data processing, ActiveMQ is a good option.


AWS SNS

Amazon SNS, or Amazon Simple Notification Service, is a push notification service offered by Amazon. It is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.

Features-

  1. Provides High Throughput
  2. Fully managed and uses AWS cloud to automatically scale the workload.
  3. Through Amazon Cloudwatch, it allows you to view the system metrics and resolve issues quickly.
  4. The A2P functionality enables you to send messages to users at scale via SMS, mobile push, and email.
  5. Push based system

When to use?

It’s a low-cost infrastructure, primarily used by companies to send pub/sub messages to their customers. This web service makes it easier for publishers to create and push notifications from the cloud. It’s ideal for developers who are looking for a message notification system that integrates with minimal effort, requires minimum maintenance and works on pay-as-you-go pricing. Unless your application requires a conventional queuing system, Amazon SNS offers a cheaper solution to push subscribed messages to customers.


AWS SQS

Amazon SQS is a fully managed distributed message queuing service offered by Amazon. It's a cost-effective and simple technique to manage communication between components of software systems running in the cloud (or even on-prem if required). SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.

Features

  1. The SQS queues automatically scale to the size of the workload.
  2. No additional infrastructure is needed for using Amazon SQS.
  3. Unprocessed messages can be maintained in a “dead letter” queue.
  4. Benefits from the large scale of AWS infrastructure
  5. FIFO queues guarantee exactly-once delivery
  6. Standard queues guarantee at-least-once delivery

When to use?

Amazon SQS is of prime value to a serverless architecture where you want different services to function independently. It offers a lightweight and fast approach to establish communication between these decoupled services.

It's a good option in applications where multiple independent systems need to be integrated without the overhead of maintaining your own queue infrastructure. If your system uses a serverless stack with Lambdas, then this is definitely a good option.

The FIFO queues are good for ordered delivery of messages but come at the price of limited throughput. So, we need to choose wisely!
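To get a feel for how lightweight SQS is to use, here is a minimal boto3 producer/consumer sketch; it assumes an existing queue (the URL below is hypothetical) and AWS credentials configured locally:

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL - copy yours from the SQS console
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

# Producer: push a message onto the queue
sqs.send_message(QueueUrl=queue_url, MessageBody="order-created:42")

# Consumer: poll, process, then delete what was handled
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])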


Google Pub/Sub

Google Cloud Pub/Sub is an asynchronous messaging service that allows you to send and receive messages between different apps. It provides dependable message storage and low-latency real-time message delivery, making it a popular choice among developers that need to send out event notifications, stream data from many devices, or build asynchronous workflows.

Features

  1. It offers low latency and high throughput.
  2. Both push and pull message deliveries are supported.
  3. It’s highly scalable, with support for 10,000 messages per second for all customers by default.
  4. The first 10 gigabytes of data are free.
  5. It has a lite version which costs even less.
  6. It does not guarantee exactly-once delivery

When to use?

Google Pub/Sub offers reliable messaging and data streaming across applications hosted anywhere on the internet, including Google Cloud Platform. Many advanced features are included to make communication easier to manage. Auto-scaling, dead-letter delivery and filtering makes your applications simpler. Many developers prefer it for the flexible pricing. Google Pub/Sub charges you on the volume transmitted after you have used your free 10 gigabytes.

Those who already have their applications running on Google Cloud Platform should go for Google Pub/Sub.


Hope this article helps. Cheers 🍻

SOA vs Microservices

Most people who work in technology, especially cloud computing, are probably familiar with service-oriented architecture (SOA) and microservices, but some doubt always remains regarding the differences between the two.

In this article, we will take a deep look at SOA (Service-Oriented Architecture) and microservices.

What is SOA?

A service-oriented architecture (SOA) is a method of developing software that focuses on reusability while also ensuring that non-functional needs (such as security, scalability, and performance) are addressed. It includes a group of services that are modular in nature and "communicate" with one another to support applications. The communication can involve either simple data passing or two or more services coordinating some activity. Some means of connecting services to each other is needed, which is typically an ESB (Enterprise Service Bus).

ESB - Enterprise Service Bus
What is a Microservice?

Microservices are a sort of evolution or extension of SOA. Microservices use APIs, or application programming interfaces, to connect with one another. Each of them contributes to the formation of an organizational network centered on a given business topic. When these services are combined, they form sophisticated applications. Microservices construct an application system as a collection of single-purpose services that are generally different and distinctive.

MS - Microservice

SOA and Microservice can be differentiated on Scope, Granularity, and Implementation!

  • Scope

SOA is a collection of enterprise-level applications and services that need developers to have a thorough understanding of the application and all of its dependencies in order to code properly.

Microservices, on the other hand, is a design pattern for an application. It divides a single application into numerous distinct services, each of which performs a different purpose. In other words, each capability excels at one task, decreasing the amount of knowledge required by engineers to work on each module.

  • Granularity

SOA is "coarse-grained", which means it concentrates on broad, business-domain functions. As a result, each functionality has a large number of dependents.

Microservices are even more "fine-grained", resulting in a tangle of capabilities with a single emphasis known as a bounded context. Each bounded context is self-contained, lightly connected, and much more focused than an SOA's domain functions. This allows it to be more scalable than SOAs.

  • Implementation

SOA requires software that handles communication. Historically, businesses have used web service access protocols like SOAP to expose, and then transact with each functionality. This created a point-to-point integration which was difficult to manage. This is why SOA makes use of an Enterprise Service Bus (ESB), a middleware technology that handles communication and lowers the amount of point-to-point connections between capabilities.

When it comes to a microservices architecture, each individual service operates and scales independently and so each has its own separate point of failure, making microservices far more resilient and agile, and enabling the independent coding of each individual functionality.


Hope you liked it. There will be a second part published on when to use SOA and when to use Microservices. So stay tuned.

Cheers 🍻

Database Indexing

Database

In computing, a database is an organized collection of data stored and accessed electronically from a computer system. It is a system for storing data in an ordered manner. The data is stored in a specified structure within the storage. Each database type has its own data storage format. They've been tweaked and tuned for various scenarios.

Let's take this simple Employee Table as an example

Ever wondered how this simple table is actually saved internally?


Storage

Internally, each database is kept as a file with a certain encoding and layout. Let's pretend that a database is backed by a CSV file for the purpose of simplicity. Here's how it appears:

Id,FirstName,LastName,Designation
1,Analise,Holsall,Account Executive
2,Agnes,St. Hill,Structural Analysis Engineer
3,Ulrica,Willimot,Mechanical Systems Engineer
4,Coop,Awcoate,VP Product Management
5,Willi,De Maria,Business Systems Development Analyst

Everything appears to be straightforward. It's not difficult to perform a lookup with only five entries, but what if there were 100,000? It would take a long time to go through the entire file. The query time grows in direct proportion to the file size.

This brings us to finding an optimal solution to retrieve data faster.

Indexing comes to the rescue!!


Indexing

A database index is a data structure that can be used to speed up data retrieval. What does it look like?

It would be much faster to skip to the appropriate row without looping through the remainder of the table if we needed to fetch an employee with ID 5. This is the central concept of indexing. We must additionally save the offset, which leads to the appropriate entry.

The simplest way to achieve this would be to use hashing. The key is the value of the column we want to index (in this example, the Id column). The hash value is the offset (starting position) of the row in the database file. For ID = 1, the offset is 0. For ID = 2, the offset is 36.

The end-to-end structure will look like this:

1 -> 0  -------> 1,Analise,Holsall,Account Executive
2 -> 36  ------> 2,Agnes,St. Hill,Structural Analysis Engineer
3 -> 72  ------> 3,Ulrica,Willimot,Mechanical Systems Engineer
4 -> 118 ------> 4,Coop,Awcoate,VP Product Management
5 -> 155 ------> 5,Willi,De Maria,Business Systems Development Analyst

Querying employees by ID will yield faster results if we build an index. The retrieved request accesses the hash index and retrieves the offset for the desired ID.

There is also the option of having several indexes. If any other column needs to be accessed quickly, we create an index for it as well. For example, we could create a designation index and query employees by designation faster! However, each new index adds an additional load to the database.
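The whole idea fits in a few lines of Python. Here's a toy sketch of offset-based indexing over the first rows of the CSV above, using an in-memory file for brevity:

import io

f = io.StringIO(
    "1,Analise,Holsall,Account Executive\n"
    "2,Agnes,St. Hill,Structural Analysis Engineer\n"
    "3,Ulrica,Willimot,Mechanical Systems Engineer\n"
)

# Build the hash index: Id -> offset of the row in the file
index = {}
offset = f.tell()
line = f.readline()
while line:
    index[line.split(",", 1)[0]] = offset
    offset = f.tell()
    line = f.readline()

# Lookup: jump straight to the row instead of scanning the whole file
f.seek(index["3"])
print(f.readline().strip())  # 3,Ulrica,Willimot,Mechanical Systems Engineer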

The Cost of Indexing

Firstly, each index requires additional memory space. It's crucial to remember to index only the columns that will be queried frequently. Otherwise, indexing each column would take up a huge amount of memory.

Secondly, write operations become slightly slower in exchange for the quick reads. We must add an entry to the hash index every time we insert a row into the table, which slows down inserts.

TL;DR
  1. A database index can help speed up read queries.
  2. Memory usage increases for each index added.
  3. Adding an index has an impact on database writing operations.

In this article, we saw a simple file-based data store with offset-based indexing. There are many other approaches, like B-Tree or B+Tree, but the core concept remains the same.


I always wondered how to enhance my API performance, and indexing is one of many ways to make our APIs faster.

Hope you like it. Cheers 🍻

AWS Lambda Cold Start

Lambda is part of the Serverless offering from AWS. Check a detailed post here:


Although Lambdas are great, they come with something called a cold start.

Cold starts can wreak havoc on Lambda's performance, especially if you're working on a customer-facing app that needs to respond in real time. They occur because, if your Lambda is not currently running, AWS must deploy your code and spin up a new container before the request can begin.

This is the typical Request Life Cycle

(diagram source: AWS)

The first request handled by a new Lambda worker is known as a "cold start." This request takes a little longer to complete because the Lambda service must:

  1. Identify an EC2 instance
  2. Initialize the worker
  3. Initialize the function module

Cold starts account for fewer than 0.25 percent of Lambda requests, but their impact can be significant. This issue is especially important for applications that require real-time execution or rely on split-second timing.


How to solve the cold start?

Using Provisioned Concurrency

Knowing that the time it takes to configure the computational worker nodes is a key cause of cold starts, the AWS Provisioned Concurrency solution is straightforward. Those worker nodes are already up and running! There is no extra code needed, just some clicks and your app is always ready to respond to users without any delay.

The idea is that you can now choose how many of these worker nodes you want to keep initialized for your time-sensitive serverless apps. These worker nodes are kept frozen, with your code already downloaded and the underlying container infrastructure in place. While frozen they don't consume compute resources, and the benefit is a consistently fast response time with the cold-start delay effectively removed.


However, it comes at a price. From the moment you enable provisioned concurrency, you'll be charged for it, unlike normal Lambdas, which charge you only when your code executes.

Therefore, make sure you are aware of which Lambdas have provisioned concurrency and how much concurrency you are assigning.

I created a tool that you can use to fetch all the functions that have provisioned concurrency. Check the code below:

GitHub - tirthankarkundu17/lambda-auditor: An utility to check your AWS Lambda Functions

Steps on adding provisioned concurrency:

  • Select the Lambda function you want to add provisioned concurrency
  • Select the configuration tab and then click concurrency
  • Click on Add as below
  • Select the version of the function and the amount of concurrency. It will also show the associated additional cost. Finally, click Save
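The same configuration can be scripted. A minimal boto3 sketch, assuming the function has a published version (provisioned concurrency cannot target $LATEST); the function name is hypothetical:

import boto3

client = boto3.client("lambda")

# Keep two workers pre-initialized for version 1 of the function
client.put_provisioned_concurrency_config(
    FunctionName="my-function",  # hypothetical name
    Qualifier="1",               # a published version or alias
    ProvisionedConcurrentExecutions=2,
)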

Hope you like this. Cheers 🍻

API First Development

We all started web development with some MVC framework initially (at least I did). Web apps back then would start a thread for each new request, the thread would process the request, and a "view" (HTML) would be generated and given to the client to render.

Layer after layer of inadequate software architecture, system design, size, and technical rot would crumple systems. In addition, tremendous amounts of valuable intellectual property (business rules, procedures, and workflows) were tightly coupled and lacked reusability.


The progress

Since then, we've made significant progress. We could use reusable libraries to create better monolithic applications, but what if we wanted to create scalable (horizontally and vertically) architectures that could be distributed over several computing resources? What if we wanted to support desktop, web, and mobile devices with as much reusability and central control as possible?

Eventually came the world of APIs. The concepts of distributed software APIs and how simple Request/Response protocols may power almost any business activity were popularized by SOAP and REST.

With this evolution, we often found every team having 3 main pillars: the data person (dealing with the data store), the service person (writing APIs and building scalable systems), and a UI person (creating the user interfaces).


What makes an API beautiful ?

In today's world of SaaS, APIs are really a product that powers your UI, and they are as important as that shiny user interface and the underlying data pool/lake/ocean. Creating a beautiful API is an art, and these key factors determine how beautiful your APIs are -

  • Consistency : refers to the idea that one portion of your API should appear to be identical to every other section of your API. Having a shared set of rules and patterns makes the API consistent.
  • Simplicity : is the idea that the API and its models make sense. Is the intent of your domain reflected in your models and paths (Domain-Driven Design)? Do they organize information in ways that are simple to comprehend and process? Answering these questions makes the API simple for consumers to consume.
  • Technically compliant : confirms that it follows the API Design best practices and is properly documented.
    Check this blog on some of the best practices.

Once our beautiful API is thought through, we can now go to the API first development approach.


API First Development

It states that the first part of API development is the design of the APIs. Ask yourself how you, as a consumer, would want to work with data and perform actions to solve your business challenges, rather than how your entity diagrams and database would work. Then, to the best of your technical ability, develop an API to solve the challenge(s).

Once the API is designed, the database person can create stored procedures or queries to fetch the data from the data source. The service person can create highly scalable APIs with queues, messaging systems, and other powerful tools (you have many nowadays, thanks to cloud computing 😉). The UI folks can use the mocked APIs to create mockups, designs, and interfaces.

With less overall interaction and dependency, teams can begin to stagger the completion of their swim lanes. Iterative feedback can be gathered, allowing the overall software development process to progress more quickly. API-first aims to achieve this.


This all looks good but what are the tools available to make this work?


Enter OpenAPI and RAML

OpenAPI provides a variety of capabilities to guarantee that APIs are thoroughly documented. Versions and query tokens and parameters, security specifications (JWT/oAuth/basic/etc. ), HTTP headers, multi-part request bodies, a bewildering array of response types, and some amazing modelling tools that reflect the needs of both static and dynamic type systems present in today's back-end services can all be specified in paths. It was formerly known as Swagger.


RAML (RESTful API Modeling Language) is a YAML-based modelling language that is used to represent RESTful APIs. It gives a well-structured and easy-to-understand framework for describing the API. It "makes it straightforward to manage the entire API lifecycle," according to RAML's website.


Hope you liked it. Cheers 🍻
