Understanding the web – request/response – Introduction

Before going any further, it is imperative to understand the basic concept of the web. The idea behind HTTP 1.X is that a client sends an HTTP request to a server, and then the server responds to that client. That can sound trivial if you have web development experience. However, it is one of the most important web programming concepts, irrespective of whether you are building web APIs, websites, or complex cloud applications.

Let’s reduce an HTTP request lifetime to the following:

  1. The communication starts.
  2. The client sends a request to the server.
  3. The server receives the request.
  4. The server does something with the request, like executing code/logic.
  5. The server responds to the client.
  6. The communication ends.

After that cycle, the server is no longer aware of the client. Moreover, if the client sends another request, the server is unaware that it responded to a request earlier for that same client because HTTP is stateless.

There are mechanisms for creating a sense of persistence between requests for the server to be “aware” of its clients. The most well-known of these is cookies.

If we dig deeper, an HTTP request comprises a header and an optional body. Then, requests are sent using a specific method. The most common HTTP methods are GET and POST. On top of those, extensively used by web APIs, we can add PUT, DELETE, and PATCH to that list.

Although not every HTTP method accepts a body, can respond with a body, or should be idempotent, here is a quick reference table:

Method    Request has body    Response has body    Idempotent
GET       No*                 Yes                  Yes
POST      Yes                 Yes                  No
PUT       Yes                 No                   Yes
PATCH     Yes                 Yes                  No
DELETE    May                 May                  Yes

* Sending a body with a GET request is not forbidden by the HTTP specifications, but the semantics of such a request are not defined either. It is best to avoid sending GET requests with a body.

An idempotent request is a request that leaves the server in the same state, whether it is sent once or multiple times. For example, sending the same POST request multiple times should create multiple similar entities, while sending the same DELETE request multiple times should delete a single entity. The status code of an idempotent request may vary, but the server state should remain the same. We explore those concepts in more depth in Chapter 4, Model-View-Controller.
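To make that concrete, here is a minimal sketch that uses an in-memory dictionary as a stand-in for a server’s data store (the function names are illustrative, not real HTTP handlers):

# POST is not idempotent: each call creates a new entity.
# DELETE is idempotent: the server state after 1 or N calls is the same.
store = {}
next_id = 1

def post(entity):
    global next_id
    store[next_id] = entity
    next_id += 1

def delete(entity_id):
    store.pop(entity_id, None)  # deleting a missing entity changes nothing

post("product"); post("product")  # two similar entities created
delete(1); delete(1)              # the second call has no further effect
print(store)                      # {2: 'product'}

Here is an example of a GET request: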

GET http://www.forevolve.com/ HTTP/1.1
Host: www.forevolve.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,fr-CA;q=0.8,fr;q=0.7
Cookie: …

The HTTP header comprises a list of key/value pairs representing metadata that a client wants to send to the server. In this case, I queried my blog using the GET method and Google Chrome attached some additional information to the request. I replaced the Cookie header’s value with … because it can be pretty large and that information is irrelevant to this sample. Nonetheless, cookies are passed back and forth like any other HTTP header.
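As a quick experiment, here is a minimal sketch that uses Python’s standard library to send a similar GET request and inspect the response metadata (any reachable host works; the sample’s host is reused here):

import http.client

# Open a connection and send a GET request with a few headers.
conn = http.client.HTTPSConnection("www.forevolve.com")
conn.request("GET", "/", headers={
    "Accept": "text/html",
    "Accept-Language": "en-US,en;q=0.9",
})
response = conn.getresponse()

# The server replies with a status code, headers, and an optional body.
print(response.status, response.reason)
for name, value in response.getheaders():
    print(f"{name}: {value}")
conn.close()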

Code smell – Control Freak – Introduction

An excellent example of a code smell is using the new keyword. This indicates a hardcoded dependency where the creator controls the new object and its lifetime. This is also known as the Control Freak anti-pattern, but I prefer to box it as a code smell instead of an anti-pattern since the new keyword is not intrinsically wrong.

At this point, you may be wondering how it is possible not to use the new keyword in object-oriented programming, but rest assured, we will cover that and expand on the Control Freak code smell in Chapter 7, Deep Dive into Dependency Injection.
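The book’s examples use C#, but the smell is language-agnostic. Here is a minimal sketch of the same idea in Python, with made-up class names, where the constructor call plays the role of the new keyword:

class SmtpEmailSender:
    def send(self, to: str, message: str) -> None:
        print(f"Sending '{message}' to {to}")

# Control Freak: the consumer creates its own dependency, controlling the
# concrete type and its lifetime, which makes substitution and testing hard.
class ControlFreakNotifier:
    def __init__(self) -> None:
        self._sender = SmtpEmailSender()  # hardcoded dependency

# Preferred: the dependency is supplied from the outside, so the consumer
# no longer controls its creation.
class Notifier:
    def __init__(self, sender: SmtpEmailSender) -> None:
        self._sender = sender

    def notify(self, to: str, message: str) -> None:
        self._sender.send(to, message)

Notifier(SmtpEmailSender()).notify("user@example.com", "Hello")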

Code smell – Long Methods

The long methods code smell is when a method extends to more than 10 to 15 lines of code. That is a good indicator that you should think about that method differently. Having comments that separate multiple code blocks is another good indicator of a method that may be too long.

Here are a few signs that this might be the case:

  • The method contains complex logic intertwined in multiple conditional statements.
  • The method contains a big switch block.
  • The method does too many things.
  • The method contains duplications of code.

To fix this, you could do the following:

  • Extract one or more private methods.
  • Extract some code to new classes.
  • Reuse the code from external classes.
  • If you have a lot of conditional statements or a huge switch block, you could leverage a design pattern such as the Chain of Responsibility, or CQRS, which you will learn about in Chapter 10, Behavioral Patterns, and Chapter 14, Mediator and CQRS Design Patterns.

Usually, each problem has one or more solutions; you need to spot the problem and then find, choose, and implement one of the solutions. Let’s be clear: a method containing 16 lines does not necessarily need refactoring; it could be OK. Remember that a code smell indicates that there might be a problem, not that there necessarily is one—apply common sense.
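As a minimal sketch of the first fix (the order-processing scenario is made up), the comment-separated blocks of a long method become well-named private methods:

# Before: one long method mixing validation, calculation, and formatting.
def process_order(items):
    # validate
    if not items:
        raise ValueError("No items")
    if any(item["price"] < 0 for item in items):
        raise ValueError("Negative price")
    # compute the total
    total = sum(item["price"] * item["quantity"] for item in items)
    # format the receipt
    return f"Total: {total:.2f}"

# After: the method delegates to smaller, well-named private methods.
def process_order_refactored(items):
    _validate(items)
    return _format_receipt(_compute_total(items))

def _validate(items):
    if not items:
        raise ValueError("No items")
    if any(item["price"] < 0 for item in items):
        raise ValueError("Negative price")

def _compute_total(items):
    return sum(item["price"] * item["quantity"] for item in items)

def _format_receipt(total):
    return f"Total: {total:.2f}"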

Anti-patterns and code smells – Introduction

Anti-patterns are bad architectural practices, while code smells are hints of possible bad design. Learning about best practices is as important as learning about bad ones, which is where we start. The book highlights multiple anti-patterns and code smells to help you get started. Next, we briefly explore the first few.

Anti-patterns

An anti-pattern is the opposite of a design pattern: it is a proven flawed technique that will most likely cause you trouble and cost you time and money (and probably give you headaches).

An anti-pattern is a pattern that seems like a good idea and seems to be the solution you were looking for, but it causes more harm than good. Some anti-patterns started as legitimate design patterns and were labelled anti-patterns later. Sometimes, it is a matter of opinion, and sometimes the classification can be influenced by the programming language or technologies.

Let’s look at an example next. We will explore some other anti-patterns throughout the book.

Anti-pattern – God Class

A God class is a class that handles too many things. Typically, this class serves as a central entity that many other classes inherit or use within the application: it is the class that knows and manages everything in the system; it is the class. On the other hand, it is also the class that nobody wants to update, which breaks the application every time somebody touches it: it is an evil class!

The best way to fix this is to segregate responsibilities and allocate them to multiple classes rather than concentrating them in a single class. We look at how to split responsibilities throughout the book, which helps create more robust software.

If you have a personal project with a God class at its core, start by reading the book and then try to apply the principles and patterns you learn to divide that class into multiple smaller classes that interact together. Try to organize those new classes into cohesive units, modules, or assemblies.

To help fix God classes, we dive into architectural principles in Chapter 3, Architectural Principles, opening the way to concepts such as responsibility segregation.
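As a tiny sketch of that segregation (the order-related responsibilities shown here are hypothetical):

# Before: a God class that knows and manages everything about orders.
class OrderGod:
    def validate(self, order): ...
    def save(self, order): ...
    def charge(self, order): ...
    def email_receipt(self, order): ...

# After: each responsibility lives in its own small, cohesive class.
class OrderValidator:
    def validate(self, order): ...

class OrderRepository:
    def save(self, order): ...

class PaymentProcessor:
    def charge(self, order): ...

class ReceiptMailer:
    def email_receipt(self, order): ...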

Code smells

A code smell is an indicator of a possible problem. It points to areas of your design that could benefit from a redesign. By “code smell,” we mean “code that stinks” or “code that does not smell right.”

It is important to note that a code smell only indicates the possibility of a problem; it does not mean a problem exists. Code smells are usually good indicators, so it is worth analyzing your software’s “smelly” parts.

An excellent example is when a method requires many comments to explain its logic. That often means that the code could be split into smaller methods with proper names, leading to more readable code and allowing you to get rid of those pesky comments.

Another note about comments is that they don’t evolve, so what often happens is that the code described by a comment changes, but the comment remains the same. That leaves a false or obsolete description of a block of code that can lead a developer astray.

The same is also true with method names. Sometimes, the method’s name and body tell a different story, leading to the same issues. Nevertheless, this happens less often than orphan or obsolete comments since programmers tend to read and write code better than spoken language comments. Nonetheless, keep that in mind when reading, writing, or reviewing code.

What is a design pattern? – Introduction

Since you just purchased a book about design patterns, I guess you have some idea of what design patterns are, but let’s make sure that we are on the same page.

Abstract definition: A design pattern is a proven technique that we can use to solve a specific problem.

In this book, we apply different patterns to solve various problems and leverage some open-source tools to go further, faster! Abstract definitions make people sound smart, but understanding concepts requires more practice, and there is no better way to learn than by experimenting with something, and design patterns are no different.

If that definition does not make sense to you yet, don’t worry. You should have enough information by the end of the book to correlate the multiple practical examples and explanations with that definition, making it crystal clear.

I like to compare programming to playing with LEGO® because what you have to do is very similar: put small pieces together to create something bigger. Therefore, if you lack imagination or skills, possibly because you are too young, your castle might not look as good as someone with more experience. With that analogy in mind, a design pattern is a plan to assemble a solution that fits one or more scenarios, like the tower of a castle. Once you have designed a single tower, you can build multiple towers by following the same steps. Design patterns act as that tower plan and give you the tools to assemble reliable pieces to improve your masterpiece (program). However, instead of snapping LEGO® blocks together, you nest code blocks and interweave objects in a virtual environment!

Before going into more detail, note that well-thought-out applications of design patterns should improve your application designs. That is true whether designing a small component or a whole system. However, be careful: throwing patterns into the mix just to use them can lead to the opposite result: over-engineering. Instead, aim to write the least amount of readable code that solves your issue or automates your process.

As we have briefly mentioned, design patterns apply to different software engineering levels, and in this book, we start small and grow to a cloud scale! We follow a smooth learning curve, starting with simpler patterns and code samples that bend good practices to focus on the patterns, finally ending with more advanced topics and good practices.

Of course, some subjects are overviews more than deep dives, like automated testing, because no one can fit it all in a single book. Nonetheless, I’ve done my best to give you as much information about architecture-related subjects as possible to ensure the proper foundations are in place for you to get as much as possible out of the more advanced topics, and I sincerely hope you’ll find this book a helpful and enjoyable read.

Let’s start with the opposite of design patterns because it is essential to identify wrong ways of doing things to avoid making those mistakes or to correct them when you see them. Of course, knowing the right way to overcome specific problems using design patterns is also crucial.

Before you begin: Join our book community on Discord – Introduction

Give your feedback straight to the author himself and chat to other early readers on our Discord server (find the “architecting-aspnet-core-apps-3e” channel under EARLY ACCESS SUBSCRIPTION).

https://packt.link/EarlyAccess

The goal of this book is not to create yet another design pattern book; instead, the chapters are organized according to scale and topic, allowing you to start small with a solid foundation and build slowly upon it, just like you would build a program.

Instead of a guide covering a few ways of applying a design pattern, we will explore the thought processes behind the systems we are designing from a software engineer’s point of view.

This is not a magic recipe book; from experience, there is no magical recipe when designing software; there are only your logic, knowledge, experience, and analytical skills. Let’s define “experience” as your past successes and failures. And don’t worry, you will fail during your career, but don’t get discouraged by it. The faster you fail, the faster you can recover and learn, leading to successful products. Many techniques covered in this book should help you achieve success. Everyone has failed and made mistakes; you aren’t the first and certainly won’t be the last. To paraphrase a well-known saying by Roosevelt: the people who never fail are the ones who never do anything.

At a high level:

  • This book explores basic patterns, unit testing, architectural principles, and some ASP.NET Core mechanisms.
  • Then, we move up to the component scale, exploring patterns oriented toward small chunks of software and individual units.
  • After that, we move to application-scale patterns and techniques, exploring ways to structure an application.
  • Some subjects covered throughout the book could have a book of their own, so after this book, you should have plenty of ideas about where to continue your journey into software architecture.

Here are a few pointers about this book that are worth mentioning:

  • The chapters are organized to start with small-scale patterns and then progress to higher-level ones, making the learning curve easier.
  • Instead of giving you a recipe, the book focuses on the thinking behind things and shows the evolution of some techniques to help you understand why the shift happened.
  • Many use cases combine more than one design pattern to illustrate alternate usage so you can understand and use the patterns efficiently. This also shows that design patterns are not beasts to tame but tools to use, manipulate, and bend to your will.
  • As in real life, no textbook solution can solve all our problems; real problems are always more complicated than what’s explained in textbooks. In this book, I aim to show you how to mix and match patterns to think “architecture” instead of giving you step-by-step instructions to reproduce.

The rest of the introduction chapter introduces the concepts we explore throughout the book, including refreshers on a few notions. We also touch on .NET, its tooling, and some technical requirements.

In this chapter, we cover the following topics:

  • What is a design pattern?
  • Anti-patterns and code smells.
  • Understanding the web – request/response.
  • Getting started with .NET.

Limitations of Explainable AI – Explainable AI

Following are some of the limitations of Explainable AI in Vertex AI:

  • Each attribution only shows how much the feature influenced the prediction for that particular example. A single attribution may not represent overall model behavior; aggregating attributions over a dataset is preferred to understand approximate model behavior (see the sketch after this list).
  • Attributions are determined by the model and the data. They can only reveal the patterns the model found in the data, not any underlying relationships. A strong attribution for a feature does not prove that the feature is truly associated with the target; it only indicates that the model uses that feature for its predictions.
  • Attributions alone cannot determine model quality; it is recommended to also assess the training data and the model’s evaluation metrics.
  • The integrated gradients method works well for differentiable models (where the derivative of all operations in the TensorFlow graph can be calculated). The sampled Shapley method is used for non-differentiable models (networks containing non-differentiable operations, such as rounding and decoding).
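Here is a minimal sketch, with made-up attribution values, of aggregating per-instance feature attributions to approximate global model behavior:

import statistics

# Hypothetical per-instance attributions for three predictions.
per_instance_attributions = [
    {"BMI": 0.30, "Smoking": 0.10, "SleepTime": -0.05},
    {"BMI": 0.25, "Smoking": 0.20, "SleepTime": -0.10},
    {"BMI": 0.35, "Smoking": 0.05, "SleepTime": 0.02},
]

# Average the absolute attribution per feature across the dataset.
global_importance = {
    feature: statistics.mean(abs(attrs[feature]) for attrs in per_instance_attributions)
    for feature in per_instance_attributions[0]
}
print(global_importance)  # BMI dominates on average in this toy example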

Conclusion

In this book, we started by understanding the cloud platform, a few important components of the cloud, and the advantages of cloud platforms. We then developed machine learning models through Vertex AI AutoML for tabular, text, and image data, and deployed the trained models onto endpoints for online predictions. Before entering the complexity of custom model building, we learned how to leverage the pre-built models of the platform to obtain predictions. For the custom models, we utilized a workbench for the code development for model training, used Docker images to submit the training jobs, and worked on hyperparameter tuning with Vizier to further enhance model performance. We worked on the pipeline components of the platform to train, evaluate, and deploy the model for online predictions using both Kubeflow and TFX. We created a centralized repository for features using the Vertex AI feature store. In this last chapter of the book, we learned about explainable AI and the need for it. We trained AutoML classification models for image and tabular data and obtained explanations using Python code. GCP keeps adding new components and features to enhance its capabilities; check the platform documentation regularly to stay up to date.

Questions

  1. Why is explainable AI important?
  2. What are the different types of explanations supported by Vertex AI?
  3. What is the difference between example-based and feature-based explanations?

Explanations for tabular data (classification) – Explainable AI

Once the model is deployed successfully, open JupyterLab from the workbench you created and enter the Python code given in the steps below.
Step 1: Input for prediction and explanation
Select any record from the data, modify it to match the format shown below, and run the cell:
instances_tabular = [{
    "BMI": "16.6", "Smoking": "Yes", "AlcoholDrinking": "No", "Stroke": "No",
    "PhysicalHealth": "3", "MentalHealth": "30", "DiffWalking": "No",
    "Sex": "Female", "AgeCategory": "55-59", "Race": "White", "Diabetic": "Yes",
    "PhysicalActivity": "Yes", "GenHealth": "Very good", "SleepTime": "5",
    "Asthma": "Yes", "KidneyDisease": "No", "SkinCancer": "Yes",
}]

Step 2: Endpoint selection
Run the below lines of code to select the endpoint where the model is deployed. In this method, we are using the display name of the endpoint (instead of the endpoint ID); “tabu” is the display name of the endpoint where the model is deployed. The full path of the endpoint (along with the endpoint ID) will be displayed in the output:
import google.cloud.aiplatform as gcai

# Select the most recently updated endpoint whose display name is "tabu".
endpoint_tabular = gcai.Endpoint(gcai.Endpoint.list(
    filter='display_name=tabu',
    order_by='update_time')[-1].gca_resource.name)
print(endpoint_tabular)

Step 3: Prediction
Run the following lines of code to get the prediction from the deployed model:
tab_explain_response = endpoint_tabular.explain(instances=instances_tabular)
print(tab_explain_response)
Prediction results will be displayed as shown in the following figure, which contains the classes and their probabilities:

Figure 10.23: Predictions from deployed tabular classification model
Step 4: Explanations
Run the following lines of code to get the explanations for the input record:
import matplotlib.pyplot as plt

# Sort the feature attributions by value and plot them as a horizontal bar chart.
key_attributes = tab_explain_response.explanations[0].attributions[0].feature_attributions.items()
explanations = {key: value for key, value in sorted(key_attributes, key=lambda items: items[1])}
plt.rcParams["figure.figsize"] = [5, 5]
fig, ax = plt.subplots()
ax.barh(list(explanations.keys()), list(explanations.values()))
plt.show()

A Shapley value is provided in the explanations for each of the features, and it is visualized as shown in the following figure:

Figure 10.24: Explanations from deployed tabular classification model
Deletion of resources
We utilized Cloud Storage to store the data; delete those files manually. Datasets were created for the image and tabular data; delete them manually as well. Classification models for image and tabular data were deployed to get predictions and explanations; ensure that you un-deploy the models from the endpoints and delete the endpoints (refer to Chapter 2, Introduction to Vertex AI & AutoML Tabular and Chapter 3, AutoML Image, text and pre-built models). Predictions were obtained using a workbench; ensure that you delete the workbench instance.
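If you prefer to script part of the cleanup, here is a minimal sketch using the same SDK as the examples above (shown for the “tabu” endpoint; adapt the display name for the image endpoint):

import google.cloud.aiplatform as gcai

# Un-deploy every model from the endpoint, then delete the endpoint itself.
endpoint = gcai.Endpoint.list(filter='display_name=tabu')[-1]
endpoint.undeploy_all()
endpoint.delete()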

Tabular classification model deployment – Explainable AI

In the case of image data, users had to configure explainable AI during both the training and deployment phases, whereas, in the case of tabular data, explainable AI needs to be configured only during the deployment phase (AutoML enables explainable AI by default during the training phase for tabular data). Follow the steps mentioned in Chapter 2, Introduction to Vertex AI & AutoML Tabular for the tabular dataset creation and tabular AutoML model training. Follow the steps below to deploy the trained model.

Step 1: Trained model in the model registry

The trained model will be listed in the model registry, as shown in the following figure:

Figure 10.17: Model registry (tabular classification model)

  1. The tabular_classification model was trained using AutoML. Click on the model, and then click its version.

Step 2: Deploy to endpoint

Once the model is selected (along with the version), users will get the option to evaluate the model, deploy and test the model, and so on. Follow the steps mentioned below to deploy the model:

Figure 10.18: Trained tabular classification model

  1. Select DEPLOY AND TEST tab.
  2. Click DEPLOY TO ENDPOINT.

Step 3: Endpoint definition

Follow the steps mentioned below to define the endpoint:

Figure 10.19: Endpoint definition tabular classification model

  1. Provide Endpoint name.
  2. Click CONTINUE.

Step 4: Model settings

Follow the steps mentioned below to configure the model settings and enable the explainability options:

Figure 10.20: Model settings (enabling explainability)

  1. Set the Traffic split to 100.
  2. Set the Minimum number of compute nodes to 1.
  3. Set the Maximum number of compute nodes to 1.
  4. Select n1-standard-8 in Machine type.
  5. Enable the Explainability options.
  6. Click EDIT.

Step 5: Set the Explainability options

You can set the Explainability options by following the steps shown in the following figure:

Figure 10.21: Sampled Shapley path count

  1. Select the Sampled Shapley method.
  2. Set the Path count to 7 (an arbitrary choice; a higher path count yields a better Shapley approximation at the cost of prediction latency).
  3. Click DONE.

Step 6: Model monitoring

Follow the steps mentioned below to disable model monitoring (since it is not needed for the explanations):

Figure 10.22: Model monitoring

  1. Disable Model monitoring options.
  2. Click DEPLOY.

Explanations for image classification – Explainable AI

Once the model is deployed successfully, open JupyterLab from the workbench you created and enter the Python code given in the following steps:
Step 1: Install the required packages
Type the following Python code to install the required packages:
!pip install tensorflow
!pip install google-cloud-aiplatform==1.12.1

Step 2: Kernel restart
Type the following commands in the next cell to restart the kernel (users can restart the kernel from the GUI as well):
import os
import IPython

# Restart the kernel so the freshly installed packages are picked up. The
# environment-variable guard mirrors Google's sample notebooks, which skip
# the restart in automated test environments (an assumption; the original
# listing left the variable name blank).
if not os.getenv("IS_TESTING"):
    IPython.Application.instance().kernel.do_shutdown(True)

Step 3: Importing required packages
Once the kernel is restarted, run the following lines of code to import the required packages:
import base64
import tensorflow as tf
import google.cloud.aiplatform as gcai
import explainable_ai_sdk
import io
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

Step 4: Input for prediction and explanation
Choose any image from the training set (stored in the cloud storage for the prediction) and provide the full path of the chosen image in the following code. Run the cell to read the image and convert it to the required format:
img_input = tf.io.read_file("gs://AutoML_image_data_exai/Kayak/adventure-clear-water-exercise-1836601.jpg")
b64str = base64.b64encode(img_input.numpy()).decode("utf-8")
instances_image = [{"content": b64str}]

Step 5: Endpoint selection
Run the following lines of code to select the endpoint where the model is deployed. In this method, we are using the display name of the endpoint (instead of the endpoint ID); image_ex is the display name of the endpoint where the model is deployed. The full path of the endpoint (along with the endpoint ID) will be displayed in the output:
endpoint = gcai.Endpoint(gcai.Endpoint.list(
    filter='display_name=image_ex',
    order_by='update_time')[-1].gca_resource.name)
print(endpoint)

Step 6: Image prediction
Run the following lines of code to get the prediction from the deployed model:
prediction = endpoint.predict(instances=instances_image)
print(prediction)
Prediction results will be displayed as shown in the following figure, which contains the display names and the probabilities of the classes:

Figure 10.15: Image classification prediction result
Note: Since we are running this code using the Vertex AI workbench, we are not using a service account for authentication.
Step 7: Explanations
Run the following lines of code to get the explanations for the input image:
response = endpoint.explain(instances=instances_image)

for explanation in response.explanations:
    attributions = dict(explanation.attributions[0].feature_attributions)
    image_ex = io.BytesIO(base64.b64decode(attributions["image"]["b64_jpeg"]))
    plt.imshow(mpimg.imread(image_ex, format="JPG"), interpolation="nearest")
    plt.show()

The output of the explanations is shown in the following figure. The areas highlighted in green indicate the areas/pixels that played an important role in the prediction of the image:

Figure 10.16: Image classification model explanation

Image classification model deployment – Explainable AI

Once the model is trained, it needs to be deployed to an endpoint for online predictions. Also, create a workbench to get the predictions; follow the steps mentioned in the chapter Vertex AI workbench & custom model training for the creation of the workbench (a Python workbench will suffice). Follow the steps below to deploy the model.

Step 1: Trained model listed under Model registry

Follow the step below to deploy the trained model:

Figure 10.10: Model registry

  1. Click the trained model, and then click version 1 of the model.

Step 2: Deploy to endpoint

Once the model is selected (along with the version), users will get the option to evaluate the model, deploy and test the model, and so on. Follow the steps mentioned below to deploy the model:

Figure 10.11: Image classification model deployment

  1. Click DEPLOY AND TEST.
  2. Click DEPLOY TO ENDPOINT.

Step 3: Define the endpoint

Follow the steps mentioned below to define the endpoint:

Figure 10.12: Image classification endpoint definition

  1. Select Create new endpoint.
  2. Provide the Endpoint name.
  3. Click CONTINUE.

Step 4: Model settings

Follow the steps below to enable explainability for the model:

Figure 10.13: Image classification enabling explainability

  1. Set the Traffic split to 100.
  2. Set the Number of compute nodes for predictions to be 1.
  3. Enable the Explainability options.
  4. Click EDIT.

Step 5: Feature attribution method selection

Follow the steps below to select the feature attribution method. In this example, we are using the integrated gradients method for the explanations:

Figure 10.14: Image classification explainability configuration

  1. Select Integrated gradients as the feature attribution method (keep the values of all the parameters the same as what was set during the training phase).
  2. Click DONE.
  3. Click DEPLOY.

Check if the model is deployed properly and then proceed with the Python code to get predictions and explanations.