Hello and welcome to part two of our discussion on IT architecture. In this section we're going to talk about architectural styles, which focus on how we go from our layered software architecture to actually putting it on tiers, which are physical systems. We'll also talk about deployment strategies from the perspective of getting scalability, reliability, availability, and security. Let's now take a look at architectural styles. Architectural styles are the different ways in which we can go from the layered software architecture that we discussed to a tiered architecture, which is actually running the software layers as physical implementations on systems. Architectural styles refer to a family of architectures that support certain characteristics. These characteristics are associated with certain organizational needs, and I will expand on that as we look at each one of these. As we look at these architectural styles, it's important to understand that they are still technology agnostic. However, there are some technologies that are better suited for certain architectures. For example, when we get to the microservices architecture, we look at containers as a mechanism for running these microservices. Or when we look at the N-tier architecture, we could have clusters of systems as the appropriate mechanism to run it. We're going to look at the N-tier architecture, which is, let's say, the traditional architecture. We'll look at the microservices architecture, which is the emerging architecture used quite a bit by large-scale technology companies. We'll look at the big compute architecture for getting high levels of computing capability, and the big data architecture for storing and processing large amounts of data. We talked about the logical conceptual software reference architecture, which had logical layers.
As we map from logical layers to physical tiers, it is important to understand that those layers describe groupings of functionality, which ultimately needs to get implemented in components; put differently, the components of the application need to get implemented somewhere. Tiers, on the other hand, describe the physical distribution of that functionality, or the physical distribution of these application components, on separate servers, computers, networks, remote locations, containers, virtual machines, or even the cloud. Both layers and tiers use the same set of names, that is, the presentation, business services, and data layers or tiers. But it's important to understand that logical layers are a logical representation of the software architecture, while physical tiers describe where the components that perform the functions defined in those logical layers are actually running. With that said, it is quite common to locate more than one layer on the same physical tier. It is also possible to take one layer and spread it over multiple physical tiers. As we think about these different combinations, we can think of distribution patterns as two-tier, three-tier, and N-tier architectures. An N-tier architecture is basically a generic architecture model. The N here means that the number of tiers depends on how an application is split up and actually run on multiple systems. As we discussed, we divide applications into layers. These layers perform certain logical functions, and those logical functions can then be implemented to run on specific tiers. N-tier architectures are typically used as part of an infrastructure-as-a-service (IaaS) hosted application. They are also used quite a bit in traditional on-premises systems, and they are used for simple web applications.
Usually when we think about migrating on-premises applications to the cloud without re-architecting those applications, the N-tier architecture is the most commonly used one. When we look at an N-tier architecture, we really have different choices. You have probably heard about two-tier and three-tier architectures; N-tier basically denotes that these components can be spread out over multiple tiers. What we have here on the left is what we would call a non-distributed deployment, or a traditional two-tier architecture. This is the architecture used by mainframe systems, usually with a limited number of physical servers. It has limited scalability, that is, limited ability to grow as business needs grow. You will see here that the presentation layer, the business layer, and the data layer are all implemented in a single tier called the web tier. These are simple web-based applications, or they could be traditional mainframe-based applications. In a distributed deployment, we could have the presentation layer running on a web server to create a web tier. We could have the business logic and the data logic running on an application server to create an app tier. And then we would have the database server, which actually hosts the physical data. In an N-tier deployment, we could take this business logic and split it up into multiple tiers, all of which are connected to a single database server. The data tier can also be run on a database server, that is, on a database management system, if you will. Now, moving on to a microservices-based architecture. Here we take the concept of a service and break it down into very small services. We are going to implement small business capabilities, a large number of them, and we connect them together through an API gateway. A microservices architecture is a collection of small, autonomous services, each of which communicates using an application programming interface.
This is essentially the message types and message interfaces that we talked about, implemented so that a service can interact with other services. Each service is self-contained and implements a single business capability. Services, and this is very important in the microservices architecture, are responsible for persisting their own data. They actually store and manage their own data, in a separate data store of course. This is very different from the traditional model, where you had a data layer that is responsible for maintaining and persisting, or saving, or ensuring that the data is taken care of. As you can imagine, having a large number of services makes the management of a microservices-based architecture very complex. So we bring in this concept of orchestration. Orchestration is the automated configuration, management, and coordination of computer systems, applications, and services. Basically, orchestration helps IT to easily manage the very complex set of tasks and workflows associated with a microservices-based architecture. I'm going to leave the microservices-based architecture at this level for now. Toward the end of the course, we will come back and do a much deeper dive into it, to compare microservices with monolithic applications. If you look at the top of the graphic here, a traditional shopping cart web user interface could have four different components: a product catalog, an order processing module, an invoicing module, and a shipping module, all working off of a common shared online shopping application database. In a microservices architecture, each of those modules is implemented separately, as a microservice. We have a product catalog microservice, which actually manages and maintains its own data. We have the invoicing microservice, which manages and maintains its own data. The order processing and shipping microservices collectively manage a common database across those two.
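To make the "each service persists its own data" idea concrete, here is a minimal sketch in Python. The service names, methods, and the in-memory dictionaries standing in for data stores are all hypothetical, purely for illustration; a real system would use separate databases and network calls.

```python
# Minimal sketch: each microservice owns a private data store, and an API
# gateway is the single entry point that routes calls to the right service.
# Service names, methods, and the dict "databases" are illustrative only.

class ProductCatalogService:
    def __init__(self):
        self._db = {"sku-1": "Laptop"}          # service-private data store

    def get_product(self, sku):
        return self._db.get(sku)

class InvoicingService:
    def __init__(self):
        self._db = {}                           # separate, service-private store

    def create_invoice(self, order_id, amount):
        self._db[order_id] = amount
        return {"order_id": order_id, "amount": amount}

class ApiGateway:
    """Aggregates the individual service APIs behind one entry point."""
    def __init__(self):
        self._services = {
            "catalog": ProductCatalogService(),
            "invoicing": InvoicingService(),
        }

    def call(self, service, method, *args):
        # Route the request to the named service and method.
        return getattr(self._services[service], method)(*args)

gateway = ApiGateway()
product = gateway.call("catalog", "get_product", "sku-1")
invoice = gateway.call("invoicing", "create_invoice", "o-1", 99)
```

Note that neither service can reach the other's `_db` directly; everything goes through the gateway, which is exactly the loose coupling the lecture describes.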
That is because orders and shipments are tightly bound together and it's hard to split them up into two different databases, right? That's more of an application logic constraint here. All of them communicate through an API, and the client application, which is basically the portal or a mobile application, connects to the API gateway. If the client wants to search for a product, they'll come here, go through the API gateway, and hit the product catalog to search. Then they find the product that they want and add it to a shopping cart. The product catalog microservice then calls the order processing microservice. When the order is placed, the order processing microservice calls the shipping microservice. Both of these create or update the order: the order processing microservice will create the order, and the shipping microservice will update the order with the shipping information when the order is ready to be shipped. The order processing microservice will also call the invoicing microservice, which will invoke the payment system, generate an invoice, and bill the customer or collect the payment, depending on how the process is set up. The big advantage of microservices is that you can have very agile development, because each of these microservices is small and less complex; each implements a single business function. Microservices can be implemented by different teams, and the microservices themselves can actually run on different runtimes. Now, moving on to the big compute architecture. In the big compute architecture, the work that needs to be done is split across a large number of parallel tasks. The key thing to understand is that this architecture is only suited to workloads that can be parallelized, that is, workloads that can actually be split up into a large number of threads that can be run simultaneously.
If you are able to run a large number of parallel tasks, then basically what happens is the client invokes the job, and the scheduler coordinates the job and splits it up into a large number of parallel tasks. Jobs that cannot be split up like this, that is, tasks that are tightly coupled, will be offloaded to a different processing engine so that the massively parallel tasks can continue to run. In business, the big compute architecture is typically used in financial modeling. It doesn't have a whole lot of applications in traditional areas such as production management or HR. If you have a very complex product portfolio, planning might benefit from the use of big compute. If you've got 100,000 products that you're planning for the entire year, each of those plans could potentially be run as an independent parallel task, although there are going to be some coupled tasks, because the need for one material could depend on the need for another material, right? In finance, if you're doing stock market analysis, or analysis of a portfolio, then each of those stocks can be analyzed independently, generating a large number of independent parallel tasks that can be run simultaneously, thereby taking advantage of the big compute architecture. The next one is the big data architecture. Here, think about getting analytics on your Twitter, or the new X, feed. You've got millions of records of data being created, and each record is actually fairly small. In the big data architecture, the processing per record is actually low, but the volume is very high. The big data architecture is designed to handle the ingestion (the input), the processing, and the analysis of data that is too large for traditional database systems. We have data coming in from a data source. From the data source, you store the data, unless it has to be processed in real time. It is to be processed in real time when, for example, you are doing real-time sentiment analysis on Twitter.
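The fan-out into independent parallel tasks that big compute relies on can be sketched in a few lines of Python. The per-stock "analysis" below is a made-up placeholder, and a real big compute cluster would fan work out to many machines rather than to threads inside one process; the shape of the pattern is the same, though.

```python
# Sketch of the big compute pattern: a "scheduler" fans a job out into
# many independent tasks and runs them in parallel. The analyze() function
# is a hypothetical stand-in for real per-item work.
from concurrent.futures import ThreadPoolExecutor

def analyze(item):
    ticker, price = item
    return ticker, round(price * 1.1, 2)        # placeholder "analysis"

def run_job(portfolio):
    # Each stock is independent of the others, so the job parallelizes
    # cleanly; tightly coupled work would be offloaded elsewhere, as the
    # lecture notes.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return dict(pool.map(analyze, portfolio.items()))

results = run_job({"AAA": 100.0, "BBB": 50.0})
```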
Then the real-time data goes to a stream processing engine, where the data is processed as it comes in, and the analyzed data is stored. A copy of the analyzed data is then used to create analytics and reporting. For data that is brought in and then stored, we can do batch processing, which is where you are taking, say, 100 million records and processing them together. Again, we will always store the analyzed data so that we don't have to re-run the analysis if we need to go back and see it again. The analytics and reporting will then be a real-time output of the analysis that is being done by the big data architecture. Basically, this type of architecture is used for storing and processing large amounts of unstructured data. We can do that with stored data, or we can do that with real-time data, that is, live data or a live data feed. The next segment looks at how we can design well-architected applications. How do we design applications that are reliable? Reliability means that a system can recover from failures and will continue to function even when a failure happens. We'll look at how we can design for redundancy. At the end of the day, the only way to avoid total failure is to build redundancy into an application; that is, you want to have additional copies of the application running. We will look at scalability. Scalability is extremely important from a business standpoint; it is the ability for an application to grow with demand. Scalability will allow you to handle peak demand on your system. You launch a new product, it becomes highly successful, and users are placing a large number of orders. How do we make sure that your underlying systems have the requisite capacity? From a security standpoint, we want to make sure that we are able to protect applications and data from threats. And finally, we want an evolutionary architecture.
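The stream-versus-batch split just described can be sketched as two code paths over the same toy "analysis". The sentiment function is a hypothetical stand-in for a real analytics step; the point is only that the streaming path handles one record as it arrives, the batch path handles a stored collection in one pass, and both store their output so it never has to be recomputed.

```python
# Sketch of the two big data paths: stream processing analyzes each record
# as it arrives; batch processing analyzes a stored collection together.
# In both cases the analyzed output is kept for later reporting.

analyzed_store = []                             # analyzed data is always kept

def sentiment(text):
    return 1 if "good" in text else -1          # toy stand-in for analysis

def process_stream(record):
    # Real-time path: handle one record at a time, as it comes in.
    result = (record, sentiment(record))
    analyzed_store.append(result)
    return result

def process_batch(records):
    # Batch path: process many stored records in one pass.
    results = [(r, sentiment(r)) for r in records]
    analyzed_store.extend(results)
    return results

live = process_stream("service was good")
batch = process_batch(["good value", "bad support"])
```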
That means that our systems must be able to evolve over time so that as new innovations happen, businesses can take advantage of them. Taking advantage of innovations from a technology standpoint can just become standard practice, provided that the ability to evolve is part of the architectural design. Let's start off by looking at design for reliability. When we say designed for reliability, what we're saying is we want our application to be self-healing when failures happen. Now, this level of reliability is typically required for our database servers, for our data logic or the data tier. We need that because if the database fails, then having the rest of the application running doesn't serve any purpose. Let's look at how this works. We have our client tier; these are our client systems. They connect through a firewall to our application logic, which is the application tier. The application logic then connects to at least two different database servers. These database servers are marked as active and passive. The active server is the one that would be actively taking requests and responding to requests from the application server. The passive server essentially copies all the data that is being written to the active server. When data is getting written, it is simultaneously written to both the active and the passive server. Now, the actual writing and the timing of that depend on the vendor; I'm describing a generic solution. Obviously, when you have two different database servers, you have the issue of which one is the single point of truth, and that is why one of them is designated as active. The active server has the single version of the truth; that is the data that is going to be used as our record of a transaction having happened.
We have a heartbeat signal between the two, which basically means that the systems are monitoring each other to make sure that the other one is alive and kicking. Let's assume that our active server goes down. The active server goes down, so the heartbeat fails. If the heartbeat fails, the passive server will mark itself as active and start taking over transactions. You will notice here that we are still writing to a common shared data store, and the data store could also have redundancy built in, as in multiple mirrored disks that store the data. Here, the formerly passive server will take over, and the application server would not notice that the server has switched; the virtual host takes care of reassigning the traffic to the new server. Now let's say we go in and fix the server that went down. When we bring it back up, it comes back up as passive. Now, you could ask the question: hey, what if the active server dies, the passive server becomes the active server, and then that one dies too? Then you have a problem, right? If you really want to plan for that scenario, as in you want multiple levels of redundancy, then you could have multiple database servers. You could have an active-passive-passive system with three systems, or go further with four or five systems. It really depends on how sensitive your business is to a potential interruption in the workload, that is, a potential interruption in the availability of the application. Now let's look at designing for redundancy. You will see that when we design for redundancy, what we are doing is avoiding single points of failure, and you will see that it also gives you reliability. We used a different mechanism for our database servers because of the issue of having a single version of the truth. With application servers, we could have multiple instances, multiple copies of the application server, running simultaneously with a single common database.
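The heartbeat and failover behavior described above can be sketched as follows. The class shape, the timeout value, and the role strings are invented for illustration; real database clusters implement this inside the cluster manager or virtual host, not in application code.

```python
# Sketch of active/passive failover: the passive server watches the
# active server's heartbeat and promotes itself when the heartbeat stops.
import time

class DatabaseServer:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.last_heartbeat = time.monotonic()  # updated on each heartbeat

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

def check_failover(active, passive, timeout=5.0, now=None):
    """Promote the passive server when the active heartbeat goes stale."""
    now = time.monotonic() if now is None else now
    if now - active.last_heartbeat > timeout:
        passive.role = "active"                 # passive takes over
        active.role = "failed"
    return passive.role

primary = DatabaseServer("db1", "active")
standby = DatabaseServer("db2", "passive")
# Simulate a missed heartbeat by advancing the clock ten seconds.
new_role = check_failover(primary, standby, now=primary.last_heartbeat + 10)
```

When the repaired server rejoins, it would simply be created with `role="passive"`, matching the lecture's point that the fixed server comes back up as passive.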
When a client makes a request, it comes to the load balancing logic, which looks across the three servers here and asks: which one is the most lightly loaded? Let's say that's app server three. The traffic goes there, and app server three writes to the database. Let's say another request comes in from this client. Now the load balancing logic says, I'm going to send it to application server one, which then writes back to the database server. When we think about scalability, you can actually look at our previous model and say, hey, we could add more and more systems. That is called scaling out. Scalability is the ability to get more processing capacity from your IT infrastructure. Scaling out means adding more systems. Scaling up means making your system more powerful, more capable: adding more RAM, adding more storage, adding more processing. This is possible as long as the underlying system is designed for it; scaling up can only be done if the physical hardware underneath the system is designed for scalability. For example, the IBM Z series mainframe ships with CPU and memory packaged in units called books. You can configure the system with a small number of books all the way up to the maximum: you can buy it with, say, 32, add another 32, add 32 more, and then finally add 32 more to max it out. With the IBM Z systems, these books, which are basically bundles of CPU and memory, can be added while the system is running. Scaling up is typically used with database systems. Think back to our design for reliability of database systems: the passive servers did not actually add extra capacity, they just added extra reliability. In order to get extra capacity, you have to make the system more capable. With application servers and web servers, we can scale out, because there is no shared data issue; you can just add more and more systems.
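The load-balancing decision just described, sending each request to the most lightly loaded server, can be sketched like this. The server names and the request counter are illustrative; real load balancers also weigh health checks, session affinity, and so on.

```python
# Sketch of load-balancing logic: each request goes to the application
# server currently handling the fewest active requests.

class AppServer:
    def __init__(self, name):
        self.name = name
        self.active_requests = 0

def route(servers):
    # Pick the most lightly loaded server (ties go to the first one).
    target = min(servers, key=lambda s: s.active_requests)
    target.active_requests += 1
    return target.name

servers = [AppServer("app1"), AppServer("app2"), AppServer("app3")]
first = route(servers)      # all idle, so the first server wins the tie
second = route(servers)     # app1 is now busier, so app2 is chosen
```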
With application systems, we typically tend to use mid-size systems, and with web systems we tend to use smaller systems, and a large number of them. In a typical large-scale IT infrastructure, we may have hundreds of web systems, a couple of dozen application systems, and three to five database systems, assuming that we need very, very high reliability. To illustrate this, let's look at an SAP HANA system. Scaling up means taking our system and moving it from 1 TB to 4 TB of RAM inside the virtual machine, basically adding more memory, adding more computational capability. Scaling out means having four systems, each with a terabyte of memory. We are using memory for the scaling example here because SAP HANA systems are in-memory systems, and the big processing capacity they need is main memory. Now let's look at how this manifests itself in a cloud scenario. If you think about two Amazon EC2 instances, which are infrastructure as a service, we can set up load balancing, and as we get higher and higher load, we can create an auto scaling group where additional instances can be spun up, or turned on, to support a larger number of users. In the cloud, this is very simple to do. In an on-premises system or data center, adding these additional systems involves buying and installing physical systems, so you can see how cloud-based solutions can be scaled very quickly. Another way in which we can provide for reliability is by creating availability zones. One of the advantages of moving to the cloud is that the cloud provider may have multiple data centers. If you look at large corporations such as Kroger, Procter & Gamble, Nestlé, or Unilever, they all have a number of different data centers that are geographically dispersed. Here, you could take a set of data centers in North America and say that that's my North American availability zone.
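The auto scaling group idea mentioned above boils down to a simple control rule: add instances when average load is high, remove them when it drops, within fixed bounds. The CPU thresholds and instance limits below are made-up illustration values, not cloud-provider defaults.

```python
# Sketch of an auto-scaling decision: scale out under heavy load, scale
# in when load drops, and stay put inside the comfortable band.

def desired_instances(current, avg_cpu, min_n=2, max_n=10):
    if avg_cpu > 0.75 and current < max_n:
        return current + 1          # scale out to absorb peak demand
    if avg_cpu < 0.25 and current > min_n:
        return current - 1          # scale in to save cost
    return current                  # load is fine, no change

peak = desired_instances(2, 0.90)   # heavy load: grow to 3
quiet = desired_instances(3, 0.10)  # light load: shrink back to 2
```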
Or you could do that with a number of different data centers that are all on the west coast and say that's my west coast availability zone. Now, these availability zones need to be far enough apart so that the individual data centers that make up one zone would not be impacted by a natural disaster in another zone. So these availability zones need to be at least about 600 miles apart, the idea being that typical natural disasters don't span more than 600 miles. Of course, if you stay on either coast of the United States, then 600 miles is not enough, so there we are talking about going inland, east to west within the US, rather than north to south along a coast. The next thing is to look at how we design for security. In order to design for security, our application needs to be designed with multiple roles; users are partitioned into specific logical roles. You could have role one, role two, and so on, and all of these different users are mapped to different roles. Then, depending on what role they are assigned, they're given a certain set of rights and privileges in the database. When I say rights and privileges, that could be a read capability, a write capability, a modify capability, or a delete capability. When a user accesses a system, they are assigned a role. That role is assigned an identity, and that identity then defines what permissions, or what operations, that particular user is able to perform within that particular database. Users who are all assigned to role one will have the same set of rights and privileges. Users who are assigned to role two will have a different set of rights and privileges. The last thing is looking at how we can design for evolution. Applications change over time; that's pretty much a given.
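The role-based rights and privileges just described can be sketched as two lookup tables: users map to roles, and roles map to permission sets. The role names, users, and permissions are hypothetical.

```python
# Sketch of role-based access: a user's role determines which operations
# they may perform in the database.

ROLE_PERMISSIONS = {
    "role1": {"read"},                               # read-only role
    "role2": {"read", "write", "modify", "delete"},  # full-access role
}
USER_ROLES = {"alice": "role1", "bob": "role2"}

def can(user, operation):
    # Look up the user's role, then check the role's permission set.
    role = USER_ROLES.get(user)
    return operation in ROLE_PERMISSIONS.get(role, set())

alice_read = can("alice", "read")       # role1 includes read
alice_delete = can("alice", "delete")   # role1 does not include delete
```

Note that every user assigned to `role1` gets exactly the same set of rights, which is the lecture's point about roles.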
Whether that is to fix bugs, add features, adopt new technology like AI, or simply to improve scalability and resiliency: if the parts of an application are very tightly coupled, that is, very tightly integrated, it becomes very hard to introduce changes into the system. In order to design for evolution, we need highly cohesive but loosely coupled services. That means our services need to be highly cohesive; that is, they must work together as a whole. However, they must be loosely coupled, in that they must not depend on each other other than through messages passed through well-defined interfaces. A cohesive service is one that provides functionality that logically belongs together. Loosely coupled means one service can be changed without changing others; you can do a bug fix or a feature update of one service without having to touch or change another service. Microservices architectures are really well suited for these decoupled, autonomous services that implement a single business capability. The second point is that we need to expose open interfaces. When you look at an API, the service should expose an API with a well-defined API contract. What we mean by a contract is a very structured way of interacting with the API and, if it is not a free-for-use API, a very structured, defined way of charging for its use. The API itself should be managed, the API should evolve, and the API should have version control, so that updates do not break the services that depend on that API. Now let's talk about one of the most important challenges in creating large-scale IT landscapes: integration and interoperability. That is, how can we have all of these different systems, which might come from different vendors (and when I say systems, I'm talking about applications running on underlying IT hardware), integrate and work together?
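The version-control point about API contracts can be sketched by keeping old versions routable. The paths and payload fields here are hypothetical; the pattern is simply that the old contract stays available, so existing consumers keep working while new consumers opt in.

```python
# Sketch of API versioning: v1 stays routable so services that depend on
# the old contract are not broken when v2 is introduced.

def get_order_v1(order_id):
    return {"id": order_id}                        # original contract

def get_order_v2(order_id):
    return {"id": order_id, "status": "shipped"}   # additive change only

ROUTES = {
    "/v1/orders": get_order_v1,   # old consumers keep working
    "/v2/orders": get_order_v2,   # new consumers use the new contract
}

old_reply = ROUTES["/v1/orders"]("o-1")
new_reply = ROUTES["/v2/orders"]("o-1")
```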
How can we help them interoperate? That is, how can they actually depend on data from other systems in order to perform the tasks they are designed to do? As we mentioned briefly before, interfaces are implemented as APIs, or application programming interfaces. We can use APIs both with traditional monolithic applications and with microservices. When you use an API with a traditional monolithic application, the API provides access to all of the different business components within that application. With microservices, each individual microservice has its own API, and those APIs are aggregated together in what is called an API gateway. An API is a set of features and rules that exists within an application or service and acts as an interface between the application that offers those services and other entities, such as other software or even hardware. When we look at APIs, there are different types depending on who has access to them. If you are looking at an enterprise-wide application, we might use private APIs. These APIs are for internal use, which means that the company maintains complete control over the API. Private APIs are designed only for connecting to other systems that are designated by the owner of the API. A public API, on the other hand, is open to anybody on the Internet, and it allows third parties to develop applications that interact with the API. If you think about it, there are applications that connect to YouTube and take advantage of YouTube videos but show them to you in a different interface. There were applications that allowed you to manage your Netflix queue. And there is the very famous situation with the Twitter API: after Twitter was acquired, Musk started charging a lot for access to that API.
A public API doesn't necessarily mean that it is free; it just means that it is available to the public as long as they meet the terms and conditions of accessing the API. Twitter, or X, had an API for academic and research institutions for conducting academic research. It used to be a free service until recently, and now it is a very expensive paid service. Finally, we have partner APIs. These APIs are shared with specific business partners. UPS could share its shipment tracking API with Amazon, which ships a lot of packages through UPS, or FedEx could expose its shipping API to Google, which ships all of its packages using the FedEx service. These partner APIs are shared with very specific business partners, and again, they can also be paid services that can be monetized. As we think about APIs, they come under a broad umbrella of technologies called message brokers. Message brokers are an inter-application communication technology that helps to build a common integration mechanism, one that allows you to build cloud-native, microservices-based, serverless, or hybrid cloud architectures. They're used to manage communications between on-premises systems and cloud components in a hybrid cloud environment. APIs are one way of doing this in a services-based world, but when we want to connect on-premises systems to cloud-based systems that don't necessarily conform to that API model, we use message brokers. Using a message broker gives increased control over communication between the services and makes sure that the data is sent securely, reliably, and efficiently between the components of an application. We'll come back and look at message brokers in much greater detail once we get into the cloud portion of the class. Now let's think about on-premises systems and interoperability in an N-tier or client-server world. In theory, client-server architecture should allow hardware and software from different vendors to work together.
And "in theory" is the important part there: in actual practice, it doesn't always work that way. While you could argue that conforming to standards is great for everybody, many vendors pursue a proprietary technology strategy to create lock-in of their customers or of their ecosystem. In cases where you want two pieces of software, or two systems, to communicate with each other that were not designed to communicate, or that are not inherently compatible, we have to use an integration layer, and that integration layer is made up of software called middleware. Middleware is actually conceptually very simple: it sits between two incompatible systems and translates. If you've ever watched a United Nations speech, you have a head of state speaking in one language, everybody else has a headset on, and there is a translation service running in the background translating that speech in real time. Middleware effectively does the same thing. It takes the communications coming from one system and translates them into a format and structure that can be consumed by another system, and when that second system responds, it takes the response and reformats it back into the structure and format required by the first system. It lies between the operating system and the applications, or between applications and other applications, and it basically enables communication and data management. Middleware can essentially hide the differences between applications and the underlying hardware and software components, and it can shield programming-level details from users. However, middleware itself is fairly complex and requires very specific expertise to set up. There are multiple examples of middleware. There is CORBA, the Common Object Request Broker Architecture, which is the open standard.
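The translator role of middleware can be sketched as a pair of mapping functions. The field names on each "system" are invented purely for illustration; real middleware also handles transport, queuing, and error handling, not just field mapping.

```python
# Sketch of middleware as a translator: it converts messages between the
# formats of two systems that were never designed to talk to each other.

def to_system_b(msg_a):
    # System A speaks {"cust_no", "amt"}; system B expects
    # {"customerId", "amount"}.
    return {"customerId": msg_a["cust_no"], "amount": msg_a["amt"]}

def to_system_a(msg_b):
    # The reply gets translated back into system A's format, just as the
    # UN interpreter translates in both directions.
    return {"cust_no": msg_b["customerId"], "amt": msg_b["amount"]}

outbound = to_system_b({"cust_no": "C42", "amt": 10})
round_trip = to_system_a(outbound)
```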
DCE is the standard used by Microsoft, and ODBC, or Open Database Connectivity, is used for connecting to databases. As an example of middleware, Oracle's entire product portfolio is bound together through an integration layer called Fusion Middleware. For those of you who know the history of Oracle, Oracle as a company has grown through acquisitions; many of their product offerings came from buying up companies that had those offerings. What Oracle then did was create a middleware platform that connects all of these different applications into a single, unified enterprise system. Compare that with Oracle's primary competitor in the ERP (Enterprise Resource Planning) or enterprise system space, which is SAP. SAP's primary application, S/4HANA, is more or less completely homegrown, except for the database technology, which was acquired. You will see that SAP's architecture does not have a middleware layer, because it was built from the ground up by the same company. Oracle acquired multiple applications through acquisitions and built the Fusion Middleware architecture to bring all of it together into a single unified platform. I'm not arguing that one is better than the other, but architecturally, Oracle's enterprise business solutions and SAP's S/4HANA are completely different from a middleware or integration standpoint. To sum up: in part one, we looked at the logical layered architecture, and in part two, we looked at the physical tiered architecture: logical layers and physical tiers. Within physical tiers, we looked at patterns for reliability, availability, and security. We looked at how we can create highly scalable systems, how we can create systems for high-performance computing, and how we can architect systems for processing big data. This is foundational for our discussion of software architecture.
Basically, what we have done so far is look at software architecture from the perspective of both how it is designed, which is the layers, and how it is deployed, which is the tiers. We have looked at software architecture both from a design standpoint and from a deployment standpoint. All right, thank you. I will see you in the next one.