
How-To Tutorials - Cloud & Networking

770 Articles

What is a multi layered software architecture?

Packt Editorial Staff
17 May 2018
7 min read
Multi layered software architecture is one of the most popular architectural patterns today. It moderates the increasing complexity of modern applications and makes it easier to work in a more agile manner - something that matters when you consider the dominance of DevOps and similar methodologies today. Sometimes called tiered architecture, or n-tier architecture, a multi layered software architecture consists of various layers, each of which corresponds to a different service or integration. Because each layer is separate, making changes to one layer is easier than having to tackle the entire architecture. Let's take a look at how a multi layered software architecture works, and what its advantages and disadvantages are. This has been taken from the book Architectural Patterns. Find it here.

What does a layered software architecture consist of?

Before we get into a multi layered architecture, let's start with the simplest form of layered architecture - three tiered architecture. This is a good place to start because all layered software architecture contains these three elements. These are the foundations:

Presentation layer: This is the first and topmost layer of the application. It provides presentation services - that is, the presentation of content to the end user through a GUI. This tier can be accessed through any type of client device, such as a desktop, laptop, tablet, mobile, or thin client. For the content to be displayed to the user, the relevant web pages are fetched by the web browser or another presentation component running on the client device. To present the content, this tier has to interact with the tiers beneath it.

Application layer: This is the middle tier of the architecture, and it is where the business logic of the application runs. Business logic is the set of rules required for running the application according to the guidelines laid down by the organization. The components of this tier typically run on one or more application servers.

Data layer: This is the lowest tier of the architecture and is mainly concerned with the storage and retrieval of application data. The application data is typically stored in a database server, file server, or any other device or medium that supports data access logic. The data tier exposes only the data itself, without giving access to the underlying storage and retrieval mechanisms, by providing an API to the application tier. This API keeps data operations in this tier transparent to the application tier - for example, updates or upgrades to the systems in this tier do not affect the application tier.

The diagram below shows how a simple layered architecture with three tiers works.

These three layers are essential, but other layers can be built on top of them. That's when we get into multi layered architecture. It's sometimes called n-tiered architecture because the number of tiers or layers (n) could be anything - it depends on what you need and how much complexity you're able to handle.

Multi layered software architecture

A multi layered software architecture still has the presentation layer and data layer. It simply splits up and expands the application layer. These additional aspects within the application layer are essentially different services.
This means your software should now be more scalable and have extra dimensions of functionality. Of course, the distribution of application code and functions among the various tiers will vary from one architectural design to another, but the concept remains the same. The diagram below illustrates what a multi layered software architecture looks like. As you can see, it's a little more complex than a three-tiered architecture, but it does increase scalability quite significantly.

What are the benefits of a layered software architecture?

A layered software architecture has a number of benefits - that's why it has become such a popular architectural pattern in recent years. Most importantly, tiered segregation allows you to manage and maintain each layer accordingly. In theory it should greatly simplify the way you manage your software infrastructure. The multi layered approach is particularly good for developing web-scale, production-grade, and cloud-hosted applications very quickly and relatively risk-free. It also makes it easier to update any legacy systems - when your architecture is broken up into multiple layers, the changes that need to be made should be simpler and less extensive than they might otherwise have to be.

When should you use a multi layered software architecture?

The argument for a multi layered software architecture is clear. However, there are some instances when it is particularly appropriate:

If you are building a system in which it is possible to split the application logic into smaller components that could be spread across several servers. This could lead to the design of multiple tiers in the application tier.

If the system under consideration requires faster network communications, high reliability, and great performance. N-tier architecture can provide that, as this architectural pattern is designed to reduce the overhead caused by network traffic.

An example of a multi layered software architecture

We can illustrate how a multi layered architecture works with the example of a shopping cart web application, which is present on all e-commerce sites. The shopping cart web application is used by the e-commerce site user to complete the purchase of items through the site. You'd expect the application to have several features that allow the user to:

Add selected items to the cart
Change the quantity of items in their cart
Make payments

The client tier, which is present in the shopping cart application, interacts with the end user through a GUI. The client tier also interacts with the application that runs on the application servers present in multiple tiers. Since the shopping cart is a web application, the client tier contains the web browser. The presentation tier in the shopping cart application displays information related to services like browsing merchandise, buying items, and adding them to the shopping cart. The presentation tier communicates with other tiers by sending results to the client tier and all other tiers in the network. The presentation tier also makes calls to database stored procedures and web services. All these activities are done with the objective of providing a quick response time to the end user.
The presentation tier plays a vital role by acting as the glue that binds the entire shopping cart application together, allowing the functions present in different tiers to communicate with each other and displaying the outputs to the end user through the web browser. In this multi layered architecture, the business logic required for processing activities such as the calculation of shipping costs is pulled from the application tier to the presentation tier. The application tier also acts as the integration layer and allows the applications to communicate seamlessly with both the data tier and the presentation tier. The last tier, the data tier, is used to maintain data. This layer typically contains database servers and maintains data independently from the application server and the business logic. This approach provides enhanced scalability and performance to the data tier.

Read next:
Microservices and Service Oriented Architecture
What is serverless architecture and why should I be interested?
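To make the separation of concerns concrete, here is a minimal, illustrative Python sketch of the shopping cart split into presentation, application, and data layers. The class names and the in-memory store are assumptions made for this example - they are not taken from the book - and a real system would place each layer on its own tier (web server, application server, database server).

# A toy three-layer shopping cart: each class stands in for one tier.

class CartRepository:
    """Data layer: hides how cart data is stored behind a small API."""
    def __init__(self):
        self._carts = {}  # stand-in for a database server

    def get_items(self, user_id):
        return dict(self._carts.get(user_id, {}))

    def save_item(self, user_id, item, quantity):
        self._carts.setdefault(user_id, {})[item] = quantity


class CartService:
    """Application layer: business rules live here, not in the UI or the storage."""
    def __init__(self, repository):
        self._repository = repository

    def add_item(self, user_id, item, quantity):
        if quantity <= 0:
            raise ValueError("Quantity must be positive")
        self._repository.save_item(user_id, item, quantity)

    def total_items(self, user_id):
        return sum(self._repository.get_items(user_id).values())


class CartView:
    """Presentation layer: formats results for the end user (a GUI or web page in practice)."""
    def __init__(self, service):
        self._service = service

    def render_summary(self, user_id):
        return f"Your cart contains {self._service.total_items(user_id)} item(s)."


# Wiring the layers together: each layer only talks to the one directly beneath it.
repository = CartRepository()
service = CartService(repository)
view = CartView(service)

service.add_item("user-42", "book", 2)
print(view.render_summary("user-42"))  # Your cart contains 2 item(s).

Because each layer depends only on the layer beneath it, swapping the in-memory repository for a real database would not touch the service or the view - which is the maintainability benefit the pattern promises.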


Automate tasks using Azure PowerShell and Azure CLI [Tutorial]

Gebin George
12 Jul 2018
5 min read
It is no surprise that we commonly face repetitive and time-consuming tasks. For example, you might want to create multiple storage accounts and would have to follow the same steps multiple times to get the job done. This is why Microsoft supports its Azure services with multiple ways of automating most of the tasks that can be implemented in Azure. In this Azure PowerShell tutorial, we will learn how to automate repetitive tasks on the Azure cloud. This article is an excerpt from the book Hands-On Networking with Azure, written by Mohamed Waly.

Azure PowerShell

PowerShell is commonly used with most Microsoft products, and Azure is no exception. You can use Azure PowerShell cmdlets to manage Azure networking tasks; however, you should be aware that Microsoft Azure has two types of cmdlets, one for the ASM model and another for the ARM model. The main difference is that the cmdlets for the current (ARM) portal have RM added to the cmdlet name. For example, if you want to create an ASM virtual network, you would use the following cmdlet:

New-AzureVirtualNetwork

But for the ARM model, you would use the following:

New-AzureRMVirtualNetwork

This is often the case, but a few cmdlets are completely different, and some exist only in the ARM model and not in the ASM model. By default, you can use Azure PowerShell cmdlets in Windows PowerShell, but you will have to install the module first.

Installing the Azure PowerShell module

There are two ways of installing the Azure PowerShell module on Windows:

Download and install the module from the following link: https://www.microsoft.com/web/downloads/platform.aspx
Install the module from PowerShell Gallery

Installing the Azure PowerShell module from PowerShell Gallery

The following are the required steps to get Azure PowerShell installed:

1. Open PowerShell in an elevated mode.
2. To install the Azure PowerShell module for the current portal, run the cmdlet Install-Module AzureRM.
3. If your PowerShell requires a NuGet provider, you will be asked to agree to install it, and you will have to agree to the installation policy modification, as the repository is not available in your environment.

Creating a virtual network in the Azure portal using PowerShell

To be able to run your PowerShell cmdlets against Azure successfully, you need to log in to Azure first using the following cmdlet:

Login-AzureRMAccount

You will then be prompted to enter the credentials of your Azure account. Voila! You are logged in and can run Azure PowerShell cmdlets successfully. To create an Azure VNet, you first need to create the subnets that will be attached to the virtual network. Therefore, let's get started by creating the subnets:

$NSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name NSubnet -AddressPrefix 192.168.1.0/24
$GWSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name GatewaySubnet -AddressPrefix 192.168.2.0/27

Now you are ready to create a virtual network by triggering the following cmdlet:

New-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Location WestEurope -Name PSVNet -AddressPrefix 192.168.0.0/16 -Subnet $NSubnet,$GWSubnet

Congratulations! You have your virtual network up and running with two subnets associated with it, one of which is a gateway subnet.
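The tutorial performs these steps with Azure PowerShell. Purely as an illustrative alternative, the following hedged sketch creates roughly the same virtual network with the Azure SDK for Python; the package names (azure-identity, azure-mgmt-network) and the subscription ID placeholder are assumptions for this example and are not part of the original tutorial.

# Hedged sketch: create a VNet with two subnets, mirroring the PowerShell example.
# Assumes `pip install azure-identity azure-mgmt-network` and that credentials are
# available to DefaultAzureCredential (for example after `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<your-subscription-id>"  # placeholder, not from the article
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.virtual_networks.begin_create_or_update(
    "PacktPub",   # resource group, as in the PowerShell example
    "PSVNet",     # virtual network name
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["192.168.0.0/16"]},
        "subnets": [
            {"name": "NSubnet", "address_prefix": "192.168.1.0/24"},
            {"name": "GatewaySubnet", "address_prefix": "192.168.2.0/27"},
        ],
    },
)
vnet = poller.result()  # wait for the long-running operation to finish
print(vnet.name, [s.name for s in vnet.subnets])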
Adding address space to a virtual network using PowerShell

To add an address space to a virtual network, you need to retrieve the virtual network first and store it in a variable by running the following cmdlet:

$VNet = Get-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Name PSVNet

Then, you can add the address space by running the following cmdlet:

$VNet.AddressSpace.AddressPrefixes.Add("10.1.0.0/16")

Finally, you need to save the changes you have made by running the following cmdlet:

Set-AzureRmVirtualNetwork -VirtualNetwork $VNet

Azure CLI

Azure CLI is an open source, cross-platform command-line tool that supports implementing all the tasks you can do in the Azure portal, with commands. Azure CLI comes in two flavors:

Azure CLI 2.0: supports only the current Azure portal
Azure CLI 1.0: supports both portals

Throughout this book, we will be using Azure CLI 2.0, so let's get started with its installation.

Installing Azure CLI 2.0

Perform the following steps to install Azure CLI 2.0:

1. Download Azure CLI 2.0 from the following link: https://azurecliprod.blob.core.windows.net/msi/azure-cli-2.0.22.msi
2. Once downloaded, start the installation.
3. Once you click on Install, it will validate your environment to check whether it is compatible, and then it starts the installation.
4. Once the installation completes, you can click on Finish, and you are good to go.
5. Once done, you can open cmd and type az to access Azure CLI commands.

Creating a virtual network using Azure CLI 2.0

To create a virtual network using Azure CLI 2.0, follow these steps:

1. Log in to your Azure account using the command az login; you have to open the URL that pops up on the CLI and then enter the code it displays.
2. To create a new virtual network, run the following command:

az network vnet create --name CLIVNet --resource-group PacktPub --location westeurope --address-prefix 192.168.0.0/16 --subnet-name s1 --subnet-prefix 192.168.1.0/24

Adding a gateway subnet to a virtual network using Azure CLI 2.0

To add a gateway subnet to a virtual network, run the following command:

az network vnet subnet create --address-prefix 192.168.7.0/27 --name GatewaySubnet --resource-group PacktPub --vnet-name CLIVNet

Adding an address space to a virtual network using Azure CLI 2.0

To add an address space to a virtual network, you can run the following command:

az network vnet update address-prefixes --add <Add JSON String>

Remember that you will need to add a JSON string that describes the address space.

To summarize, we learned how to automate cloud tasks using PowerShell and Azure CLI. Check out the book Hands-On Networking with Azure to learn how to build large-scale, real-world apps using Azure networking solutions.

Creating Multitenant Applications in Azure
Fine Tune Your Web Application by Profiling and Automation
Putting Your Database at the Heart of Azure Solutions
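Continuing the hedged Azure SDK for Python sketch from earlier, the PowerShell retrieve-modify-save flow shown above (Get-AzureRMVirtualNetwork, Add, Set-AzureRmVirtualNetwork) maps to the same pattern in Python; the subscription ID and package assumptions are as before and are not part of the original tutorial.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

# Retrieve the virtual network, append the new prefix, and save it back.
vnet = client.virtual_networks.get("PacktPub", "PSVNet")
vnet.address_space.address_prefixes.append("10.1.0.0/16")
client.virtual_networks.begin_create_or_update("PacktPub", "PSVNet", vnet).result()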


Mastering PromQL: A Comprehensive Guide to Prometheus Query Language

Rob Chapman, Peter Holmes
07 Nov 2024
15 min read
This article is an excerpt from the book "Observability with Grafana", by Rob Chapman and Peter Holmes. The book provides a holistic understanding of observability concepts using the Grafana Labs tools, teaching you how to fully leverage the LGTM stack.

Introduction

PromQL, or Prometheus Query Language, is a powerful tool designed to work with Prometheus, an open-source systems monitoring and alerting toolkit. Initially developed by SoundCloud in 2012 and later accepted by the Cloud Native Computing Foundation in 2016, Prometheus has become a crucial component of modern infrastructure monitoring. PromQL allows users to query data stored in Prometheus, enabling the creation of insightful dashboards and the setting up of alerts based on the performance metrics of applications and systems. This article will explore the core functionalities of PromQL, including how it interacts with metrics data and how it can be used to effectively monitor and analyze system performance.

Introducing PromQL

Prometheus was initially developed by SoundCloud in 2012; the project was accepted by the Cloud Native Computing Foundation in 2016 as the second incubated project (after Kubernetes), and version 1.0 was released shortly after. PromQL is an integral part of Prometheus, which is used to query stored data and produce dashboards and alerts. Before we delve into the details of the language, let's briefly look at the following ways in which Prometheus-compatible systems interact with metrics data:

Ingesting metrics: Prometheus-compatible systems accept a timestamp, key-value labels, and a sample value. As the details of the Prometheus Time Series Database (TSDB) are quite complicated, the following diagram shows a simplified example of how an individual sample for a metric is stored once it has been ingested:

Figure 5.1 – A simplified view of metric data stored in the TSDB

The labels or dimensions of a metric: Prometheus labels provide metadata to identify data of interest. These labels create metrics, time series, and samples:

* Each unique __name__ value creates a metric. In the preceding figure, the metric is app_frontend_requests.
* Each unique set of labels creates a time series. In the preceding figure, the set of all labels is the time series.
* A time series will contain multiple samples, each with a unique timestamp. The preceding figure shows a single sample, but over time, multiple samples will be collected for each time series.
* The number of unique values for a metric label is referred to as the cardinality of the label. Highly cardinal labels should be avoided, as they significantly increase the storage costs of the metric.

The following diagram shows a single metric containing two time series and five samples:

Figure 5.2 – An example of samples from multiple time series

In Grafana, we can see a representation of the time series and samples from a metric. To do this, follow these steps:

1. In your Grafana instance, select Explore in the menu.
2. Choose your Prometheus data source, which will be labeled as grafanacloud-<team>prom (default).
3. In the Metric dropdown, choose app_frontend_requests_total, and under Options, set Format to Table, and then click on Run query. This will show you all the samples and time series in the metric over the selected time range. You should see data like this:

Figure 5.3 – Visualizing the samples and time series that make up a metric

Now that we understand the data structure, let's explore PromQL.
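Dashboards usually run PromQL through Grafana, but the same queries can be sent straight to Prometheus's HTTP API, which can be a handy way to inspect the raw samples and label sets described above. The following is a minimal, hedged Python sketch: the server address is an assumption, and the metric name reuses the app_frontend_requests_total example from the text.

import requests

PROM_URL = "http://localhost:9090"  # assumed address of a reachable Prometheus server

# Instant query: returns an instant vector - one (timestamp, value) sample
# per time series whose labels match the selector.
resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'app_frontend_requests_total{status="200"}'},
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    labels = series["metric"]            # the label set that identifies the time series
    timestamp, value = series["value"]   # a single sample
    print(labels, timestamp, value)

# Range query: returns a window of samples per time series, which is what
# Grafana requests when it draws a graph panel.
resp = requests.get(
    f"{PROM_URL}/api/v1/query_range",
    params={
        "query": "rate(app_frontend_requests_total[5m])",
        "start": "2024-11-07T00:00:00Z",
        "end": "2024-11-07T01:00:00Z",
        "step": "60s",
    },
    timeout=10,
)
print(resp.json()["data"]["resultType"])  # "matrix"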
An overview of PromQL features

In this section, we will take you through the features that PromQL has. We will start with an explanation of the data types, and then we will look at how to select data, how to work on multiple datasets, and how to use functions. As PromQL is a query language, it's important to know how to manipulate data to produce alerts and dashboards.

Data types

PromQL offers three data types, which are important, as the functions and operators in PromQL will work differently depending on the data types presented:

Instant vectors are a data type that stores a set of time series containing a single sample, all sharing the same timestamp - that is, it presents values at a specific instant in time.

Figure 5.4 – An instant vector

Range vectors store a set of time series, each containing a range of samples with different timestamps.

Figure 5.5 – Range vectors

Scalars are simple numeric values, with no labels or timestamps involved.

Selecting data

PromQL offers several tools for you to select data to show in a dashboard or a list, or just to understand a system's state. Some of these are described in the following table:

Table 5.1 – The selection operators available in PromQL

In addition to the operators that allow us to select data, PromQL offers a selection of operators to compare multiple sets of data.

Operators between two datasets

Some data is easily provided by a single metric, while other useful information needs to be created from multiple metrics. The following operators allow you to combine datasets.

Table 5.2 – The comparison operators available in PromQL

Vector matching is an initially confusing topic; to clarify it, let's consider examples for the three cases of vector matching - one-to-one, one-to-many/many-to-one, and many-to-many. By default, when combining vectors, all label names and values are matched. This means that for each element of the vector, the operator will try to find a single matching element from the second vector. Let's consider a simple example:

Vector A:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}
27{color=green,smell=grass}

Vector B:
19{color=blue,smell=ocean}
8{color=red,smell=cinnamon}
14{color=green,smell=jungle}

A{} + B{}:
29{color=blue,smell=ocean}
39{color=red,smell=cinnamon}

A{} + on (color) B{} or A{} + ignoring (smell) B{}:
29{color=blue}
39{color=red}
41{color=green}

When color=blue and smell=ocean, A{} + B{} gives 10 + 19 = 29, and when color=red and smell=cinnamon, A{} + B{} gives 31 + 8 = 39. The other elements do not match between the two vectors, so they are ignored. When we sum the vectors using on (color), we only match on the color label; so now the two green elements match and are summed.

This example works when there is a one-to-one relationship of labels between vector A and vector B. However, sometimes there may be a many-to-one or one-to-many relationship - that is, vector A or vector B may have more than one element that matches the other vector. In these cases, Prometheus will give an error, and grouping syntax must be used. Let's look at another example to illustrate this:

Vector A:
7{color=blue,smell=ocean}
5{color=red,smell=cinnamon}
2{color=blue,smell=powder}

Vector B:
20{color=blue,smell=ocean}
8{color=red,smell=cinnamon}
14{color=green,smell=jungle}

A{} + on (color) group_left B{}:
27{color=blue,smell=ocean}
13{color=red,smell=cinnamon}
22{color=blue,smell=powder}

Now, we have two different elements in vector A with color=blue.
The group_left modifier will use the labels from vector A but only match on color. This leads to the third element of the combined vector having a value of 22, even though the matching item in vector B has a different smell. The group_right modifier behaves in the opposite direction.

The final option is a many-to-many vector match. These matches use the logical operators and, unless, and or to combine parts of vectors A and B. Let's see some examples:

Vector A:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}
27{color=green,smell=grass}

Vector B:
19{color=blue,smell=ocean}
8{color=red,smell=cinnamon}
14{color=green,smell=jungle}

A{} and B{}:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}

A{} unless B{}:
27{color=green,smell=grass}

A{} or B{}:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}
27{color=green,smell=grass}
14{color=green,smell=jungle}

Unlike the previous examples, mathematical operators are not being used here, so the values of the elements are the values from vector A, but only the elements of A that match the logical condition in B are returned.

Conclusion

PromQL is an essential component of Prometheus, offering users a flexible and powerful means of querying and analyzing time-series data. By understanding its data types and operators, users can craft complex queries that provide deep insights into system performance. The language supports a variety of data selection and comparison operations, allowing for precise monitoring and alerting. Whether working with instant vectors, range vectors, or scalars, PromQL enables developers and operators to optimize their use of Prometheus for monitoring and alerting, ensuring systems remain performant and reliable. As organizations continue to embrace cloud-native architectures, mastering PromQL becomes increasingly vital for maintaining robust and efficient systems.

Author Bio

Rob Chapman is a creative IT engineer and founder at The Melt Cafe, with two decades of experience in the full application life cycle. Working over the years for companies such as the Environment Agency, BT Global Services, Microsoft, and Grafana, Rob has built a wealth of experience on large complex systems. More than anything, Rob loves saving energy, time, and money and has a track record for bringing production-related concerns forward so that they are addressed earlier in the development cycle, when they are cheaper and easier to solve. In his spare time, Rob is a Scout leader, and he enjoys hiking, climbing, and, most of all, spending time with his family and six children.

Peter Holmes is a senior engineer with a deep interest in digital systems and how to use them to solve problems. With over 16 years of experience, he has worked in various roles in operations. Working at organizations such as Boots UK, Fujitsu Services, Anaplan, Thomson Reuters, and the NHS, he has experience in complex transformational projects, site reliability engineering, platform engineering, and leadership. Peter has a history of taking time to understand the customer and ensuring Day-2+ operations are as smooth and cost-effective as possible.


Building Docker images using Dockerfiles

Aarthi Kumaraswamy
12 Apr 2018
8 min read
Docker images are read-only templates. They give us containers during runtime. Central to this is the concept of a 'base image'. Layers then sit on top of this base image. For example, you might have a base image of Fedora or Ubuntu, but you can then install packages or make modifications over the base image to create a new layer. The base image and new layer can then be treated as a completely new image. In the image below, Debian is the base image and emacs and Apache are the two layers added on top of it. Images are highly portable and can be shared easily.

Source: Docker

Image layers

Layers are transparently laid on top of the base image to create a single coherent filesystem. There are a couple of ways to create images: one is by manually committing layers, and the other is through Dockerfiles. In this recipe, we'll create images with Dockerfiles. Dockerfiles help us automate image creation and get precisely the same image every time we want it. The Docker builder reads instructions from a text file (a Dockerfile) and executes them one after the other, in order. It can be compared to a Vagrantfile, which allows you to configure VMs in a predictable manner.

Getting ready

A Dockerfile with build instructions is needed.

Create an empty directory:

$ mkdir sample_image
$ cd sample_image

Create a file named Dockerfile with the following content:

$ cat Dockerfile
# Pick up the base image
FROM fedora
# Add author name
MAINTAINER Neependra Khare
# Add the command to run at the start of container
CMD date

How to do it…

Run the following command inside the directory where we created the Dockerfile to build the image:

$ docker build .

We did not specify any repository or tag name while building the image. We can give those with the -t option as follows:

$ docker build -t fedora/test .

The preceding output is different from what we saw earlier because here a cache is used after each instruction. Docker tries to save the intermediate images as we saw earlier and tries to use them in subsequent builds to accelerate the build process. If you don't want to cache the intermediate images, add the --no-cache option to the build. Let's take a look at the available images now.

How it works…

A context defines the files used to build the Docker image. In the preceding command, we define the context for the build. The build is done by the Docker daemon, and the entire context is transferred to the daemon. This is why we see the Sending build context to Docker daemon 2.048 kB message. If there is a file named .dockerignore in the current working directory with a list of files and directories (newline separated), then those files and directories will be ignored by the build context. More details about .dockerignore can be found at https://docs.docker.com/reference/builder/#the-dockerignore-file. After executing each instruction, Docker commits the intermediate image and runs a container with it for the next instruction. After the next instruction has run, Docker will again commit the container to create the intermediate image and remove the intermediate container created in the previous step. For example, in the preceding screenshot, eb9f10384509 is an intermediate image and c5d4dd2b3db9 and ffb9303ab124 are the intermediate containers. After the last instruction is executed, the final image will be created.
In this case, the final image is 4778dd1f1a7a. The -a option can be specified with the docker images command to look for intermediate layers:

$ docker images -a

There's more…

The format of the Dockerfile is:

INSTRUCTION arguments

Generally, instructions are given in uppercase, but they are not case sensitive. They are evaluated in order. A # at the beginning of a line is treated as a comment. Let's take a look at the different types of instructions:

FROM: This must be the first instruction of any Dockerfile, and it sets the base image for subsequent instructions. By default, the latest tag is assumed:

FROM <image>

Alternatively, a tag can be given:

FROM <image>:<tag>

There can be more than one FROM instruction in one Dockerfile to create multiple images. If only image names, such as Fedora and Ubuntu, are given, then the images will be downloaded from the default Docker registry (Docker Hub). If you want to use private or third-party images, then you have to mention them as follows:

[registry_hostname[:port]/][user_name/](repository_name:version_tag)

Here is an example using the preceding syntax:

FROM registry-host:5000/nkhare/f20:httpd

MAINTAINER: This sets the author for the generated image: MAINTAINER <name>.

RUN: We can execute the RUN instruction in two ways. First, run in the shell (sh -c):

RUN <command> <param1> ... <paramN>

Second, directly run an executable:

RUN ["executable", "param1",...,"paramN"]

As we know, with Docker we create an overlay - a layer on top of another layer - to make the resulting image. Through each RUN instruction, we create and commit a layer on top of the earlier committed layer. A container can be started from any of the committed layers. By default, Docker tries to cache the layers committed by different RUN instructions, so that they can be used in subsequent builds. However, this behavior can be turned off using the --no-cache flag while building the image.

LABEL: Docker 1.6 added a new feature to attach arbitrary key-value pairs to Docker images and containers. We covered part of this in the Labeling and filtering containers recipe in Chapter 2, Working with Docker Containers. To give a label to an image, we use the LABEL instruction in the Dockerfile, for example LABEL distro=fedora21.

CMD: The CMD instruction provides a default executable while starting a container. If the CMD instruction does not have an executable (the second form below), then it will provide arguments to ENTRYPOINT:

CMD ["executable", "param1",...,"paramN"]
CMD ["param1", ... , "paramN"]
CMD <command> <param1> ... <paramN>

Only one CMD instruction is allowed in a Dockerfile. If more than one is specified, then only the last one will be honored.

ENTRYPOINT: This helps us configure the container as an executable. Similar to CMD, there can be at most one ENTRYPOINT instruction; if more than one is specified, then only the last one will be honored:

ENTRYPOINT ["executable", "param1",...,"paramN"]
ENTRYPOINT <command> <param1> ... <paramN>

Once the parameters are defined with the ENTRYPOINT instruction, they cannot be overwritten at runtime. However, ENTRYPOINT can be used together with CMD if we want to pass different parameters to ENTRYPOINT.

EXPOSE: This exposes the network ports on the container on which it will listen at runtime:

EXPOSE <port> [<port> ... ]

We can also expose a port while starting the container. We covered this in the Exposing a port while starting a container recipe in Chapter 2, Working with Docker Containers.

ENV: This will set the environment variable <key> to <value>.
It will be passed to all future instructions and will persist when a container is run from the resulting image:

ENV <key> <value>

ADD: This copies files from the source to the destination:

ADD <src> <dest>

The following form is for paths containing whitespace:

ADD ["<src>"... "<dest>"]

<src>: This must be a file or directory inside the build directory from which we are building an image, which is also called the context of the build. A source can be a remote URL as well.

<dest>: This must be the absolute path inside the container to which the files/directories from the source will be copied.

COPY: This is similar to ADD:

COPY <src> <dest>
COPY ["<src>"... "<dest>"]

VOLUME: This instruction will create a mount point with the given name and flag it as mounting an external volume, using the following syntax:

VOLUME ["/data"]

Alternatively, you can use the following:

VOLUME /data

USER: This sets the username for any of the run instructions that follow, using the following syntax:

USER <username>/<UID>

WORKDIR: This sets the working directory for the RUN, CMD, and ENTRYPOINT instructions that follow it. It can have multiple entries in the same Dockerfile. A relative path can be given, which will be relative to the earlier WORKDIR instruction, using the following syntax:

WORKDIR <PATH>

ONBUILD: This adds trigger instructions to the image that will be executed later, when this image is used as the base image of another image. The trigger will run as part of the FROM instruction in the downstream Dockerfile, using the following syntax:

ONBUILD [INSTRUCTION]

See also

Look at the help option of docker build:

$ docker build --help

The documentation on the Docker website: https://docs.docker.com/reference/builder/

You just enjoyed an excerpt from the book DevOps: Puppet, Docker, and Kubernetes by Thomas Uphill, John Arundel, Neependra Khare, Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu. To master working with Docker containers, images, and much more, check out this book today!

Read other posts:
How to publish Docker and integrate with Maven
Building Scalable Microservices
How to deploy RethinkDB using Docker
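The recipe drives the build with the docker CLI. As a hedged aside, the same build can be scripted from Python with the Docker SDK (the docker package); the directory and tag below reuse the sample_image and fedora/test names from the recipe, and the sketch assumes a local Docker Engine is running.

import docker

# Connect to the local Docker daemon (the same daemon the docker CLI talks to).
client = docker.from_env()

# Build the image from the directory containing the sample Dockerfile,
# mirroring `docker build -t fedora/test .` run inside sample_image/.
image, build_logs = client.images.build(
    path="sample_image",
    tag="fedora/test",
    nocache=False,   # set True to mimic --no-cache
)

# Stream the build output, much like the CLI does.
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

print(image.id, image.tags)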


Introducing PowerShell Remoting

Packt
21 Dec 2016
9 min read
In this article by Sherif Talaat, the author of the book PowerShell 5.0 Advanced Administration Handbook, we will see how PowerShell v2 introduced a powerful new technology, PowerShell remoting, which was refined and expanded upon in later versions of PowerShell. PowerShell remoting is based primarily upon standardized protocols and techniques; it is possibly one of the most important aspects of Windows PowerShell. Today, a lot of Microsoft products rely upon it almost entirely for administrative communications across the network.

The most important and exciting characteristic of PowerShell is its remote management capability. PowerShell remoting can control the target remote computer via the network. It uses Windows Remote Management (WinRM), which is based on Microsoft's WS-Management protocol. Using PowerShell remoting, the administrator can execute various management operations on dozens of target computers across the network.

In this article, we will cover the following topics:

PowerShell remoting system requirements
Enabling/disabling remoting
Executing remote commands
Interactive remoting
Sessions
Saving remote sessions to disk
Understanding session configuration

Windows PowerShell remoting

It's very simple: Windows PowerShell remoting was developed to help you ease your administration tasks. The idea is to use the PowerShell console on your local machine to manage and control remote computers in different locations, whether these locations are on a local network, in a branch, or even in the cloud. Windows PowerShell remoting relies on Windows Remote Management (WinRM) to connect those computers together even if they're not physically connected. Sounds cool and exciting, huh?!

Windows Remote Management (WinRM) is a Microsoft implementation of the WS-Management protocol. WS-Management (WSMan) is a standard, Simple Object Access Protocol (SOAP)-based protocol that allows hardware and operating systems from different vendors to interoperate and communicate in order to access and exchange management information across the entire infrastructure.

In order to be able to execute a PowerShell script on remote computers using PowerShell remoting, the user performing this remote execution must meet one of the following conditions:

Be a member of the administrators' group on the remote machine, whether as a domain administrator or a local administrator
Provide admin-privileged credentials at the time of execution, either while establishing the remote session or using a -ComputerName parameter
Have access to the PowerShell session configuration on the remote computer

Now that we understand what PowerShell remoting is, let's jump to the interesting stuff and start playing with it.

Enable/disable PowerShell remoting

Before using Windows PowerShell remoting, we need to first ensure that it's already enabled on the computers we want to connect to and manage. You can validate whether PowerShell remoting is enabled on a computer using the Test-WSMan cmdlet.
#Verify WinRM service status
Test-WSMan -ComputerName Server02

If PowerShell remoting is enabled on the remote computer (which means that the WinRM service is running), you will get an acknowledgement message. However, if WinRM is not responding, either because it's not enabled or because the computer is unreachable, you will get an error message instead.

Okay, at this stage, we know which computers have remoting enabled and which need to be configured. In order to enable PowerShell remoting on a computer, we use the Enable-PSRemoting cmdlet. The Enable-PSRemoting cmdlet will prompt you with a message to inform you about the changes to be applied on the target computer and ask for your confirmation. You can skip this prompt by using the -Force parameter:

#Enable PowerShell Remoting
Enable-PSRemoting -Force

In client OS versions of Windows, such as Windows 7, 8/8.1, and 10, the network connection type must be set either to domain or to private. If it's set to public, you will get an error message. This is the Enable-PSRemoting cmdlet's default behavior, to stop you from enabling PowerShell remoting on a public network, which might put your computer at risk. You can skip the network profile check using the -SkipNetworkProfileCheck parameter, or simply change the network profile as shown later in this article:

#Enable PowerShell Remoting on Public Network
Enable-PSRemoting -Force -SkipNetworkProfileCheck

If, for any reason, you want to temporarily disable a session configuration in order to prevent users from connecting to a local computer using that session configuration, you can use the Disable-PSSessionConfiguration cmdlet along with the -Name parameter to specify which session configuration you want to disable. If we don't specify a configuration name for the -Name parameter, the default session configuration, Microsoft.PowerShell, will be disabled. Later on, if you want to re-enable the session configuration, you can use the Enable-PSSessionConfiguration cmdlet with the -Name parameter to specify which session configuration you need to enable, similar to the Disable-PSSessionConfiguration cmdlet.

Delete a session configuration

When you disable a session configuration, PowerShell just denies access to it by assigning deny all to the defined security descriptors. It doesn't remove it, which is why you can re-enable it. If you want to permanently remove a session configuration, use the Unregister-PSSessionConfiguration cmdlet.

Windows PowerShell Web Access (PSWA)

Windows PowerShell Web Access (PSWA) was introduced for the first time as a new feature in Windows PowerShell 3.0. Yes, it is what you are guessing it is! PowerShell Web Access is a web-based version of the PowerShell console that allows you to run and execute PowerShell cmdlets and scripts from any web browser on any desktop, notebook, smartphone, or tablet that meets the following criteria:

Allows cookies from the Windows PowerShell Web Access gateway website
Is capable of opening and reading HTTPS pages
Opens and runs websites that use JavaScript

PowerShell Web Access allows you to complete your administration tasks smoothly, anywhere and at any time, using any device running a web browser, regardless of whether it is Microsoft or non-Microsoft.
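Before moving on to installing PSWA, a brief aside: because PowerShell remoting rides on WinRM's standard WS-Management (SOAP) interface, remote commands can also be driven from other languages. The following is a hedged Python sketch using the third-party pywinrm package; the host name, credentials, and transport setting are placeholder assumptions for illustration, and it presumes remoting has already been enabled on the target as shown above.

# Hedged sketch: run a remote PowerShell command over WinRM using pywinrm
# (pip install pywinrm). The values below are placeholders, not from the article.
import winrm

session = winrm.Session(
    "Server02",                          # target computer, as in the Test-WSMan example
    auth=("administrator", "P@ssw0rd!"), # an account allowed to use remoting
    transport="ntlm",                    # adjust to your environment (e.g. kerberos)
)

result = session.run_ps("Get-Service WinRM | Select-Object Status, Name")

print(result.status_code)       # 0 on success
print(result.std_out.decode())  # the command's output
print(result.std_err.decode())  # any error text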
Installing and configuring Windows PowerShell Web Access

The following are the steps to install and configure Windows PowerShell Web Access:

Step 1: Installing the Windows PowerShell Web Access Windows feature

In this step we will install the Windows PowerShell Web Access Windows feature. For this task, we use the Install-WindowsFeature cmdlet:

#Installing PSWA feature
Install-WindowsFeature WindowsPowerShellWebAccess -IncludeAllSubFeature -IncludeManagementTools

Now we have the PowerShell Web Access feature installed. The next step is to configure it.

Step 2: Configuring the Windows PowerShell Web Access gateway

To configure the PSWA gateway, we use the Install-PswaWebApplication cmdlet, which creates an IIS web application that runs PowerShell Web Access and configures the SSL certificate. If you don't have an SSL certificate, you can use the -UseTestCertificate flag in order to generate and use a self-signed certificate:

#Configure PSWA Gateway
Install-PswaWebApplication -WebSiteName "Default Web Site" -WebApplicationName "PSWA" -UseTestCertificate

Use -UseTestCertificate for testing purposes in your private lab only. Never use it in a production environment. In your production environments, use a certificate issued by either your corporate Certificate Authority (CA) or a trusted certificate publisher.

To verify successful installation and configuration of the gateway, browse to the PSWA URL https://<server_name>/PSWA. The PSWA web application files are located at %windir%\Web\PowerShellWebAccess\wwwroot.

Step 3: Configuring PowerShell Web Access authorization rules

Now we have PSWA up and running. However, no one will be able to sign in and use it until we create the appropriate authorization rule. Because PSWA could be accessed from anywhere at any time - which increases the security risk - PowerShell restricts all access to your network until you create and assign the right access to the right person. The authorization rule is the access control for your PSWA and adds an additional security layer to it. It is similar to the access lists on firewalls and network devices.

To create a new access authorization rule, we use the Add-PswaAuthorizationRule cmdlet along with the -UserName parameter to specify the name of the user who will get the access, the -ComputerName parameter to specify which computer the user will have access to, and the -ConfigurationName parameter to specify the session configuration available to this user:

#Adding PSWA Access Authorization Rule
Add-PswaAuthorizationRule -UserName PSWAAdministrator -ComputerName PSWA -ConfigurationName Microsoft.PowerShell

The PSWA authorization rules files are located at %windir%\Web\PowerShellWebAccess\data\AuthorizationRules.xml.

There are four different access authorization rule scenarios that we can enable on PowerShell Web Access.
These scenarios are:

Enable a single user access to a single computer: for this scenario we use the -UserName parameter to specify the single user, and the -ComputerName parameter to specify the single computer.

Enable a single user access to a group of computers: for this scenario we use the -UserName parameter to specify the single user, and the -ComputerGroupName parameter to specify the name of the Active Directory computer group.

Enable a group of users access to a single computer: for this scenario we use the -UserGroupName parameter to specify the name of the Active Directory users group, and the -ComputerName parameter to specify the individual computer.

Enable a group of users access to a group of computers: for this scenario we use the -UserGroupName parameter to specify the name of the Active Directory users group, and the -ComputerGroupName parameter to specify the name of the Active Directory computer group.

You can use the Get-PswaAuthorizationRule cmdlet to list all the configured access authorization rules, and the Remove-PswaAuthorizationRule cmdlet to remove them.

Sign in to PowerShell Web Access

Now, let's verify the installation and start using PSWA by signing in to it:

1. Open the Internet browser; you can choose whichever browser you like, bearing in mind the browser requirements mentioned earlier.
2. Enter https://<server_name>/PSWA.
3. Enter User Name, Password, Connection Type, and Computer Name.

Summary

In this article, we learned about one of the most powerful features of PowerShell, which is PowerShell remoting, including how to enable, prepare, and configure your environment to use it. Moreover, we demonstrated some examples of how to use different methods to utilize this remote capability. We learned how to run remote commands on remote computers by using a temporary or persistent connection. Finally, we closed the article with PowerShell Web Access, including how it works and how to configure it.

Further resources on this subject:
Installing/upgrading PowerShell [article]
DevOps Tools and Technologies [article]
Bringing DevOps to Network Operations [article]


Automating OCR and Translation with Google Cloud Functions: A Step-by-Step Guide

Agnieszka Koziorowska, Wojciech Marusiak
05 Nov 2024
15 min read
This article is an excerpt from the book "Google Cloud Associate Cloud Engineer Certification and Implementation Guide", by Agnieszka Koziorowska and Wojciech Marusiak. The book serves as a guide for students preparing for ACE certification, offering invaluable practical knowledge and hands-on experience in implementing various Google Cloud Platform services. By actively engaging with the content, you'll gain the confidence and expertise needed to excel in your certification journey.

Introduction

In this article, we will walk you through an example of implementing Google Cloud Functions for optical character recognition (OCR) on Google Cloud Platform. This tutorial will demonstrate how to automate the process of extracting text from an image, translating the text, and storing the results using Cloud Functions, Pub/Sub, and Cloud Storage. By leveraging the Google Cloud Vision and Translation APIs, we can create a workflow that efficiently handles image processing and text translation. The article provides detailed steps to set up and deploy Cloud Functions using Golang, covering everything from creating storage buckets to deploying and running your function to translate text.

Google Cloud Functions example

Now that you've learned what Cloud Functions is, I'd like to show you how to implement a sample Cloud Function. We will guide you through optical character recognition (OCR) on Google Cloud Platform with Cloud Functions. Our use case is as follows:

1. An image with text is uploaded to Cloud Storage.
2. A triggered Cloud Function utilizes the Google Cloud Vision API to extract the text and identify the source language.
3. The text is queued for translation by publishing a message to a Pub/Sub topic.
4. A Cloud Function employs the Translation API to translate the text and stores the result in the translation queue.
5. Another Cloud Function saves the translated text from the translation queue to Cloud Storage.
6. The translated results are available in Cloud Storage as individual text files for each translation.

We need to download the samples first; we will use Golang as the programming language. Source files can be downloaded from https://github.com/GoogleCloudPlatform/golang-samples. Before working with the OCR function sample, we recommend enabling the Cloud Translation API and the Cloud Vision API. If they are not enabled, your function will throw errors, and the process will not be completed.

Let's start with deploying the function:

1. We need to create a Cloud Storage bucket. Create your own bucket with a unique name - please refer to the documentation on bucket naming at the following link: https://cloud.google.com/storage/docs/buckets. We will use the following code:

gsutil mb gs://wojciech_image_ocr_bucket

2. We also need to create a second bucket to store the results:

gsutil mb gs://wojciech_image_ocr_bucket_results

3. We must create a Pub/Sub topic to publish the finished translation results. We can do so with gcloud pubsub topics create YOUR_TOPIC_NAME. We used the following command to create it:

gcloud pubsub topics create wojciech_translate_topic

4. Creating a second Pub/Sub topic to publish translation results is necessary. We can use the following code to do so:

gcloud pubsub topics create wojciech_translate_topic_results

5. Next, we will clone the Google Cloud GitHub repository with the sample code:

git clone https://github.com/GoogleCloudPlatform/golang-samples
6. From the repository, we need to go to the golang-samples/functions/ocr/app/ directory to be able to deploy the desired Cloud Function.

7. We recommend reviewing the included Go files to review the code and understand it in more detail. Please change the values of your storage buckets and Pub/Sub topic names.

8. We will deploy the first function to process images. We will use the following command:

gcloud functions deploy ocr-extract-go --runtime go119 --trigger-bucket wojciech_image_ocr_bucket --entry-point ProcessImage --set-env-vars "^:^GCP_PROJECT=wmarusiak-book-351718:TRANSLATE_TOPIC=wojciech_translate_topic:RESULT_TOPIC=wojciech_translate_topic_results:TO_LANG=es,en,fr,ja"

9. After deploying the first Cloud Function, we must deploy the second one to translate the text. We can use the following code snippet:

gcloud functions deploy ocr-translate-go --runtime go119 --trigger-topic wojciech_translate_topic --entry-point TranslateText --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_TOPIC=wojciech_translate_topic_results"

10. The last part of the complete solution is a third Cloud Function that saves results to Cloud Storage. We will use the following snippet of code to do so:

gcloud functions deploy ocr-save-go --runtime go119 --trigger-topic wojciech_translate_topic_results --entry-point SaveResult --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_BUCKET=wojciech_image_ocr_bucket_results"

11. We are now free to upload any image containing text. It will be processed first, then translated and saved into our Cloud Storage bucket.

12. We uploaded four sample images that we downloaded from the Internet that contain some text. We can see many entries in the ocr-extract-go Cloud Function's logs. Some log entries show us the detected language in the image and the extracted text:

Figure 7.22 – Cloud Function logs from the ocr-extract-go function

13. ocr-translate-go translates the text detected by the previous function:

Figure 7.23 – Cloud Function logs from the ocr-translate-go function

14. Finally, ocr-save-go saves the translated text into the Cloud Storage bucket:

Figure 7.24 – Cloud Function logs from the ocr-save-go function

15. If we go to the Cloud Storage bucket, we'll see the saved translated files:

Figure 7.25 – Translated images saved in the Cloud Storage bucket

16. We can view the content directly from the Cloud Storage bucket by clicking Download next to the file:

Figure 7.26 – Translated text from Polish to English stored in the Cloud Storage bucket

Cloud Functions is a powerful and fast way to code, deploy, and use advanced features. We encourage you to try out and deploy Cloud Functions to understand the process of using them better. At the time of writing, the Google Cloud Free Tier offers a generous number of free resources we can use. Cloud Functions offers the following with its free tier:

2 million invocations per month (this includes both background and HTTP invocations)
400,000 GB-seconds and 200,000 GHz-seconds of compute time
5 GB of network egress per month

Google Cloud has comprehensive tutorials that you can try to deploy. Go to https://cloud.google.com/functions/docs/tutorials to follow one.

Conclusion

In conclusion, Google Cloud Functions offers a powerful and scalable solution for automating tasks like optical character recognition and translation.
Through this example, we have demonstrated how to use Cloud Functions, Pub/Sub, and the Google Cloud Vision and Translation APIs to build an end-to-end OCR and translation pipeline. By following the provided steps and code snippets, you can easily replicate this process for your own use cases. Google Cloud's generous Free Tier resources make it accessible to get started with Cloud Functions. We encourage you to explore more by deploying your own Cloud Functions and leveraging the full potential of Google Cloud Platform for serverless computing.

Author Bio

Agnieszka is an experienced Systems Engineer who has been in the IT industry for 15 years. She is dedicated to supporting enterprise customers in the EMEA region with their transition to the cloud and hybrid cloud infrastructure by designing and architecting solutions that meet both business and technical requirements. Agnieszka is highly skilled in AWS, Google Cloud, and VMware solutions and holds certifications as a specialist in all three platforms. She strongly believes in the importance of knowledge sharing and learning from others to keep up with the ever-changing IT industry.

With over 16 years in the IT industry, Wojciech is a seasoned and innovative IT professional with a proven track record of success. Leveraging extensive work experience in large and complex enterprise environments, Wojciech brings valuable knowledge to help customers and businesses achieve their goals with precision, professionalism, and cost-effectiveness. Holding leading certifications from AWS, Alibaba Cloud, Google Cloud, VMware, and Microsoft, Wojciech is dedicated to continuous learning and sharing knowledge, staying abreast of the latest industry trends and developments.
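The article's Cloud Functions are written in Go (in the golang-samples repository). Purely as an illustrative, hedged sketch of the extract-and-translate step of the same workflow, the snippet below calls the Vision and Translation APIs from Python; the package names (google-cloud-vision, google-cloud-translate), the bucket object path, and the target language are assumptions for this example, and it presumes application-default credentials with both APIs enabled.

# Hedged sketch: detect text in an image stored in Cloud Storage, then translate it.
# Assumes: pip install google-cloud-vision google-cloud-translate, and that the
# Cloud Vision and Cloud Translation APIs are enabled for your project.
from google.cloud import vision
from google.cloud import translate_v2 as translate

def extract_and_translate(gcs_uri: str, target_language: str = "en") -> str:
    vision_client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))

    # OCR step - the first annotation contains the full block of detected text.
    response = vision_client.text_detection(image=image)
    annotations = response.text_annotations
    if not annotations:
        return ""
    detected_text = annotations[0].description

    # Translation step - roughly what the ocr-translate-go function does.
    translate_client = translate.Client()
    result = translate_client.translate(detected_text, target_language=target_language)
    return result["translatedText"]

if __name__ == "__main__":
    # Example object path reusing the article's bucket name; adjust to your own.
    print(extract_and_translate("gs://wojciech_image_ocr_bucket/sample.png", "es"))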

Managing AI Security Risks with Zero Trust: A Strategic Guide

Mark Simos, Nikhil Kumar
29 Nov 2024
15 min read
This article is an excerpt from the book, "Zero Trust Overview and Playbook Introduction", by Mark Simos, Nikhil Kumar. Get started on Zero Trust with this step-by-step playbook and learn everything you need to know for a successful Zero Trust journey with tailored guidance for every role, covering strategy, operations, architecture, implementation, and measuring success. This book will become an indispensable reference for everyone in your organization.IntroductionIn today’s rapidly evolving technological landscape, artificial intelligence (AI) is both a powerful tool and a significant security risk. Traditional security models focused on static perimeters are no longer sufficient to address AI-driven threats. A Zero Trust approach offers the agility and comprehensive safeguards needed to manage the unique and dynamic security risks associated with AI. This article explores how Zero Trust principles can be applied to mitigate AI risks and outlines the key priorities for effectively integrating AI into organizational security strategies.How can Zero Trust help manage AI security risk?A Zero Trust approach is required to effectively manage security risks related to AI. Classic network perimeter-centric approaches are built on more than 20-year-old assumptions of a static technology environment and are not agile enough to keep up with the rapidly evolving security requirements of AI.The following key elements of Zero Trust security enable you to manage AI risk:Data centricity: AI has dramatically elevated the importance of data security and AI requires a data-centric approach that can secure data throughout its life cycle in any location.Zero Trust provides this data-centric approach and the playbooks in this series guide the roles in your organizations through this implementation.Coordinated management of continuous dynamic risk: Like modern cybersecurity attacks, AI continuously disrupts core assumptions of business, technical, and security processes. This requires coordinated management of a complex and continuously changing security risk.Zero Trust solves this kind of problem using agile security strategies, policies, and architecture to manage the continuous changes to risks, tooling, processes, skills, and more. The playbooks in this series will help you make AI risk mitigation real by providing specific guidance on AI security risks for all impacted roles in the organization. Let’s take a look at which specific elements of Zero Trust are most important to managing AI risk.Zero Trust – the top four priorities for managing AI riskManaging AI risk requires prioritizing a few key areas of Zero Trust to address specific unique aspects of AI. The role of specific guidance in each playbook provides more detail on how each role will incorporate AI considerations into their daily work.These priorities follow the simple themes of learn it, use it, protect against it, and work as a team. This is similar to a rational approach for any major disruptive change to any other type of competition or conflict (a military organization learning about a new weapon, professional sports players learning about a new type of equipment or rule change, and so on).The top four priorities for managing AI risk are as follows:1. Learn it – educate everyone and set realistic expectations: The AI capabilities available today are very powerful, affect everyone, and are very different than what people expect them to be. 
It's critical to educate every role in the organization, from board members and CEOs to individual contributors, as they all must understand what AI is, what AI really can and cannot do, as well as the AI usage policy and guidelines. Without this, people's expectations may be wildly inaccurate and lead to highly impactful mistakes that could have easily been avoided.

Education and expectation management is particularly urgent for AI because of these factors:

Active use in attacks: Attackers are already using AI to impersonate voices, email writing styles, and more.

Active use in business processes: AI is freely available for anyone to use. Job seekers are already submitting AI-generated resumes for your jobs that use your posted job descriptions, people are using public AI services to perform job tasks (and potentially disclosing sensitive information), and much more.

Realism: The results are very realistic and convincing, especially if you don't know how good AI is at creating fake images, videos, and text.

Confusion: Many people don't have a good frame of reference for it because of the way AI has been portrayed in popular culture (which is very different from the current reality of AI).

2. Use it – integrate AI into security: Immediately begin evaluating and integrating AI into your security tooling and processes to take advantage of their increased effectiveness and efficiency. This will allow you to quickly take advantage of this powerful technology to better manage security risk. AI will impact nearly every part of security, including the following:

Security risk discovery, assessment, and management processes
Threat detection and incident response processes
Architecture and engineering security defenses
Integrating security into the design and operation of systems
…and many more

3. Protect against it – update the security strategy, policy, and controls: Organizations must urgently update their strategy, policy, architecture, controls, and processes to account for the use of AI technology (by business units, technology teams, security teams, attackers, and more). This helps enable the organization to take full advantage of AI technology while minimizing security risk. The key focus areas should include the following:

Plan for attacker use of AI: One of the first impacts most organizations will experience is rapid adoption by attackers to trick your people. Attackers are using AI to get an advantage on target organizations like yours, so you must update your security strategy, threat models, architectures, user education, and more to defend against attackers using AI or targeting you for your data. This should change the organization's expectations and assumptions for the following aspects:

Attacker techniques: Most attackers will experiment with and integrate AI capabilities into their attacks, such as imitating the voices of your colleagues on phone calls, imitating writing styles in phishing emails, creating convincing fake social media pictures and profiles, creating convincing fake company logos and profiles, and more.

Attacker objectives: Attackers will target your data, AI systems, and other related assets because of their high value (directly to the attacker and/or to sell it to others).
Your human-generated data is a prized high-value asset for training and grounding AI models, and your innovative use of AI may be potentially valuable intellectual property, and more.

Secure the organization's AI usage: The organization must update its security strategy, plans, architecture, processes, and tooling to do the following:

Secure usage of external AI: Establish clear policies and supporting processes and technology for using external AI systems safely.
Secure the organization's AI and related systems: Protect the organization's AI and related systems against attackers.

In addition to protecting against traditional security attacks, the organization will also need to defend against AI-specific attack techniques that can extract source data, make the model generate unsafe or unintended results, steal the design of the AI model itself, and more. The playbooks include more details for each role to help them manage their part of this risk.

Take a holistic approach: It's important to secure the full life cycle and dependencies of the AI model, including the model itself, the data sources used by the model, the application that uses the model, the infrastructure it's hosted on, third-party operators such as AI platforms, and other integrated components. This should also take a holistic view of the security life cycle to consider identification, protection, detection, response, recovery, and governance.

Update acquisition and approval processes: This must be done quickly to ensure new AI technology (and other technology) meets the security, privacy, and ethical practices of the organization. This helps avoid extremely damaging avoidable problems such as transferring ownership of the organization's data to vendors and other parties. You don't want other organizations to grow and capture market share from you by using your data. You also want to avoid expensive privacy incidents and security incidents from attackers using your data against you. This should include supply chain risk considerations to mitigate both direct suppliers and Nth party risk (components of direct suppliers that have been sourced from other organizations). Finding and fixing problems later in the process is much more difficult and expensive than correcting them before or during acquisition, so it is critical to introduce these risk mitigations early.

4. Work as a team – establish a coordinated AI approach: Set up an internal collaboration community or a formal Center of Excellence (CoE) team to ensure insights, learning, and best practices are being shared rapidly across teams. AI is a fast-moving space and will drive rapid continuous changes across business, technology, and security teams. You must have mechanisms in place to coordinate and collaborate across these different teams in your organization.

Each playbook describes the specific AI impacts and responsibilities for each affected role.

AI shared responsibility model: Most AI technology will be a partnership with AI providers, so managing AI and AI security risk will follow a shared responsibility model between you and your AI providers. Some elements of AI security will be handled by the AI provider and some will be the responsibility of your organization (their customer). This is very similar to how cloud responsibility is managed today (and many AI providers are also cloud providers).
This is also similar to a business that outsources some or all of its manufacturing, logistics, sales (for example, channel sales), or other business functions. Now, let's take a look at how AI impacts Zero Trust.

How will AI impact Zero Trust?

AI will accelerate many aspects of Zero Trust because it dramatically improves the security tooling and people's ability to use it. AI promises to reduce the burden and effort for important but tedious security tasks such as the following:

Helping security analysts quickly query many data sources (without becoming an expert in query languages or tool interfaces)
Helping write incident response reports
Identifying common follow-up actions to prevent repeat incidents

Simplifying the interface between people and the complex systems they need to use for security will enable people with a broad range of skills to be more productive. Highly skilled people will be able to do more of what they are best at without repetitive and distracting tasks. People earlier in their careers will be able to quickly become more productive in a role and perform tasks at an expert level more quickly, with AI helping them learn by answering questions and providing explanations.

AI will NOT replace the need for security experts, nor the need to modernize security. AI will simplify many security processes and will allow fewer security people to do more, but it won't replace the need for a security mindset or security expertise.

Even with AI technology, people and processes will still be required for the following aspects:

Ask the right security questions of AI systems
Interpret the results and evaluate their accuracy
Take action on the AI results and coordinate across teams
Perform analysis and tasks that AI systems currently can't cover:
  Identify, manage, and measure security risk for the organization
  Build, execute, and monitor a strategy and policy
  Build and monitor relationships and processes between teams
  Integrate business, technical, and security capabilities
  Evaluate compliance requirements and ensure the organization is meeting them in good faith
  Evaluate the security of business and technical processes
  Evaluate the security posture and prioritize mitigation investments
  Evaluate the effectiveness of security processes, tools, and systems
  Plan and implement security for technical systems
  Plan and implement security for applications and products
  Respond to and recover from attacks

In summary, AI will rapidly transform the attacks you face as well as your organization's ability to manage security risk effectively. AI will require a Zero Trust approach and it will also help your teams do their jobs faster and more efficiently. The guidance in the Zero Trust Playbook Series will accelerate your ability to manage AI risk by guiding everyone through their part. It will help you rapidly align security to business risks and priorities and enable the security agility you need to effectively manage the changes from AI. Some of the questions that naturally come up are where to start and what to do first.

Conclusion

As AI reshapes the cybersecurity landscape, adopting a Zero Trust framework is critical to effectively manage the associated risks. From securing data lifecycles to adapting to dynamic attacker strategies, Zero Trust principles provide the foundation for agile and robust AI risk management. By focusing on education, integration, protection, and collaboration, organizations can harness the benefits of AI while mitigating its risks.
The Zero Trust Playbook Series offers practical guidance for all roles, ensuring security remains aligned with business priorities and prepared for the challenges AI introduces. Now is the time to embrace this transformative approach and future-proof your security strategies.Author BioMark Simos helps individuals and organizations meet cybersecurity, cloud, and digital transformation goals. Mark is the Lead Cybersecurity Architect for Microsoft where he leads the development of cybersecurity reference architectures, strategies, prescriptive planning roadmaps, best practices, and other security and Zero Trust guidance. Mark also co-chairs the Zero Trust working group at The Open Group and contributes to open standards and other publications like the Zero Trust Commandments. Mark has presented at numerous conferences including Black Hat, RSA Conference, Gartner Security & Risk Management, Microsoft Ignite and BlueHat, and Financial Executives International.Nikhil Kumar is Founder at ApTSi with prior leadership roles at Price Waterhouse and other firms. He has led setup and implementation of Digital Transformation and enterprise security initiatives (such as PCI Compliance) and built out Security Architectures. An Engineer and Computer Scientist with a passion for biology, Nikhil is an expert in Security, Information, and Computer Architecture. Known for communicating to the board and implementing with engineers and architects, he is an MIT mentor, innovator and pioneer. Nikhil has authored numerous books, standards, and articles, and presented at conferences globally. He co-chairs The Zero Trust Working Group, a global standards initiative led by the Open Group.

article-image-how-configure-squid-proxy-server
Packt
25 Apr 2011
8 min read
Save for later

How to Configure Squid Proxy Server

Packt
25 Apr 2011
8 min read
Squid Proxy Server 3.1: Beginner's Guide
Improve the performance of your network using the caching and access control capabilities of Squid
Read more about this book

In this article by Kulbir Saini, author of Squid Proxy Server 3 Beginners Guide, we are going to learn to configure Squid according to the requirements of a given network. We will learn about the general syntax used for a Squid configuration file. Specifically, we will cover the following:

Quick exposure to Squid
Syntax of the configuration file
HTTP port, the most important configuration directive
Access Control Lists (ACLs)
Controlling access to various components of Squid

(For more resources on Proxy Servers, see here.)

Quick start

Let's have a look at the minimal configuration that you will need to get started. Get ready with the configuration file located at /opt/squid/etc/squid.conf, as we are going to make the changes and additions necessary to quickly set up a minimal proxy server.

cache_dir ufs /opt/squid/var/cache/ 500 16 256
acl my_machine src 192.0.2.21 # Replace with your IP address
http_access allow my_machine

We should add the previous lines at the top of our current configuration file (ensuring that we change the IP address accordingly). Now, we need to create the cache directories. We can do that by using the following command:

$ /opt/squid/sbin/squid -z

We are now ready to run our proxy server, and this can be done by running the following command:

$ /opt/squid/sbin/squid

Squid will start listening on port 3128 (default) on all network interfaces on our machine. Now we can configure our browser to use Squid as an HTTP proxy server with the host as the IP address of our machine and port 3128. Once the browser is configured, try browsing to http://www.example.com/. That's it! We have configured Squid as an HTTP proxy server! Now try to browse to http://www.example.com:897/ and observe the message you receive. The message shown is an access denied message sent to you by Squid. Now, let's move on to understanding the configuration file in detail.

Syntax of the configuration file

Squid's configuration file can normally be found at /etc/squid/squid.conf, /usr/local/squid/etc/squid.conf, or ${prefix}/etc/squid.conf where ${prefix} is the value passed to the --prefix option, which is passed to the configure command before compiling Squid. In the newer versions of Squid, a documented version of squid.conf, known as squid.conf.documented, can be found alongside squid.conf. In this article, we'll cover some of the important directives available in the configuration file. For a detailed description of all the directives used in the configuration file, please check http://www.squid-cache.org/Doc/config/.

The syntax for Squid's documented configuration file is similar to many other programs for Linux/Unix. Generally, there are a few lines of comments containing useful related documentation before every directive used in the configuration file. This makes it easier to understand and configure directives, even for people who are not familiar with configuring applications using configuration files. Normally, we just need to read the comments and use the appropriate options available for a particular directive. The lines beginning with the character # are treated as comments and are completely ignored by Squid while parsing the configuration file. Additionally, any blank lines are also ignored.

# Test comment. This and the above blank line will be ignored by Squid.
Let's see a snippet from the documented configuration file (squid.conf.documented):

# TAG: cache_effective_user
# If you start Squid as root, it will change its effective/real
# UID/GID to the user specified below. The default is to change
# to UID of nobody.
# see also; cache_effective_group
#Default:
# cache_effective_user nobody

In the previous snippet, the first line mentions the name of the directive, that is, in this case, cache_effective_user. The lines following the tag line provide brief information about the usage of a directive. The last line shows the default value for the directive, if none is specified.

Types of directives

Now, let's have a brief look at the different types of directives and the values that can be specified.

Single valued directives

These are directives which take only one value. These directives should not be used multiple times in the configuration file because the last occurrence of the directive will override all the previous declarations. For example, logfile_rotate should be specified only once.

logfile_rotate 10
# Few lines containing other configuration directives
logfile_rotate 5

In this case, five logfile rotations will be made when we trigger Squid to rotate logfiles.

Boolean-valued or toggle directives

These are also single valued directives, but these directives are generally used to toggle features on or off.

query_icmp on
log_icp_queries off
url_rewrite_bypass off

We use these directives when we need to change the default behavior.

Multi-valued directives

Directives of this type generally take one or more than one value. We can either specify all the values on a single line after the directive or we can write them on multiple lines with a directive repeated every time. All the values for a directive are aggregated from different lines:

hostname_aliases proxy.example.com squid.example.com

Optionally, we can pass them on separate lines as follows:

hostname_aliases proxy.example.com
hostname_aliases squid.example.com

Both the previous code snippets will instruct Squid to use proxy.example.com and squid.example.com as aliases for the hostname of our proxy server.

Directives with time as a value

There are a few directives which take values with time as the unit. Squid understands the words seconds, minutes, hours, and so on, and these can be suffixed to numerical values to specify actual values. For example:

request_timeout 3 hours
persistent_request_timeout 2 minutes

Directives with file or memory size as values

The values passed to these directives are generally suffixed with file or memory size units like bytes, KB, MB, or GB. For example:

reply_body_max_size 10 MB
cache_mem 512 MB
maximum_object_size_in_memory 8192 KB

As we are familiar with the configuration file syntax now, let's open the squid.conf file and learn about the frequently used directives.

Have a go hero – categorize the directives

Open the documented Squid configuration file and find out at least three directives of each type that we discussed before. Don't use the directives already used in the examples.

HTTP port

This directive is used to specify the port where Squid will listen for client connections. The default behavior is to listen on port 3128 on all the available interfaces on a machine.

Time for action – setting the HTTP port

Now, we'll see the various ways to set the HTTP port in the squid.conf file: In its simplest form, we just specify the port on which we want Squid to listen:

http_port 8080

We can also specify the IP address and port combination on which we want Squid to listen.
We normally use this approach when we have multiple interfaces on our machine and we want Squid to listen only on the interface connected to local area network (LAN): http_port 192.0.2.25:3128 This will instruct Squid to listen on port 3128 on the interface with the IP address as 192.0.2.25. Another form in which we can specify http_port is by using hostname and port combination: http_port myproxy.example.com:8080 The hostname will be translated to an IP address by Squid and then Squid will listen on port 8080 on that particular IP address. Another aspect of this directive is that, it can take multiple values on separate lines. Let's see what the following lines will do: http_port 192.0.2.25:8080http_port lan1.example.com:3128http_port lan2.example.com:8081 These lines will trigger Squid to listen on three different IP addresses and port combinations. This is generally helpful when we have clients in different LANs, which are configured to use different ports for the proxy server. In the newer versions of Squid, we may also specify the mode of operation such as intercept, tproxy, accel, and so on. Intercept mode will support the interception of requests without needing to configure the client machines. http_port 3128 intercept tproxy mode is used to enable Linux Transparent Proxy support for spoofing outgoing connections using the client's IP address. http_port 8080 tproxy We should note that enabling intercept or tproxy mode disables any configured authentication mechanism. Also, IPv6 is supported for tproxy but requires very recent kernel versions. IPv6 is not supported in the intercept mode. Accelerator mode is enabled using the mode accel. It's a good idea to listen on port 80, if we are configuring Squid in accelerator mode. This mode can't be used as it is. We must specify at least one website we want to accelerate. http_port 80 accel defaultsite=website.example.com We should set the HTTP port carefully as the standard ports like 3128 or 8080 can pose a security risk if we don't secure the port properly. If we don't want to spend time on securing the port, we can use any arbitrary port number above 10000. What just happened? In this section, we learned about the usage of one of the most important directives, namely, http_port. We have learned about the various ways in which we can specify HTTP port, depending on the requirement. We can force Squid to listen on multiple interfaces and on different ports, on different interfaces.  
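To tie these directives together, the following is a minimal sketch of how an http_port line, an ACL, and the access rules from this article might be combined in squid.conf. The port number, subnet, and ACL name are illustrative assumptions rather than values from the book, so adapt them to your own network:

# Listen only on the LAN-facing IP address and a custom port (example values)
http_port 192.0.2.25:8080

# Define which clients are allowed to use the proxy (replace with your subnet)
acl lan_clients src 192.0.2.0/24

# Allow the LAN clients and deny everything else
http_access allow lan_clients
http_access deny all

# Cache directory from the quick start section
cache_dir ufs /opt/squid/var/cache/ 500 16 256

After saving the changes, the configuration can be checked for syntax errors and then reloaded without restarting Squid by using the following commands:

$ /opt/squid/sbin/squid -k parse
$ /opt/squid/sbin/squid -k reconfigure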

article-image-using-ipv6-packet-tracer
Packt
13 Jan 2014
6 min read
Save for later

Using IPv6 on Packet Tracer

Packt
13 Jan 2014
6 min read
This article is written by Jesin A the author of Packet Tracer Network Simulator. Cisco Packet Tracer is a powerful network simulation program and provides simulation, visualization, authoring, assessment, and shows collaboration capabilities of a network. This article explains the IPv6 addresses used in Packet Tracer. IPv4 has 4.3 billion addresses, which may seem mindboggling. However, it took only two decades for it to reach its depletion. IPv6 has come to the rescue in the form of 128-bit addresses. Packet Tracer supports a wide array of IPv6 features. We'll start by learning how to assign IP addresses to different devices and how to configure routing between them. Finally, we'll create a setup that enables IPv6 communication over IPv4 devices. Assigning IPv6 addresses Starting from Packet Trace Version 6, the IP Configuration utility under the Desktop tab of end devices has an option to enter an IPv6 address. Let's begin with a simple topology consisting of two PCs and a router connected to a switch, as shown in the following screenshot: There are three ways of assigning IPv6 addresses to a device and we'll see each one of them. Autoconfiguration Autoconfiguration requires the least amount of configuration but makes it difficult to remember the IPv6 addresses. This method uses the MAC address of the device to create an IPv6 address with the FE80:: prefix. Carry out the following steps to assign IPv6 addresses using Autoconfiguration: Begin by configuring the router. Enter the interface configuration mode and enable IPv6 on the interface. R0(config)#ipv6 unicast-routing R0(config)#interface FastEthernet0/0 R0(config-if)#ipv6 enable Next, we will configure a link local address and a global unicast address on this interface. We'll use eui-64 to reduce the configuration. R0(config-if)#ipv6 address autoconfig R0(config-if)#ipv6 add 2000::/64 eui-64 R0(config-if)#no shutdown Verify that the interface is up and has two IPv6 addresses. R0>sh ipv6 interface brief FastEthernet0/0 [up/up] FE80::2D0:58FF:FE65:E701 2000::2D0:58FF:FE65:E701 These IPv6 addresses may vary when you try them out, as they are based on the MAC address. Enable routing so that this router can be identified as a default gateway. R0(config)#ipv6 unicast-routing The configuration of the router is now done, let's move on to the PCs. Go to the Desktop tab of the PC, open IP Configuration , and under the IPv6 Configuration section, choose Auto Config . The gateway and the PC's IP address will be assigned automatically, as shown in the following screenshot: Use the simple PDU tool to test the connectivity; you'll see ICMPv6 packets moving between the nodes. To view the IPv6 address from the command line of PCs, use the ipv6config command. Static IPv6 IPv6 addresses can also be assigned statically on all devices. We'll use the same topology for this section too. We'll carry out the following steps to configure IPv6 addresses statically: Begin by configuring a static IPv6 address on the router. R0(config)#interface fastethernet0/0 R0(config-if)#ipv6 enable R0(config-if)#ipv6 address 2000::1/64 R0(config-if)#no shutdown Go to the Desktop tab of PC, open the IP Configuration utility, and enter an IPv6 address with the same prefix. Now use the simple PDU tool to test the connectivity. Once both the methods work fine, you can have a look at the IPv6 neighbors table. This is similar to the ARP table of IPv4. 
R0#sh ipv6 neighbor IPv6 Address Age Link-layer Addr State Interface 2000::2 0 00E0.A39E.05C4 REACH Fa0/0 2000::3 0 0001.43B9.0268 REACH Fa0/0 Now that we have configured IPv6 addresses on a single network, let's configure them on more networks and enable routing between them. IPv6 static and dynamic routing Similar to IPv4, IPv6 too supports both static and dynamic routing. Configuration commands for its static routing are similar to IPv4. Static routing Modifying the same topology that we used previously, let's add a router, switch, and two PCs to create a separate network, as shown in the following screenshot: The first network will use addresses starting from 2000:1::/64 and the second network will use addresses starting from 2000:2::/64. The link between both the routers will have IP addresses 2001::10/64 and 2001::20/64. Here is a table describing the topology: Device Interface IP address R1 FastEthernet0/0 2000:1::1/64   FastEthernet0/1 2001::10/64 PC0 FastEthernet 2000:1::2/64 PC1 FastEthernet 2000:1::3/64 R2 FastEthernet0/0 2000:2::1/64   FastEthernet0/1 2001::20/64 PC2 FastEthernet 2000:2::2/64 PC3 FastEthernet 2000:2::3/64 After the necessary IP addresses and gateways have been assigned, open the CLI tab for the R1 router, and start configuring routing by following the given commands: R1(config)#ipv6 unicast-routing R1(config)#ipv6 route 2000:2::/64 2001::20 Next, open the CLI tab for R2 and configure routing on it. R2(config)#ipv6 unicast-routing R2(config)#ipv6 route 2000:1::/64 2001::10 Now use the simple PDU tool to test the connectivity. You may also use the tracert command on a PC to see the path a packet takes. PC>tracert 2000:2::3 Tracing route to 2000:2::3 over a maximum of 30 hops: 1 63 ms 63 ms 47 ms 2000:1::1 2 94 ms 78 ms 94 ms 2001::20 3 156 ms 109 ms 129 ms 2000:2::3 Trace complete. Dynamic routing Packet Tracer offers the same dynamic routing protocols for IPv6: RIPv6, EIGRP, and OSPF. We'll be configuring RIPv6 in this section. Note that RIPv6 does not represent RIP Version 6; it is RIP for IPv6 addresses. For this exercise, we'll use the topology shown in the following screenshot: The additional IP assignment details alone are shown in the following table: Device Interface IPv6 Address R2 FastEthernet1/0 2001:1::10/64 R3 FastEthernet0/0 2000:3::1/64   FastEthernet0/1 2001:1::20/64 PC2 FastEthernet 2000:3::2/64 We'll see how to configure RIP on one router and you can do the same on the others. R1(config)#interface FastEthernet0/0 R1(config-if)#ipv6 address 2000:1::1/64 R1(config-if)#ipv6 rip Net1 enable R1(config-if)#ipv6 enable R1(config-if)#interface FastEthernet0/1 R1(config-if)#ipv6 address 2001::10/64 R1(config-if)#ipv6 rip Net1 enable R1(config-if)#ipv6 enable Note that the ipv6 rip command is used to enable RIP on a particular interface. Entering ipv6 rip Net1 enable on the first interface begins the RIPv6 process. The Net1 string can be any name that can be used to name the RIP process. Once configured, use the usual diagnostic tools (ping to simple PDU) to check the connectivity. 
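The article shows the RIP configuration for R1 only. As a sketch based on the addressing table above and mirroring the R1 example, the matching configuration on R3 might look like the following (R2 would be configured the same way on its own interfaces, and Net1 is simply the same process tag reused from the R1 example):

R3(config)#ipv6 unicast-routing
R3(config)#interface FastEthernet0/0
R3(config-if)#ipv6 address 2000:3::1/64
R3(config-if)#ipv6 rip Net1 enable
R3(config-if)#ipv6 enable
R3(config-if)#interface FastEthernet0/1
R3(config-if)#ipv6 address 2001:1::20/64
R3(config-if)#ipv6 rip Net1 enable
R3(config-if)#ipv6 enable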
To view the RIP database, use the following command: R1#sh ipv6 rip database RIP process "Net1" local RIB 2000:2::/64, metric 2, installed FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec 2000:3::/64, metric 3, installed FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec 2001::/64, metric 2 FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec 2001:1::/64, metric 2, installed FastEthernet0/1/FE80::201:97FF:FE87:E5A9, expires in 173 sec RIP process "LINK" local RIB Trace the route of the packet to see the path it takes. PC>tracert 2000:3::2 Tracing route to 2000:3::2 over a maximum of 30 hops: 1 31 ms 32 ms 31 ms 2000:1::1 2 50 ms 50 ms 63 ms 2001::20 3 94 ms 94 ms 94 ms 2001:1::20 4 125 ms 109 ms 125 ms 2000:3::2 Trace complete. Summary In this article, we learned how to use IPv6 with Packet Tracer. We saw the limitation of the IPv4 addresses. We also learned how to assign IPv6 addresses and how to configure IPv6 static and dynamic routing. Resources for Article : How to edit the attributes in QGIS Troubleshooting OpenStack Compute problems Creating Identity and Resource Pools in Cisco Unified Computing System

article-image-wireshark-analyze-malicious-emails-in-pop-imap-smtp
Vijin Boricha
29 Jul 2018
10 min read
Save for later

Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]

Vijin Boricha
29 Jul 2018
10 min read
One of the contributing factors in the evolution of digital marketing and business is email. Email allows users to exchange real-time messages and other digital information such as files and images over the internet in an efficient manner. Each user is required to have a human-readable email address in the form of username@domainname.com. There are various email providers available on the internet, and any user can register to get a free email address. There are different email application-layer protocols available for sending and receiving mails, and the combination of these protocols helps with end-to-end email exchange between users in the same or different mail domains. In this article, we will look at the normal operation of email protocols and how to use Wireshark for basic analysis and troubleshooting. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition written by Nagendra Kumar Nainar, Yogesh Ramdoss, Yoram Orzach. The three most commonly used application layer protocols are POP3, IMAP, and SMTP: POP3: Post Office Protocol 3 (POP3) is an application layer protocol used by email systems to retrieve mail from email servers. The email client uses POP3 commands such as LOGIN, LIST, RETR, DELE, QUIT to access and manipulate (retrieve or delete) the email from the server. POP3 uses TCP port 110 and wipes the mail from the server once it is downloaded to the local client. IMAP: Internet Mail Access Protocol (IMAP) is another application layer protocol used to retrieve mail from the email server. Unlike POP3, IMAP allows the user to read and access the mail concurrently from more than one client device. With current trends, it is very common to see users with more than one device to access emails (laptop, smartphone, and so on), and the use of IMAP allows the user to access mail any time, from any device. The current version of IMAP is 4 and it uses TCP port 143. SMTP: Simple Mail Transfer Protocol (SMTP) is an application layer protocol that is used to send email from the client to the mail server. When the sender and receiver are in different email domains, SMTP helps to exchange the mail between servers in different domains. It uses TCP port 25: As shown in the preceding diagram, SMTP is the email client used to send the mail to the mail server, and POP3 or IMAP is used to retrieve the email from the server. The email server uses SMTP to exchange the mail between different domains. In order to maintain the privacy of end users, most email servers use different encryption mechanisms at the transport layer. The transport layer port number will differ from the traditional email protocols if they are used over secured transport layer (TLS). For example, POP3 over TLS uses TCP port 995, IMAP4 over TLS uses TCP port 993, and SMTP over TLS uses port 465. Normal operation of mail protocols As we saw above, the common mail protocols for mail client to server and server to server communication are POP3, SMTP, and IMAP4. Another common method for accessing emails is web access to mail, where you have common mail servers such as Gmail, Yahoo!, and Hotmail. Examples include Outlook Web Access (OWA) and RPC over HTTPS for the Outlook web client from Microsoft. In this recipe, we will talk about the most common client-server and server-server protocols, POP3 and SMTP, and the normal operation of each protocol. Getting ready Port mirroring to capture the packets can be done either on the email client side or on the server side. How to do it... 
POP3 is usually used for client to server communications, while SMTP is usually used for server to server communications. POP3 communications POP3 is usually used for mail client to mail server communications. The normal operation of POP3 is as follows: Open the email client and enter the username and password for login access. Use POP as a display filter to list all the POP packets. It should be noted that this display filter will only list packets that use TCP port 110. If TLS is used, the filter will not list the POP packets. We may need to use tcp.port == 995 to list the POP3 packets over TLS. Check the authentication has been passed correctly. In the following screenshot, you can see a session opened with a username that starts with doronn@ (all IDs were deleted) and a password that starts with u6F. To see the TCP stream shown in the following screenshot, right-click on one of the packets in the stream and choose Follow TCP Stream from the drop-down menu: Any error messages in the authentication stage will prevent communications from being established. You can see an example of this in the following screenshot, where user authentication failed. In this case, we see that when the client gets a Logon failure, it closes the TCP connection: Use relevant display filters to list the specific packet. For example, pop.request.command == "USER" will list the POP request packet with the username and pop.request.command == "PASS" will list the POP packet carrying the password. A sample snapshot is as follows: During the mail transfer, be aware that mail clients can easily fill a narrow-band communications line. You can check this by simply configuring the I/O graphs with a filter on POP. Always check for common TCP indications: retransmissions, zero-window, window-full, and others. They can indicate a busy communication line, slow server, and other problems coming from the communication lines or end nodes and servers. These problems will mostly cause slow connectivity. When the POP3 protocol uses TLS for encryption, the payload details are not visible. We explain how the SSL captures can be decrypted in the There's more... section. IMAP communications IMAP is similar to POP3 in that it is used to retrieve the mail from the server by the client. The normal behavior of IMAP communication is as follows: Open the email client and enter the username and password for the relevant account. Compose a new message and send it from any email account. Retrieve the email on the client that is using IMAP. Different clients may have different ways of retrieving the email. Use the relevant button to trigger it. Check you received the email on your local client. SMTP communications SMTP is commonly used for the following purposes: Server to server communications, in which SMTP is the mail protocol that runs between the servers In some clients, POP3 or IMAP4 are configured for incoming messages (messages from the server to the client), while SMTP is configured for outgoing messages (messages from the client to the server) The normal behavior of SMTP communication is as follows: The local email client resolves the IP address of the configured SMTP server address. This triggers a TCP connection to port number 25 if SSL/TLS is not enabled. If SSL/TLS is enabled, a TCP connection is established over port 465. It exchanges SMTP messages to authenticate with the server. The client sends AUTH LOGIN to trigger the login authentication. Upon successful login, the client will be able to send mails. 
It sends SMTP message such as "MAIL FROM:<>", "RCPT TO:<>" carrying sender and receiver email addresses. Upon successful queuing, we get an OK response from the SMTP server. The following is a sample SMTP message flow between client and server: How it works... In this section, let's look into the normal operation of different email protocols with the use of Wireshark. Mail clients will mostly use POP3 for communication with the server. In some cases, they will use SMTP as well. IMAP4 is used when server manipulation is required, for example, when you need to see messages that exist on a remote server without downloading them to the client. Server to server communication is usually implemented by SMTP. The difference between IMAP and POP is that in IMAP, the mail is always stored on the server. If you delete it, it will be unavailable from any other machine. In POP, deleting a downloaded email may or may not delete that email on the server. In general, SMTP status codes are divided into three categories, which are structured in a way that helps you understand what exactly went wrong. The methods and details of SMTP status codes are discussed in the following section. POP3 POP3 is an application layer protocol used by mail clients to retrieve email messages from the server. A typical POP3 session will look like the following screenshot: It has the following steps: The client opens a TCP connection to the server. The server sends an OK message to the client (OK Messaging Multiplexor). The user sends the username and password. The protocol operations begin. NOOP (no operation) is a message sent to keep the connection open, STAT (status) is sent from the client to the server to query the message status. The server answers with the number of messages and their total size (in packet 1042, OK 0 0 means no messages and it has a total size of zero) When there are no mail messages on the server, the client send a QUIT message (1048), the server confirms it (packet 1136), and the TCP connection is closed (packets 1137, 1138, and 1227). In an encrypted connection, the process will look nearly the same (see the following screenshot). After the establishment of a connection (1), there are several POP messages (2), TLS connection establishment (3), and then the encrypted application data: IMAP The normal operation of IMAP is as follows: The email client resolves the IP address of the IMAP server: As shown in the preceding screenshot, the client establishes a TCP connection to port 143 when SSL/TSL is disabled. When SSL is enabled, the TCP session will be established over port 993. Once the session is established, the client sends an IMAP capability message requesting the server sends the capabilities supported by the server. This is followed by authentication for access to the server. When the authentication is successful, the server replies with response code 3 stating the login was a success: The client now sends the IMAP FETCH command to fetch any mails from the server. When the client is closed, it sends a logout message and clears the TCP session. SMTP The normal operation of SMTP is as follows: The email client resolves the IP address of the SMTP server: The client opens a TCP connection to the SMTP server on port 25 when SSL/TSL is not enabled. If SSL is enabled, the client will open the session on port 465: Upon successful TCP session establishment, the client will send an AUTH LOGIN message to prompt with the account username/password. 
The username and password will be sent to the SMTP client for account verification. SMTP will send a response code of 235 if authentication is successful: The client now sends the sender's email address to the SMTP server. The SMTP server responds with a response code of 250 if the sender's address is valid. Upon receiving an OK response from the server, the client will send the receiver's address. SMTP server will respond with a response code of 250 if the receiver's address is valid. The client will now push the actual email message. SMTP will respond with a response code of 250 and the response parameter OK: queued. The successfully queued message ensures that the mail is successfully sent and queued for delivery to the receiver address. We have learned how to analyse issues in POP, IMAP, and SMTP  and malicious emails. Get to know more about  DNS Protocol Analysis and FTP, HTTP/1, AND HTTP/2 from our book Network Analysis using Wireshark 2 Cookbook - Second Edition. What’s new in Wireshark 2.6? Analyzing enterprise application behavior with Wireshark 2 Capturing Wireshark Packets
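As a quick reference for the protocols covered in this tutorial, the following display filters can be pasted into the Wireshark filter bar. The TLS port numbers assume the common defaults mentioned earlier, so verify them against your own mail server configuration:

POP3 and POP3 over TLS: tcp.port == 110 || tcp.port == 995
IMAP and IMAP over TLS: tcp.port == 143 || tcp.port == 993
SMTP and SMTP over TLS: tcp.port == 25 || tcp.port == 465
Only POP3, IMAP, or SMTP traffic: pop, imap, or smtp
POP3 login commands (as used above): pop.request.command == "USER" and pop.request.command == "PASS"
Transport-level symptoms during slow mail transfers: tcp.analysis.retransmission and tcp.analysis.zero_window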
article-image-managing-nano-server-windows-powershell-and-windows-powershell-dsc
Packt
05 Jul 2017
8 min read
Save for later

Managing Nano Server with Windows PowerShell and Windows PowerShell DSC

Packt
05 Jul 2017
8 min read
In this article by Charbel Nemnom, the author of the book Getting Started with Windows Nano Server, we will cover the following topics: Remote server graphical tools Server manager Hyper-V manager Microsoft management console Managing Nano Server with PowerShell (For more resources related to this topic, see here.) Remote server graphical tools Without the Graphical User Interface (GUI), it’s not easy to carry out the daily management and maintenance of Windows Server. For this reason, Microsoft integrated Nano Server with all the existing graphical tools that you are familiar with such as Hyper-V manager, failover cluster manager, server manager, registry editor, File explorer, disk and device manager, server configuration, computer management, users and groups console, and so on. All those tools and consoles are compatible to manage Nano Server remotely. The GUI is always the easiest way to use. In this section, we will discuss how to access and set the most common configurations in Nano Server with remote graphical tools. Server manager Before we start managing Nano Server, we need to obtain the IP address or the computer name of the Nano Server to connect to and remotely manage a Nano instance either physical or virtual machine. Login to your management machine and make sure you have installed the latest Remote Server Administration Tools (RSAT) for Windows Server 2016 or Windows 10. You can download the latest RSAT tools from the following link: https://www.microsoft.com/en-us/download/details.aspx?id=45520 Launch server manager as shown in Figure 1, and add your Nano Server(s) that you would like to manage: Figure 1: Managing Nano Server using server manager You can refresh the view and browse all events and services as you expect to see. I want to point out that Best Practices Analyzer (BPA) is not supported in Nano Server. BPA is completely cmdlets-based and written in C# back during the days of PowerShell 2.0. It is also statically using some .NET XML library code that was not part of .NET framework at that time. So, do not expect to see Best Practices Analyzer in server manager. Hyper-V manager The next console that you probably want to access is Hyper-V Manager, right click on Nano Server name in server manager and select Hyper-V Manager console as shown in Figure 2: Figure 2: Managing Nano Server using Hyper-V manager Hyper-V Manager will launch with full support as you expect when managing full Windows Server 2016 Hyper-V, free Hyper-V server, server core and Nano Server with Hyper-V role. Microsoft management console You can use the Microsoft Management Console (MMC) to manage Nano Server as well. From the command line type mmc.exe. From the File menu, Click Add/Remove Snap-in…and then select Computer Management and click Add. Choose Another computer and add the IP address or the computer name of your Nano Server machine. Click Ok. As shown in Figure 3, you can expand System Tools and check the tools that you are familiar with like (Event Viewer, Local Users and Groups, Shares,and Services). Please note that some of these MMC tools such as Task Scheduler and Disk Management cannot be used against Nano Server. Also, for certain tools you need to open some ports in Windows firewall: Figure 3: Managing Nano Server using Microsoft Management Console Managing Nano Server with PowerShell For most IT administrators, the graphical user interface is the easiest way to use. But on the other hand, PowerShell can bring a fast and an automated process. 
That's why in Windows Server 2016, the Nano Server deployment option of Windows Server comes with full PowerShell remoting support. The purpose of the core PowerShell engine is to manage Nano Server instances at scale. Supported capabilities include PowerShell remoting (including DSC), Windows Server cmdlets (network, storage, Hyper-V, and so on), remote file transfer, remote script authoring and debugging, and PowerShell Web Access.

Some of the new features in Windows PowerShell version 5.1 on Nano Server include support for the following:

Copying files via PowerShell sessions
Remote file editing in PowerShell ISE
Interactive script debugging over PowerShell session
Remote script debugging within PowerShell ISE
Remote host process connect and debug

PowerShell version 5.1 is available in different editions which denote varying feature sets and platform compatibility. Desktop Edition targets Full Server, Server Core, and Windows Desktop, while Core Edition targets Nano Server and Windows IoT. You can find a list of Windows PowerShell features not available yet in Nano Server here. As Nano Server is still evolving, we will see what the next cadence update will bring for unavailable PowerShell features.

If you want to manage your Nano Server, you can use PowerShell remoting or, if your Nano Server instance is running in a virtual machine, you can also use PowerShell Direct; more on that at the end of this section. In order to manage a Nano Server installation using PowerShell remoting, carry out the following steps:

You may need to start the WinRM service on your management machine to enable remote connections. From the PowerShell console, type the following command:

net start WinRM

If you want to manage Nano Server in a workgroup environment, open the PowerShell console and type the following command, substituting server name or IP with the right value (using your machine name is the easiest option, but if your device is not uniquely named on your network, you can use the IP address instead):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "servername or IP"

If you want to connect multiple devices, you can use commas and quotation marks to separate each device:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "servername or IP, servername or IP"

You can also set it to allow connections to a specific network subnet using the following command:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.10.100.*"

To test Windows PowerShell remoting against Nano Server and check if it's working, you can use the following command:

Test-WSMan -ComputerName "servername or IP" -Credential "servername\Administrator" -Authentication Negotiate

You can now start an interactive session with Nano Server. Open an elevated PowerShell console and type the following command:

Enter-PSSession -ComputerName "servername or IP" -Credential "servername\Administrator"

In the following example, we will create two virtual machines on a Nano Server Hyper-V host using PowerShell remoting.
From your management machine, open an elevated PowerShell console or the PowerShell scripting environment, and run the following script (make sure to update the variables to match your environment):

#region Variables
$NanoSRV = 'NANOSRV-HV01'
$Cred = Get-Credential "Demo\SuperNano"
$Session = New-PSSession -ComputerName $NanoSRV -Credential $Cred
$CimSession = New-CimSession -ComputerName $NanoSRV -Credential $Cred
$VMTemplatePath = 'C:\Temp'
$vSwitch = 'Ext_vSwitch'
$VMName = 'DemoVM-0'
#endregion

# Copying VM Template from the management machine to Nano Server
Get-ChildItem -Path $VMTemplatePath -Filter *.VHDX -Recurse | Copy-Item -ToSession $Session -Destination D:\

1..2 | ForEach-Object {
    New-VM -CimSession $CimSession -Name "$VMName$_" -VHDPath "D:\$VMName$_.vhdx" -MemoryStartupBytes 1024MB `
        -SwitchName $vSwitch -Generation 2
    Start-VM -CimSession $CimSession -VMName "$VMName$_" -Passthru
}

In this script, we are creating a PowerShell session and a CIM session to Nano Server. A CIM session is a client-side object representing a connection to a local computer or a remote computer. Then we are copying VM templates from the management machine to Nano Server over PowerShell remoting; when the copy is completed, we are creating two virtual machines as Generation 2 and finally starting them. After a couple of seconds, you can launch the Hyper-V Manager console and see the new VMs running on the Nano Server host as shown in Figure 4:

Figure 4: Creating virtual machines on Nano Server host using PowerShell remoting

If you have installed Nano Server in a virtual machine running on a Hyper-V host, you can use PowerShell Direct to connect directly from your Hyper-V host to your Nano Server VM without any network connection by using the following command:

Enter-PSSession -VMName <VMName> -Credential .\Administrator

So instead of specifying the computer name, we specified the VM name. PowerShell Direct is so powerful; it's one of my favorite features. You can configure a bunch of VMs from scratch in just a couple of seconds without any network connection. Moreover, if you have Nano Server running as a Hyper-V host as shown in the example earlier, you could use PowerShell remoting first to connect to Nano Server from your management machine, and then leverage PowerShell Direct to manage your virtual machines running on top of Nano Server. In this example, we used two PowerShell technologies (PS remoting and PS Direct). This is so powerful and opens many possibilities to effectively manage Nano Server. To do that, you can use the following commands:

#region Variables
$NanoSRV = 'NANOSRV-HV01'   # Nano Server name or IP address
$DomainCred = Get-Credential "Demo\SuperNano"
$VMLocalCred = Get-Credential "~\Administrator"
$Session = New-PSSession -ComputerName $NanoSRV -Credential $DomainCred
#endregion

Invoke-Command -Session $Session -ScriptBlock {
    Get-VM
    Invoke-Command -VMName (Get-VM).Name -Credential $Using:VMLocalCred -ScriptBlock {
        hostname
        tzutil /g
    }
}

In this script, we have created a PowerShell session into the Nano Server physical host, and then we used PowerShell Direct to list all VMs, including their hostnames and time zone. The result is shown in Figure 5:

Figure 5. Nested PowerShell remoting

Summary

In this article, we discussed how to manage a Nano Server installation using remote server graphical tools and Windows PowerShell remoting.

Resources for Article:

Further resources on this subject: Exploring Windows PowerShell 5.0 [article] Exchange Server 2010 Windows PowerShell: Mailboxes and Reports [article] Exchange Server 2010 Windows PowerShell: Managing Mailboxes [article]

article-image-5-reasons-why-you-should-use-an-open-source-data-analytics-stack-in-2020
Amey Varangaonkar
28 Jan 2020
7 min read
Save for later

5 reasons why you should use an open-source data analytics stack in 2020

Amey Varangaonkar
28 Jan 2020
7 min read
Today, almost every company is trying to be data-driven in some sense or the other. Businesses across all the major verticals such as healthcare, telecommunications, banking, insurance, retail, education, etc. make use of data to better understand their customers, optimize their business processes and, ultimately, maximize their profits. This is a guest post sponsored by our friends at RudderStack. When it comes to using data for analytics, companies face two major challenges: Data tracking: Tracking the required data from a multitude of sources in order to get insights out of it. As an example, tracking customer activity data such as logins, signups, purchases, and even clicks such as bookmarks from platforms such as mobile apps and websites becomes an issue for many eCommerce businesses. Building a link between the Data and Business Intelligence: Once data is acquired, transforming it and making it compatible for a BI tool can often prove to be a substantial challenge. A well designed data analytics stack comes is essential in combating these challenges. It will ensure you're well-placed to use the data at your disposal in more intelligent ways. It will help you drive more value. What does a data analytics stack do? A data analytics stack is a combination of tools which when put together, allows you to bring together all of your data in one platform, and use it to get actionable insights that help in better decision-making. As seen the diagram above illustrates, a data analytics stack is built upon three fundamental steps: Data Integration: This step involves collecting and blending data from multiple sources and transforming them in a compatible format, for storage. The sources could be as varied as a database (e.g. MySQL), an organization’s log files, or event data such as clicks, logins, bookmarks, etc from mobile apps or websites. A data analytics stack allows you to use all of such data together and use it to perform meaningful analytics. Data Warehousing: This next step involves storing the data for the purpose of analytics. As the complexity of data grows, it is feasible to consolidate all the data in a single data warehouse. Some of the popular modern data warehouses include Amazon’s Redshift, Google BigQuery and platforms such as Snowflake and MarkLogic. Data Analytics: In this final step, we use a visualization tool to load the data from the warehouse and use it to extract meaningful insights and patterns from the data, in the form of charts, graphs and reports. Choosing a data analytics stack - proprietary or open-source? When it comes to choosing a data analytics stack, businesses are often left with two choices - buy it or build it. On one hand, there are proprietary tools such as Google Analytics, Amplitude, Mixpanel, etc. - where the vendors alone are responsible for their configuration and management to suit your needs. With the best in class features and services that come along with the tools, your primary focus can just be project management, rather than technology management. While using proprietary tools have their advantages, there are also some major cons to them that revolve mainly around cost, data sharing, privacy concerns, and more. As a result, businesses today are increasingly exploring the open-source alternatives to build their data analytics stack. The advantages of open source analytics tools Let's now look at the 5 main advantages that open-source tools have over these proprietary tools. 
Open source analytics tools are cost effective Proprietary analytics products can cost hundreds of thousands of dollars beyond their free tier. For small to medium-sized businesses, the return on investment does not often justify these costs. Open-source tools are free to use and even their enterprise versions are reasonably priced compared to their proprietary counterparts. So, with a lower up-front costs, reasonable expenses for training, maintenance and support, and no cost for licensing, open-source analytics tools are much more affordable. More importantly, they're better value for money. Open source analytics tools provide flexibility Proprietary SaaS analytics products will invariably set restrictions on the ways in which they can be used. This is especially the case with the trial or the lite versions of the tools, which are free. For example, full SQL is not supported by some tools. This makes it hard to combine and query external data alongside internal data. You'll also often find that warehouse dumps provide no support either. And when they do, they'll probably cost more and still have limited functionality. Data dumps from Google Analytics, for instance, can only be loaded into Google BigQuery. Also, these dumps are time-delayed. That means the loading process can be very slow.. With open-source software, you get complete flexibility: from the way you use your tools, how you combine to build your stack, and even how you use your data. If your requirements change - which, let's face it, they probably will - you can make the necessary changes without paying extra for customized solutions. Avoid vendor lock-in Vendor lock-in, also known as proprietary lock-in, is essentially a state where a customer becomes completely dependent on the vendor for their products and services. The customer is unable to switch to another vendor without paying a significant switching cost. Some organizations spend a considerable amount of money on proprietary tools and services that they heavily rely on. If these tools aren't updated and properly maintained, the organization using it is putting itself at a real competitive disadvantage. This is almost never the case with open-source tools. Constant innovation and change is the norm. Even if the individual or the organization handling the tool moves on, the community catn take over the project and maintain it. With open-source, you can rest assured that your tools will always be up-to-date without heavy reliance on anyone. Improved data security and privacy Privacy has become a talking point in many data-related discussions of late. This is thanks, in part, to data protection laws such as the GDPR and CCPA coming into force. High-profile data leaks have also kept the issue high on the agenda. An open-source stack analytics running inside your cloud or on-prem environment gives complete control of your data. This lets you decide which data is to be used when, and how. It lets you dictate how third parties can access and use your data, if at all. Open-source is the present It's hard to counter the fact that open-source is now mainstream. Companies like Microsoft, Apple, and IBM are now not only actively participating in the open-source community, they're also contributing to it. Open-source puts you on the front foot when it comes to innovation. With it, you'll be able to leverage the power of a vibrant developer community to develop better products in more efficient ways. 
How RudderStack helps you build an ideal open-source data analytics stack RudderStack is a completely open-source, enterprise-ready platform that simplifies data management in a secure and reliable way. It works as a data integration platform by routing your event data from sources such as websites, mobile apps, and servers to multiple destinations of your choice, thus helping you save time and effort. RudderStack integrates effortlessly with a multitude of destinations such as Google Analytics, Amplitude, Mixpanel, Salesforce, HubSpot, Facebook Ads, and more, as well as popular data warehouses such as Amazon Redshift or S3. If performing efficient clickstream analytics is your goal, RudderStack offers you a data pipeline to collect and route your data securely. Learn more about RudderStack by visiting the RudderStack website, or check out its GitHub page to find out how it works.


Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]

Natasha Mathur
05 Jul 2018
17 min read
AWS provides hybrid capabilities for networking, storage, database, application development, and management tools for secure and seamless integration. In today's tutorial, we will integrate applications with two popular AWS services, namely Amazon DynamoDB and Amazon Kinesis. Amazon DynamoDB is a fast, fully managed, highly available, and scalable NoSQL database service from AWS. DynamoDB uses key-value and document store data models. Amazon Kinesis is used to collect real-time data and to process and analyze it. This article is an excerpt from the book 'Expert AWS Development' written by Atul V. Mistry. By the end of this tutorial, you will know how to integrate applications with these AWS services and the associated best practices. Amazon DynamoDB The Amazon DynamoDB service falls under the Database category. It is a fast NoSQL database service from Amazon. It is highly durable, as it replicates data across three distinct geographical facilities within an AWS region. It's great for web, mobile, gaming, and IoT applications. DynamoDB takes care of software patching, hardware provisioning, cluster scaling, setup, configuration, and replication. You can create a database table and store and retrieve any amount and variety of data. It deletes expired data automatically from the table, which helps reduce storage usage and the cost of keeping data that is no longer needed. Amazon DynamoDB Accelerator (DAX) is a highly available, fully managed, in-memory cache. Even at millions of requests per second, it reduces response times from milliseconds to microseconds. DynamoDB can store large text and binary objects of up to 400 KB per item. It uses SSD storage to provide high I/O performance. Integrating DynamoDB into an application The following diagram provides a high-level overview of the integration between your application and DynamoDB: The integration works as follows: your application uses an AWS SDK in your programming language of choice, and DynamoDB can work with one or more of the programmatic interfaces provided by the AWS SDK. From your programming language, the AWS SDK constructs an HTTP or HTTPS request using the DynamoDB low-level API and sends it to the DynamoDB endpoint. DynamoDB processes the request and sends the response back to the AWS SDK. If the request is executed successfully, it returns an HTTP 200 (OK) response code; if the request is not successful, it returns an HTTP error code and an error message. The AWS SDK then processes the response and sends the result back to the application. The AWS SDK provides three kinds of interfaces to connect with DynamoDB. These interfaces are as follows: the low-level interface, the document interface, and the object persistence (high-level) interface. Let's explore all three interfaces. The following diagram shows the Movies table, which is created in DynamoDB and used in all our examples: Low-level interface AWS SDK programming languages provide low-level interfaces for DynamoDB. These SDKs provide methods that are similar to low-level DynamoDB API requests. The following example uses the Java language for the low-level interface of the AWS SDK. Here you can use the Eclipse IDE for the example. In this Java program, we request getItem from the Movies table, pass the movie name as an attribute, and print the movie release year. Let's create the MovieLowLevelExample file. We have to import a few classes to work with DynamoDB. AmazonDynamoDBClient is used to create the DynamoDB client instance. 
AttributeValue is used to construct the data; an AttributeValue holds the actual data together with its data type (for example, withS marks a String value): GetItemRequest is the input of GetItem GetItemResult is the output of GetItem The following code will create the dynamoDB client instance. You have to assign the credentials and region to this instance: static AmazonDynamoDBClient dynamoDB; In the code, we have created a HashMap, passing the value parameter as AttributeValue().withS(). It contains the actual data, and withS indicates that the value is a String: String tableName = "Movies"; HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>(); key.put("name", new AttributeValue().withS("Airplane")); GetItemRequest will create a request object, passing the table name and key as parameters. It is the input of GetItem: GetItemRequest request = new GetItemRequest() .withTableName(tableName).withKey(key); GetItemResult will create the result object. It is the output of getItem, where we pass the request as input: GetItemResult result = dynamoDB.getItem(request); The code then checks the getItem null condition. If getItem is not null, it creates an AttributeValue object: it gets the year from the result object, stores it in the yearObj instance, and prints the year value from yearObj: if (result.getItem() != null) { AttributeValue yearObj = result.getItem().get("year"); System.out.println("The movie was released in " + yearObj.getN()); } else { System.out.println("No matching movie was found"); } Document interface This interface enables you to perform Create, Read, Update, and Delete (CRUD) operations on tables and indexes. The data type is implied from the data with this interface, so you do not need to specify it. The AWS SDKs for Java, Node.js, JavaScript, and .NET provide support for document interfaces. The following example uses the Java language for the document interface in the AWS SDK. Here you can use the Eclipse IDE for the example. In this Java program, we will create a table object from the Movies table, pass the movie name as an attribute, and print the movie release year. We have to import a few classes. DynamoDB is the entry point for using this library in your class. GetItemOutcome is used to get items from the DynamoDB table. Table is used to get table details: static AmazonDynamoDB client; The preceding code will create the client instance. You have to assign the credentials and region to this instance: String tableName = "Movies"; DynamoDB docClient = new DynamoDB(client); Table movieTable = docClient.getTable(tableName); DynamoDB will create the docClient instance by passing in the client instance. It is the entry point for the document interface library. This docClient instance will get the table details by passing the tableName, and assign them to the movieTable instance: GetItemOutcome outcome = movieTable.getItemOutcome("name","Airplane"); int yearObj = outcome.getItem().getInt("year"); System.out.println("The movie was released in " + yearObj); GetItemOutcome will create an outcome instance from movieTable by passing the name as key and the movie name as parameter. It retrieves the item's year from the outcome object, stores it in the yearObj variable, and prints it. Object persistence (high-level) interface In the object persistence interface, you will not perform any CRUD operations directly on the data; instead, you create objects which represent DynamoDB tables and indexes and perform operations on those objects. It allows you to write object-centric code rather than database-centric code. 
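Before diving into the object persistence interface, the document-interface fragments above can be assembled into one compilable sketch. This is only an illustrative assembly, not the book's exact listing; it assumes AWS SDK for Java v1, the us-east-1 region, and credentials picked up from the default provider chain rather than the explicit assignment described in the text:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.GetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;

public class MovieDocumentInterfaceExample {
    public static void main(String[] args) {
        // Low-level client; credentials come from the default provider chain
        // (environment variables, shared credentials file, or instance role).
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)   // assumed region
                .build();

        // Document-interface entry point and a handle to the Movies table.
        DynamoDB docClient = new DynamoDB(client);
        Table movieTable = docClient.getTable("Movies");

        // Fetch the item whose hash key "name" is "Airplane" and print its year.
        GetItemOutcome outcome = movieTable.getItemOutcome("name", "Airplane");
        if (outcome.getItem() != null) {
            System.out.println("The movie was released in " + outcome.getItem().getInt("year"));
        } else {
            System.out.println("No matching movie was found");
        }
    }
}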
The AWS SDKs for Java and .NET provide support for the object persistence interface. Let's create a DynamoDBMapper object in the AWS SDK for Java. It will represent data in the Movies table. This is the MovieObjectMapper.java class. Here you can use the Eclipse IDE for the example. You need to import a few classes for the annotations. DynamoDBAttribute is applied to the getter method. If it is applied to the class field, then that field's getter and setter methods must be declared in the same class. The DynamoDBHashKey annotation marks a property as the hash key for the modeled class. The DynamoDBTable annotation identifies the target table in DynamoDB: @DynamoDBTable(tableName="Movies") It specifies the table name: @DynamoDBHashKey(attributeName="name") public String getName() { return name;} public void setName(String name) {this.name = name;} @DynamoDBAttribute(attributeName = "year") public int getYear() { return year; } public void setYear(int year) { this.year = year; } In the preceding code, DynamoDBHashKey has been defined as the hash key for the name attribute and its getter and setter methods. DynamoDBAttribute specifies the column name and its getter and setter methods. Now create MovieObjectPersistenceExample.java to retrieve the movie year: static AmazonDynamoDB client; The preceding code will create the client instance. You have to assign the credentials and region to this instance. You need to import DynamoDBMapper, which will be used to fetch the year from the Movies table: DynamoDBMapper mapper = new DynamoDBMapper(client); MovieObjectMapper movieObjectMapper = new MovieObjectMapper(); movieObjectMapper.setName("Airplane"); The mapper object will be created from DynamoDBMapper by passing the client. The movieObjectMapper object will be created from the POJO class, which we created earlier. In this object, set the movie name as the parameter: MovieObjectMapper result = mapper.load(movieObjectMapper); if (result != null) { System.out.println("The movie was released in "+ result.getYear()); } Create the result object by calling the DynamoDBMapper object's load method. If the result is not null, it will print the year from the result's getYear() method. DynamoDB low-level API This API is a protocol-level interface which will convert every HTTP or HTTPS request into the correct format with a valid digital signature. It uses JavaScript Object Notation (JSON) as a transfer protocol. The AWS SDK will construct requests on your behalf, which helps you concentrate on the application/business logic. The AWS SDK will send a request in JSON format to DynamoDB and DynamoDB will respond in JSON format back to the AWS SDK API. DynamoDB will not persist data in JSON format. Troubleshooting in Amazon DynamoDB The following are common problems and their solutions: If error logging is not enabled, then enable it and check the error log messages. Verify whether the DynamoDB table exists or not. Verify that the IAM role specified for DynamoDB has the required access permissions. AWS SDKs take care of propagating errors to your application so that you can take appropriate actions. In Java programs, you should write a try-catch block to handle these errors and exceptions; a minimal sketch is shown below. If you are not using an AWS SDK, then you need to parse the content of the low-level responses from DynamoDB yourself. 
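The following is only an illustrative sketch of such a try-catch block, not the book's own listing; it assumes AWS SDK for Java v1, the same Movies table used above, and credentials and region supplied through the default provider chain:

import java.util.HashMap;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;

public class MovieErrorHandlingExample {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>();
        key.put("name", new AttributeValue().withS("Airplane"));

        try {
            GetItemResult result = client.getItem(
                    new GetItemRequest().withTableName("Movies").withKey(key));
            System.out.println("Item: " + result.getItem());
        } catch (ResourceNotFoundException e) {
            // The Movies table does not exist or is still in the CREATING state.
            System.err.println("Table not found: " + e.getErrorMessage());
        } catch (AmazonServiceException e) {
            // DynamoDB received the request but could not process it.
            System.err.println("Service error: " + e.getErrorMessage());
        } catch (AmazonClientException e) {
            // The client could not reach the service or parse its response.
            System.err.println("Client error: " + e.getMessage());
        }
    }
}

The exception types caught here are exactly the ones described next.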
A few exceptions are as follows: AmazonServiceException: Client request sent to DynamoDB but DynamoDB was unable to process it and returned an error response AmazonClientException: Client is unable to get a response or parse the response from service ResourceNotFoundException: Requested table doesn't exist or is in CREATING state Now let's move on to Amazon Kinesis, which will help to collect and process real-time streaming data. Amazon Kinesis The Amazon Kinesis service is under the Analytics product category. This is a fully managed, real-time, highly scalable service. You can easily send data to other AWS services such as Amazon DynamoDB, AmazaonS3, and Amazon Redshift. You can ingest real-time data such as application logs, website clickstream data, IoT data, and social stream data into Amazon Kinesis. You can process and analyze data when it comes and responds immediately instead of waiting to collect all data before the process begins. Now, let's explore an example of using Kinesis streams and Kinesis Firehose using AWS SDK API for Java. Amazon Kinesis streams In this example, we will create the stream if it does not exist and then we will put the records into the stream. Here you can use Eclipse IDE for the example. You need to import a few classes. AmazonKinesis and AmazonKinesisClientBuilder are used to create the Kinesis clients. CreateStreamRequest will help to create the stream. DescribeStreamRequest will describe the stream request. PutRecordRequest will put the request into the stream and PutRecordResult will print the resulting record. ResourceNotFoundException will throw an exception when the stream does not exist. StreamDescription will provide the stream description: Static AmazonKinesis kinesisClient; kinesisClient is the instance of AmazonKinesis. You have to assign the credentials and region to this instance: final String streamName = "MyExampleStream"; final Integer streamSize = 1; DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest().withStreamName(streamName); Here you are creating an instance of describeStreamRequest. For that, you will pass the streamNameas parameter to the withStreamName() method: StreamDescription streamDescription = kinesisClient.describeStream(describeStreamRequest).getStreamDescription(); It will create an instance of streamDescription. You can get information such as the stream name, stream status, and shards from this instance: CreateStreamRequest createStreamRequest = new CreateStreamRequest(); createStreamRequest.setStreamName(streamName); createStreamRequest.setShardCount(streamSize); kinesisClient.createStream(createStreamRequest); The createStreamRequest instance will help to create a stream request. You can set the stream name, shard count, and SDK request timeout. In the createStream method, you will pass the createStreamRequest: long createTime = System.currentTimeMillis(); PutRecordRequest putRecordRequest = new PutRecordRequest(); putRecordRequest.setStreamName(streamName); putRecordRequest.setData(ByteBuffer.wrap(String.format("testData-%d", createTime).getBytes())); putRecordRequest.setPartitionKey(String.format("partitionKey-%d", createTime)); Here we are creating a record request and putting it into the stream. We are setting the data and PartitionKey for the instance. 
It will create the record: PutRecordResult putRecordResult = kinesisClient.putRecord(putRecordRequest); The record is created by the putRecord method, which takes putRecordRequest as a parameter: System.out.printf("Success : Partition key \"%s\", ShardID \"%s\" and SequenceNumber \"%s\".\n", putRecordRequest.getPartitionKey(), putRecordResult.getShardId(), putRecordResult.getSequenceNumber()); It will print the output on the console as follows: Troubleshooting tips for Kinesis streams The following are common problems and their solutions: Unauthorized KMS master key permission error: This occurs when a producer or consumer application tries to write to or read from an encrypted stream without authorized permission on the master key. Grant the application access using key policies in AWS KMS or IAM policies with AWS KMS. Sometimes the producer writes more slowly than expected; common causes are the following. Service limits exceeded: Check whether the producer is throwing throughput exceptions from the service, and validate which API operations are being throttled. You can also check the Amazon Kinesis Streams limits, since different calls have different limits. If calls are not the issue, check that you have selected a partition key that distributes put operations evenly across all shards, and that you don't have a particular partition key that's bumping into the service limits when the rest are not. This requires you to measure peak throughput and the number of shards in your stream. Producer optimization: A producer is either large or small. A large producer runs from an EC2 instance or on-premises, while a small producer runs from a web client, mobile app, or IoT device. Customers can use different strategies for latency: the Kinesis Producer Library or multiple threads are useful for buffering/micro-batching records, PutRecords for multi-record operations, and PutRecord for single-record operations. Shard iterator expires unexpectedly: The shard iterator expires because its GetRecords method has not been called for more than 5 minutes, or you have performed a restart of your consumer application. If the shard iterator expires immediately, before you can use it, this might indicate that the DynamoDB table used by Kinesis does not have enough capacity to store the data. This can happen if you have a large number of shards. Increase the write capacity assigned to the shard table to solve this. Consumer application is reading at a slower rate: The following are common reasons for read throughput being slower than expected: total reads for multiple consumer applications exceed the per-shard limits (in this case, increase the number of shards in the Kinesis stream); the maximum number of GetRecords per call may have been configured with a low limit value; or the logic inside the processRecords call may be taking longer than expected for a number of possible reasons - the logic may be CPU-intensive, bottlenecked on synchronization, or I/O blocking. We have covered Amazon Kinesis streams. Now, we will cover Kinesis Firehose. Amazon Kinesis Firehose Amazon Kinesis Firehose is a fully managed, highly available, and durable service for loading real-time streaming data easily into AWS services such as Amazon S3, Amazon Redshift, or Amazon Elasticsearch. It replicates your data synchronously across three different facilities. It automatically scales to match the throughput of your data. You can compress your data into different formats and also encrypt it before loading. 
AWS SDK for Java, Node.js, Python, .NET, and Ruby can be used to send data to a Kinesis Firehose stream using the Kinesis Firehose API. The Kinesis Firehose API provides two operations to send data to the Kinesis Firehose delivery stream: PutRecord: In one call, it will send one record PutRecordBatch: In one call, it will send multiple data records Let's explore an example using PutRecord. In this example, the MyFirehoseStream stream has been created. Here you can use Eclipse IDE for the example. You need to import a few classes such as AmazonKinesisFirehoseClient, which will help to create the client for accessing Firehose. PutRecordRequest and PutRecordResult will help to put the stream record request and its result: private static AmazonKinesisFirehoseClient client; AmazonKinesisFirehoseClient will create the instance firehoseClient. You have to assign the credentials and region to this instance: String data = "My Kinesis Firehose data"; String myFirehoseStream = "MyFirehoseStream"; Record record = new Record(); record.setData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8))); As mentioned earlier, myFirehoseStream has already been created. A record in the delivery stream is a unit of data. In the setData method, we are passing a data blob. It is base-64 encoded. Before sending a request to the AWS service, Java will perform base-64 encoding on this field. A returned ByteBuffer is mutable. If you change the content of this byte buffer then it will reflect to all objects that have a reference to it. It's always best practice to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before reading from the buffer or using it. Now you have to mention the name of the delivery stream and the data records you want to create the PutRecordRequest instance: PutRecordRequest putRecordRequest = new PutRecordRequest() .withDeliveryStreamName(myFirehoseStream) .withRecord(record); putRecordRequest.setRecord(record); PutRecordResult putRecordResult = client.putRecord(putRecordRequest); System.out.println("Put Request Record ID: " + putRecordResult.getRecordId()); putRecordResult will write a single record into the delivery stream by passing the putRecordRequest and get the result and print the RecordID: PutRecordBatchRequest putRecordBatchRequest = new PutRecordBatchRequest().withDeliveryStreamName("MyFirehoseStream") .withRecords(getBatchRecords()); You have to mention the name of the delivery stream and the data records you want to create the PutRecordBatchRequest instance. The getBatchRecord method has been created to pass multiple records as mentioned in the next step: JSONObject jsonObject = new JSONObject(); jsonObject.put("userid", "userid_1"); jsonObject.put("password", "password1"); Record record = new Record().withData(ByteBuffer.wrap(jsonObject.toString().getBytes())); records.add(record); In the getBatchRecord method, you will create the jsonObject and put data into this jsonObject . You will pass jsonObject to create the record. These records add to a list of records and return it: PutRecordBatchResult putRecordBatchResult = client.putRecordBatch(putRecordBatchRequest); for(int i=0;i<putRecordBatchResult.getRequestResponses().size();i++){ System.out.println("Put Batch Request Record ID :"+i+": " + putRecordBatchResult.getRequestResponses().get(i).getRecordId()); } putRecordBatchResult will write multiple records into the delivery stream by passing the putRecordBatchRequest, get the result, and print the RecordID. 
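Pulling the single-record path above together, here is one possible end-to-end sketch. It is an illustrative assembly rather than the book's exact listing; it assumes AWS SDK for Java v1, an already created MyFirehoseStream delivery stream, and credentials and region taken from the default provider chain:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder;
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
import com.amazonaws.services.kinesisfirehose.model.PutRecordResult;
import com.amazonaws.services.kinesisfirehose.model.Record;

public class FirehosePutRecordExample {
    public static void main(String[] args) {
        // Credentials and region are resolved from the default provider chain here.
        AmazonKinesisFirehose firehoseClient = AmazonKinesisFirehoseClientBuilder.defaultClient();

        // A record is a unit of data in the delivery stream; the SDK base-64
        // encodes the payload before sending the request.
        Record record = new Record().withData(
                ByteBuffer.wrap("My Kinesis Firehose data".getBytes(StandardCharsets.UTF_8)));

        // The delivery stream (MyFirehoseStream) must already exist.
        PutRecordRequest putRecordRequest = new PutRecordRequest()
                .withDeliveryStreamName("MyFirehoseStream")
                .withRecord(record);

        PutRecordResult putRecordResult = firehoseClient.putRecord(putRecordRequest);
        System.out.println("Put Request Record ID: " + putRecordResult.getRecordId());
    }
}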
The console output lists the record ID returned for each record. Troubleshooting tips for Kinesis Firehose Sometimes data is not delivered to the specified destinations. The following are steps to solve common issues while working with Kinesis Firehose: Data not delivered to Amazon S3: If error logging is not enabled, then enable it and check the error log messages for delivery failures. Verify that the S3 bucket mentioned in the Kinesis Firehose delivery stream exists. Verify whether data transformation with Lambda is enabled, the Lambda function mentioned in your delivery stream exists, and Kinesis Firehose has attempted to invoke the Lambda function. Verify that the IAM role specified in the delivery stream has proper access to the S3 bucket and the Lambda function. Check your Kinesis Firehose metrics to see whether the data was sent to the Kinesis Firehose delivery stream successfully. Data not delivered to Amazon Redshift/Elasticsearch: For Amazon Redshift and Elasticsearch, verify the points mentioned under Data not delivered to Amazon S3, including the IAM role, configuration, and public access. For CloudWatch and IoT, delivery stream not available as a target: Some AWS services can only send messages and events to a Kinesis Firehose delivery stream that is in the same region. Verify that your Kinesis Firehose delivery stream is located in the same region as your other services. With that, we have covered implementations, examples, and best practices for the Amazon DynamoDB and Amazon Kinesis services using the AWS SDK. If you found this post useful, do check out the book 'Expert AWS Development' to learn about application integration with other AWS services such as AWS Lambda, Amazon SQS, and Amazon SWF. A serverless online store on AWS could save you money. Build one. Why is AWS the preferred cloud platform for developers working with big data? Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider

Exploring Windows PowerShell 5.0

Packt
12 Oct 2015
16 min read
In this article by Chendrayan Venkatesan, the author of the book Windows PowerShell for .NET Developers, we will cover the following topics: Basics of Desired State Configuration (DSC) Parsing structured objects using PowerShell Exploring package management Exploring PowerShell Get-Module Exploring other enhanced features (For more resources related to this topic, see here.) Windows PowerShell 5.0 has many significant benefits, to know more features about its features refer to the following link: http://go.microsoft.com/fwlink/?LinkID=512808 A few highlights of Windows PowerShell 5.0 are as follows: Improved usability Backward compatibility Class and Enum keywords are introduced Parsing structured objects are made easy using ConvertFrom string command We have some new modules introduced in Windows PowerShell 5.0, such as Archive, Package Management (this was formerly known as OneGet) and so on ISE supported transcriptions Using PowerShell Get-Module cmdlet, we can find, install, and publish modules Debug at runspace can be done using Microsoft.PowerShell.Utility module Basics of Desired State Configuration Desired State Configuration also known as DSC is a new management platform in Windows PowerShell. Using DSC, we can deploy and manage configuration data for software servicing and manage the environment. DSC can be used to streamline datacenters and this was introduced along with Windows Management Framework 4.0 and it heavily extended into Windows Management Framework 5.0. Few highlights of DSC in April 2015 Preview are as follows: New cmdlets are introduced in WMF 5.0 Few DSC commands are updated and remarkable changes are made to the configuration management platform in PowerShell 5.0 DSC resources can be built using class, so no need of MOF file It's not mandatory to know PowerShell to learn DSC but it's a great added advantage. Similar to function we can also use configuration keyword but it has a huge difference because in DSC everything is declarative, which is a cool thing in Desired State Configuration. So before beginning this exercise, I created a DSCDemo lab machine in Azure cloud with Windows Server 2012 and it's available out of the box. So, the default PowerShell version is 4.0. For now let's create and define a simple configuration, which creates a file in the local host. Yeah! A simple New-Item command can do that but it's an imperative cmdlet and we need to write a program to tell the computer to create it, if it does not exist. Structure of the DSC configuration is as follows: Configuration Name { Node ComputerName { ResourceName <String> { } } } To create a simple text file with contents, we use the following code: Configuration FileDemo { Node $env:COMPUTERNAME { File FileDemo { Ensure = 'Present' DestinationPath = 'C:TempDemo.txt' Contents = 'PowerShell DSC Rocks!' Force = $true } } } Look at the following screenshot: Following are the steps represented in the preceding figure: Using the Configuration keyword, we are defining a configuration with the name FileDemo—it's a friendly name. Inside the Configuration block we created a Node block and also a file on the local host. File is the resource name. FileDemo is a friendly name of a resource and it's also a string. Properties of the file resource. This creates MOF file—we call this similar to function. But wait, here a code file is not yet created. We just created a MOF file. 
Look at the MOF file structure in the following image: We can manually edit the MOF and use it on another machine that has PS 4.0 installed on it. It's not mandatory to use PowerShell for generating MOF, if you are comfortable with PowerShell, you can directly write the MOF file. To explore the available DSC resources you can execute the following command: Get-DscResource The output is illustrated in the following image: Following are the steps represented in the preceding figure: Shows you how the resources are implemented. Binary, Composite, PowerShell, and so on. In the preceding example, we created a DSC Configuration that's FileDemo and that is listed as Composite. Name of the resource. Module name the resource belongs to. Properties of the resource. To know the Syntax of a particular DSC resource we can try the following code: Get-DscResource -Name Service -Syntax The output is illustrated in the following figure ,which shows the resource syntax in detail: Now, let's see how DSC works and its three different phases: The authoring phase. The staging phase. The "Make it so" phase. The authoring phase In this phase we will create a DSC Configuration using PowerShell and this outputs a MOF file. We saw a FileDemo example to create a configuration is considered to be an authoring phase. The staging phase In this phase the declarative MOF will be staged and it's as per node. DSC has a push and pull model, where push is simply pushing the configuration to target nodes. The custom providers need to be manually placed in target machines whereas in pull mode, we need to build an IIS Server that will have MOF for target nodes and this is well defined by the OData interface. In pull mode, the custom providers are downloaded to target system. The "Make it so" phase This is the phase for enacting the configuration, that is applying the configuration on the target nodes. Before we summarize the basics of DSC, let's see a few more DSC Commands. We can do this by executing the following command: Get-Command -Noun DSC* The output is as follows: We are using a PowerShell 4.0 stable release and not 5.0, so the version will not be available. Local Configuration Manager (LCM) is the engine for DSC and it runs on all nodes. LCM is responsible to call the configuration resources that are included in a DSC configuration script. Try executing Get-DscLocalConfigurationManager cmdlet to explore its properties. To Apply the LCM settings on target nodes we can use Set-DscLocalConfigurationManager cmdlet. Use case of classes in WMF 5.0 Using classes in PowerShell makes IT professionals, system administrators, and system engineers to start learning development in WMF. It's time for us to switch back to Windows PowerShell 5.0 because the Class keyword is supported from version 5.0 onwards. Why do we need to write class in PowerShell? Is there any special need? May be we will answer this in this section but this is one reason why I prefer to say that, PowerShell is far more than a scripting language. When the Class keyword was introduced, it mainly focused on creating DSC resources. But using class we can create objects like in any other object oriented programming language. The documentation that reads New-Object is not supported. But it's revised now. Indeed it supports the New-Object. The class we create in Windows PowerShell is a .NET framework type. How to create a PowerShell Class? It's easy, just use the Class keyword! The following steps will help you to create a PowerShell class. 
Create a class named ClassName {}—this is an empty class. Define properties in the class as Class ClassName {$Prop1 , $prop2} Instantiate the class as $var = [ClassName]::New() Now check the output of $var: Class ClassName { $Prop1 $Prop2 } $var = [ClassName]::new() $var Let's now have a look at how to create a class and its advantages. Let us define the properties in class: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' } $var = New-Object Catalog $var The following image shows the output of class, its members, and setting the property value: Now, by changing the property value, we get the following output: Now let's create a method with overloads. In the following example we have created a method name SetInformation that accepts two arguments $mdl and $mfgr and these are of string type. Using $var.SetInformation command with no parenthesis will show the overload definitions of the method. The code is as follows: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' SetInformation([String]$mdl,[String]$mfgr) { $this.Manufacturer = $mfgr $this.Model = $mdl } } $var = New-Object -TypeName Catalog $var.SetInformation #Output OverloadDefinitions ------------------- void SetInformation(string mdl, string mfgr) Let's set the model and manufacturer using set information, as follows: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' SetInformation([String]$mdl,[String]$mfgr) { $this.Manufacturer = $mfgr $this.Model = $mdl } } $var = New-Object -TypeName Catalog $var.SetInformation('Surface' , 'Microsoft') $var The output is illustrated in following image: Inside the PowerShell class we can use PowerShell cmdlets as well. The following code is just to give a demo of using PowerShell cmdlet. Class allows us to validate the parameters as well. Let's have a look at the following example: Class Order { [ValidateSet("Red" , "Blue" , "Green")] $color [ValidateSet("Audi")] $Manufacturer Book($Manufacturer , $color) { $this.color = $color $this.Manufacturer = $Manufacturer } } The parameter $Color and $Manufacturer has ValidateSet property and has a set of values. Now let's use New-Object and set the property with an argument which doesn't belong to this set, shown as follows: $var = New-Object Order $var.color = 'Orange' Now, we get the following error: Exception setting "color": "The argument "Orange" does not belong to the set "Red,Blue,Green" specified by the ValidateSet attribute. Supply an argument that is in the set and then try the command again." Let's set the argument values correctly to get the result using Book method, as follows: $var = New-Object Order $var.Book('Audi' , 'Red') $var The output is illustrated in the following figure: Constructors A constructor is a special type of method that creates new objects. It has the same name as the class and the return type is void. Multiple constructors are supported, but each one takes different numbers and types of parameters. In the following code, let's see the steps to create a simple constructor in PowerShell that simply creates a user in the active directory. Class ADUser { $identity $Name ADUser($Idenity , $Name) { New-ADUser -SamAccountName $Idenity -Name $Name $this.identity = $Idenity $this.Name = $Name } } $var = [ADUser]::new('Dummy' , 'Test Case User') $var We can also hide the properties in PowerShell class, for example let's create two properties and hide one. 
In theory, it just hides the property but we can use the property as follows: Class Hide { [String]$Name Hidden $ID } $var = [Hide]::new() $var The preceding code is illustrated in the following figure: Additionally, we can carry out operations, such as Get and Set, as shown in the following code: Class Hide { [String]$Name Hidden $ID } $var = [Hide]::new() $var.Id = '23' $var.Id This returns output as 23. To explore more about class use help about_Classes -Detailed. Parsing structured objects using PowerShell In Windows PowerShell 5.0 a new cmdlet ConvertFrom-String has been introduced and it's available in Microsoft.PowerShell.Utility. Using this command, we can parse the structured objects from any given string content. To see information, use help command with ConvertFrom-String -Detailed command. The help has an incorrect parameter as PropertyName. Copy paste will not work, so use help ConvertFrom-String –Parameter * and read the parameter—it's actually PropertyNames. Now, let's see an example of using ConvertFrom-String. Let us examine a scenario where a team has a custom code which generates log files for daily health check-up reports of their environment. Unfortunately, the tool delivered by the vendor is an EXE file and no source code is available. The log file format is as follows: "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 Lync" , "Warning 4356 SharePoint" , "Information 5432 Exchange" There are many ways to manipulate this record but let's see how PowerShell cmdlet ConvertFrom-String helps us. Using the following code, we will simply extract the Type, EventID, and Server: "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 Lync" , "Warning 4356 SharePoint" , "Information 5432 Exchange" | ConvertFrom-String -PropertyNames Type , EventID, Server Following figure shows the output of the code we just saw: Okay, what's interesting in this? It's cool because now your output is a PSCustom object and you can manipulate it as required. "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 SharePoint" , "Warning 4356 SharePoint" , "Information 5432 Exchange" | ConvertFrom-String -PropertyNames Type , EventID, Server | ? {$_.Type -eq 'Error'} An output in Lync and SharePoint has some error logs that needs to be taken care of on priority. Since, requirement varies you can use this cmdlet as required. ConvertFrom-String has a delimiter parameter, which helps us to manipulate the strings as well. In the following example let's use the –Delimiter parameter that removes white space and returns properties, as follows: "Chen V" | ConvertFrom-String -Delimiter "s" -PropertyNames "FirstName" , "SurName" This results FirstName and SurName – FirstName as Chen and SurName as V In the preceding example, we walked you through using template file to manipulate the string as we need. To do this we need to use the parameter –Template Content. Use help ConvertFrom-String –Parameter Template Content Before we begin we need to create a template file. To do this let's ping a web site. 
Ping www.microsoft.com and the output returned is, as shown: Pinging e10088.dspb.akamaiedge.net [2.21.47.138] with 32 bytes of data: Reply from 2.21.47.138: bytes=32 time=37ms TTL=51 Reply from 2.21.47.138: bytes=32 time=35ms TTL=51 Reply from 2.21.47.138: bytes=32 time=35ms TTL=51 Reply from 2.21.47.138: bytes=32 time=36ms TTL=51 Ping statistics for 2.21.47.138: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 35ms, Maximum = 37ms, Average = 35ms Now, we have the information in some structure. Let's extract IP and bytes; to do this I replaced the IP and Bytes as {IP*:2.21.47.138} Pinging e10088.dspb.akamaiedge.net [2.21.47.138] with 32 bytes of data: Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=37ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=35ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=36ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=35ms TTL=51 Ping statistics for 2.21.47.138: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 35ms, Maximum = 37ms, Average = 35ms ConvertFrom-String has a debug parameter using which we can debug our template file. In the following example let's see the debugging output: ping www.microsoft.com | ConvertFrom-String -TemplateFile C:TempTemplate.txt -Debug As we mentioned earlier PowerShell 5.0 is a Preview release and has few bugs. Let's ignore those for now and focus on the features, which works fine and can be utilized in environment. Exploring package management In this topic, we will walk you through the features of package management, which is another great feature of Windows Management Framework 5.0. This was introduced in Windows 10 and was formerly known as OneGet. Using package management we can automate software discovery, installation of software, and inventorying. Do not think about Software Inventory Logging (SIL) for now. As we know, in Windows Software Installation, technology has its own way of doing installations, for example MSI type, MSU type, and so on. This is a real challenge for IT professionals and developers, to think about the unique automation of software installation or deployment. Now, we can do it using package management module. To begin with, let's see the package management Module using the following code: Get-Module -Name PackageManagement The output is illustrated as follows: Yeah, well we got an output that is a binary module. Okay, how to know the available cmdlets and their usage? PowerShell has the simplest way to do things, as shown in the following code: Get-Module -Name PackageManagement The available cmdlets are shown in the following image: Package Providers are the providers connected to package management (OneGet) and package sources are registered for providers. To view the list of providers and sources we use the following cmdlets: Now, let's have a look at the available packages—in the following example I am selecting the first 20 packages, for easy viewing: Okay, we have 20 packages so using Install-Package cmdlet, let us now install WindowsAzurePowerShell on our Windows 2012 Server. We need to ensure that the source are available prior to any installation. To do this just execute the cmdlet Get-PackageSource. If the chocolatey source didn't come up in the output, simply execute the following code—do not change any values. This code will install chocolatey package manager on your machine. 
Once the installation is done we need to restart the PowerShell: Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')) Find-Package -Name WindowsAzurePowerShell | Install-Package -Verbose The command we just saw shows the confirmation dialog for chocolatey, which is the package source, as shown in the following figure: Click on Yes and install the package. Following are the steps represented in the figure that we just saw: Installs the prerequisites. Creates a temporary folder. Installation successful. Windows Server 2012 has .NET 4.5 in the box by default, so the verbose turned up as False for .NET 4.5, which says PowerShell not installed but WindowsAzurePowerShell is installed successfully. If you are trying to install the same package and the same version that is available on your system – the cmdlet will skip the installation. Find-Package -Name PowerShell Here | Install-Package -Verbose VERBOSE: Skipping installed package PowerShellHere 0.0.3 Explore all the package management cmdlets and automate your software deployments. Exploring PowerShell Get-Module PowerShell Get-Module is a module available in Windows PowerShell 5.0 preview. Following are few more modules: Search through modules in the gallery with Find-Module Save modules to your system from the gallery with Save-Module Install modules from the gallery with Install-Module Update your modules to the latest version with Update-Module Add your own custom repository with Register-PSRepository The following screenshot shows the additional cmdlets that are available: This will allow us to find a module from PowerShell gallery and install it in our environment. PS gallery is a repository of modules. Using Find-Module cmdlet we get a list of module available in the PS gallery. Pipe and install the required module, alternatively we can save the module and examine it before installation, to do this use Save-Module cmdlet. The following screenshot illustrates the installation and deletion of the xJEA module: We can also publish module in the PS gallery, which will be available over the internet to others. This is not a great module. All it does is get user-information from an active directory for the same account name—creates a function and saves it as PSM1 in module folder. In order to publish the module in PS gallery, we need to ensure that the module has manifest. Following are the steps to publish your module: Create a PSM1 file. Create a PSD1 file that is a manifest module (also known as data file). Get your NuGet API key from the PS gallery link shared above. Publish your module using the Publish-PSModule cmdlet. Following figure shows modules that are currently published: Following figure shows the commands to publish modules: Summary In this article, we saw that Windows PowerShell 5.0 preview has got a lot more significant features, such as enhancement in PowerShell DSC, cmdlets improvements and new cmdlets, ISE support transcriptions, support class, and using class. We can create Custom DSC resources with easy string manipulations, A new Network Switch module is introduced using which we can automate and manage Microsoft signed network switches. Resources for Article: Further resources on this subject: Windows Phone 8 Applications[article] The .NET Framework Primer[article] Unleashing Your Development Skills with PowerShell [article]


Lambda Functions

Packt
05 Jul 2017
16 min read
In this article, by Udita Gupta and Yohan Wadia, the authors of the book Mastering AWS Lambda, we are going to take things a step further by learning the anatomy of a typical Lambda Function and also how to actually write your own functions. We will cover the programming model for a Lambda function using simple functions as examples, the use of logs and exceptions and error handling. (For more resources related to this topic, see here.) The Lambda programming model Certain applications can be broken down into one or more simple nuggets of code called as functions and uploaded to AWS Lambda for execution. Lambda then takes care of provisioning the necessary resources to run your function along with other management activities such as auto-scaling of your functions, their availability, and so on. So what exactly are we supposed to do in all this? A developer basically has three tasks to perform when it comes to working with Lambda: Writing the code Packaging it for deployment Finally monitoring its execution and fine tuning In this section, we are going to explore the different components that actually make up a Lambda Function by understanding what AWS calls as a programming model or a programming pattern. As of date, AWS officially supports Node.js, Java, Python, and C# as the programming languages for writing Lambda functions, with each language following a generic programming pattern that comprises of certain concepts which we will see in the following sections. Handler The handler function is basically a function that Lambda calls first for execution. A handler function is capable of processing incoming event data that is passed to it as well as invoking other functions or methods from your code. We will be concentrating a lot of our code and development on Node.js; however, the programming model remains more or less the same for the other supported languages as well. A skeleton structure of a handler function is shown as follows: exports.myHandler = function(event, context, callback) { // Your code goes here. callback(); } Where, myHandler is the name of your handler function. By exporting it we make sure that Lambda knows which function it has to invoke first. The other parameters that are passed with the handler function are: event: Lambda uses this parameter to pass any event related data back to the handler. context: Lambda again uses this parameter to provide the handler with the function's runtime information such as the name of the function, the time it took to execute, and so on . callback: This parameter is used to return any data back to its caller. The callback parameter is the only optional parameter that gets passed when writing handlers. If not specified, AWS Lambda will call it implicitly and return the value as null. The callback parameter also supports two optional parameters in the form of error and result where error will return any of the function's error information back to the caller while result will return any result of your function's successful execution. Here are a few simple examples of invoking callbacks in your handler: callback() callback(null, 'Hello from Lambda') callback(error) The callback parameter is supported only in Node.js runtime v4.3. 
You will have to use the context methods in case your code supports earlier Node.js runtime (v0.10.42) Let us try out a simple handler example with a code: exports.myHandler = function(event, context, callback) { console.log("value = " + event.key); console.log("functionName = ", context.functionName); callback(null, "Yippee! Something worked!"); }; The following code snippet will print the value of an event (key) that we will pass to the function, print the function's name as part of the context object and finally print the success message Yippee! Something worked! if all goes well! Login to the AWS Management Console and select AWS Lambda from the dashboard. Select the Create a Lambda function option. From the Select blueprint page, select the Blank Function blueprint. Since we are not configuring any triggers for now, simple click on Next at the Configure triggers page. Provide a suitable Name and Description for your Lambda function and paste the preceding code snippet in the inline code editor as shown: Next, in the Lambda function handler and role section on the same page, type in the correct name of your Handler as shown. The handler name should match with the handler name in your function to work. Remember also to select the basic-lambda-role for your function's execution before selecting the Next button: In the Review page, select the Create function option. With your function now created, select the Test option to pass the sample event to our function. In the Sample event, pass the following event and select the Save and test option: { "key": "My Printed Value!!" } With your code execution completed, you should get a similar execution result as shown in the following figure. The important things to note here are the values for the event, context and callback parameters. You can note the callback message being returned back to the caller as the function executed successfully. The other event and context object values are printed in the Log output section as highlighted in the following figure: In case you end up with any errors, make sure the handler function name matches the handler name that you passed during the function's configuration. Context object The context object is a really useful utility when it comes to obtaining runtime information about your function. The context object can provide information such as the executing function's name, the time remaining before Lambda terminates your function's execution, the log name and stream associated with your function and much more. The context object also comes with its own methods that you can call to correctly terminate your function's executions such as context.succed(), context.fail(), context.done(), and so on. However, post April 2016, Lambda has transitioned the Node.js runtime from v0.10.42 to v4.3 which does support these methods however encourages to use the callback() for performing the same actions. Here are some of the commonly used context object methods and properties described as follows: getRemainingTimeInMillis(): This property returns the number of milliseconds left for execution before Lambda terminates your function. This comes in really handy when you want to perform some corrective actions before your function exits or gets timed out. callbackWaitsForEmptyEventLoop: This property is used to override the default behaviour of a callback() function, such as to wait till the entire event loop is processed and only then return back to the caller. 
If set to false, this property causes the callback() function to stop any further processing in the event loop even if there are any other tasks to be performed. The default value is set to true. functionName: This property returns the name of the executing Lambda function. functionVersion: The current version of the executing Lambda function. memoryLimitInMB: The amount of resource in terms of memory set for your Lambda function. logGroupName: This property returns the name of the CloudWatch Log Group that stores function's execution logs. logStreamName: This property returns the name of the CloudWatch Log Stream that stores function's execution logs. awsRequestID: This property returns the request ID associated with that particular function's execution. If you are using Lambda functions as mobile backend processing services, you can then extract additional information about your mobile application using the context of identity and clientContext objects. These are invoked using the AWS Mobile SDK. To learn more, click here http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html. Let us look at a simple example to understand the context object a bit better. In this example, we are using the context object callbackWaitsForEmptyEventLoop and demonstrating its working by setting the object's value to either yes or no on invocation: Login to the AWS Management Console and select AWS Lambda from the dashboard. Select the Create a Lambda function option. From the Select blueprint page, select the Blank Function blueprint. Since we are not configuring any triggers for now, simple click on Next at the Configure triggers page. Provide a suitable Name and Description for your Lambda function and paste the following code in the inline code editor: exports.myHandler = (event, context, callback) => { console.log('remaining time =', context.getRemainingTimeInMillis()); console.log('functionName =', context.functionName); console.log('AWSrequestID =', context.awsRequestId); console.log('logGroupName =', context.logGroupName); console.log('logStreamName =', context.logStreamName); switch (event.contextCallbackOption) { case "no": setTimeout(function(){ console.log("I am back from my timeout of 30 seconds!!"); },30000); // 30 seconds break break; case "yes": console.log("The callback won't wait for the setTimeout() n if the callbackWaitsForEmptyEventLoop is set to false"); setTimeout(function(){ console.log("I am back from my timeout of 30 seconds!!"); },30000); // 30 seconds break context.callbackWaitsForEmptyEventLoop = false; break; default: console.log("The Default code block"); } callback(null, 'Hello from Lambda'); }; Next, in the Lambda function handler and role section on the same page, type in the correct name of your Handler as shown. The handler name should match with the handler name in your function to work. Remember also to select the basic-lambda-role for your function's execution. The final change that we will do is change the Timeout value of our function from the default 3 seconds to 1 minute specifically for this example. Click Next to continue: In the Review page, select the Create function option. With your function now created, select the Test option to pass the sample event to our function. In the Sample event, pass the following event and select the Save and test option. 
You should see a similar output in the Log output window as shown: With the contextCallbackOption set to yes, the function does not wait for the 30 seconds setTimeout() function and will exit, however it prints the function's runtime information such as the remaining execution time, the function name, and so on. Now set the contextCallbackOption to no and re-run the test and verify the output. This time, you can see the setTimeout() function getting called and verify the same by comparing the remaining time left for execution with the earlier test run.   Logging You can always log your code's execution and activities using simple log statements. The following statements are supported for logging with Node.js runtime: console.log() console.error() console.warn() console.info() The logs can be viewed using both the Management Console as well as the CLI. Let us quickly explore both the options. Using the Management Console We have already been using Lambda's dashboard to view the function's execution logs, however the logs are only for the current execution. To view your function's logs from the past, you need to view them using the CloudWatch Logs section: To do so, search and select CloudWatch option from the AWS Management Console. Next, select the Logs option to display the function's logs as shown in the following figure: You can use the Filter option to filter out your Lambda logs by typing in the log group name prefix as /aws/lambda.   Select any of the present Log Groups and its corresponding Log Stream Name to view the complete and detailed execution logs of your function. If you do not see any Lambda logs listed out here it is mostly due to your Lambda execution role. Make sure your role has the necessary access rights to create the log group and log stream along with the capability to put log events. Using the CLI The CLI provides two ways using which you can view your function's execution logs: The first is using the Lambda function's invoke command itself. The invoke command when used with the --log-type parameter will print the latest 4 KB of log data that is written to CloudWatch Logs. To do so, first list out all available functions in your current region using the following command: # aws lambda list-functions Next, pick a Lambda function that you wish to invoke and substitute that function's name and payload with the following example snippet: # aws lambda invoke --invocation-type RequestResponse --function-name myFirstFunction --log-type Tail --payload '{"key1":"Lambda","key2":"is","key3":"awesome!"}' output.txt The second way is by using a combination of the context() object and the CloudWatch CLI. You can obtain your function's log group name and the log stream name using the context.logGroupName and the context.logStreamName. Next, substitute the data gathered from the output of these parameters in the following command: # aws logs get-log-events --log-group-name "/aws/lambda/myFirstFunction" --log-stream-name "2017/02/07/[$LATEST]1ae6ac9c77384794a3202802c683179a" If you run into the error The specified log stream does not exist in spite of providing correct values for the log group name and stream name; then make sure to add the "" escape character in the [$LATEST] as shown. Let us look at a few options that you can additionally pass with the get-log-events command: --start-time: The start of the log's time range. All times are in UTC. --end-time: The end of the log's time range. All times are in UTC. --next-token: The token for the next set of items to return. 
Alternatively, if you don't wish to use the context object in your code, you can still find the log group name and log stream name by using a combination of the following commands:

# aws logs describe-log-groups --log-group-name-prefix "/aws/lambda/"

The describe-log-groups command will list all the log groups that are prefixed with /aws/lambda. Make a note of your function's log group name from this output. Next, execute the following command to list the log stream names associated with that log group:

# aws logs describe-log-streams --log-group-name "/aws/lambda/myFirstFunction"

Make a note of the log stream name and substitute it in the next and final command to view the log events of that particular log stream:

# aws logs get-log-events --log-group-name "/aws/lambda/myFirstFunction" --log-stream-name "2017/02/07/[\$LATEST]1ae6ac9c77384794a3202802c683179a"

Once again, remember to escape the $ in [$LATEST] with a backslash (\) to avoid the The specified log stream does not exist error.
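If you would rather not copy names between commands by hand, the two lookups can be chained together in a small shell sketch. This is a convenience sketch rather than the prescribed workflow; it relies only on standard AWS CLI options (--order-by, --descending, --max-items, --query, --output) and reuses the illustrative function name myFirstFunction from earlier:

LOG_GROUP="/aws/lambda/myFirstFunction"
# Fetch the name of the most recently active log stream in the group
LOG_STREAM=$(aws logs describe-log-streams \
    --log-group-name "$LOG_GROUP" \
    --order-by LastEventTime --descending \
    --max-items 1 \
    --query 'logStreams[0].logStreamName' \
    --output text)
# Because the stream name is passed through a variable, the $ in [$LATEST]
# is not re-expanded by the shell, so no manual escaping is needed here
aws logs get-log-events \
    --log-group-name "$LOG_GROUP" \
    --log-stream-name "$LOG_STREAM"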
With the logging done, let's move on to the next piece of the programming model: exceptions and error handling.

Exceptions and error handling

Functions have the ability to notify AWS Lambda if they fail to execute correctly. This is done by the function passing an error object to Lambda, which converts it to a string and returns it to the user as an error message. The error messages that are returned also depend on the invocation type of the function: if your function performs a synchronous execution (the RequestResponse invocation type), the error is returned to the user and displayed on the Management Console as well as in the CloudWatch Logs. For asynchronous executions (the Event invocation type), Lambda does not return anything; instead, it logs the error messages to CloudWatch Logs.

Let us examine a function's error and exception handling capabilities with a simple example of a calculator function that accepts two numbers and an operand as the test event during invocation:

Log in to the AWS Management Console and select AWS Lambda from the dashboard.
Select the Create a Lambda function option.
From the Select blueprint page, select the Blank Function blueprint.
Since we are not configuring any triggers for now, simply click on Next at the Configure triggers page.
Provide a suitable Name and Description for your Lambda function and paste the following code in the inline code editor:

exports.myHandler = (event, context, callback) => {
    console.log("Hello, Starting the " + context.functionName + " Lambda Function");
    console.log("The event we pass will have two numbers and an operand value");
    // operand can be +, -, /, *, add, sub, mul, div
    console.log('Received event:', JSON.stringify(event, null, 2));
    var error, result;
    if (isNaN(event.num1) || isNaN(event.num2)) {
        console.error("Invalid Numbers"); // different logging
        error = new Error("Invalid Numbers!"); // Exception Handling
        return callback(error); // return so we don't fall through to the switch
    }
    switch (event.operand) {
        case "+":
        case "add":
            result = event.num1 + event.num2;
            break;
        case "-":
        case "sub":
            result = event.num1 - event.num2;
            break;
        case "*":
        case "mul":
            result = event.num1 * event.num2;
            break;
        case "/":
        case "div":
            if (event.num2 === 0) {
                console.error("The divisor cannot be 0");
                error = new Error("The divisor cannot be 0");
                return callback(error, null); // return to avoid invoking the callback twice
            } else {
                result = event.num1 / event.num2;
            }
            break;
        default:
            return callback("Invalid Operand");
    }
    console.log("The Result is: " + result);
    callback(null, result);
};

Next, in the Lambda function handler and role section on the same page, type in the correct name of your Handler. The handler name should match the handler name in your function for it to work. Remember also to select the basic-lambda-role for your function's execution. Leave the rest of the values at their defaults and click Next to continue.
On the Review page, select the Create function option.
With your function now created, select the Test option to pass a sample event to the function. In the Sample event, pass the following event and select the Save and test option:

{
  "num1": 3,
  "num2": 0,
  "operand": "div"
}

You should see a similar output in the Log output window as shown.

So what just happened there? Well, first, we can print simple, user-friendly error messages with the help of the console.error() statement. Additionally, we can also print the stackTrace array of the error by passing the error object in the callback(), as shown:

error = new Error("The divisor cannot be 0");
callback(error, null);

You can view the custom error message and the stackTrace JSON array from both the Lambda dashboard and the CloudWatch Logs section. Next, give this code a couple of tries with different permutations and combinations of events and check out the results. You can even write your own custom error messages and error handlers that perform some additional task when an error is returned by the function. With this, we come to the end of a function's generic programming model and its components.

Summary

We deep dived into the Lambda programming model and understood each of its subcomponents (handlers, context objects, errors, and exceptions) with easy-to-follow examples.