Advanced Node.js Success Guide
Optimize, Deploy, and Maintain an
Enterprise-Scale Node.js Application
Introduction:
Charting Your Path to Real-World Success Using Node.js
Less than a decade after its initial release, Node.js has become a pivotal technology for building
enterprise-grade web applications. At the time of its release, Node.js addressed a growing need
for a way to build fast, scalable server-side applications. Today, the explosion of dynamic,
responsive, data-driven online content and applications has turned that need into an absolute necessity.
If you’re reading this introduction, you probably don’t need a detailed walkthrough of what
Node.js can do, how to get started using it, or why it has become a core server-side technology
within some of the world’s biggest corporations. In fact, we’re going to avoid these topics
entirely. There are already many excellent resources available to introduce novice developers
to Node.js and the world of server-side JavaScript coding, so there’s no point in covering the
same ground.
Where we do want to focus our attention — and yours — is on a very different and very important
topic: How to launch and run an enterprise-scale product, service, or brand built on Node.js.
This is a topic that has not, in our opinion, received the attention and expert insight it
deserves. In most cases, this post-launch journey is far longer and has a bigger impact than
the development process itself. It is also where a Node.js application will succeed or fail at
translating promises and potential into real-world value, relevance, and business impact. At
different points in this journey, the leader of a development team will call upon an arsenal of
skills and strategies: maximizing project efficiency, scaling and managing growth, anticipating
and addressing security risks, balancing cost and complexity, and many other tasks besides.
In the pages that follow, we'll give you a practical foundation for success during the critical
first three months or so of an enterprise-scale Node.js journey. This time span covers
the period from pre-production planning to continuous deployment and testing — often in an
environment that demands massively scaling your codebase, team, and audience.
In addition to covering the key tools and techniques you’ll need at various points in this journey,
we’ll offer guidance in the form of Node.js best practices and case studies — building your
success upon other teams’ experiences.
This eBook isn't meant to cover every one of these topics exhaustively or to serve as a
complete Node.js technical reference. Our goal is to give readers enough context and detail
to understand the issues, gain basic competence in dealing with them in real-world situations,
and set themselves up for long-term success in building and deploying Node.js applications.
Before you get started, we recommend reviewing the table of contents to get a “lay of the land”
overview. Good luck, and enjoy the process of mastering the entire Node.js journey.
Chapter 2
Staying the Course: The First 24 Hours of Your Node.js Deployment
Two Keys to Surviving Day One — and Beyond
Common Day One Application Issues
Looking Ahead: Time to Hit the Open Road

Chapter 3
Ongoing Management
Memory Leaks
Managing Node.js Concurrency
Monitoring
To Conclude
Contributing Authors
Preparing for a release is always a critical point in any application development journey, and
that's certainly the case for Node.js projects. It's your team's final opportunity to find and fix
issues before they impact your deployment process, your end users, or the business itself. While
errors can and will impact your Node.js production environment just as they will any other type
of software, our goal is to outline a process that minimizes the risk of avoidable issues while
also striking the right balance between efficiency, cost, and use of team resources.

In this chapter, we'll walk you through a pre-release process with the following areas of emphasis:

• Optimizing Your Code
• Best Practices for Error Handling
• Confirming Your Code Meets Security Requirements
• Configuring for a Production Environment
• Deployment Considerations

Along the way, we'll share our recommendations on tools, summarize key best practices, alert you
to common issues that may impact your codebase or deployment process, and link out to additional
information when appropriate.
1. Use JavaScript comments to embed the configuration information directly into the file.
2. Use a JavaScript, JSON, or YAML file to specify the configuration for the directory and
its subdirectories. This can be in the form of an .eslintrc file, or as an eslintConfig field in
the package.json file. ESLint will scan for and read the underlying file, or you can specify
the configuration file on the command line.
The following example shows a JSON-format ESLint configuration taken from an .eslintrc file.
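The original example file is not reproduced here; the sketch below shows what a typical JSON-format .eslintrc might contain. The specific environments and rules chosen are illustrative, not prescribed by this guide.

```json
{
  "env": {
    "node": true,
    "es6": true
  },
  "extends": "eslint:recommended",
  "rules": {
    "semi": ["error", "always"],
    "eqeqeq": "error",
    "no-console": "off"
  }
}
```

ESLint discovers this file automatically when it lints the directory, or you can point to it explicitly with the `--config` command-line flag.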
The following best practices are taken from an excellent article on Node.js error handling:

• Given functions should deliver operational errors either synchronously (using throw) or
asynchronously (with a callback or event emitter), but not both.
• When writing a new function, clearly document its arguments, their types, and any
constraints (e.g., "Must be a valid IP address"). This documentation should also include any
operational errors that may occur, and how those errors are delivered.
• Missing or invalid arguments are programmer errors, and you should always use throw when
that occurs.
• Use the standard Error class and its related properties when delivering errors. Add as much
useful information as possible in separate properties.

Handling Errors From Promises/Catch
The catch method assists error handling during the composition of a promise. An example is
provided below:
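A minimal sketch of the promise `.catch` pattern, following the best practices above (throw synchronously for programmer errors, reject asynchronously for operational errors). `fetchUser` is a hypothetical lookup function invented for this example.

```javascript
// fetchUser is a hypothetical lookup function that rejects with an
// operational error when no user matches the given id.
function fetchUser(id) {
  if (typeof id !== 'number') {
    // Programmer error: invalid argument, so throw synchronously.
    throw new TypeError('id must be a number');
  }
  return new Promise(function (resolve, reject) {
    if (id === 42) {
      resolve({ id: 42, name: 'Ada' });
    } else {
      // Operational error: delivered asynchronously via rejection.
      reject(new Error('No user found with id ' + id));
    }
  });
}

fetchUser(7)
  .then(function (user) { console.log(user.name); })
  .catch(function (err) {
    // Any rejection in the chain above lands here.
    console.error('Lookup failed:', err.message);
  });
```

Note that an exception thrown inside a `.then` handler is also routed to the nearest `.catch` further down the chain.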
As a rule, however, every Node.js application should address the following security
considerations and threat categories before it enters production:

• Clickjacking: a category of attacks that trick users into launching unintended events in the
user interface
• Content Security Policy: instructs the client browser which locations and related resources
are allowed to be loaded in the browser
• CORS: Cross-Origin Resource Sharing
• CSRF: Cross-Site Request Forgery
• DoS: Denial-of-Service attacks
• P3P: Platform for Privacy Preferences
• Socket Hijacking
• Strict Transport Security
• XSS: Cross-Site Scripting

TrustProxy Setting
If you are running your application behind a proxy server, you will need to use the TrustProxy
setting to obtain the IP address of the requesting client browser. You can set TrustProxy to one
of the following types:

• Boolean
• IP addresses
• Number
• Function

Enabling TrustProxy also turns on reverse proxy support to assist in redirecting traffic to
HTTPS instead of HTTP. The req.secure variable can also help you send traffic via the HTTPS
protocol. The following is an example of how you can use the req.secure variable to accomplish this:
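As a sketch of the pattern described above: assuming an Express-style app running behind a proxy with trust proxy enabled (e.g., `app.set('trust proxy', 1)`), middleware like the following uses `req.secure` to redirect plain-HTTP requests to HTTPS. The function name is illustrative.

```javascript
// HTTPS-redirect middleware built on req.secure. With trust proxy
// enabled, req.secure reflects the original client connection rather
// than the proxy hop.
function httpsRedirect(req, res, next) {
  if (req.secure) {
    // The request already arrived over HTTPS; continue as normal.
    return next();
  }
  // Otherwise, send the client to the HTTPS version of the same URL.
  res.redirect('https://' + req.headers.host + req.originalUrl);
}

module.exports = httpsRedirect;
```

In an Express app this would typically be registered with `app.use(httpsRedirect)` ahead of the route handlers.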
TLS/SSL
You should ensure that any cookies sent by your application are secured at this layer. The
secure attribute of a cookie instructs the browser to send that cookie only over an HTTPS
connection.
Node.js Environment Variables

Production API Keys and Credentials
The use of environment variables allows you to store API keys and related credentials, rather
than having to assign a global variable for them. In Node.js, you can access environment
variables through the process.env property.

Heroku is a good tool for creating and managing environment variables for Node.js applications.
You will need to configure the Node Package Manager (npm) within Heroku before setting these
variables. npm can read any configured environment variable, so long as its name begins with
NPM_CONFIG.

Setting the Production Node Environment
To ensure that the application knows it is running in a production environment, you must set
NODE_ENV=production. This ensures that the application pulls the production configuration when
running in your production environment.

Here are a few providers available to host your Node.js application:

• Heroku
• Microsoft Azure
• Google Cloud Platform
• DigitalOcean
• Amazon Web Services

Review Load Balancing Options
The Node.js cluster module can be used to enable application load balancing. There is also a
"sticky session" module for Node.js that takes incoming connections to the application and
routes them based on the originating IP address.
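The process.env access described above can be sketched as follows. PAYMENT_API_KEY is an illustrative variable name, not a standard one.

```javascript
// Read production credentials and the environment mode from process.env.
const apiKey = process.env.PAYMENT_API_KEY;
const isProduction = process.env.NODE_ENV === 'production';

if (isProduction && !apiKey) {
  // Surface the missing credential early instead of failing mid-request.
  console.warn('PAYMENT_API_KEY is not set; API calls will fail');
}

console.log('Running in ' + (isProduction ? 'production' : 'development') + ' mode');
```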
Process Clustering
You may want to take advantage of the cluster module to fork multiple worker processes and
spread load across several CPU cores. To set up process clustering, start by requiring the
cluster module as follows:

var cluster = require('cluster');
1. Load balancer
2. Database
Dealing successfully with these issues can ensure a much faster and more efficient deployment process — as well as much better relationships with other team
members involved in the process.
For private npm packages, you will need an npm credential on the server(s) where the npm install
command will run. This credential will give permission for private packages to be installed.

Deploy and Run Using Node.js Supervisors or Managers
As discussed in the section on error handling, use Node.js supervisors and managers to
automatically detect and handle failures.

• RAM overflow: This typically occurs when processes run out of RAM. However, this can also
occur if a Node.js process gets overwhelmed.
• Memory leaks, with or without an overflow.
• Runaway recursive function: A message will state "Maximum call stack size exceeded."
• CPU lockups: A process is locked up because the CPU is overwhelmed or caught in an unending
blocking loop. The culprit of this loop can usually be found in a 'while(true) {}' statement
in the code.
• No server responses: This occurs if you forget to call 'res.send()' in the code. The default
TCP timeout in Node.js applications is 120 seconds.
No matter how you see it, deploying an enterprise application can be harrowing. According to one
recent survey, 30% of all application deployments fail. Another survey reported that 77% of
organizations have software production release problems. Clearly, anyone tasked with deploying
an application should be ready for things to go wrong — perhaps badly wrong.

We'd like to tell you that the Node.js deployment process will be less challenging than these
industry norms. And certainly, a robust pre-production process can help to minimize the impact
of bugs, configuration failures, and other avoidable problems.

Even the best preparation, however, can't keep Murphy's Law from governing your Node.js
deployments: Anything that can go wrong with a production release will go wrong. Many of those
problems — avoidable or not — will surface during the first 24 hours after an application enters
production use.

In this chapter, we'll discuss some of the most common examples of these "Day One" deployment
problems — particularly those that result in crashes or other high-impact issues. We'll also
touch on the use of application monitoring to help you detect and manage problems more
effectively.

As this chapter's title suggests, it's important to have patience and stay the course during
this time, work through problems as they appear, and pay special attention to our advice (Rule
One) about sticking to a process for pushing out fixes.
1. Rule One for Production Emergencies: Stick to Your Process Guns
Application crashes and other visible, potentially high-impact failures can trigger a panic
response, in which fixing the problem becomes the only thing that matters.

This is a recipe for disaster. The solution is always — always — to follow your established
protocol for pushing changes to production.

Keep in mind that "simple" changes often become less simple when you have a chance to think
about them. In a clustered environment, for example, making a direct change will force you to
mimic the exact same change on every server. Even a random missed keystroke can turn into a
situation much worse than the original problem.

There's no mistake worse than one that's both self-inflicted and avoidable. Stick to a process
that ends with a solution — not a crisis.

2. Application Monitoring: Your New Best Friend
It's hard to eliminate unpleasant surprises completely; they come with the territory when you
build business applications. What you can do is minimize the gap between when a problem occurs —
or even when warning signs of a problem appear — and when you learn about it.

A good application monitoring package gives you this capability, ensuring that you learn about a
problem with an instant notification instead of messages from upset users (or your boss).

Many monitoring solutions are available that work well with Node.js applications — noteworthy
examples include AppDynamics' Unified Monitoring. Include monitoring in your standard deployment
process, and ensure it's in place during the critical first 24 hours in production.
2. Exceeding API Rate Limits
Rate limiting is a common practice among web developers, since it gives them a very effective
way to restrict a variety of activities within defined parameters. Rate limiting can be used to
restrict the number of user queries, as an event-throttling tactic, or as a way to limit API
requests. Caching responses can reduce the number of API calls your application consumes:

• A good cache adapts to related search results, responding wherever possible directly from
cache and thereby reducing the number of API calls required.
• Clever design can also enhance the benefits of implementing caching. For example, if you know
that there is a hard limit to the maximum number of responses from a given search string and
the data footprint is not excessive, why not pre-fetch the full set of responses? That way you
can reduce the number of API calls even further by having all possible responses already
sitting in cache.
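The caching idea above can be sketched with a tiny in-memory response cache. `callApi` is a caller-supplied function that performs the real request, and the 60-second TTL is illustrative.

```javascript
// Cache API responses by key so repeated lookups within the TTL window
// do not consume additional API calls.
const cache = new Map();
const TTL_MS = 60 * 1000;

async function cachedCall(key, callApi) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.value; // served from cache; no API call consumed
  }
  const value = await callApi(key);
  cache.set(key, { value: value, at: Date.now() });
  return value;
}
```

A production cache would also bound its size and evict stale entries; libraries exist for this, but the principle is the same.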
If caching doesn't ultimately provide the benefits you were expecting, then do what's necessary
to keep your application available and as functional as possible — which means, in this case,
temporarily disabling the feature(s) causing the problem and contacting the provider for help
with a permanent solution.

3. Troubleshooting WebSocket Issues
At this point, most developers working with Node.js will be acquainted with WebSockets and the
benefits they offer versus plain HTTP. As of this writing, the most popular WebSocket library
for Node.js is Socket.IO, which also benefits from being relatively easy to implement and use.

Besides Socket.IO, there are many different WebSocket libraries available for Node.js today.
While each has its own API, they're all built on top of TCP and do basically the same things.

No matter which WebSocket library you use, any problem that impedes communication between your
server and clients is obviously an urgent one. The following procedure gives you a quick and
easy way to reproduce a suspected WebSocket issue:

1. Create a WebSocket server in its own Node.js process.
2. Connect to this server using the WebSocket client. This client should be running in a
separate process.
3. End the server process created in the first step.
4. Check the status code on the client. If a close code of 1000 (normal closure) is reported,
then something is wrong, as the browser would normally report code 1006 (abnormal closure) in
this circumstance.

5. File Upload Issues
The server can be overwhelmed by upload requests immediately after deployment to production.
This occurs when files are being uploaded to the local file system. This is more of an
operational issue than a bug or other software-related failure, but it's still a significant
source of unplanned downtime risk.

If an error occurs during file uploads, however, there's another potential issue to consider.
Typically, a body parser such as Skipper can handle file uploads without a hiccup. An error
message suggests a disconnect in that process.

6. Denial-of-Service (DoS) Attacks
Denial-of-service attacks pose a complex problem involving multiple layers of protection across
the networking stack. There isn't much that can be done at the API layer, but configuring Sails
appropriately can mitigate certain types of DoS attacks:

• Sails sessions can be configured to use a separate session store (e.g., Redis), so your
application can run without relying on the memory state of any one API server. This allows you
to distribute load by deploying multiple copies of your Sails app across as many servers as
required. You do need to front your servers with a load balancer, configured to ensure incoming
requests are always directed to the least busy server. This significantly reduces the risk of
one overloaded server becoming a single point of failure.
• If you are not using the long-polling transport enabled in sails.config.sockets, Socket.IO
connections can be configured to use a separate socket store (e.g., Redis). This eliminates the
need for sticky sessions at the load balancer and removes the possibility of would-be attackers
directing their attacks against a specific server.
Even better news is that you’re also over the hump with your current Node.js deployment.
From here, you’ll be dealing with a more stable and reliable application — and that, in turn,
frees you to focus on ways to improve your application’s performance and to upgrade your
own process for building, testing, and deploying Node.js applications.
For the first time since beginning this process, you’ll probably feel a real sense of
momentum and progress — getting your Node.js project out of traffic and onto the open
road. Let’s see what this means in terms of your next-step projects and priorities — our
topic for Chapter Three.
To gain more control over your application's garbage collector, you can pass flags to the
underlying JavaScript engine in your Procfile:
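For example, a Heroku-style Procfile entry might pass V8 memory flags like this; the flag values shown are illustrative rather than recommendations:

```
web: node --optimize_for_size --max_old_space_size=460 server.js
```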
Node.js apps must fork multiple processes to maximize their available resources, an approach
referred to as "clustering." It is supported by the Node.js Cluster API, which you can invoke
directly in your app. With the Cluster API you can optimize your app's performance, and the
Heroku Node.js buildpack provides environment variables to help.
If you need more information, then check out the Heroku Dev Center, which is a great
source of articles on tuning and managing your Node.js app, including how to manage
Node.js concurrency.
To Conclude
Hopefully, this short eBook has provided helpful insight into
a best practice approach to deploying and managing your
Node.js applications. Like containers and microservices,
Node.js best practices continue to evolve, but you’re already
off to a great start and well-placed for success in the rest of
your Node.js journey.
Mike McNeil is the founder of Sails.js. He developed the Sails framework to help his team build
scalable Node.js projects for startup and enterprise customers. Since its release in 2012, Sails
has become one of the most widely used web application frameworks in the world.