Implementing Microsoft SharePoint 2019 An Expert
Microsoft
SharePoint 2019
Lewin Wanzer
Angel Wood
BIRMINGHAM—MUMBAI
Implementing Microsoft SharePoint 2019
Copyright © 2020 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, without the prior written permission of the
publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the
information presented. However, the information contained in this book is sold without
warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers
and distributors, will be held liable for any damages caused or alleged to have been caused
directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies
and products mentioned in this book by the appropriate use of capitals. However, Packt
Publishing cannot guarantee the accuracy of this information.
ISBN 978-1-78961-537-1
www.packt.com
To God, for giving me the opportunity to help others, to my mother, Janie Wanzer, and to the
memory of my father, Clayton, for their sacrifices and for instilling the values of courage and
perseverance. To my wife, Elsi, for being my loving partner, along with all my kids, old and
young, who supported me through this effort.
– Lewin Wanzer
I would like to thank God for the goodness he has shown to me and my family. I am grateful
to have this opportunity to share this knowledge with the technical community and beyond.
I dedicate this book to the memory of my mother, Jaqueline Price, and my grandmother,
Mary Price, for instilling in me the belief that strong, smart women can accomplish anything.
Lastly, thanks to my son, Jaire, for his support and for sharing me with this book as I spent
many hours locked away writing.
– Angel Wood
Packt.com
Subscribe to our online digital library for full access to over 7,000 books and videos, as
well as industry-leading tools to help you plan your personal development and advance
your career. For more information, please visit our website.
Why subscribe?
• Spend less time learning and more time coding with practical eBooks and Videos
from over 4,000 industry professionals
• Improve your learning with Skill Plans built especially for you
• Get a free eBook or video every month
• Fully searchable for easy access to vital information
• Copy and paste, print, and bookmark content
Did you know that Packt offers eBook versions of every book published, with PDF and
ePub files available? You can upgrade to the eBook version at packt.com and as a print
book customer, you are entitled to a discount on the eBook copy. Get in touch with us at
customercare@packtpub.com for more details.
At www.packt.com, you can also read a collection of free technical articles, sign up
for a range of free newsletters, and receive exclusive discounts and offers on Packt books
and eBooks.
Contributors
About the authors
Lewin Wanzer is a seasoned SharePoint architect with over 30 years of IT experience,
of which he has spent 16 managing SQL Server and SharePoint environments. As an
architect, he specializes in governance, planning, taxonomy, design, infrastructure,
implementation, migration, maintenance, and support for SharePoint Enterprise and
Microsoft Cloud environments. He also has expertise in IT management, business
analysis, and process development, designing solutions and managing large projects,
bringing together many years of hands-on experience and knowledge.
It has been a long ride with SharePoint, but I would like to personally thank some key people
and entities that have helped my career when I needed it most: Microsoft, for the opportunity
to work as a PFE supporting SharePoint from 2011 to 2013, and the PFE who told me I
would never make it at Microsoft; Marion and Dallas Bishoff, for giving me an opportunity
to push myself to be a consultant; the teachers I have had throughout the years of schooling,
who have been positive and negative with me; finally, thanks to those who shared tech
concepts without thought or apprehension. These many thanks go to ALL of you!
Preface
1
Understanding Your Current State
Technical requirements 18
Warning – deprecated features 18
Deprecated features 20
List of removed features 25
New features 27
Service application enhancements 28
Additional documentation links for central administration 28
Communication sites 29
Fast Site Creation 29
Increased storage file size in SharePoint document libraries 30
Modern lists and libraries 30
Modern sharing experiences 30
Modern search experiences 31
Modern team sites 31
Sharing using modern internet information APIs 31
SharePoint home page 32
Creating sites from the home page 32
Site creation support for AAM zones 32
SMTP authentication when sending emails 33
Use of # and % in file and folder names 33
Syncing files with the OneDrive sync client (NGSC) 33
PowerShell enhancements 33
New health analyzer rules 35
Improved features 35
Server support updates 37
Assessing the environment 39
Other collaborative tools 42
Best practices 44
Windows Server and VMs 45
SQL Server 46
SharePoint Server 46
Governance 48
Stakeholders 49
Why governance is important 50
Determining an approach 51
Creating a governance plan 51
Summary 60
Questions 60
2
Planning and Architecture
Technical requirements 62
Planning – overview 63
SharePoint licensing 78
Battling contingency 78
3
Creating and Managing Virtual Machines
Technical requirements 108
Creating hosts and VMs 108
Server feature comparisons 110
Defining needed server roles 113
Installing Windows Server 2019 – host configuration 116
Configuring the network and server names 120
Configuring network and internet access 122
Adding the server to the domain 125
Configuring Hyper-V Manager on Windows Server 2019 128
Hyper-V advanced post-installation options 134
Creating VMs 135
Creating our first VM 136
Summary 148
Questions 149
4
Installation Concepts
Technical requirements 152
Installation updates 152
List of configuration details 154
Configuring SQL Server 2017 154
5
Farm and Services Configuration
Technical requirements 196
Configuring SQL Server services 196
SQL properties 197
Configuring SharePoint Services 202
Setting server logging locations 202
Creating the state service 207
Creating SharePoint logging services 207
Antivirus and security configurations 214
Antivirus settings 214
Web Part Security 218
Block File Types 220
Creating service applications 221
Registering service accounts 223
SharePoint MinRole resources 239
Understanding the Distributed Cache service 242
Summary 243
Questions 244
6
Finalizing the Farm – Going Live
Technical requirements 246
Finalizing loose ends 246
Search and the User Profile service 249
Configuring your connection to Active Directory 256
Creating web applications and associations 259
Summary 296
Questions 296
7
Finalizing the Farm to Go Live – Part II
Technical requirements 298
Understanding workflows and Workflow Manager 298
Installing Workflow Manager 299
choice 321
Types of migration 324
Migration tools 327
Scheduling migrations 331
8
Post-Implementation Operations and Maintenance
Technical requirements 355
Post-implementation 355
DNS cutover and on-premise migrations 361
First-day blues 380
Understanding team dynamics 382
Incident management concepts 385
Exploring troubleshooting tools 387
Why is a development environment important? 394
Why do you need a test environment? 395
Production – why you should be cautious 395
Summary 396
Questions 397
9
Managing Performance and Troubleshooting
Technical requirements 400
Performance overview 400
Monitoring in Central Administration 433
Monitoring as a Site Collection Admin 434
10
SharePoint Advanced Reporting and Features
Technical requirements 454
Advanced reporting and features 454
Connecting to a SharePoint On-Premises environment 487
Power Automate Desktop 488
11
Enterprise Social Networking and Collaboration
Technical requirements 506
Social features in SharePoint 2019 506
Microsoft's direction in enterprise social networking 507
Microsoft Teams overview 508
Microsoft Teams in the age of the pandemic 510
SharePoint and Microsoft Teams 511
Tasked to suddenly roll out teams – now what? 513
Deployment 522
Network considerations 523
Communication 524
Using Microsoft Teams 525
Best practices for organizing Teams 541
Limits and restriction considerations 542
Configuring an auditorium solution with Microsoft Teams 542
Creating a live event 548
Summary 557
Questions 557
Further reading 559
12
SharePoint Framework
Technical requirements 562
Developer essentials 562
solutions 584
Visual Studio setup 590
Chapter 4, Installation Concepts, teaches you, step by step, how best to install SQL Server
to support SharePoint Server 2019, and then provides a step-by-step guide to completing
the SharePoint Server 2019 installation itself.
Chapter 5, Farm and Services Configuration, covers SharePoint Server 2019 logging and
services, along with instructions on how to configure the services within the farm to
achieve the best configuration. We also review MinRoles and how they can be used
strategically, along with a good review of the Distributed Cache service.
Chapter 6, Finalizing the Farm – Going Live, sees you do some final configuration of
services, specifically the Search and User Profiles services. You will create web applications
and learn how to extend web applications using zoning and alternate access mappings.
Chapter 7, Finalizing the Farm to Go Live – Part II, walks you step by step through the
installation and teaches you more about integrated applications such as Workflow
Manager. We talk more in this chapter about authentication, software and hardware load
balancing, and migration concepts, and complete some final checks before we release our
environment to our user community.
Chapter 8, Post-Implementation Operations and Maintenance, is dedicated to day-one
scenarios and discusses how to prepare and what work needs to happen on the cutover
weekend. We pose many common questions at the beginning of this chapter that are
answered with our experience from the lessons we've learned to help prepare you.
Applying SharePoint updates and other maintenance is also covered in this chapter.
Chapter 9, Managing Performance and Troubleshooting, reviews the roles that clients and
developers play in maintaining the performance and stability of your SharePoint 2019
farm. We also discuss administrative responsibilities around farm performance and look
at which areas to focus on for the best possible results. We cover the troubleshooting tools
for each of these areas and provide some scenarios and fixes from our own experience.
Chapter 10, SharePoint Advanced Reporting and Features, looks at deprecated reporting
features, such as Excel Services and BI, and what new features are offered as replacements
in the configuration of SharePoint Server 2019. We also dive into connecting to the cloud
and the options available once hybrid connectivity is in place. Since Power Tools are
important, we go through each tool that includes Azure integration and how they can be
used with your on-premises environment.
Chapter 11, Enterprise Social Networking and Collaboration, is dedicated to showing you
Microsoft Teams and how it has changed how we communicate in the new cloud and
SharePoint Server 2019 environment. We also show you some other cloud social tools that
are easily integrated into SharePoint Server 2019. If you are looking to run live events,
we have included some high-level details on setting up Teams for small auditoriums.
Chapter 12, SharePoint Framework, is dedicated to developers and what they will see has
changed in SharePoint Server 2019 farms. We give recommendations on the tools and
languages that developers need to get familiar with and some best practices to follow. We
also walk through SPFx and the setup of Node.js step by step.
You will have to download and install some of the integrated tools mentioned throughout
the book, such as SharePoint Designer 2013 and InfoPath 2013, on a desktop in order
to use them. We do not cover these tools within the book, but research and practice with
them to learn how to develop workflows and forms.
If you are using the digital version of this book, we advise you to type the code yourself
or access the code via the GitHub repository (link available in the next section). Doing
so will help you avoid any potential errors related to the copying and pasting of code.
If you are looking to follow along with the book and build while you read, then be
prepared to get your own server or a hosted server in Azure or AWS to set up your
environment. This will require the purchase of either a used server from eBay or
a subscription to one of the cloud services mentioned here.
We also have other code bundles from our rich catalog of books and videos available at
https://github.com/PacktPublishing/. Check them out!
Conventions used
There are a number of text conventions used throughout this book.
Code in text: Indicates code words in text, database table names, folder names,
filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles.
Here is an example: "Mount the downloaded WebStorm-10*.dmg disk image file as
another disk in your system."
A block of code is set as follows:
When we wish to draw your attention to a particular part of a code block, the relevant
lines or items are set in bold:
[default]
exten => s,1,Dial(Zap/1|30)
exten => s,2,Voicemail(u100)
exten => s,102,Voicemail(b100)
exten => i,1,Voicemail(s0)
Bold: Indicates a new term, an important word, or words that you see onscreen. For
example, words in menus or dialog boxes appear in the text like this. Here is an example:
"Select System info from the Administration panel."
Get in touch
Feedback from our readers is always welcome.
General feedback: If you have questions about any aspect of this book, mention the book
title in the subject of your message and email us at customercare@packtpub.com.
Errata: Although we have taken every care to ensure the accuracy of our content, mistakes
do happen. If you have found a mistake in this book, we would be grateful if you would
report this to us. Please visit www.packtpub.com/support/errata, select your
book, click on the Errata Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on the Internet,
we would be grateful if you would provide us with the location address or website name.
Please contact us at copyright@packt.com with a link to the material.
If you are interested in becoming an author: If there is a topic that you have expertise in
and you are interested in either writing or contributing to a book, please visit authors.
packtpub.com.
Reviews
Please leave a review. Once you have read and used this book, why not leave a review on
the site that you purchased it from? Potential readers can then see and use your unbiased
opinion to make purchase decisions, we at Packt can understand what you think about
our products, and our authors can see your feedback on their book. Thank you!
For more information about Packt, please visit packt.com.
1
Understanding Your
Current State
Understanding the current state of your environment is critical to how you proceed
with a new enterprise SharePoint implementation, a migration to a new version of
SharePoint, a hybrid integration, or a migration to the cloud. There are many checks and
balances you need to understand before moving forward with your SharePoint project.
Most SharePoint projects fail because of a lack of understanding and fact-gathering at
the enterprise, governance, content, customization, error, and server configuration levels
before the project begins. Assessing the good and the bad in these areas gives clarity
on what needs to be planned, designed, and implemented to be successful. This chapter
will help you understand the areas that need to be identified, changed, updated,
documented, and corrected before proceeding to new versions of SharePoint, including
the Microsoft 365 cloud.
The following topics will be covered in this chapter:
• Deprecated features
• New features
• Assessing the environment
• Best practices
• Governance
Technical requirements
The following will help you gain a better understanding of this chapter:
Warning – deprecated features
If a feature is referred to in this section as removed, we consider that feature no longer
available and no longer supported by Microsoft in this version of SharePoint. This often
brings tears to my eyes, as some removed features may be widely used within your
environment by the user community. Relaying this information to a customer can be
tough, but it is important because you want to present a clear path to success. It also gives
us, as consultants and engineers, the opportunity to define how to recover from the
features that are lost. It is essential that we give our user community confidence that we
will find a way to make up for the lost functionality during our planning process.
Third-party solutions can sometimes be the answer, but be careful about which
third-party companies you purchase from. We will talk about third-party components
later in the book.
As you can see from the preceding paragraphs, understanding your new software
product version is one of the keys to success when planning and designing your
implementation. If you are installing fresh, with no need to migrate a prior product's
content, this section may not be of much use to you; however, for those of you who are
upgrading and migrating content to this newer platform, this section is very important.
It gives clarity on the server platform, integrations, user features, administration changes,
developer platform changes, and other areas of the product that have changed between
SharePoint 2016 and SharePoint 2019.
If you're migrating from older versions, such as 2007, 2010, or 2013, you will need to
go back and make a list of the changes from all versions to see what features and changes
have accumulated between those versions as well. The reason is that if you're coming from
SharePoint 2007, a great deal of change will be associated with your move to SharePoint
2019. This change will bring a host of user questions, from the UI to site collection
features. Admins will see big changes in the administration areas and in how other
Microsoft products now integrate with the product.
As part of understanding this platform, we also want to start making a list of areas lacking
in the supportability of the product. We will need to support all aspects to fully integrate
this platform into the enterprise.
Training and other aspects of learning are essential for users as well as admins, to make
sure they do not get lost when the product is implemented in the enterprise. Users need
to understand the site features, how to navigate the sites, the UI changes, and the overall
progression of the SharePoint product through its versions, all of which are strong use
cases for requesting training as part of this implementation. Training is something you
do not want to leave out of this project, even for a brand-new installation. I have seen one
implementation where no training was given to users and the product was not used
effectively for more than 10 years because the users did not understand how to use
it effectively.
Deprecated features
The following listed features are deprecated in SharePoint Server 2019.
Aggregated newsfeed
The newsfeed was used to share status updates with everyone on a landing page, or within
specific sites where you had permission to do so. As users saw your post, they could
reply and continue the conversation, similar to the functionality of other social platforms.
Users could also like posts in the newsfeed.
The newsfeed functionality, which was made available by way of newsfeed.aspx and
mainly accessed via the Newsfeed tile in the app launcher, will be set to read-only. The
tile in the app launcher will be removed, as will the ability to create new newsfeed
content.
If you are currently using the newsfeed capabilities, I recommend other alternatives, such
as Microsoft Teams or Yammer, but I would be wary of adopting Yammer as a full solution
at this time as I see this being a potential removed or deprecated feature soon.
Help content
In past versions, the help content available within sites was ineffective at supporting
users. As administrators, we depend on it to give users relevant information for issues
they may run into, or even instructions for how to manipulate content within SharePoint.
From my perspective, older versions of SharePoint did not deliver this, and the help
system really needed some fine-tuning to surface the kind of information that supports
the product efficiently.
In this version, enhancements have been completed to make sure users access quality
help content. Help content will also be consistent and centralized, as it will be accessed by
syncing with the Microsoft 365 help engine, and support for the legacy on-premises
integrated help engine will become a thing of the past. In this release, the legacy
SharePoint help engine itself remains supported, but the help content in the legacy help
engine is deprecated.
PerformancePoint Services
If you are wondering how to support Key Performance Indicators (KPIs) and
dashboarding in SharePoint, PerformancePoint is an integrated service to support these
types of solutions. This service has been around since the 2010 version of the product.
It's a cool integration but, as you can see, it will soon be replaced by the newer
alternatives mentioned elsewhere in this section. At this point, it would be wise to start
migrating your solutions away from this service so that you are ahead of the game before
it's too late. The reason for the deprecation of PerformancePoint Services is its
dependency on Microsoft Silverlight, which is no longer supported as of October 12,
2021. PerformancePoint Services will remain supported in this version of the product.
I strongly advise you to explore Power BI as an alternative
to PerformancePoint Services. There are many new business intelligence enhancements
being made due to big investments in Power BI.
SharePoint Designer
If we look at the progression of SharePoint over the years, SharePoint Designer was an
essential part of the 2007 and 2010 product versions. As Microsoft made major changes
in 2013, it became clear that SharePoint Designer would not be around much longer, as
the customization of SharePoint sites shifted to tools outside the product. We can also
see, as the Microsoft 365 platform grows with product announcements, that a
replacement for SharePoint Designer is on the way. Microsoft has announced that no
new SharePoint Designer client will ship with this release. Microsoft has also said that
the SharePoint Designer 2013 product will work with SharePoint Server 2019 for the
remainder of the client support life cycle, which runs until 2026; SharePoint Designer
2013 will not be supported beyond that support life cycle. You can see that things are
coming to an end for this product; the only question is when.
Site mailbox
Site mailboxes provided a way to centralize and share email when collaborating with
teams and groups. With mailboxes growing because of the volume of email produced,
site mailboxes gave teams an option beyond forwarding project-related messages to one
another. Since this feature's introduction, there has not been much adoption, and use has
declined since it was offered in Microsoft 365. Microsoft deprecated this feature in
Microsoft 365 and has now announced the same for SharePoint 2019; the reason is the
decline in its use. Personally, I believe this is a great feature and I am not sure why more
companies didn't move toward using it. Forwarded email within the enterprise probably
takes up the majority of the mailbox space used to manage project and team-shared
messages. I can only see good in this feature, but Microsoft has spoken. Shared mailboxes
are the replacement for this functionality; in hybrid scenarios, Microsoft 365 Groups are
an alternative.
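If you go the shared mailbox route, the setup is quick from the Exchange Management Shell. This is only a sketch: the mailbox and group names are made up, and your Exchange permissions model dictates who can run it:

```powershell
# Run from the Exchange Management Shell (on-premises Exchange or Exchange Online).
# Create a shared mailbox for the project team instead of a SharePoint site mailbox.
New-Mailbox -Shared -Name "ProjectX" -DisplayName "Project X Team Mail"

# Grant the team full access to the shared mailbox (the group name is hypothetical).
Add-MailboxPermission -Identity "ProjectX" -User "ProjectX-Members" -AccessRights FullAccess
```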
Site manager
The main functionality of the site manager is now available in the modern document
library copy/move announced on the Microsoft community website. The site manager
feature will be supported, but it is deprecated in SharePoint Server 2019. Only site
collection administrators will have permission to access the site manager page and the
UI entry points to this page will be removed. You will have to know the page URL to get
access to the feature. You can also copy/move content to OneDrive, which can be
browsed using this feature.
Digest authentication
Microsoft will be deprecating the digest authentication feature in Internet Information
Services (IIS). This announcement came from the Windows team. When using the
SharePoint prerequisite installer to prep the server for SharePoint, the installer will no
longer attempt to install this Windows feature. In my time on the road visiting many
companies, I have never seen this authentication method used for SharePoint, and it
seems Microsoft is aware that it isn't a very popular mechanism. For those who do use it,
there are alternatives to explore: Kerberos, NTLM, and SAML.
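If you want to confirm whether the digest feature is even present on your SharePoint servers, a quick check from an elevated PowerShell session looks something like this; removal is commented out because another application on the server could still depend on it:

```powershell
# Check whether the Digest Authentication role service is installed in IIS.
Get-WindowsFeature Web-Digest-Auth

# SharePoint 2019 no longer uses it, so it can be removed if nothing else needs it:
# Uninstall-WindowsFeature Web-Digest-Auth
```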
Multi-tenancy
You may be surprised to hear that this was one removal I was hoping for. I have never
been a fan of multi-tenancy. I think it brings too many challenges to the planning and
design of the platform, as well as to the performance, configuration, and support of
growing sites. The feature needed more support than was actually available, which made
me stay away from recommending it to customers. Microsoft continues to innovate on
this capability in SharePoint Online, but configuring it in the on-premises environment
involves much more complexity and a bigger cost. Microsoft has announced that this
feature will no longer be supported in the SharePoint Server 2019 release. Customers who
currently use this feature can still be supported up to SharePoint Server 2016. Of course,
you can always move to Microsoft 365, where multi-tenancy is fully supported.
New features
With the ever-changing SharePoint platform, I want to make sure you understand all of
the changes to this new platform before proceeding to the installation of the product. It
is easy to get lost in the mix of all the changes and information you may see online. That
is not to say you shouldn't double-check me; as I write this book, things could be
changing at the same time. So, make sure to verify, but also make sure to document and
plan this implementation well. Leaving out any crucial details during this assessment
could bring about a migraine headache, so please be thorough and make sure to work as
a team with other IT areas to support this SharePoint 2019 on-premises implementation.
As part of our assessment to understand your current state, we also need to investigate
new features. This is useful for both those new to SharePoint Server and those that have
implemented SharePoint with prior versions of the product. New features cover many
different areas within the platform, as we saw in the Deprecated features section of this
chapter. As part of this section, we will list the same areas of changed components so that
all bases are covered to start planning and designing your implementation. As we saw in
SharePoint 2016, there were many changes to the platform: integrations that were once
common fell out of use, and the installation became more complex, though for good
reasons, as server components were individualized for performance. So, in
this version, we need to expect the same types of updates and even more integration with
Microsoft 365 using the hybrid features.
Here is the list of new features for SharePoint Server 2019.
Communication sites
Communication sites first hit the scene in SharePoint Online and now arrive
on-premises in SharePoint Server 2019. These sites were created to
help better share news, showcase stories, or broadcast messages to other users within the
site. With communication sites, there is now a new Hero web part that can display up to
five items with rich images, wording, and associated links to draw attention to the content
you deem as highly visible on your communication site.
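Besides the SharePoint home page, administrators can create a communication site from PowerShell. A sketch, assuming the SharePoint 2019 Management Shell and example URL/owner values:

```powershell
# SITEPAGEPUBLISHING#0 is the communication site template.
New-SPSite -Url "https://sharepoint.contoso.com/sites/comms" `
    -Template "SITEPAGEPUBLISHING#0" `
    -OwnerAlias "CONTOSO\spadmin" `
    -Name "Company News"
```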
Fast Site Creation
Fast Site Creation is used when creating sites in the following entry points:
PowerShell enhancements
This section lists the new PowerShell cmdlets for SharePoint Server 2019.
The stsadm.exe -o sync command will also be supported for those who still rely
on STSADM.
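For those still relying on STSADM, the supported sync operation can be used to clean up stale profile synchronization records. A sketch, using an example retention of five days:

```powershell
# stsadm.exe lives in the BIN folder of the SharePoint hive.
$bin = "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN"

# List content databases whose sync records are older than five days...
& "$bin\stsadm.exe" -o sync -listolddatabases 5

# ...and delete those stale records so the next sync pass repopulates them.
& "$bin\stsadm.exe" -o sync -deleteolddatabases 5
```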
New health analyzer rules
• People Picker health rule: The People Picker health analyzer rule has been added
to detect whether servers in the farm are missing the encryption key needed to
retrieve People Picker credentials in SharePoint. The rule detects when the People
Picker is configured to find users in another forest or domain with a one-way trust
to the SharePoint farm's domain. When the rule runs and finds any missing
encryption keys, it notifies the SharePoint farm administrator.
• SMTP authentication health rule: A new health analyzer rule has been added for
SMTP authentication. It notifies the SharePoint farm administrator if any servers
in the SharePoint farm are missing the encryption key needed to retrieve the
credentials for authentication.
Improved features
The following listed features are improved in SharePoint Server 2019.
Distributed Cache now uses background garbage collection by default: During my
travels helping customers, Distributed Cache has always been a configuration that gets
left out and forgotten. This configuration is very important to the stability of your
SharePoint farm and should not be skipped. Make sure to configure it as one of the first
steps once your farm has been created and services are stable, and follow Microsoft's
configuration best practices so that the service works effectively in your environment. In
SharePoint Server 2019, Distributed Cache has changed to include the AppFabric velocity
cache, which is used for background garbage collection. This new component helps
provide a more sound experience for features that depend on the Distributed Cache
service.
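A quick way to see where Distributed Cache is running, and to adjust its size once the farm is stable, is sketched below; the 1024 MB figure is only an example, so size the cache according to Microsoft's guidance for your farm:

```powershell
# Run from the SharePoint 2019 Management Shell.
# Show which servers host the Distributed Cache service instance and their status.
Get-SPServiceInstance |
    Where-Object { $_.TypeName -eq "Distributed Cache" } |
    Select-Object Server, Status

# Resize the cache on this server (stops and restarts the cache service):
# Update-SPDistributedCacheSize -CacheSizeInMB 1024
```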
File path limit of 400 characters: In SharePoint Server 2019, the file path length limit
was increased from 260 characters to 400 characters. The file path is all the characters
used when typing in the URL after the domain or server name and port number. This
increase is helpful for content that is nested within sites and requires longer URLs.
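To see how much of the 400-character budget a given document URL consumes, a rough check follows; the URL here is made up, and the exact characters Microsoft counts may differ slightly from this approximation:

```powershell
$url = "https://sharepoint.contoso.com/sites/projects/Shared Documents/2019/Planning/Budget Review.xlsx"
# Everything after the server name (and port) counts toward the limit.
$path = ([System.Uri]$url).LocalPath
"{0} of 400 characters used" -f $path.Length
```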
• A reverse proxy
• STS certificate
• Inbound connectivity
• Operational Active Directory (AD DS), SharePoint Server Farm, and Microsoft 365
Organization (E1 – minimized functionality with hybrid federated search results
only, E3 and E4)
Once the farm has reached the minimum requirements needed to support hybrid
connectivity to Microsoft 365, the page gives you direct access to launch the
Hybrid Configuration Wizard. Links have also been added throughout the Central
Administration site, so you always have access to the Hybrid Configuration Wizard.
Recycle bin improvements: In SharePoint Server 2019, users can restore items deleted
by other users as well as items they deleted themselves. Users need edit permissions
on the deleted items for those items to appear in their recycle bin.
Sharing email template: Sharing email notifications have been updated to use the
modern template design. If you want to continue using the previous sharing email
template, you can set the SPWebApplication.
SharingEmailUseSharePoint2016Templates property to true.
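The property named above can be set from the SharePoint Management Shell; a minimal sketch (the web application URL is a placeholder for your own):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder URL: substitute your own web application.
$wa = Get-SPWebApplication "https://sharepoint.contoso.com"

# Revert sharing notifications to the SharePoint 2016-style template.
$wa.SharingEmailUseSharePoint2016Templates = $true
$wa.Update()
```

The change applies per web application, so repeat it for each web application that should keep the older template.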
Suite navigation and app launcher improvements: Microsoft has changed the suite
navigation and app launcher experience in SharePoint Server 2019 to closely resemble
the Microsoft 365 experience. Users of the hybrid experience will now move seamlessly
between SharePoint Server 2019 and SharePoint Online.
The SharePoint Framework (SPFx): This is the latest framework from Microsoft for
developing custom web parts for both SharePoint and Teams clients. SPFx is used to
customize modern sites, although modern sites cannot be fully customized. Developers
can use Angular, React, or plain JavaScript with SPFx to customize SharePoint, and
many applications can be integrated with the modern UI easily, but the developer has
fewer options compared to developing on classic sites. Classic sites remain fully
customizable using JSOM and CSOM along with the REST API, and there is now increased
support for JSOM and CSOM.
Telemetry privacy experience: SharePoint Server 2019 has an updated telemetry
management experience. As you set up your farm, you can provide an email address
within your organization to be used as the telemetry contact. This is in anticipation
of future telemetry reporting capabilities that will allow customers to associate
SharePoint Server and OneDrive sync client telemetry with their hybrid tenancy. The
email address is not used outside of the SharePoint farm and is never sent to
Microsoft. Farm data is used to generate a unique hash value that represents your
farm, making your data unique when telemetry is uploaded to Microsoft. As customers
start to associate telemetry with their hybrid tenancy, the email address will be used
as part of the process to show ownership of the data. If you do not want to provide
this data to Microsoft, you can opt in or out at any time.
Visio Services accessibility: Visio Services has introduced a few accessibility
improvements for high-contrast displays, keyboard navigation, and Narrator. Within
Visio Services, users can move between the different panes using keyboard shortcuts.
• Drive filesystem support: SharePoint Server 2019 provides support for drives that
are formatted with the Resilient File System (ReFS).
• Single-label domain names: SharePoint Server 2019 will not support single-label
domain names.
• SQL updates: In prior versions of SharePoint, SQL Server Express was supported
to provide a single-server installation for the quick setup of SharePoint Server. In
SharePoint Server 2019, SQL Server Express is not supported; you will have to
install a separate SQL Server instance to support even a small test or development
environment. Azure SQL Database (the PaaS service) is also not supported for
hosting any SharePoint databases. Take note that these changes affect the way you
provision lower-level environments.
• Office client installations: SharePoint Server 2019 does not support Office
applications installed on the same computer as SharePoint Server. For client
connectivity, the minimum supported version is the Office 2010 client.
• .NET Framework 3.5: As admins, we know how to install .NET Framework 3.5
on our servers when preparing our operating systems for SharePoint. .NET
Framework 3.5 will continue to require a manual installation, as there have been
no updates that allow the prerequisite installer to install this feature
automatically.
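As a sketch of that manual step (the source path is a placeholder for wherever your Windows Server installation media is mounted), the feature can be added in one line:

```powershell
# The .NET Framework 3.5 payload is not in the default component store,
# so point -Source at the \sources\sxs folder of the install media.
Install-WindowsFeature -Name NET-Framework-Core `
    -Source "D:\sources\sxs"   # placeholder path to mounted media
```

Running this before the SharePoint prerequisite installer avoids the installer failing on the missing framework.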
Now that we have identified all the areas where there are new features within SharePoint
2019 and have learned about the old features that have been deprecated, we can focus
on how we move forward with our new farm configuration. Moving forward takes
documentation and thinking about what our goals are for this project in terms of the
administrators and users who use the services.
We will start that process in the next section of this chapter by assessing the
environment, which gives us the opportunity to see where we are in the current
environment and where we need to be in the new environment. It also gives us the
opportunity to understand where features will be enhanced and where features will stay
the same or be non-existent in our new environment. Having this laid out now will help
you design the right solutions to support the users who currently use the environment,
so that no interruption to services happens during the migration; even if you are
starting from scratch, it helps to match requirements to the services you are providing
in the farm.
Assessing the environment
• Authentication methods
• Web applications
• Customizations
• Workflow
• Application services
• Search
• Farm configuration
• DEV, TEST, and production environments
• Content database health (unused, version, and size)
• Orphaned sites and site collections
• Custom and third-party solutions, active and inactive
• Server health (Windows logs)
• Server performance and configuration (SQL and SP)
• Farm Health Analyzer issues
• Microsoft update status
• Governance health
• Organizational health for support of the environment
• Network performance and issues
• Current known issues
• Multi-cloud complexity
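Several of the checklist items above, such as content database health and orphaned sites, can be gathered with the farm's own cmdlets. A minimal sketch (run from the SharePoint Management Shell; the exact properties available can vary by patch level):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Content database health: name, size on disk, and whether an upgrade is pending.
Get-SPContentDatabase |
    Select-Object Name,
        @{n = 'SizeGB'; e = { [math]::Round($_.DiskSizeRequired / 1GB, 2) } },
        NeedsUpgrade

# Site collections per database, useful for spotting unused databases.
Get-SPContentDatabase |
    Select-Object Name, @{n = 'SiteCount'; e = { $_.Sites.Count } }
```

Exporting this output to CSV gives you a dated baseline you can grade against in the assessment.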
Assign a grade to each area assessed so that you can determine the next steps for
that particular finding.
The outcome of this assessment is a remediation plan based on the findings. This
remediation plan can be used to clean up the current environment to support the
migration, or you can start clean, using a migration tool to migrate the content and
avoid the potholes of the old environment.
One thing I also want to point out is to make sure that your current environment is
working and satisfying your user community. You may want to send out a survey or two
to see what responses you get. One of the biggest errors we can make is focusing only
on the technical aspects of this assessment and not the satisfaction of the users who
use the environment every day.
There are also developers who work on this platform that we need to talk to. We want
to make sure we are providing all they need in our current or new SharePoint
environment. Developers are sometimes left out of these conversations, but they are
another big part of the whole equation. There are also servers and processes they are
part of that may need some tweaking in our new environment; these may require
additional servers and software to bring the environment up to speed for them to use
effectively.
This assessment process must be thorough, but it doesn't have to take a long time. You
can gather details in a month or so, depending on the size of your environment and
user community. Email is an effective way to gather interview results, supplemented by
short conversations on the phone. You can also create a SharePoint list to gather
requirements and capture details about the solutions users are looking for to help
with their workload. In the new environment, take some of those requirements, develop
the solutions quickly to show how SharePoint can help, and use them as demos when you
start talking to other departments about new requirements for new solutions.
Note
Remember that some solutions you develop will cross department lines, so
we want to be able to know what solutions we have available in case another
department asks for a similar solution. Keep a list of those solutions and details
in a SharePoint list and on hand for demonstrations.
My point is not to demonize all apps or say that they're bad because they are not
Microsoft-built, but I want to make it clear that we have to set boundaries on what
applications are used in our environment. We want to keep our data within the control
of our users but also under our enterprise umbrella. The list of approved applications
should be clearly communicated on your company's intranet.
Determining how to proceed or go forward is up to you. As in my example, Slack could
be one of the mobile apps you see users applying to their business processes. When
evaluating your state of collaboration, you may want to keep this in place or determine
an alternative for them to use in place of that third-party application. Microsoft has
introduced many new applications as part of the Microsoft 365 cloud that can be
integrated with SharePoint on-premises and the cloud. Microsoft Teams would be a great
alternative instead of using Slack because of its integration with SharePoint.
Make sure to assess all areas of collaboration, which includes wikis, shared drives, mobile
apps, cloud applications such as Google Apps, and other areas of collaboration users could
be using within your desktop and mobile platforms. Take these to the governance board
and determine how to communicate these efforts to create a safe and secure environment
for your company enterprise data so that nothing is shared outside of your control. Create
requirements and use cases from these assessed areas as well to determine your path
forward.
Our assessment is the key to the success of this project, and I cannot stress enough
the importance of knowing and understanding where you are currently. Even if you are
starting fresh with SharePoint, this is an important step, and you should follow the
guidance in this chapter. It gives you the opportunity to start fresh and to make
improvements to the service.
The biggest failures come from not knowing where you are currently, and from not
being honest about errors and botched configurations or about where you want to take
the service in the future. Requirements and requests for services get missed. This can
mean someone implements a separate system for a process SharePoint already handles,
when that process could have been included in the overall SharePoint service agreement.
With the assessment, you are looking at all of those areas to make sure you share the
platform's capabilities with others so that duplicate applications do not get
implemented. You can also avoid building duplicate solutions within SharePoint and
instead reuse existing solutions for others on the platform. Again, this goes for
multi-cloud implementations as well, which seem to have sprung up at a lot of
companies. You need to do a deep evaluation and inventory at this time. This saves
money and time!
As we go through our assessment, we will look at the best practices given by Microsoft,
along with software limitations. These give us a guide to the resources required, and
their limits, for the services we use within SharePoint. Best practices help us manage
our server resources and understand what we need to run our environment with no
processing bottlenecks.
Best practices
Microsoft improves SharePoint with each new version of the software, bringing new
features that require us as IT professionals to look at each version with a fresh set
of eyes. Microsoft releases best practices for these newer versions of SharePoint to
help us plan our environments based on the testing Microsoft does as part of its new
product delivery. These tests push the limits and boundaries of the product, and based
on them, Microsoft gives us a list of areas we need to adhere to so that we get the
most out of the product.
Microsoft's boundaries and limits page is dedicated to defining limitations within
each released version of SharePoint. These best practices give each tested area a
maximum configuration threshold; the tests are done once Microsoft has finished
developing the product so that the data points can be shared with the community.
These limitations are also valuable to your organization as part of the design and
configuration of your SharePoint farm, and the values should be communicated to other
administrators and users as they are defined.
Capacity (threshold) limitations are included as part of the boundaries and limits
documentation for SharePoint. Depending on your version, you will see different
capacity limitations and available configuration areas. Some capacity limitations are
hard limits, while others are defined thresholds that you can go beyond based on the
other resources in your environment, such as the servers and hardware supporting it.
Best practices are important to the design of any system as again, they help us provide
a stable environment for our users. I can say from experience that not all the best practices
and boundaries will be hard limits we cannot go beyond, but they give us a baseline to
work from and test in our own environments. In this section, you will see a subset of the
list of server best practices I follow when preparing to install the product as part of my
assessment.
The reason for defining the best practices listed is that we need to make sure we know the
pitfalls as part of the guidance in our assessment. You can describe these best practices as
areas you want to avoid, almost like a removed feature in a way. Whether we are looking
at our old environment or prepping for a new environment, these best practices will go
a long way in avoiding bad situations, configurations, and performance issues.
Best practices come from Microsoft and the wider IT community, as well as from lessons
we have learned using SharePoint over the years; many have stayed true throughout each
version of the software, and some are new. Violating some of these best practices can
actually void your Microsoft support, so pay close attention to these items and mark
them as part of your assessment.
Again, there are a lot more items we can share here, and if you want to review
Microsoft's list of best practices, refer to the following link:
https://social.technet.microsoft.com/wiki/contents/articles/12438.sharepoint-community-best-practices.aspx#Performance_related_best_practice
The following is a short list of things I always think about when assessing environments
and getting geared up for planning and designing my new environment.
SQL Server
SQL Server is the core data store for a SharePoint farm. This server should be planned
very carefully for performance and redundancy.
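One widely documented SQL Server setting worth calling out here: SharePoint requires the instance's max degree of parallelism to be set to 1, or database creation will fail. A sketch using the SqlServer PowerShell module (the instance name is a placeholder for your own):

```powershell
# Requires the SqlServer PowerShell module (Install-Module SqlServer).
Import-Module SqlServer

# Placeholder instance name: substitute your SharePoint SQL instance.
$instance = "SQL01\SHAREPOINT"

Invoke-Sqlcmd -ServerInstance $instance -Query @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;  -- required by SharePoint
RECONFIGURE;
"@
```

MAXDOP is an advanced option, which is why the sketch enables 'show advanced options' first.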
SharePoint Server
SharePoint is a complex server installation and requires planning and precise
configuration. Please review my short list of initial best practices before you start
the journey into planning and design:
• Use 2–3 hard drives for storage in SharePoint Server configurations (C drive for
the operating system, D drive for applications and install files (optional), and
an E drive for SharePoint logs and search indexes).
• Always provide minimum server resource requirements for all environments.
• Ports for SharePoint need to be defined for communication between the servers
within the environment.
• Antivirus exclusions need to be configured so that SharePoint's working directories
on the hard drive are excluded from real-time virus scanning.
• All service accounts must have names under 20 characters long.
• Make sure to use a separate admin account to install the product (not your personal
account) and use several service accounts to define your service application pools.
• Create a farm admins group in AD and assign local admin rights to this group on
each server.
• Remember that the farm account only needs admin rights on the server deemed
the User Profile Service server and that right can be taken away after the service is
started.
• Always use PowerShell to create your farm so that you can name your databases and
other resources.
• Use AD groups and assign permissions within the sites for the best performance.
• If doing custom builds of the operating system, make sure you test those builds,
doing a full installation before implementing your production farm.
• Always define quotas for your site collections.
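The PowerShell farm-creation bullet above can be sketched as follows; every name, account, and port here is a placeholder for your own environment, and the MinRole value should match your design:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Creating the farm yourself lets you name the databases (no GUID suffixes).
New-SPConfigurationDatabase `
    -DatabaseName "SP2019_Config" `
    -DatabaseServer "SQL01\SHAREPOINT" `
    -AdministrationContentDatabaseName "SP2019_AdminContent" `
    -Passphrase (Read-Host "Farm passphrase" -AsSecureString) `
    -FarmCredentials (Get-Credential "CONTOSO\sp_farm") `
    -LocalServerRole Custom   # MinRole choice for this server

# Provision Central Administration on a chosen port.
New-SPCentralAdministration -Port 5555 -WindowsAuthProvider NTLM
```

Running the same cmdlets with the same names on subsequent servers (via Connect-SPConfigurationDatabase) keeps the whole farm consistently named.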
I have only mentioned a few things to be aware of in this chapter and have defined many
other areas of best practices throughout the book. Some are from Microsoft and others
are lessons learned that I follow and have jotted down through my years working with
SharePoint.
As we go into our next section, on governance, be aware that all these chapters were
put together for a reason. All of these areas outline the pre-installation work you need
to do before you design your SharePoint farm. If you move forward without planning
and assessments, you will find out later that you totally missed the boat. The problem
is, this takes time, and if you want to give your community a great SharePoint service,
make sure to follow the guidelines given in this book. I cannot define everything due to
the limitations I have on pages in the book, but this will point you in the right direction
to have a vision of how you should be implementing SharePoint for a new or migration
project in the real world.
Governance
A challenge for companies today is deciding who owns the collaborative and social
environments in our company: is it IT or the business? In the case of collaboration
and social tools, it's the business, as IT is only one part of the governance
stakeholder community. SharePoint is included because of how widespread its data can
be, spanning content on-premises and in the cloud and reaching across the mobile,
desktop, and web platforms presented to our employees every day.
The other challenge is managing those available apps and enterprise solutions and
bringing the best solutions to our customers and departments within our entity. We
should never go into any project without first seeing how it reflects on our enterprise and
the data that is presented in that solution. As stated, if you miss the mark on governance,
you will fail the implementation.
As part of our environment assessment in this chapter, we need to take a look at
governance policies and standards to support our current and/or newly implemented
systems. When creating and implementing any new application system within your
enterprise, you need to provide policy guidance around the system. This includes
actions such as content updates, role assignment, branding, and even training, to
name a few of many examples. The support of any system can range from easy to
complex, and for that reason can stretch over many areas of management. These areas
of support, among others, are the ones to be defined.
To provide the best policies and standards to support our new or changing system, we
need to include all of the areas mentioned. To support these policies, we look to
governance as the mechanism that provides a composed list of system rules and
policies established within an organization. These policies can bind to a single
system or application, or standardize these areas from a global perspective. Each
application can also bring an individual scope, in which a unique policy standard
may be required depending on that application's requirements.
This system of rules, known as governance, should meet the requirements of the users
while keeping administrative and managerial control of the data for compliance,
policies, processes, and all associated content. It includes checks that the
appropriate systems run the appropriate applications on their platforms, and it gives
the members of the governance board the ability to choose the right policies for the
applications being developed.
Governance of the systems in your enterprise is needed to support the applications
and processes that ultimately give insight into all company data. With governance,
you can define where content lives, define shared policies across applications, and
create policies to protect those data areas. Since this is a centralized process and
governing body, you will also catch projects with the same intended outcome, where
you can recommend centralizing applications used by different departments in your
organization. This helps you avoid replicating systems of the same kind and instead
reuse already developed platforms across the organization, saving money.
Stakeholders
The governance rules are created by a team of stakeholders within the company in
which everyone provides input to build individual policies that can be related to a
certain application or broad policies that affect all applications for individual or multiple
platforms. Rules are also created by a host of different stakeholders. They could range
from top management personnel to users who actually use the applications within the
environment. These rules should be implemented by this team or teams every time a new
application or system is requested.
The reason we want stakeholders to come from different areas of the company is to
gather different perspectives from all levels of management, the user community, and
senior leadership. This provides total coverage of ideas and surfaces a variety of
issues that might otherwise be missed by those who are unaware of them due to their
level of involvement.
As these policies and standards are being created, they also need to be communicated
to the user community. With regard to that, I have seen a formal list of applications on
intranet sites that were relayed by management but when assessing the environment, there
were many other applications being used in the enterprise that were not approved for use.
The confusion started when SharePoint was initially released, as no training for the users
was given. All users interviewed had many things they could use SharePoint for, but did
not know how to create those applications.
This left a big collaborative asset within the enterprise that got no use and was
deemed an unusable product. Users were convinced that SharePoint was very hard to use
and that there was no hope for the platform. This drove enterprise users and their
management to find easier ways to do their work, turning to mobile applications to
solve the same problems. You can see, even in this situation, how important training
is when releasing a system, and how governance could have weighed in to solve common
application requirements and prevent these rogue applications from being used.
In the cloud, we also have hybrid-connected SharePoint on-premises environments and
OneDrive, which bring a new element of governed content that crosses networks.
Each system has administrative controls, but some of the controls in the cloud are
not in our hands. Policies for these types of connected data repositories also need
to be considered in our governance document, as well as data compliance and security
across networks.
Determining an approach
With systems such as SharePoint, you could have enterprise implementations of
SharePoint that have been used for many years and those that are just getting started with
the implementation of the product for the first time. If you are in the situation of having
a system in place with no governance, it's time to get started. If you are starting fresh, it is
also time to get started.
Do not release a SharePoint environment without governance or a plan in place.
SharePoint as a product has many areas of concern when it comes to governance and
the stability of the product. It was created to give users more control of their
daily requirements, such as security and other ways of sharing, but it can easily get
out of hand if you do not provide policies and standards for the environment.
In the next section, we will go through some of the areas of concern and get you started
with creating a governance plan. This process takes time and should be factored into
your project. Although it is one of many areas of concern in your planning, some of these
items can be determined while designing the system. The documentation that needs to be
produced during a SharePoint implementation can be time-consuming but it is needed to
really build this platform on a solid foundation.
When looking at our collaborative environments, we take the biggest piece and build
around it. Collaboration can be made up of a centralized solution and many other small
solutions that make up a total solution. All solutions will have different governance plans
but can be built to integrate those areas related to others, which is what collaboration is.
Creating this document is very important, and I cannot stress enough how much it
dictates the success or failure of a collaborative project.
The following are some guidelines on how to create your document and some high-level
details about each area.
Vision statement
The vision statement in your governance document describes, at a high level, what you
want to achieve with SharePoint or whatever application you are preparing the document
for. It should provide high-level detail about the guidance contained in this living
policy guide, so make your vision statement clear about the choices and critical
guidelines documented here. Again, this document is living and will always change;
keeping ongoing versions of it is critical to the life cycle of governance in the
environment you are writing it for.
This vision statement and details should be shared on the intranet for all users to see. This
gives a way to communicate to the whole enterprise and not just SharePoint users.
Guiding principles
Most companies, in their vision of web content, have preferences that outline best
practices that all users and site designers will have to understand and adhere to.
Guiding principles define a company's preferences and support the vision and goals of the
company. The critical statements that come out of this exercise best reflect the company's
outlook and include best practices that all users and site designers must abide by to ensure
the success of the company's implemented solution. It is common for your organization to
share many of the same principles that I have seen in successful SharePoint deployments,
which would include best practices from Microsoft. In some cases, you may deviate from
those best practices, but for the most part, you will include a good majority of those
common goals with the implementation.
Creating subsites
When end users can create their own content within a SharePoint site, we have given
them control to build their own content hierarchies. We need to guide users on best
practices when giving them free rein to organize their content. So, in this area of
your governance plan, you want to set standards for the areas affected when users
create new subsites.
They should make sure to set the following:
• Content ownership: Provide a content owner who has responsibility for the content
and make them aware of any other governance practices related to their job.
• Security: Provide security for the new content and create new security groups when
needed for highly sensitive information.
• Content administration: Someone should be managing the information and
making sure that current and new sites are backed up and have the ability to be
restored.
• Navigation: As information gets added to the site, avoid building deep
hierarchical content within the site and keep the information easy for other users
to access.
Posting content
When posting content, we want to make sure that documents and content posted to the
site are not duplicated. There are features within SharePoint that can help with this, but
ultimately, this is the job of all users. Duplicated information adds to your content size,
especially when documents are concerned, and can also confuse users who may not get
the latest version of a document with relevant information.
The site sponsor is ultimately responsible, but users are responsible as well. As a team,
everyone should be working to ensure that posted content is accurate and complies with
policies that are put in place for record retention.
Things you can do to define these types of policies are as follows:
• Content formats and naming conventions: Some content, such as documents, may
require templates to make the record a valid record. Naming conventions also weigh
in as naming documents in certain formats may be a requirement by the sponsor.
• Content that contains links: Define responsibility for who should update links
within documents to make sure that the links are still valid.
Content auditing
Within SharePoint, there is the ability to audit content changes. You should consider
a policy to define a review process of sites and even types of content. Content that is
available to all users in your company should be governed using a content management
process. This process is used to ensure true, trusted, and vetted content is being displayed
to users. The review process should be as frequent as you need but shouldn't be longer
than a 1-year period.
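Site collection auditing of the kind described here can be enabled through the server object model; a minimal sketch (the site URL is a placeholder, and the audit flags shown are examples to adapt to your review policy):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder URL: substitute the site collection to audit.
$site = Get-SPSite "https://sharepoint.contoso.com/sites/records"

# Audit edits and deletes; SPAuditMaskType is a flags enum, combine as needed.
$site.Audit.AuditFlags = [Microsoft.SharePoint.SPAuditMaskType]::Update -bor
                         [Microsoft.SharePoint.SPAuditMaskType]::Delete
$site.Audit.Update()

$site.Dispose()
```

Auditing everything with the All flag is possible but grows the audit log quickly, so scope the flags to what your review process actually examines.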
Records retention
Records management is becoming a big deal in the SharePoint world. I see a lot of
information on these types of solutions popping up everywhere. In the governance of
these types of solutions, we need to create clear, defined policies on how records will be
used within your solutions. Codes for records should be used to identify content as a
record. You need to choose the appropriate method to apply records management policies,
which is especially useful because this dictates how users find information as well.
Data compliance
In the latest versions of SharePoint, data compliance has come to the forefront. There are
built-in compliance features for administrators for server roles and services. There are
also compliance site collection features we need to create policies and standards around
as well. These site collections have areas of automation that can be configured to help
manage compliance and send notifications on faults with content within our sites. Policies
and standards should be created to manage what constitutes a violation of data and what
type of data should be protected.
Page layouts
Page layouts are very important for presenting data to your users. Users can easily
get confused by the visual layout of a page, especially when data is presented in a
disorganized manner. All users with permissions to change and add web parts should be
familiar with design and usability best practices.
Branding
Most organizations brand sites to give a look and feel based on the colors and logo of the
company. There are sometimes different departments that would like to project a slightly
different appeal to their entity within the company. In this case, there should be policies
in place to consider slight deviations of branding for these reasons. We also want to keep
in mind that users also want to know where they are within the site as well. For these
reasons, we need to be careful about how we brand our sites and what we approve as part
of our look and feel. Define standards and policies to help guide the branding within your
company.
Announcements
Using the announcement web part to relay information to your users is very useful as a
SharePoint feature. With that, we need to make sure the information is projected well and
is precise and descriptive. The title and announcement text should encapsulate all that
needs to be said within a descriptive title and no more than a few sentences. We should avoid large fonts and avoid emphasizing words with italics or underlining.
Links
The site designer will need to review and edit documents and other content to make sure
that links have the option of opening either in the same window or in a new window. Also,
there should be policies for links within documents, for links between pages within the sites, and for links that go outside your intranet.
Content-specific guidelines
To really impact your community with accessible content for the end users, we must make content easy to find but also structured. Section 508 compliance also comes into play because not everyone will be able to access content using conventional methods. Policies for easily accessible data will include the areas that are addressed by tagging content. Accessibility is important but is not always something we talk about or put a lot of effort into, because the vast majority of our users do not usually require accessibility features. Even so, there is a lot we can do to help those who cannot easily access data in an out-of-the-box format; third-party tools and other manual methods can be used to include all users.
Security
Security is one of the most important design elements in a SharePoint site. When you
think about security during the design process, make sure you understand how content
will need to be secured in the site as this will affect how the site is accessed. Site structure
and page layout could be affected due to this important element.
It is important to ensure that everyone who has permissions to assign security roles
understands how SharePoint security works and has had some type of training to ensure
that company policies are followed, especially with the introduction of the cloud and
hybrid connectivity.
There are options to create security groups using AD and SharePoint. Users would need to
understand when to use these different groups and why they are being used. Please make
sure to define policies around your content clearly so that everyone understands how
SharePoint security works and understands your policies and procedures.
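As a sketch of the AD-plus-SharePoint-group approach described above, a domain security group can be registered in the site and dropped into an existing SharePoint group, so membership stays managed centrally in AD. The URL, domain group, and SharePoint group names here are placeholders:

```powershell
# Sketch: add an AD security group into a SharePoint group.
# All names and URLs are placeholders for illustration.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web = Get-SPWeb "http://portal.contoso.com/sites/hr"

# Register the AD group as a principal in the site collection
$adGroup = New-SPUser -UserAlias "CONTOSO\HR-Staff" -Web $web.Url

# Place the AD group into an existing SharePoint group
Set-SPUser -Identity $adGroup -Web $web.Url -Group "HR Members"
$web.Dispose()
```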
Training
When it comes to governance, there could be a big learning curve as some users may have
never been a part of a process like this before. This brings another area of concern, but
you do not want to turn away those that are deemed as part of the team due to this issue. I
know of training centers that key on governance as a topic of choice but you can also talk
with Microsoft to help you work on your governance plan as well as training those who
will be part of the team.
Training for governance is too important to miss! To get the most out of the personnel you
select as part of this process, please train your staff to understand what the governance
plan is all about.
Summary
Understanding your environment is very important before starting a new installation or an upgrade project to a new version of SharePoint. Again, make sure you know all the
deprecated features, new features, boundaries and limits, best practices, and governance.
All these areas need to be investigated as part of an assessment to figure out what areas
need help in your current environment and what areas are changing.
Remember to document all areas of your environment if you haven't already. It is very
important to understand your position. Along with documentation, governance is a
very important part of the implementation process and if you do not have this in place
currently, make sure you do it now. Please take my advice and implement governance in
your SharePoint enterprise. Do not let this slip through the cracks and do not release your
environment without it. I have seen too many environments fail or have daily fires because
of governance not being implemented.
In the next chapter, we will start to plan our project by reviewing our current architecture
and server roles within the environment. Along with that, we will start to look at some
scenarios and focus on where we are and where we need to be by the end of our project.
Questions
You can find the answers on GitHub under Assessments at https://github.com/PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/master/Assessments.docx
• Assessments
• Design
• Research
• Needs
• Use cases
• Project resource scheduling
The planning of tasks also gives metrics on the time to complete them, which involves
project milestones and start and end dates for targeted deliveries. This planning
will encompass all time-related information needed to move the project forward at
a steady pace.
Planning and Architecture
In this chapter, we will cover the following topics:
• Planning – overview
• Planning – how to find the best architecture based on requirements
• Planning – cost of your environment
• Planning – resources
• Planning – SharePoint farm design
In this chapter, you will learn how to plan and prepare for a SharePoint implementation. This task is very detailed, and there are varying reasons why this is the case. There are many areas to cover to make sure you get a clear understanding and successfully implement a project of any size, small or large. You will also learn how to manage costs and
resources and how to design your farm based on company security and requirements.
Technical requirements
For you to understand this chapter and the knowledge shared, the following requirements
must be met:
Planning – overview
Planning a new SharePoint environment or a migration from an older SharePoint
environment requires us to define future goals for collaboration solutions within our
enterprise, as well as research to provide clear requirements and project schedules.
Whether it's SharePoint or other collaborative applications, planning to add additional
systems to your environment requires great insight. This requires attention to detail
and building teams that can do the work to support the scheduling and delivery of the
application.
For the record, I am not a project manager in any light but I understand what things
should be identified as specific to a SharePoint implementation project. It doesn't take
a PMP certification to understand the tasks and timelines needed for a successful
implementation. These certified project managers look at these tasks differently from
architects and measure against them using other mechanisms. In this planning chapter,
I am just going to touch on the areas needed to help you as an engineer or architect
understand your responsibilities and what areas need to be concentrated on as part of this
process from an IT standpoint.
SharePoint is often a huge piece of the pie or central focus for many enterprise
collaboration environments. There could be other cloud apps or large applications that
play a part as well, but in this planning exercise, we will concentrate on SharePoint with
some caveats where the integration or confirmation of other applications needs to be
present in our plan. With SharePoint, there could be integration with the cloud or other
Microsoft communication platforms, such as Teams, Skype, or OneDrive, which could
play a part in how fast we can provide our environment to our user community. This may
also require a different team or consultant who may be providing those services.
As you saw in the first chapter of this book, we assessed our environment, new features,
and deprecated features, and did an overall assessment of our current state, be it new
or already in place. In our planning process, we will use those documented areas to
determine what needs to be completed to finish our project and what resources we may
need to do so. We will also determine the time it will take based on these assessed areas by
using the data to figure out what tasks are needed to complete the project.
These tasks could range from reporting or the clean up of our old environment to working
to create our new environment. Some of our tasks could even relate to our last SharePoint
project and the requirements will remain the same or look very similar. If you have an old
SharePoint project plan, you can use some of those areas in that plan for a new project
for SharePoint Server 2019. SharePoint at its core does not change, as you will see when we install a VM and host servers. Most of what you have done in the past, you will also do now. You will install SQL Server and SharePoint, which work very similarly to previous installations. This can be said about every SharePoint installation we have
installed since SharePoint 2007, with the exception of those environments where SQL
Express was used for a single server installation, as this is not supported in SharePoint
Server 2016 or 2019.
There could also be other projects in progress or in the planning stages that relate to
your SharePoint project. This could mean integration points and/or other types of
intercommunication that would need to happen to make sure your project is successful.
An example of that could be an AD restructuring project. AD would be required to be
completed before SharePoint could be installed and configured due to the fact that AD is
the method of authentication needed for users and service accounts to be functional. AD
restructuring, be it on-premises or on the Azure cloud, as a project could put our overall
project on hold because identities needed for administrators and service accounts would
not be able to be created. The reason AD restructuring is important is that there could be a change in the domain structure, or we could be upgrading to a newer level of server, cloud,
or integrated authentication method, such as SAML. This could delay your project if you
are bound to using this new AD structure.
One of the goals we need to understand before going forward is to determine the
architecture and the reason SharePoint is being implemented. The architecture is
important because it drives our cost, which also drives what resources we need to support
the environment. As part of our assessment, we found out things that we need to change
within our current environment and things we need to bring to the table to support our
SharePoint 2019 environment. Let's take some fake data and make some determinations
from that data to create some architectures to work with.
Exercise 1
The current SharePoint 2013 farm architecture used by your company, as assessed, is structured as follows:
• 1 TB of data
• No custom applications
• No third-party applications
• File size max: 2 GB
• Claims authentication
• Supports 3,500 users
In our current company's configuration, you will see in the following diagram that we are
lacking coverage for all the tiers within our design:
• SQL 2008 R2 Enterprise: Four CPUs, 8 GB RAM, C drive = 60 GB, E drive = 60 GB,
F drive = 3 TB, L drive = 1 TB
• Hardware load balancer
• Four virtual server hosts
• 2 TB of data
• Four custom applications
• One third-party solution
• File size max: 2 GB
• Scan to SharePoint capability
• Classic authentication
• One web application with forms-based authentication
• Supports 6,600 users
In the following diagram, we can see that there are four hosts and VMs are dispersed
across the hosts for the best recovery and stability of the SharePoint farm:
Let's look at some scenarios to gain a better understanding of how we can plan our
architecture.
Scenario one
In our assessment, we found that we currently support 3,500 users running SharePoint
Server 2013 Standard and the company is planning to grow to 5,500 users by the end of
the year due to an acquisition of another company. SharePoint adoption is low at this
point at your company but the company you acquired uses SharePoint 2010 Enterprise
heavily, using Key Performance Indicators (KPIs), SQL Reporting Services, and Power
BI. Your management plans to adopt some of their business applications for use within
your business processes. Your management would also like you to take into consideration
that as the company grows with this acquisition, the understanding is that this will bring
more customers, which then brings more revenue for the company. This will, in turn,
create more jobs and more users to support the enterprise.
Recommendations
In this situation, we have many things to look at from a planning perspective. We need
to take into consideration our environment, the acquired environment, and our move
to SharePoint Server 2019. Some of the big things that jump out to me in looking at
the requirements are that we have a 2010 Enterprise environment and a 2013 Standard
environment. This means we need to make sure that our SharePoint 2019 environment
supports the applications developed in the acquired SharePoint farm. This would mean
upgrading the SharePoint environment from Standard to Enterprise as well as upgrading
SQL Server to Enterprise level. In this case, we would also need to upgrade our Windows
operating system to Windows Server 2019.
Authentication stands out as well as we have both farms using different authentication
methods. This would need to be resolved as part of the integration of both farms into
a single farm. The acquired farm could be using classic authentication, which is obsolete
and not supported in SharePoint 2019. The course for upgrading from classic to claims
would be to migrate the users over in that environment using the original AD domain.
Once the authentication conversion has been completed using PowerShell, we would then migrate the content that has been updated to claims authentication. Testing this process in your
dev and test environments first would reveal any troubleshooting efforts you may need to
make before trying this in your production environment.
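The classic-to-claims conversion mentioned above can be done with the out-of-the-box cmdlet, sketched below for a SharePoint 2013 farm; the web application URL is a placeholder, and as the text says, run this in dev/test first:

```powershell
# Sketch: convert a classic-mode web application to claims authentication
# before migrating content. URL is a placeholder; test in dev/test first.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Convert the web application and migrate user identities to claims
Convert-SPWebApplication -Identity "http://legacy.contoso.com" `
    -To Claims -RetainPermissions -Force
```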
The next recommendation in planning for the new farm is we would need to resolve
areas of the farm architecture that will not support the users from a service standpoint for
redundancy. The architecture is also not built for growth as the company could bring on
more users in the next year due to the acquisition. In this plan, we need to make sure we can accommodate the new users who come on board, which means we may want to add one or two more web frontends. Adding one more would work, but having three gives you flexibility when applying updates to your servers, as there are always two more available to handle the load. We also need to think about MySites and how we approach
that configuration. We should look at using OneDrive as our target if no applications are
being run against files in SharePoint currently.
When thinking along the lines of redundancy as part of the solutions, we also need to find
a new way to support our load balancing for our web frontends, which is more reliable
and doesn't take away from our web frontend service performance. We would need to
look at a hardware load balancer at this point and maybe even hire a person to handle this
from a technology perspective if you do not have someone in-house already. This is very
important as you add more users to the platform and spread those users across two or
more servers to handle the load. Power BI and other data-related services will need to be
considered as well as these could give a much more intense user experience and workload
on the web frontends.
There is also the need to redefine the number of hosts that support the VMs in your
environment. Currently, we have two hosts with four servers, which provide no
redundancy for each area of our platform. In our platform for SharePoint, you will see that
we can mock this up with MinRole, as seen in this diagram:
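When working with MinRole (SharePoint 2016/2019), you can verify which role each farm server holds and whether its services comply with that role. A quick check might look like this:

```powershell
# Sketch: list each SharePoint server's MinRole and compliance status.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPServer | Where-Object { $_.Role -ne "Invalid" } |
    Select-Object Address, Role, CompliantWithMinRole
```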
When following some host best practices, you can see that we need to create redundancy
from a host perspective so that if one host server is lost, we could still be somewhat
effective at keeping the service up and alive until that host server is brought back online.
There are other tools available as well for the cloning and duplication of VMs, which can
be considered as a partial solution.
One thing to mention is that SharePoint doesn't like servers to be duplicated or cloned because of the complexity of the farm/server configuration, so there is no way to capture the total configuration of a server without some deep recreation and manual steps. It's best to keep servers in a warm standby state. These
servers should be part of the farm or at a state where the operating system is installed and
configured and SharePoint is ready for connectivity to the farm. In these situations, third-
party tools can come in handy, as you can use them to recreate your server farm from
scratch – I specifically refer you to AvePoint's tools. At this point, we would install the
server as usual with a new server name and configure the server quickly using PowerShell
or AutoSPInstaller.
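Joining a prepared standby server to the farm with PowerShell follows a well-known sequence; the sketch below assumes placeholder database, server, and passphrase values, and a MinRole farm where the new server takes the web frontend role:

```powershell
# Sketch: connect a prepared standby server to the farm and provision it.
# Database server/name, passphrase, and role are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$passphrase = ConvertTo-SecureString "FarmPassphraseHere" -AsPlainText -Force

Connect-SPConfigurationDatabase -DatabaseServer "SQLAG01" `
    -DatabaseName "SharePoint_Config" -Passphrase $passphrase `
    -LocalServerRole WebFrontEnd

# Standard post-join provisioning steps
Install-SPHelpCollection -All
Initialize-SPResourceSecurity
Install-SPService
Install-SPFeature -AllExistingFeatures
Install-SPApplicationContent
```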
To provide support for SQL Reporting Services and Power BI, which is non-existent
in our company farm but does exist in our acquired farm, we would need more server
power to support these services, as well as a plan for the installation of these integrations
for the farm. When looking at our new features, we can see that SQL configurations
have changed in support of SharePoint Server 2019. This would mean that you will have
to understand which version of SQL to use (2016 or 2017), which was mentioned in
Chapter 1, Understanding Your Current State. Plan your SQL supported services using the
information in the new features area to make sure to architect your solutions based on a
new configuration.
Scenario two
In our assessment, we found out that we currently have authentication concerns. The acquired environment and our company farm support different authentication methods. We also noticed that both environments are currently using separate domains and neither
one has trust between them. Both domains are using AD for user login but the acquired
farm is also integrated with SAML for authentication within the enterprise.
The acquired farm uses custom code within the farm to support user profiles. The custom code brings in user identities from the source of employee profiles instead of AD, and this code is deployed on one server in the farm. We need to find out how we
support AD, SAML, claims authentication, and services for user profiles going forward in
SharePoint 2019.
Since we are on SharePoint 2010, shredded storage has not yet been introduced, so the size of your overall content is one to one: if there is a 2 GB file in a library, all versioned
documents are 2 GB as well. Once you migrate to the newer version of SharePoint,
these documents will change in size if you use a migration tool. If you are using content
database migration, these documents will stay the same size in the database and will
not change until you start adding versions in the new 2019 library. Once you start using
shredded storage, you will see a dramatic decrease in document sizes and storage being
used in site collections.
Checking the sizes of your content databases and site collections is important, as you should have been managing them all along in your current farm. Some of
these databases could be over the best practice limits and may need to be broken up, or
site collections may need to be moved to a new content database. Shredded storage will
play a part in this as well once you get over to your new environment if you are on an
older version of SharePoint. You will see some dramatic downsizing in your file sizes for
versioning.
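The size check described above can be scripted. This sketch lists content database sizes and the largest site collections in the farm so they can be compared against the best-practice limits:

```powershell
# Sketch: review content database and site collection sizes before migration.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Content database sizes in GB
Get-SPContentDatabase |
    Select-Object Name, @{Name="SizeGB";Expression={[math]::Round($_.DiskSizeRequired/1GB,2)}}

# Twenty largest site collections across the farm
Get-SPSite -Limit All |
    Select-Object Url, @{Name="SizeGB";Expression={[math]::Round($_.Usage.Storage/1GB,2)}} |
    Sort-Object SizeGB -Descending | Select-Object -First 20
```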
Note
Make sure to examine some of your data within your libraries. I had one
customer who had one Excel spreadsheet that was set to have unlimited
versions in SharePoint 2007. When calculating the size of that one file, it came
up as a 20 GB footprint in the content database. At this point, with all the
versions of this document over the years, they limited the versions and saved
over 18 GB of storage on one file.
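Version bloat like the 20 GB spreadsheet in the note can be hunted down with the server object model. This sketch totals the version storage per file in one library; the site URL and library name are placeholders:

```powershell
# Sketch: total version storage for each file in a library to find version
# bloat. Site URL and library name are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web  = Get-SPWeb "http://portal.contoso.com/sites/finance"
$list = $web.Lists["Shared Documents"]

foreach ($item in $list.Items) {
    $file = $item.File
    if ($file -ne $null) {
        # Current size plus the size of every retained version
        $versionBytes = ($file.Versions | Measure-Object -Property Size -Sum).Sum
        $totalMB = [math]::Round(($file.Length + $versionBytes) / 1MB, 2)
        "{0}`t{1} versions`t{2} MB" -f $file.Url, $file.Versions.Count, $totalMB
    }
}
$web.Dispose()
```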
Recommendations
There have been a lot of updates to the SharePoint architecture in the last two versions
of the product. As for authentication, SharePoint only supports AD out of the box as of
SharePoint 2016 and 2019. This change really limited what authentication methods could
be used to connect to SharePoint. Since we already have differences with user accounts
using classic and claims identities, we would need to fix that issue first as stated, and then
figure out support for SAML. Forms-based authentication is still supported in SharePoint
2019 and is still configured pretty much the same as it always has been.
Microsoft provides a product that gives a solution for these types of issues and integrates
with SharePoint seamlessly. Microsoft Identity Manager supports authentication stores
that are outside of AD. This server would need to be built and configured to support a
final configuration for SharePoint 2019. Migrating to SAML and using PIV cards is a
project you would want to plan out thoroughly with a lot of testing.
With this type of security, an Active Directory Federation Services (ADFS) or trusted
identity provider will come into play, and the ADFS server will be needed for the users
to authenticate properly. ADFS gives external users access to sites within SharePoint. There are other providers, such as Okta, that offer these services in the cloud; for that integration, PowerShell would be used to create the realms and connectivity to Okta for authentication.
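Registering a trusted identity provider such as Okta or ADFS is done with the trusted identity token issuer cmdlets. In this sketch, the realm, sign-in URL, certificate path, and provider name are all placeholders for your own SAML configuration:

```powershell
# Sketch: register a trusted identity token issuer for SAML claims.
# Realm, URLs, certificate path, and names are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\okta-signing.cer")

$emailClaim = New-SPClaimTypeMapping `
    -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
    -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

New-SPTrustedIdentityTokenIssuer -Name "Okta" `
    -Description "Okta SAML provider" `
    -Realm "urn:sharepoint:portal" `
    -ImportTrustCertificate $cert `
    -ClaimsMappings $emailClaim `
    -SignInUrl "https://contoso.okta.com/app/sharepoint/sso/saml" `
    -IdentifierClaim $emailClaim.InputClaimType
```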
You can configure this integration after you go through the process of implementing
the new farm. This would need to be tested in a lower environment, or even a separate
environment, before implementation within the production environment to make sure the authentication works as it should based on your requirements.
There are many scenarios we can go through with this assessment data but I wanted to
point out some obvious details to get you thinking about where you are in your planning
and how you can start rebuilding your company's SharePoint environment with Server
2019. Please make sure that you have checked under the covers of every configuration,
content database, service database, service, Global Assembly Cache (GAC), and site
collection with the utmost granularity. The last thing you want is to be blindsided after
your move to the new farm.
Looking at these scenarios gives a sense of what things we need to be aware of to support
our environment. Essentially, the standard for a web frontend configured with Microsoft-
recommended resources can support up to 15,000 users concurrently. This means that you
need to understand your community and how many users you will be supporting with
your SharePoint services. If you have more than that number of users, you will need to
add web frontends and use a load balancer to distribute traffic between them.
In doing this analysis for web and app tiers, you will notice a cost associated with the
hosts that will support your server VMs and costs for software and other incidentals,
such as third-party solutions. You would also need to include outside network hardware
and software needed for load balancers and other interfaces that may be required in your
environment. As you can see, costs will add up.
In the next section, we will talk about costs, how they can be lowered, and other known recommendations to help you create a shopping list to get started with. Once you have really gone through your planning process, your
shopping cart will be completed and you can order equipment and other software to help
get ready for your installation. Right now, we need to make sure we are thinking correctly
about what we are building and what we need in place to support our services.
Planning – cost of your environment
Virtualization
Which virtual platform will you use? You have many choices when finding a platform to
support your VM servers. Some of the solutions cost a considerable amount, but then
there is Hyper-V, which is part of your Windows license. There is also other virtual server
technology, such as VMware, which is a valid platform as well. Find your platform for
your virtualization and find out which platform is cost-efficient while giving you the
support and stability needed for your environment. Host sizing can also save you money, as the server resources needed to support a development environment will not be the same as those for a production or User Acceptance Testing (UAT) environment. Plan your
resources and do not just buy the same amount of resources for all the environments you
plan to support.
If you are considering Azure, make sure you understand the benefits you can achieve using that platform.
When configuring your Azure environment, you will want to build in high availability.
The following diagram explains how your Azure environment should look for SharePoint
Server 2019:
AWS
AWS is another cloud offering that can provide a cloud space to implement your
infrastructure for your SharePoint environment. AWS provides supported cloud infrastructure comparable to Azure's. While it is not as complex as Azure, there is a large learning curve to get started on either platform, but as we are technologists, we know how to dig deep to learn new technologies.
Microsoft 365
Migrating to Microsoft 365 is another option to get SharePoint without the server
hardware. There are other issues that go along with this type of implementation because
you have to think about other Microsoft applications that would need to be moved as a
part of this migration. Some of the solutions you use today may need to be rebuilt using
other technologies provided with Microsoft 365.
We will see Exchange as part of the offerings for Microsoft 365. This is a migration that
will take time and planning, which could lead to putting your SharePoint migration off
until this is completed. The Office server would not be needed as part of your planning because its functionality is built into Microsoft 365. This is one server application we would not need to worry about, as the functionality is already available.
Licensing for the cloud suite is complex, and there are different levels of licensing available. The higher the cost, the more features and space
you receive. The platform also offers government tenants that are specifically created
for government entities and do not support some of the flash and glamour that normal
tenants support. You would need to look at these options very closely and find something
you can move into.
Storage is key in this environment. There may be some splitting up of sites needed before
your move. Splitting sites involves taking content from a site collection and moving it into another site collection so that each site can grow and stay under its quota. As you sort the
sites that are being moved from on-premises to the cloud to make them work as part of
Microsoft 365, please be sure to take a look at the size of subsites and areas within the site
collection to make the best decisions going forward.
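On-premises, moving a site collection into another content database is a one-cmdlet operation once a destination database exists. The database and URL names below are placeholders:

```powershell
# Sketch: move a site collection to a new content database so each site
# collection stays under its quota/limits. Names are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Create a destination content database on the same web application
New-SPContentDatabase "WSS_Content_Sales2" -WebApplication "http://portal.contoso.com"

# Move the site collection; run IISRESET on all web servers afterward
Move-SPSite "http://portal.contoso.com/sites/sales" `
    -DestinationDatabase "WSS_Content_Sales2"
```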
Make sure to make the right decision going forward. Stop here and evaluate where you are
and what your future is for SharePoint and other Microsoft ecosystem applications within
your environment. Stepping to the cloud makes sense in some cases where you want to
save money, but there is more to it than money when you look at the whole picture.
SharePoint licensing
Finding the best licensing for your on-premises implementation can be as easy as asking
yourself whether you will be using business intelligence as part of your configuration. If
you don't need it, then you should get the licensing for the standard version of SharePoint.
Enterprise licenses are expensive, but with the need for data and functionalities that this
version provides for reporting, KPIs, and other data reporting, this is your go-to solution.
With development environments, you will want to support the same version of SharePoint
so that developers are developing on that same platform. There are versions of SharePoint
that are free with Microsoft Partner subscriptions that can be used for this purpose, as well
as free versions that can be downloaded as 90-day trials that also work. Ultimately, you
want to gravitate to a license that can be sustained with no issues or complex situations
coming up in the future. It's better to get a license that works forever than to plan your
development on a 90-day license for your operating system, SQL, or SharePoint.
Battling contingency
Backup and Disaster Recovery (DR) are among the most important solutions you will implement within a SharePoint farm. Without them, you will fail, and fail miserably. The way
you implement backup and DR will impact everything you build in SharePoint and how
you could restore it from scratch if the need arose. This can be a rewarding experience in
which you get a solution that makes this process easy, or you can stick with SQL backups
and make your administrators work harder to restore and provide consistent support for
your SharePoint implementation.
To start, as mentioned, SQL Server backups are the basic backup plan for a SharePoint farm. Backing up databases, logs, and other areas of the server is very important for recreating the server when a disaster happens. In most cases, you will have an Always On
configuration for the data tier of your architecture. Rebuilding this configuration and
the associated data can be very complex without a tool to help you manage the databases
involved.
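Alongside SQL Server backups, the farm itself can be backed up with the out-of-the-box cmdlets. A minimal scripted baseline might look like this; the backup share path is a placeholder:

```powershell
# Sketch: a scripted full farm backup with out-of-the-box tooling, as a
# baseline alongside SQL Server backups. The share path is a placeholder.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Backup-SPFarm -Directory "\\backupserver\SPBackups" `
    -BackupMethod Full -BackupThreads 4

# Review the result of the most recent backup
Get-SPBackupHistory -Directory "\\backupserver\SPBackups" | Select-Object -First 1
```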
If you add RBS to the mix of your supported configurations, you then add a different
level of backup support that would need to be in place to support this integration. This
solution is very complex and if your backup is not done correctly, you may not be able to
restore your farm correctly, which could lead to disaster. In a farm supported by RBS, content is stored on disk outside the database and is programmatically associated with the sites through links, which SharePoint uses to locate that content.
Note
RBS is a way to keep large files outside of the database on a disk so that the files
can be retrieved without disrupting the database and other users working with
content in that database. The larger the file, the more work for the database
to bring it to a user. If this is kept outside of the database, you get better
performance when using large files.
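Enabling RBS on a content database is done through the server object model once an RBS provider (for example, FILESTREAM) has been installed on SQL Server. The database and provider selection below are placeholders:

```powershell
# Sketch: enable RBS on a content database after the RBS provider has been
# installed on SQL Server. The database name is a placeholder.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$cdb = Get-SPContentDatabase "WSS_Content"
$rbs = $cdb.RemoteBlobStorageSettings

if ($rbs.Installed()) {
    $rbs.Enable()
    # Activate the first registered provider (adjust to your provider name)
    $rbs.SetActiveProviderName($rbs.GetProviderNames()[0])
    "RBS enabled, provider: " + $rbs.ActiveProviderName
}
```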
Business intelligence as a service adds complexity to the farm and to the backup and DR
solution that supports the farm. Business intelligence integrations need to be planned and even put on their own web application so that they can be used separately from the rest of the farm. Using its own application pool separates the web application from the rest of the sites to provide better performance for those using this
service. This service pulls on the server resources and will cause a delay in your data
rendering if you're not careful to follow some best practices.
Solutions for backup and restore are available, but there is only one I recommend and
trust, as I have seen it work and have supported it in many environments. AvePoint
has a backup solution that helps in almost all scenarios for contingency planning. Their
solution is superior because they have been in this space from the beginning. Over the
years, I have seen many solutions they have offered, and they really understand
administrators' pain points.
As I have stated, AvePoint offers a full backup and restore solution. It runs from
its own server in your environment and uses agents to communicate with your farm,
rather than embedded farm solutions. This is a very good way of integrating a
third-party solution into a farm. It is easy to clean up, unlike embedded farm solutions
I have seen that require troubleshooting to uninstall. Their suite of products gives you
many options, including backup and restore, blob storage solutions, and many other
ways to support your farm.
AvePoint also has a solution for replicating content to another farm over a data
connection. This syncs the content so that everything updated in the production farm
gets updated in the DR farm. There are many possible configurations for your
environment, but the bottom line is that your backup solution should cover all areas of
the SharePoint farm so that you can recover either from backup or from a standby
DR site.
Monitoring
Often, we think that as we add products or third-party solutions to our architecture, we
are absorbing costs that we could be saving, but we have to look at this in terms of
supporting the farm proactively. The savings are not always direct cost savings; avoided
downtime may save you more in the long run. System Center Operations Manager (SCOM), a
monitoring application used to monitor services and performance in SharePoint, can save
a lot in downtime costs, which in some cases add up to real dollar amounts for employees
unable to perform their jobs.
If you look at it from a hosting model, where we want the best uptime possible for our
customers, this is a component we would not want to leave out of our architecture.
The savings monitoring brings to the table are invaluable, as we can find areas of
concern before something happens that can bring down the service. When a service is
down, you lose confidence and you also lose money. Most customers will request
information on why the service was down and, depending on the severity, may ask for
money back for that downtime.
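To make the downtime argument concrete, a back-of-the-envelope calculation like the one below can help justify the monitoring spend to management. All figures here are illustrative assumptions, not Microsoft guidance or numbers from this book.

```python
# Back-of-the-envelope outage cost, as a rough way to justify monitoring
# spend. Every input figure is an illustrative assumption.

def downtime_cost(users, loaded_hourly_rate, hours_down, productivity_loss=1.0):
    """Estimate the labor cost of an outage for users who depend on the farm."""
    return users * loaded_hourly_rate * hours_down * productivity_loss

# 500 users, $60/hour loaded cost, 4-hour outage, 50% of their work blocked
cost = downtime_cost(500, 60.0, 4.0, productivity_loss=0.5)
print(f"Estimated outage cost: ${cost:,.0f}")  # Estimated outage cost: $60,000
```

Even a single avoided four-hour outage in this hypothetical scenario covers a meaningful slice of a monitoring tool's license cost.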
Customers do understand, but when a business depends on a service, they expect the
service to work as it should. We often take monitoring for granted and use other tools
and checks to find our pain points, such as Windows logs, ULS logs, and Task Manager,
where we can monitor the resources on our servers. This is not good enough for
SharePoint, as there are many services that could be left in limbo if not monitored
consistently.
There are many tools out there that provide monitoring, but SCOM is one of the more
integrated tools, working within your farm and with other Microsoft products in your
enterprise. It gives you a one-stop solution for your Microsoft products and an
interface into those enterprise areas that need constant monitoring. There is another
tool I really like called SolarWinds, which also gives you real-time monitoring of
services and server resources.
There are other areas where the benefit of integrating additional applications into
our architecture outweighs what they add to the SharePoint costs for hardware and
support. Make sure you protect your investment while saving in other areas of the
platform.
Now that we understand how to manage the cost of our environment, let's look at
resources.
Planning – resources
Adding resources also adds to the cost of our project, but in most cases, the resources
requested will be needed to implement and support it. Resources are generally handled
by your project manager, with full transparency to the IT team and management. Everyone
involved needs to make sure there is a good project plan structuring the project, that
resources are available, and that there are enough team members to support it. The team
also needs to make sure they have the cycles to start and finish the jobs they are
assigned.
There can also be concerns when resources assigned to the team have other projects that
take priority or eat into the time they can spend on this one. One of the biggest
errors I have seen in the field is under-resourced SharePoint projects. This is one
area where you do want to pay attention and make sure you either hire personnel or
contract the positions to get the work completed.
The change management board and operations teams also come into play in planning: since
we are adding or changing resources within the environment, these changes need to be
confirmed by the owners of the environment. This can add more pain to the
implementation and take up time you were not planning for. Make sure to plan for this
change review so there are no hidden scenarios that put you behind on the project. One
thing you can also do is talk to this group before you get started. This helps them
understand what's coming, and they may be able to give you details you need to move
forward successfully. I notice this especially in secured areas where the SharePoint
service would be used.
Another issue I see all the time is a SharePoint admin who has not had much experience
with the product, or a single SharePoint admin supporting a large farm alone. I have
also seen cases where SharePoint runs without any real supervision but is relied on
heavily within the company. These scenarios almost always reveal support personnel who
are overwhelmed and do not know where to start to fix issues or expand when new
requirements arise. The team, in some cases, may also be running the help desk,
managing customers, applying updates to the servers, and supporting everything else
that comes with the SharePoint territory. Make sure you budget for the right size team
for the environment you want to support.
Assessment review
An assessment review is a meeting of the minds of IT and management supporting the
implementation effort. This could mean your CIO, director, and, in some cases, the CEO
would be part of this meeting. The goal of the meeting is to review your assessment and
its results. As assessments can be done at intervals over time, these meetings may
happen whenever an assessment is needed for the farm environment. The attendees should
have a chance to review the document before the meeting so that they arrive with an
understanding of it and with questions. This gives those teams a heads-up on what the
project consists of, gives them an idea of the resources that may be needed during this
project, and helps them plan for what resources they can provide for you to work with.
This is a somewhat difficult task because you do not want to overstate the project
goals, show huge costs for the project, or pad the requirements with solutions that
are not needed. When management is reviewing whether to approve the work, be cautious:
explain things in good detail where they need to be explained, and leave common areas
alone, not elaborating on them unless questions are asked. Otherwise, the meeting can
be pulled in the wrong direction by comments on details that are already clear and
understood.
However, we should be prepared for anything during the meeting, and questions can come
out of left field. Make sure you know your presentation well and practice it with the
other team members so that everyone understands the direction the meeting should go.
If things are rehearsed and talked through in depth, there will be no error in
delivering the message during the meeting. Keep the meeting simple and to the point:
validate the assessment and areas of concern, how you plan to remediate those concerns,
and the direction or path for the future.
Management will be all ears but listening for certain details. In the meetings I have
been a part of, most managers and even executives do not want to hear too many
technical details. They want to hear concerns, fixes, and costs, as well as how you are
addressing those concerns. Technical details help with new solutions you plan to
implement, but overall, management has no ears for IP addresses, protocols, and so on
in most cases.
Now, I am telling you this from my experience, but there could be some CEOs and upper
management who may want to hear more technical details to figure out why you are
heading in a certain direction for the platform. Some CEOs and other management are
technical, and in some cases, you will need to explain your position more thoroughly. You
need to gauge your situation and plan for it accordingly.
Prioritizing requirements
As things change within an organization, the goals set out within the organization may
change as well. When an organization changes its future goals, this can bring changes
in personnel, security, structure, and business processes. IT is affected by those
changes too, which can change the way the IT department delivers solutions; secures
data and content; and handles platform support, governance, and other IT policies.
This, in turn, affects the way the technical teams support the enterprise.
As a part of the goals of this planning prioritization exercise, we need to define the
purpose and motivation for our new or migrated SharePoint farm to this newer version of
SharePoint Server 2019. When defining priorities, please consider the following items:
• Top tasks
• Milestones
• Deliverables
• Schedules
• Organizational charts
• Resources
All these areas can affect our implementation project in ways that force us to rework
our project schedule. We don't want this to happen, so planning around current and
long-term projects is a must to avoid situations where you have to, for example, change
your schedule or resources in the midst of implementation. If you want management to
get involved and give you some grief, just let that happen and watch things fall apart.
Another example would be a change in deliverables. If we had a certain set of
deliverables and all of a sudden you realize you forgot about RBS and have to go back
and ask for money, that brings the pain. You want to make sure you prepare all areas,
especially the technical solutions that form the environment, to support the farm and
its functional components.
Prioritizing what's important as part of the implementation is needed because there are
logistics involved as well. You need hardware before software, you need a network before
you can configure servers, and you need SQL installed and configured before SharePoint
can be installed. These examples help you understand how important order is in your
project plan.
With that, some tasks will take priority in these examples. In the case of ordering
software, finding out when it will be delivered is one; late delivery of equipment and
software can ruin a project plan. It's best to get a list of the items you need before
you start writing your project plan, so nothing falls by the wayside. Licensing and
even funding sometimes take time to procure, especially in government projects. Make
sure to put in requests once you finalize your architecture so that these areas are in
the works while you are finalizing your plan.
This is a good place to start evaluating the timing and prepping resources. Look over your
work and make sure you have taken everything into consideration. Changing things now
can make a big difference in your timing, so you do not want to change too much at this
point. Also, evaluate the platform you are proposing. Make sure you're making the right
decisions and press forward.
Some sites I have seen have terrible designs and content structure. Content exposure is
very important on landing pages; for example, the name of the site matters, along with
a real description of the department's responsibilities in the company. This verbiage
makes it clear to users looking for information that they are in the right place.
Listing the name of the site collection administrator and their contact information is
another piece of good information that should be part of the site collection landing
page. There should also be some information on team members and links to the
department forms where information is collected, for instance, an employee vacation
request form. This is useful for users surfing your site, who shouldn't need
lower-level access to fill out a form meant for everyone.
As always, security is an area of concern for most site owners, but it doesn't have to
be. Using the landing page as a read-only page is commonplace, and in SharePoint 2019,
you can use a team or communication site as your landing page as well. If the user is
not a team member, give them access to what they need from the landing page and secure
the rest of the content with permissions. Make sure to create groups for your
department users so that those users do not get mixed in with read-only users.
All-employee groups would get read-only access, while internal groups that work behind
the scenes within the site could be given deeper access as needed.
If we look at the external sharing of information in our farm, we can see how some of
the components within SharePoint 2019 and the cloud are affected as part of your
implementation. There are some things we want to be aware of when opening our content
for sharing outside the company. Overriding a governance rule will make your admins
work a little harder, and you also need to trust that your users are sharing
information that is OK to share. The security of your content is what matters most when
it comes to sharing. I have seen bad practices that could cost a company its secrets
because external sharing was enabled and other mobile apps were being used within the
company without IT knowing. Stay on top of external sharing and make sure to secure
your content.
Site collections are a great way to capture a bulk of content at a time, as they can
have a one-to-one relationship with content databases. In this configuration, you have
a database that encapsulates all the content for one particular department. As a farm
admin, that is the kind of content database you want to work with, not one with mixed
site collections from different departments. This site hierarchy gives you complete
isolation of the content from a security and recovery perspective. All backups are of
that one database, and content can be restored individually without interfering with
other departments' content if something happens to the database. Migration efforts are
clean as well, with no mixed site migrations or work beforehand to separate sites into
separate content databases.
Naming conventions should be used for databases, as well as for site collection URLs.
As administrators, we should use good naming conventions so that our content is
organized and named in a way that tells us what it is. This goes along with naming our
site collections and providing content in the site collection that describes it. There
is nothing like going to a client's office, reviewing their farm with them, and seeing
errors where content databases cannot be contacted. In the review, I sometimes find out
they do not know what the database is supporting, what the content is used for, or even
what department it supports, all because it is named improperly. Please don't be that
client.
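A naming convention is only useful if it is enforced. The sketch below shows one way to check content database names against a documented pattern; the `SP2019_Content_<Dept>_<NN>` scheme is purely a hypothetical example, not a Microsoft standard, so substitute whatever convention you document for your farm.

```python
# Minimal sketch of validating a content database naming convention.
# The pattern is an illustrative assumption; adapt it to your own standard.
import re

# e.g. SP2019_Content_HR_01 -> version prefix, type, department, sequence
NAME_PATTERN = re.compile(r"^SP2019_Content_(?P<dept>[A-Za-z]+)_(?P<seq>\d{2})$")

def check_db_name(name):
    """Return True if the database name follows the documented convention."""
    return NAME_PATTERN.match(name) is not None

print(check_db_name("SP2019_Content_HR_01"))   # True
print(check_db_name("WSS_Content_a41f9c"))     # False - tells you nothing
```

Running a check like this against the output of your database inventory makes the "mystery database" scenario above much harder to reach.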
Site collection templates should also be used to add functionality to your farm's
logical structure when needed. Among the big perks are the data compliance, records
center, and document center templates, which can enhance how your logical structure
works together. These templates can be used to create automated processes that protect
and store content, using key data fields to set policies that automate data protection.
Make sure to use these features in templates so that you have a well-rounded process in
place when needed.
Note
RAM comes into play: in the initial farm, only 5% of the RAM on each server
supporting the MinRole is dedicated to this service. The more you adjust the
distributed cache configuration, the more RAM you may have to reserve for this
service. We talk about this more in Chapter 9, Managing Performance and
Troubleshooting.
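The note's percentage translates into simple planning arithmetic. The sketch below uses the 5% starting figure from the text purely for illustration; the actual allocation on a server is adjusted with the `Update-SPDistributedCacheSize` PowerShell cmdlet, and this is only the estimate you would use when sizing server RAM.

```python
# Planning arithmetic for the Distributed Cache reservation described in the
# note above. The 5% fraction comes from the text; treat it as a starting
# estimate, not a fixed rule.

def cache_allocation_mb(total_ram_gb, fraction=0.05):
    """Estimate the RAM (in MB) reserved for Distributed Cache on a server."""
    return int(total_ram_gb * 1024 * fraction)

print(cache_allocation_mb(16))   # 819 -> roughly 819 MB on a 16 GB server
print(cache_allocation_mb(64))   # 3276 -> grows with the server's RAM
```

If you later increase the fraction for a cache-heavy workload, the same arithmetic tells you how much additional RAM to budget per server.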
Physical design is not as hard as it used to be. Before, we had to worry about which
services were started on which server. Now, with MinRoles, that issue goes away if you
use them. We can be alerted for compliance based on the role of the server in the farm:
if a service is not supposed to be started, the server will show as out of compliance.
MinRoles help with these scenarios by keeping the services needed to support the role
of the server minimal. In the past, we were responsible for managing this ourselves,
and I am sure Microsoft saw the writing on the wall from its long list of support
tickets, which gave them the idea to put MinRole in place. MinRole was created to help
administrators and was introduced in SharePoint Server 2016, carrying forward into
SharePoint 2019. It helps administrators follow best practices and use server resources
wisely, supporting both how farm server resources are configured and the performance of
those resources. Examples of MinRoles include Front-end, Application, Distributed
Cache, Search, Front-end with Distributed Cache, Application with Search,
Single-Server Farm, and Custom.
Looking at these areas of concern, we can create an architecture for the hosts and the farm
shown in the following figure for SharePoint Server 2019:
Dev and test environments will also need to be stood up as part of the architecture.
Dev environments vary in size; usually, we have one SharePoint server and one SQL
server, just enough for light development. This would support out-of-the-box
development for different departments using separate site collections. Scheduling of
reboots and testing should be coordinated with the users that require this development
farm.
For larger environments with several developers, each developer should get their own
server. The problem with SharePoint 2019 is that, unlike older versions, you cannot
install SQL Server on the same box and/or use SQL Express, as this is not supported.
So, each developer needs a separate SQL server for their server farm, or you can use
one large SQL server to support all developers with separate named instances. This can
work, but keeping those developers independent makes more sense, as each should have
their own environment.
Team Foundation Server comes into play as well, so any code that's developed can be
pushed for deployment and tested in the test environment before it's pushed to
production. Pre-test environments can also be stood up in cases where you want to
validate code before putting it on your test environment. Test environments should be a
duplicate of the production environment and should have minimal use and test content.
This environment should be clean, and the only changes made should be good changes that
were tested previously.
Time and money are always part of the equation, so we need to do our due diligence when
planning our IT architecture. We need to review our requirements and make the right
decisions to bring the best solution to management for the best cost. Looking at our
current state and our acquired farm, we have to combine the farms and solutions into
one. This can be a complex process depending on your situation.
In this situation, we have some clear guidance, and we understand this from the
information given in this chapter. We have pointed out the server resources currently
in place and some details of the farm configuration. Assessment data has been gathered
and, again, all bases have been covered in gathering information on SharePoint
Server 2019.
What role does the cloud play in your environment? Can we build it into our
architecture to save on costs? In this scenario, we don't have any requirement to add
the cloud to our migration process, or even a hybrid solution, but you may find
yourself in that situation. If you do, study the cloud thoroughly; there are many
subscription models, and some models offer solutions you may need that others do not.
Call Microsoft for support if you get stumped, and do not guess on this type of
configuration. Licensing can be very complex, and it may not make sense to you. Make
sure to ask the experts when coming up with strategies to move to the cloud.
As you can see, building server architecture is not hard, but you have to take all the
requirements into consideration and research how you can create a low-cost, robust, and
best-performing server farm for your company.
Now, with the vast information available from Microsoft and blogs online, there is no
excuse not to have documents in place to support your efforts for the SharePoint
enterprise. Yes, it takes more time, but be thorough so that you can be successful and
have references on why, what, and how you created this platform. I cannot stress enough
the importance of this and the other documents I walk you through in this book. They
are needed to support the product and the user communities using the SharePoint farms
for services and solutions. They also help you, as an admin or architect, understand
what you have designed. As you configure and support the farm in the enterprise, this
document, the foundation of the build, can be referenced and updated whenever you need
to review a configuration or make changes as you grow.
So, in my efforts to help organizations document their environments, I have composed an
outline of what I use to document the design of my SharePoint environments. This
documentation covers the design of the SharePoint environment; it is not intended to
collect step-by-step SharePoint installation instructions, but rather to cover the
build of the environment that frames the moving parts and how they work together to
support the farm. The document should include an executive summary, solution design,
and security.
Executive summary
This section of the document should give an overview of what the document represents in
support of the SharePoint project. Here, we write a summary of the document for
executives who may not have the time to read through the entire document. This portion
of the document includes the following topics:
• Purpose: This section of the document should be used to give a brief description
of the document's purpose. In our case, this document represents the design and
configuration for the SharePoint enterprise environment we are deploying.
• Scope: This section of the document lists the scope of the environment. This could
include the Office server, SQL Server Reporting Services, PerformancePoint,
PowerPivot, storage configurations, third-party integration, or anything that is not a
native SharePoint installation.
• Out of scope: This section of the document lists the services or solutions within the
environment that will be out of scope, such as business intelligence, RBS, hybrid
connectivity, or third-party tools you may have discussed previously that didn't
make it within the budget.
Solution design
This section of the document provides details of each portion of the solution. So, in our
case, since we are designing a SharePoint enterprise solution, we will need to mention
all the design requirements we reviewed to create this environment. You would want
to mention what services will be used to support the environment from a SharePoint
perspective.
We would also need to mention how the environment will be supported from a security,
capacity, scalability, and availability perspective. You would also include contingency
plans, as bullets, on how you would support the environment during a disaster. Mention
any security design considerations you took during your research as well. A summary of
these items would be followed by a more complete explanation of each of the following
areas:
• Network: In this section, you would want to mention best practices for SharePoint
Server roles; for example, whether a Web Front End (WFE) server should run only the
services needed to serve users, or whether the WFE should host index partitions for
search as part of the search configuration. You may want to mention any other
network-related areas, such as VLAN configuration and domain information, that might
weigh in on the design. Include any diagrams as well.
• Hardware: In this section, you would want to mention best practices related to
hardware. This could relate to SQL Server, which you may choose not to virtualize, or
to which type of virtualization you will use, such as Hyper-V or VMware. Make sure to
mention anything in the hardware design that departs from the norm.
• Software: In this section, you would want to list any software that would be used
for the installation of this enterprise solution. This would also include listing the
prerequisites as well as any extra tools, such as ULS Viewer or Fiddler, which are
utilities that you might use for administration.
• Environments: In this section, we want to list the environments and their purposes
supporting the enterprise solution, which could be dev, test, UAT, and production.
We would want to mention any best practices, such as any separate service accounts
and/or domains that will be used for each environment for security separation.
You would also want to mention a release management plan or guidance on how
this process will be managed within the environments. Include any diagrams or
supporting documentation names and links as well.
• Virtualization: In this section, we list all servers that will be created in the
environment, using a table with names, descriptions, quantity, CPU, memory, operating
system type, and other information that can be captured for each server you are
creating to support the environment. We also need to mention any best practices we will
follow in configuring our hosts and VMs; for example, a best practice such as no VM
server using dynamic memory allocation is the kind of thing to list here.
• Capacity and storage: In this section, we talk about the databases and storage
needed for the overall enterprise solution. User capacity needs to be tested, which
can be done with a Visual Studio load testing application; this lets you verify the
number of users the hardware you configured will support while still giving the
solution great performance based on best practices. Defining the types of drives,
including any cloud storage used as part of the environment, such as OneDrive or
search in the cloud, and their speeds is crucial as well; costs can be kept down by
using slower disks in areas where they can be utilized. We should also document the
sizes of all databases and configuration details, and define all database size
limitations, within this section using a table. If you're pre-creating databases, this
is a good place to list them; if you are not, this is a good reference for naming
conventions as you install the product. We would also want to document our database
maintenance plan here.
Don't forget to define drive sizes for your servers for the disk space expected to
support search indexing. When defining the search index location, remember that
indexes can be replicated to other servers depending on their role in the farm. This
location is defined during the SharePoint Server installation process as a data
location, or it will reside in the cloud if you use search in a hybrid configuration.
We also need to remember logging for each component we use to support the environment.
This includes SQL Server with Always On, SharePoint, and IIS. Remember, too, that logs
grow substantially when migrating content from one farm to another. Make sure to
account for these sizes when building your servers.
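Once the individual sizes are in the capacity table, the totals per server are simple arithmetic. The sketch below adds up example figures with headroom for growth and migration-time log bloat; every number and the 30% growth factor are placeholder assumptions to be replaced with your own sizing data.

```python
# Planning sketch: total the expected storage per server from the sizing
# table in this section. All figures and the growth factor are placeholders.

def total_storage_gb(items, growth_factor=1.3):
    """Sum expected sizes (GB) and pad with headroom for growth and log bloat."""
    return round(sum(items.values()) * growth_factor, 1)

sql_server = {
    "content_databases": 400.0,   # sum of all content DBs
    "search_databases": 50.0,
    "transaction_logs": 100.0,    # logs grow substantially during migrations
}
print(total_storage_gb(sql_server))  # 715.0 GB with 30% headroom
```

Re-running the total whenever a database is added keeps the drive-size row of the table honest instead of relying on the original estimate forever.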
• Web applications: In this section, we define the dedicated web applications that will
be used as part of our solution for the administration of SharePoint and for supporting
our user community. Web application settings, which define areas such as
authentication type, time zone, file upload size, and others, should be captured for
each of these web applications and documented as part of this section.
You should also define standards to follow in naming web applications and their
associated application pools and databases. We should also define our sites, and the
configuration details that should be listed here in this document. This should also
include all site collection best practices and guidelines within the web applications
for site collection admins to follow.
Note
Remember, host-named site collections are discontinued in SharePoint
Server 2019.
• Central administration: In this section, list all details pertaining to the central
administration site, which could be a vanity URL or the port that the site will run
on. We would also want to mention access to the site as this could be a limited
community of users. With that, an AD group should be created, which should also
be listed here.
• Site collections: In this section, let's cover how we will create site collections and
how they will be used within our organization. You could mention that a site
collection will be created for each department or that a site collection will be created
for each site as determined by our stakeholders. We also want to mention quotas
and other details related to metadata, content types, and site columns.
• Email: In this section, we need to define how email will be used within the
SharePoint enterprise solution. Define use with workflows and notifications as well
as the configuration of TLS using the new configuration settings within SharePoint
Server 2019.
• Search: In this section, we need to define everything related to search: the
configuration, any content sources that will be configured, redundancy
configurations, index size and capacity, crawl logs, and any special configurations.
Crawl schedules and targeted content sources should also be captured here as per
the requirements. Service accounts and the configuration for using those accounts
should be defined as part of this section. Remember, hybrid could come into play in
this configuration as well.
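The content sources and crawls described above can be created from the SharePoint Management Shell. The source name and start address below are examples only:

```powershell
# Get the default Search service application.
$ssa = Get-SPEnterpriseSearchServiceApplication

# Register an example web content source to be crawled.
New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
    -Name "Intranet" -Type Web -StartAddresses "https://portal.contoso.com"

# Kick off a full crawl of the new content source.
Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Intranet" |
    ForEach-Object { $_.StartFullCrawl() }
```

Crawl schedules can likewise be set with `Set-SPEnterpriseSearchCrawlContentSource`; document whatever schedule you choose in this section.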
• Profile imports: In this section, we define profile imports and scheduling for
imports within our SharePoint configuration. Any service accounts that are needed
that will be used as part of the import would need to be defined in this section as
well. Remember, Azure AD could come into play here as well.
• Monitoring: In this section, we define how we monitor the enterprise solution and
mention what tools we will use to do so. As we define these areas, we want to make
sure we have mentioned any in-scope third-party solutions at the beginning of the
document. This would include monitoring of the network components, services,
servers, VMs, and so on. We should also set any per-component thresholds that
need to be captured in this document so that other administrators of this
environment understand those thresholds and make sure to adhere to those
maximums. These thresholds would be defined by testing using the Visual Studio
tools for simulation or other tools as you see fit.
• Backup and recovery: In this section, we list all areas of the farm that should be
backed up and that are essential to recovering the farm in a disaster recovery (DR)
scenario. These documented areas should be listed and configured in your backup
software. The
frequency of these areas should also be documented as some may not need to be
backed up as often, such as a records management platform where documents
are finalized and in a view-only access area. We would also need to consider SQL
database backups, which should be part of a maintenance plan but could also be
picked up as part of a daily backup for offsite storage. Timing and frequency should
be considered as part of this documentation.
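The farm-level backups discussed above can be driven by the built-in `Backup-SPFarm` cmdlet; the UNC path here is an assumption, and scheduling would be handled by Task Scheduler or your backup software:

```powershell
# Full farm backup to an example UNC share (the farm account needs write access).
Backup-SPFarm -Directory "\\backupsrv\spbackups" -BackupMethod Full

# Differential backups between fulls shorten the backup window.
Backup-SPFarm -Directory "\\backupsrv\spbackups" -BackupMethod Differential
```

Record the chosen frequency for full versus differential runs in this section of the document.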
• Software updates: In this section, we need to define how software updates will take
place, either manually or automatically. In most environments I have been in,
patching is done manually, and this would be my recommendation as well. We
should be testing updates through development, test, UAT, and production to make
sure they are working as they should. Verifying patches in those lower
environments first keeps us from breaking our production environment.
Most companies also have separate teams that support individual IT areas, so in that
case, you will have an overlap of support for updating servers, SharePoint, and SQL
Server. In this case, scheduling becomes a factor or you have your SharePoint admin
update all areas of the server. In a lot of cases, SharePoint admins can have many
servers, depending on how their environment is configured, and could have to do
updates in a certain order based on the application.
Some third-party tools require servers as support for their platform. My
recommendation is to give the SharePoint admin access to update all servers
supporting the environment. If the environment is built for redundancy, then we
should be equipped to provide users around-the-clock access to the environment. In
this case, we can use what are called zero-downtime methods so that we can update
the server without disruption. This would mean all levels of the stack would have
redundancy: WFE, the application, and the database.
• Ports: There is a pretty detailed list of ports needed for SharePoint to communicate
between servers within the enterprise. These ports should be documented here
either through diagrams or a table listing those ports and descriptions. With
SharePoint, there are intra-server communications, which refer to how SharePoint
services communicate, and extra-server communications, which refer to how
SharePoint communicates with other servers and applications supporting the
platform. We will detail that list and the script to configure them later in this book.
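As a small foretaste of that script (the ports shown are common SharePoint examples only; confirm the complete list against Microsoft's documentation for your topology before opening anything), Windows Firewall rules can be created like this:

```powershell
# Distributed Cache (AppFabric) port range used between farm servers.
New-NetFirewallRule -DisplayName "SharePoint - Distributed Cache" `
    -Direction Inbound -Protocol TCP -LocalPort 22233-22236 -Action Allow

# Service application (web service) communication between servers.
New-NetFirewallRule -DisplayName "SharePoint - Service Applications" `
    -Direction Inbound -Protocol TCP -LocalPort 32843-32845 -Action Allow
```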
Note
Documenting any of these sections with supporting diagrams is also a great
way to present solutions as part of this documentation.
Security
This section of the document should describe how security will be implemented in
support of the SharePoint project. Start it with a short summary of the security
approach for executives who may not have the time to read through the entire
document. This portion of the document includes the underlying topics listed here:
• General: In this section, we need to define the general security practices that will
be followed within our SharePoint environment. Areas worth mentioning include
where Central Administration is hosted, whether direct user access to databases is
allowed, the use of separate service accounts for each service within the
environment, and any other SQL or environment-wide security best practices you
will follow as part of this enterprise solution.
• Physical security: In this section, we need to define any physical layers of security,
such as server rooms and PIV card access to servers and desktops. Physical security
could also be part of a certification you hold, as more companies are using these
certifications for government contracting. Any physical security enhancements and
authorizations would be good to mention in this section.
• Authentication and authorization: In this section, we need to define which types
of security you will use for authentication and authorization. If you're using AD,
you are most likely going to use claims authentication and Kerberos for integration
with AD. You could also mention other authentication methods, such as SAML,
which enables PIV card integration for stronger authentication options.
Authorization can be described in terms of how you use groups within the solution,
either SharePoint groups or AD groups.
Note
Always use AD groups for your SharePoint security when you can. If there are
areas where they just don't work, then use SharePoint groups.
• Antivirus: In this section, we need to define the type of antivirus we will use to
support the upload and download of documents within our SharePoint farm. This
is one area I saw during my travels that was abandoned because everyone thought
since they had antivirus on desktops, there would be no need for it to be installed
on the farm. My opinion is you always need to be careful and don't take chances
with your data.
With antivirus, certain areas on the server need to be documented as well and
excluded from scanning. Those exclusion paths are documented on Microsoft's
website. These exclusions prevent the antivirus program from locking or
quarantining SharePoint files and interrupting the SharePoint service.
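If the farm servers run Windows Defender, those exclusions can be applied from PowerShell. The paths below are common SharePoint locations shown as examples; use Microsoft's published exclusion list for your SharePoint version as the authoritative source:

```powershell
# Exclude the SharePoint hive and web server extensions from Defender scans.
Add-MpPreference -ExclusionPath "C:\Program Files\Microsoft Office Servers\16.0"
Add-MpPreference -ExclusionPath "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16"
```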
• Auditing and policies: In this section, we need to define how we will audit users
and what policies we will put in place to ensure data compliance. Information
Rights Management (IRM), along with the data compliance features introduced in
SharePoint Server 2016, can be used to flag documents and make sure data is
protected from users who do not need access.
• Security principles: In this section, we need to define how admin and service
accounts will be used in the solution for the separation of duties. Document their
purpose, local policy settings, and database access in this section; a table works best
for pulling these details together. Local service and network service accounts should
be documented here as well. Also document group permissions, as out-of-the-box
SharePoint creates groups that are used within the configuration. These are often
forgotten about, so make sure they are part of your documentation for reference.
• Group policy: In this section, we need to define the GPO settings that apply to the
farm servers. This is one area where a lot of admins get stuck and wonder why
security isn't working in the environment, so make sure all of these policies and
their settings are documented.
• Blocked file types: In this section, we need to define a list of file types that will be
blocked from use within the SharePoint libraries. These file extensions should be
documented so that they are captured for reference.
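The blocked-extension list documented here can be maintained per web application through the object model. The URL and extensions below are examples only:

```powershell
# Add example extensions to a web application's blocked file list.
$webApp = Get-SPWebApplication "https://portal.contoso.com"
$webApp.BlockedFileExtensions.Add("ps1")
$webApp.BlockedFileExtensions.Add("vbs")
$webApp.Update()   # persist the change to the configuration database
```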
• Appendix: If needed, add an appendix to the document.
You are not finished with your design document yet, as it is a living document
created to capture changes in your design. Keep it in a safe place where only you
and your team can get to it. Have someone review it as well to make sure you have
hit all areas of your design and the services you are planning to configure in your
farm. The key to a successful implementation is getting things right the first time
and always checking and rechecking against Microsoft best practices and your
assessment.
Disaster recovery
In this section, we need to look at recovery from the unexpected failures that can
bring down the server environment, leaving the service unavailable or partially
available. This could be the loss of application services or even just the loss of data.
In the case of SharePoint, network components such as load balancers, routers, and
DNS can also alter the availability of the service.
With that, we need to take into consideration the company's policies on the Recovery
Time Objective (RTO) and Recovery Point Objective (RPO). There are disaster recovery
concepts that can be followed that support SharePoint 2019. These concepts have not
changed since SharePoint 2013 was in play, which includes standby recovery options,
service application redundancy, and third-party solutions specifically for SharePoint:
• The RTO defines the time metric for how quickly we can recover from a disaster.
This is the expectation the company commits to as part of a Statement of Work
(SOW) given to the users or departments using the service, so that customers
understand the services provided and the situations that may come into play within
the environment.
• The RPO defines the point in time to which you would like to be able to recover.
For example, the point of recovery could be the time of a full SQL database backup;
if you also have a differential backup taken later, overlaying the differential on the
full backup brings your point of recovery to a more recent time. This would be
defined in the backup strategies, which we will talk about later in the book.
• Standby recovery options are provided using separate data center locations to
house a system or service so that in the case of a service or group of systems that go
down, those systems can be recovered and provide the service from a different set of
servers or services. There are three different models for these options:
The first is cold standby, which is a data center that could be back online within
hours.
The second is warm standby, which is a data center that could be back online within
minutes or hours.
The third is hot standby, which would make the systems redundant, and if
something were to happen, the users would not see much of a change in service.
This would facilitate an almost-instant recovery, with some caveats of URL changes,
DNS updates, and other networking configurations that could be automated or
done manually.
• Service application redundancy can be configured so that services within
SharePoint remain available during an outage, which requires those services to be
running on hot standby servers. This is not a bad solution, but we have to
remember that SharePoint also brings with it a database tier. In that case, we can
also ship logs to a database server in another data center, which pushes all database
changes from the production farm to the DR farm in another location. The
databases that support SharePoint can also be kept in read-only mode if you do not
want any changes to be made once failed over, which can come in handy in some
scenarios.
• Third-party solutions come into play, and I refer you to AvePoint again as they
provide some solutions that provide synchronization of content between locations
and disaster recovery solutions just for SharePoint. If you have the funding, these
are good to take a look at, but also remember that this comes at the cost of new
servers to support the functionality and more time spent by your administrators
managing these services as well.
Summary
In conclusion, the design of your SharePoint farm means everything to the support of
the service. This means you cannot miss much when implementing a SharePoint farm.
Making sure to document your infrastructure is key due to three things you will need now
and in the future:
• Having a place to create, share, and save your design for review and changes
• Making sure you do not miss anything during the design process
• Giving someone else a chance to help, troubleshoot, and take over if you leave the
company
Again, I cannot stress enough: document! I will say this many times in this book,
but documentation helps you figure things out because you write it out rather than
just keeping it in your thoughts. We admins tend to want to work off the top of our
heads, but some things need to be written down. Don't keep things hidden in your
mind; put them on paper. I have heard colleagues ask why a team member doesn't
know something that seems obvious. Well, writing things down makes it even more
obvious and gets people working independently so that you don't have to hold
their hand.
As part of that, always make sure to go through the scenarios when examining your
current state. The assessment, best practices, and other planning information help to
bring your design into a perfect scenario for your company. Remember that detail needs
to be captured. Any little nuances, such as a custom port for incoming mail or as little as a
service account used for a custom service – all these items need to be captured so that the
design is understood. No one should be hunting you down for information or to see how
things were designed.
We learned a lot in this chapter from the many angles that SharePoint has to offer. Some
parts I didn't even touch upon. This chapter taught you a few things that will help you
plan for success in many different scenarios. Although I didn't go too deeply into some
of the areas I wanted to, this gives you a road map on where you need to go and what you
need to focus on when designing and planning your architecture. Planning is everything
and it creates the design for the farm as it answers many questions for you and your
management. Always make sure to focus on planning as this is the only way to a successful
SharePoint environment. There is no other way.
In the next chapter, we'll look at how we can go about creating and managing VMs.
Questions
You can find the answers on GitHub under Assessments at https://github.com/PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/master/Assessments.docx
1. What standby recovery options would require some configuration to bring online?
2. What Microsoft Server product is used to support the development cycle and uses
the three environments to help push and version developed code through those
environments?
We will also take a look at general changes in Windows Server over the past couple
of operating system versions to make sure you are caught up and understand where
Server 2019 currently stands. Knowing what has changed in the operating system
helps you use its new features to better support the environment.
The following topics will be covered in this chapter:
Technical requirements
For you to understand this chapter and the knowledge shared, the following requirements
must be met:
You can find the code files present in this chapter on GitHub at https://github.com/PacktPublishing/Implementing-Microsoft-SharePoint-2019.
Host and VM configurations are the backbone of your server architecture and are the first
step in starting to build a platform to support our farm. SharePoint runs well on hardware
built to support it, as in any application, but the difference with SharePoint is that it
covers many tiers and we need to think about those tiers as we look at starting our project
implementation.
Some of your research should have already been completed as you should have already
created a design document, but this doesn't mean we can't change a few things along the
way as long as you document. We need to make sure to play our part in making changes
and discussing these changes with other people on our team. Some of the areas that we
may need to stop and think about in this portion of our implementation are the following:
• Server feature comparisons (which server is better for my needs: 2016 or 2019?)
• SQL Server version (which server is better for my needs: 2016, 2017, or 2019?)
• SharePoint Server version (which server is better for my needs: 2016, 2019, and/or
Microsoft 365?)
• Third-party products, network products, and other data and backup integrations
Do not move forward until you have really finalized your choice of these supporting
servers and you understand the differences. You don't want to go down the road of
selecting a platform that doesn't support your future plans. You may see that a feature
you need is not available in your version of operating system, AD, Exchange, SharePoint,
or SQL. You may need a feature to support either an old application or new technology
you were hoping to break ground with. Some of the differences in technology also come
with added server resources, so make sure you are aware of those areas. Document all
shortcomings, especially if you want to move forward with a product but you already
know there is a hiccup in the support for the farm; just document this in the design
document or in a separate document so that these items are noted.
The reason why I make this point is that there have been so many changes from
SharePoint 2007 to SharePoint 2019 and the server platforms that support it, so it is
possible that you may overlook some areas. Make sure you go back and look at the
changes that have been made and ensure you are making the right decisions. Even
some third-party products have areas that you wouldn't think of asking about, such
as the product I have used for migration, AvePoint's FLY tool, which does not support
SharePoint 2007. You want to make sure not to assume anything.
So, say you were building all this architecture to migrate from 2007 to 2019 and you
thought you had the product to do it. Then, you get to the point where you need to use it
and find out that it doesn't support your old environment. This would be a big loss of time
because now you're back to square one trying to find a solution. One solution that still
costs a lot of time would be migrating to 2010 first and then migrating to 2019 from that
environment using the tool. This would take time because in SharePoint 2007, the main
support was for classic authentication. When moving to 2019, you have to move to claims,
so you would need to migrate to 2010 and update all the users from classic to claims and
test. This could cause a major setback in the time spent.
So, my take on this is before you start, have a final discussion on these topics with your
team and finalize anything you may need to move forward. Look at all aspects of the
operating system, software, and third-party integrations before you move forward. This
could be getting approvals on builds, approvals on funding, and other areas. You don't
want to hold the project up because you didn't do enough research, so I am adding this
point to give you a heads up to always do a second round of quick checking and research.
This may take a week or two but it will be worth the effort in the long run to minimize
time loss.
• SAC (Semi-Annual Channel): Microsoft ships two Server Core releases per year.
• LTSC (Long-Term Servicing Channel): Microsoft ships feature updates every 2 to 3 years.
• Configuration version
• Virtualization security
• Windows containers
• Windows PowerShell Direct
• Desktop Experience
• System Insights
• Server Core
• Windows Defender Advanced Threat Protection
• SDN
• Shielded VMs
• HTTP/2
• Storage Migration Service
• Storage Spaces Direct
• Storage Replica
• Failover clustering
• Linux containers
• Kubernetes
• Container improvements
• Network performance
• Windows Time service
• High-performance SDN gateway
• Delay Background Transport
• Memory support
As you can see, there are no real differences here but more additions in Windows Server
2019. I am only adding this information to let you know what changes were implemented
in Server 2016 because you need to know these features as they are part of 2019 as well.
In most cases, as I have stated in this book, you are moving from an older version of
SharePoint, so you really need to understand what has transpired from version to version
in your operating system as well as SQL and SharePoint. This will give you an overall
understanding of what your systems have the ability to support.
When looking at features and areas to support our environment, we also need to define
the roles needed within the farm to support the services. In the next section, we will talk
about server roles and the importance of these roles in the environment.
Within our architecture, we must include a design that supports redundancy, backup,
and restore, provides enough resources to address performance concerns, and builds a
stable server architecture. The last thing we want in our host configurations is error and
downtime caused by misconfiguration or miscalculation of resources. The best thing to
do is to take your time and figure out what resources will be needed to support your
environment, especially looking at hardware types (Dell, HP, NetApp, EMC),
rack space planning, networking equipment, and cabling. We also want to give some
measurements to our installation by looking at server resources such as CPU, memory,
disk space, types of disks, and disk speed.
When choosing server resources, make sure to look at and evaluate the speed of the
hardware resources you choose. These choices make a big difference in how the server
responds to processing and deals with memory, as well as the requirements for disk
speed. The disk choice could be dependent on what those disks are supporting within
your server resource. We could use an SAS drive for our operating system and partitions
where you need the fastest response. There are also RAID configurations we could use for
redundancy, such as mirroring and/or RAID 10, to make sure our data is always protected.
Disk arrays have changed over the years and could be another investment you could bring
to the table. These new technologies bring better ways to manage disk space along with the
ability for enormous growth using connected hardware.
Logging may not be something we will need the fastest disk for, so we could take a hit on
those resources and choose lower-speed disks. We could also use lower-speed disks to
support backup file locations and record center locations where there is not much activity
or user interaction. This takes planning, coordination, and documentation of those areas
to support the configurations you are setting in this case. I can't stress enough in this book
about documentation. If you're not already doing it, please start doing it on this project. It
will save you a lot of headache and pain down the road.
In another vein, you may not need as much memory for a VM server in certain
environments or use cases. I can tell you that I've been in a bad situation before when
I used less than the recommended processor and memory for an Exchange and Skype
implementation that I carried out in a test environment. It caused a lot of errors within
my Outlook and Skype clients due to the server not having enough memory to process
presence within the application. So, this can be very important depending on the
application. If you have the resources to do so, do as Microsoft recommends because it can
save you a headache.
So, let's take my environment as an example and look at all the servers I will be building:
• AD server
• Web frontend servers
• App servers
• Two SQL servers
• Exchange server
You could call this an evaluation, proof of concept, or DEV environment, so not many
people will be using this environment. The support for users is very minimal, which
means I do not need a lot of resources to really support a lot of overhead. My server
footprints will be small and resources will also be minimal. So, when looking at our
example, we will create servers that do not need many resources, but we want enough
resources to support our environment from a best practice perspective. Please take into
consideration that we are not building production in our examples.
When making decisions on hardware, always consult the manufacturer to make sure you
are making the right decision. You will need to explain what your project is about and
what you want to get out of the hardware you purchase. Tell them about any software you
are going to use, in this case, SharePoint and SQL Server, and how many users you will be
supporting.
If you believe your CPU will be your biggest worry, then do some research before your
conversation; ask the manufacturer about any details you found, test the scenarios, and
look at reports from old environments as well to make the best decisions going forward.
Make sure you buy/create a server that can provide a better response to your environment
needs and not take on the same issues you had in the past from a hardware standpoint.
In my environment, I am using a Dell R710 server and installing Windows Server 2019 to
build my host servers. I am doing this to show the new advanced features that Windows
Server 2019 offers. In that configuration, my hosts have five Network Interface Cards
(NICs), two eight-core CPUs, 144 GB of disk space for my C drive, 2 TB of space for my
VM locations, and 80 GB of RAM.
My environment has four host servers available and we will use two of the servers as
SQL servers to support the database tier. SQL Server is the backbone of our SharePoint
environment and we should not skimp on making this tier as strong as possible. For my
other two hosts, I will use them as my SharePoint and/or web application tier. On this tier,
any third-party applications can always be added as supported applications if needed.
We also need to make sure that as we build our servers, we relate them to the application
services we will be running on our SharePoint servers, especially when using MinRoles.
We will look into this more in Chapter 4, Installation Concepts, which will point out
services we need to be careful with. Distributed Cache is one service where we need to be
very careful with how it's configured, whether on a dedicated MinRole server or alongside
other services, and we must assign the resources it needs to run efficiently. MinRoles
dictate which application services run on a SharePoint server, so defining server resources
goes hand in hand with our MinRole choices for our SharePoint servers.
In this environment, I will be using Hyper-V (because I am a Microsoft guy) to support
the virtualization within the server builds. The host server will need dedicated resources
as well, to make sure it can not only support the VMs but also support itself, as it needs
resources to keep running and to process everything the Hyper-V host requires. So, in
our configuration, out of the 80 GB of RAM and the CPU cores available, we need to set
aside some solid resources for our hosts. The host will not be working very hard in my
environment because of the light user community.
So, let's get started with configuring our first host that will support AD!
Again, to start the Windows installation, use a mounted DVD, a mounted USB drive, or a
shared folder. Once you have inserted the media, you will see the Windows Setup screen,
and then you can take the following steps to start the installation of Windows Server 2019:
3. Click Next to continue the installation, choosing the desired operating system you
would like to install:
Note
Standard (Desktop Experience) supports small, virtualized environments, and
Datacenter (Desktop Experience) supports highly virtualized and cloud-based
environments. The choices without Desktop Experience are Server Core
installations, which are managed from the command line only.
5. Click Next to continue the installation (you may have many drives here and some
may need formatting – use the tools as needed):
7. After your server reboots, you will be prompted to supply a password for the
administrator credentials:
We have completed our installation of Windows Server. We will now configure our
network and server names for the servers that will support our farm.
1. On the desktop, press Ctrl + Alt + Delete to log in to the server with your admin
account.
2. Click the Start button to pull up a list of available options and choose File Explorer.
3. Right-click on This PC on the Windows menu and choose Properties:
1. Once the server is installed, go to Control Panel and choose the Network and
Internet feature. Next, select Network and Sharing Center:
5. Select Use the following IP address. Insert your static IP address for this
server, the subnet mask, and the default gateway. Also, enter your preferred DNS
server and alternate DNS server addresses. Then, click OK:
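The same static addressing can be applied from PowerShell instead of the GUI. The interface alias, addresses, and DNS servers below are placeholders for your network:

```powershell
# Assign an example static IP, mask, and gateway to the NIC named "Ethernet".
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.21 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1

# Point the NIC at the preferred and alternate DNS servers.
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
    -ServerAddresses 192.168.1.10, 192.168.1.11
```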
1. Go to Control Panel by typing control panel in the run or search area of the
desktop. You can also find Control Panel by clicking start as well.
2. Add the domain name you will use in your environment (this should be the
name of the AD domain created for your farm) and the correct server name in
the Computer name area of the form:
3. Type in the domain admin account and password and click OK (if you do not get
this screen and get an error, please check your DNS, and even rebooting may help at
this point):
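The rename and domain-join steps above can also be performed in one PowerShell command from an elevated session; the domain and server names here are placeholders:

```powershell
# Rename the server and join it to the example domain, then reboot.
Add-Computer -DomainName "corp.contoso.com" -NewName "SP2019-APP1" `
    -Credential (Get-Credential) -Restart
```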
We have now added our new server(s) to the domain. Repeat these steps as needed within
the configuration of your servers. Now that we have our host established, installing
Hyper-V is next on our list to be completed.
If you would like to use PowerShell to install Hyper-V, make sure you launch PowerShell
with elevated privileges, just as you would run the GUI-mode installation as an
administrator. You must run as an administrator for these commands to complete
successfully:
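A minimal version of that installation is a single cmdlet (shown here with the management tools included; `-Restart` reboots the server immediately, so omit it if you need to reboot later):

```powershell
# Install the Hyper-V role and its management tools, then restart.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```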
If you want to see the installation visually, then take the following steps:
1. Open Server Manager from the desktop of your server by searching for server
manager or click the icon for Server Manager in your start window. Select the
following option and click Next:
5. Make sure to check all the Hyper-V roles, as shown in the following screenshot:
9. Select the default locations for hard disks and configuration files. Choose a location
that can grow, not your operating system disk. Once you have added a location for
each option, click Next:
Note
These concepts only work in Windows Server 2016 and 2019 and are not
supported in earlier versions of the Windows operating system.
For the overall host configuration, review the following checklist to make sure the
locations and settings are in place. If you are using live migration, evaluate the related
items before you start using Hyper-V:
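On the PowerShell side, the default disk and configuration locations from the earlier wizard step can also be set with Set-VMHost. This is a sketch; the drive letter is an example and should point to your non-OS storage:

```powershell
# Point the host's default VM and VHD locations at dedicated storage
Set-VMHost -VirtualMachinePath "D:\Hyper-V" `
    -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks"
```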
Some areas of Hyper-V can only be set through the command line and are not exposed
in the GUI. Look through the list of cmdlets available to you and do some research;
you may find that you need some of them to configure Hyper-V further. To list the
commands available for Hyper-V, use the following command:
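A one-liner such as the following will produce the cmdlet list, assuming the Hyper-V role and its PowerShell module are installed:

```powershell
# List every cmdlet that ships with the Hyper-V module
Get-Command -Module Hyper-V
```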
PowerShell is powerful! Make sure you learn to use it as you go through this book to open
your mind to more ways to administer the SharePoint farm and servers in the farm. Most
administrators use PowerShell to manage everything.
Now that we have our server host configured, we can create VMs to support the resources
needed within the environment.
Creating VMs
Now that we have hosts available, we can start the process of creating the VMs that
will support our farm.
The checklist of items we need to create our VMs is the following:
1. Open Hyper-V Manager. Once opened, right-click the server name and select New | Virtual Machine…:
2. In Specify Name and Location, type in the name of your server and browse to
select where your VMs will be located on your server. Click Next after reading the
information about the creation wizard:
Note
This should be storage you have set aside on separate drive space; please avoid
using the C drive!
3. Select the generation of server you would like to use as part of your farm. For the
latest and greatest features, please select Generation 2:
4. Select the amount of memory you would like this server to use for support of
the farm:
Note
Please do not use Dynamic Memory! This is not a supported feature for
SharePoint environments, and Microsoft will not support you if you call in
with Dynamic Memory enabled.
5. Select the network adapter you would like to use for networking configuration:
Tip
You can always change this later, for example by using NIC Teaming or a
virtual switch combined with NIC Teaming. This is just a demonstration of the
basic functionality to get you started.
6. The name of the server file for the VM is shown, as well as the location of the VM
and the amount of disk space you would like this server to have for its first drive:
Tip
Refer back to your drive configurations in your design document to make sure
you build your server correctly.
7. If you have an ISO image or a DVD of the installation for Windows, please connect
it here using the Browse… button. If you are just creating the servers, then you can
install the operating system later as well:
Note
This allows the VM to connect to resources on the host computer very easily
and even allows drag-and-drop of files. This applies to Generation 2 VMs on
Windows Server 2016 or 2019 only!
9. Once completed, go back and click on Integration Services and select Guest
services:
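The wizard steps above can also be scripted in one pass. This sketch uses example names, paths, and sizes; it creates a Generation 2 VM, disables Dynamic Memory as the earlier note recommends, and enables Guest services:

```powershell
# Create a Generation 2 VM with a new VHDX on dedicated storage
# (name, switch, paths, and sizes are placeholders)
New-VM -Name "SP2019-APP1" -Generation 2 -MemoryStartupBytes 16GB `
    -SwitchName "External vSwitch" -Path "D:\Hyper-V" `
    -NewVHDPath "D:\Hyper-V\SP2019-APP1\SP2019-APP1.vhdx" `
    -NewVHDSizeBytes 100GB

# Fixed memory only - Dynamic Memory is not supported for SharePoint
Set-VMMemory -VMName "SP2019-APP1" -DynamicMemoryEnabled $false

# Enable Guest services (step 9 in the wizard)
Enable-VMIntegrationService -VMName "SP2019-APP1" -Name "Guest Service Interface"
```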
10. Start the installation by choosing your language and keyboard method. Click Next:
12. Choose the Windows operating system version and experience you would like to
use for the server:
Tip
Remember that for all SharePoint servers, you will need to use the Desktop
Experience installation. SharePoint does not support the core installation.
14. Choose the drive you would like to install Windows Server on and click Next:
Summary
In this chapter, we covered a lot to create the basics of the SharePoint platform. We
learned about creating VMs and configuring the host to support the SharePoint resources
supporting the platform. We also covered the differences between operating systems
and how they have changed over the years in terms of features and support of the
environment. Make sure that you understand these details because they will tailor the way
that your environment is configured and managed going forward.
Do not under-resource your host or VM servers. Make sure you have plenty of disk space
and memory to support the environment. Also, remember that dynamic memory is not
supported on SharePoint servers; this changed when the Distributed Cache service
was introduced. If you call for Microsoft support, they will not support VMs
configured in this manner. Avoid these mistakes from the beginning; with the low cost of
server components today, resources should not be a limitation anymore.
As you can see from the notes I have provided in this chapter, there is much more to
research if you really want to master these details. Please make sure to research the topics
I have covered and get a clear understanding of Hyper-V, using both the GUI and, of
course, PowerShell for configuration. Do not skip this step in your learning, as PowerShell
is a key way to manage servers, and I suspect it will be the only way to manage servers and
cloud resources in the future.
It takes time to learn but, at the same time, it is not rocket science. Do not be intimidated:
the more time you spend on it, the better you will be at managing servers with no GUI
or visual aspects. Always remember to test your scripts before you run them in a
production environment. This is why I have stressed, in this chapter and throughout the
book, the need to create DEV and TEST environments so that users are not impacted by
mistakes. This also applies to SharePoint, as there are PowerShell cmdlets for that
platform as well. Approach it openly and embrace its power.
In the next chapter, we will start our installations and do some pre-installation tasks to
finalize our server configurations.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
1. What is the difference between choosing the Standard Desktop Experience and
the Datacenter Desktop Experience?
2. The data location for your VMs should be managed on the same drive as your
operating system. True or False?
3. When creating the host and VM servers, should you use DHCP or a static address
to configure IP addresses for your servers?
4. What are the main three environments we need to have to support our SharePoint
development processes ultimately? Why?
5. How do MinRoles relate to server resources?
6. Can the Distributed Cache service be affected by the configuration of your VM?
4
Installation
Concepts
In this chapter, we will discuss SQL and SharePoint installation concepts and how to
craft installations to fit the requirements your team is trying to build to support the
infrastructure. In my travels, I have seen instances where many companies did not pay
much attention to these initial steps on how the farm was installed, which later made
it difficult to make changes in the infrastructure. We must take into consideration
the different versions of software we should be using to make our infrastructure and
project successful by implementing features that support our final build goals. Your
server resource platform, as well as the version of the software, plays a big part in the
infrastructure requirement efforts you are trying to support.
During our build, we must also pay attention to the configuration settings that also come
into play to support the users in your community as these settings can make a difference
in how the farm supports user requirement efforts. Some settings also play into how
we recover from the disasters and hiccups we may face in our farm when the farm is in
production use and available to our users.
In this chapter, we will go through the installation and configuration of SharePoint
and SQL Server. The issues we have seen in most installations take the form of botched
configurations. Looking closely at assessment reports from many different SharePoint
farms and SQL Server instances will help bring up some obvious points in this book to
help you avoid some pitfalls in the future.
We will also expand on some points that admins and others may gloss over as a non-issue
at the beginning because they do this as part of their installation process. From what we
have seen, we believe that most of the topics covered in this chapter need to be reviewed
as there could be something you might have missed.
The following topics will be covered in this chapter:
• Installation updates
• Configuring SQL Server 2017
• Configuring SharePoint 2019 prerequisites
• SharePoint 2019 installation
Technical requirements
For you to understand this chapter and the knowledge shared, the following requirements
must be met. Please review these points to ensure your understanding:
You can find the code files present in this chapter on GitHub at https://github.
com/PacktPublishing/Implementing-Microsoft-SharePoint-2019.
Let's get started!
Installation updates
The installation and evaluation process differs for the new versions of the Windows
operating system, SQL Server, and SharePoint. In this section, we will go through
initial installations of SQL Server 2017 Enterprise and SharePoint 2019 Enterprise to show
you the step-by-step process of these new installations and point out the new and notable
areas to focus on.
When we were writing this book, we really wanted to get the point across
about changes. The reason we mention this several times is that many of you will be
coming from a different version of SharePoint to upgrade to SharePoint 2019. So, we
want to cover these areas well, as some of you may be skipping versions of SharePoint,
SQL Server, and operating systems to move to new Microsoft applications. Skipping
versions like this requires some research, and there are things you will need to understand.
Some of the things in this installation that we need to talk about again are as follows:
• SQL Server and SharePoint cannot coexist on the same server in this version.
• SharePoint 2019 only supports these versions of SQL Server: 2016, 2017, and 2019.
• SQL Server Express, SQL Azure, and SQL 2017 on Linux are not supported.
• Windows Server must be installed using Server with Desktop Experience.
As part of our configuration, we are required to use one of the supported server
operating systems. For the current installation scenarios, you can refer to the following
site: https://docs.microsoft.com/en-us/sharepoint/install/hardware-and-software-requirements-2019.
A list of admin and user accounts needed for the installations of SQL Server and
SharePoint follows (there could be more, depending on your needs):
• SPAdmin: Admin account to manage the farm, mainly used for Windows and
SharePoint updates
• SPFarm: Farm account for the farm
• SPSearch: Runs the search application service
• SPWebApp: Runs the web applications in the farm; in some situations, it is good
to use a separate service account per web application
• SPCTWTS: Account that runs the Claims to Windows Token Service
• SPService: Service account for all services
• SPProfile: Runs the User Profile service in the farm
• SPUPSREP: For User Profile service connectivity to Active Directory (AD)
• SPCacheWrite: Cache account for the web application that has full control access
• SPCacheRead: Cache account for the web application that has read-only access
As you can see, this list is short, and I am sure there are more items you could need during
your installation. There may be other items I have missed because my environment is a
demo of a basic configuration and not a more secure and sophisticated farm.
Now that we have looked at the installation details, let's move on to the configuration
details. In the next section, we will go through the installation of SQL Server 2017 and
break down the installation concepts based on requirements.
SQL Server 2017 can now be installed on a new platform, which is exciting to see:
SQL Server 2017 and 2019 can run on the Linux operating system. What does this mean
for you? It could mean lower costs for operating system licenses, and Linux is known to be
more stable than Windows (depending on who you talk to), so moving your databases to
a Linux server could bring some benefits.
The installation prerequisites are as follows:
• The SQL admin account has been added to the local administrator group.
• All service accounts have been identified for supporting the installation.
• Local policies have been configured for the SQL admin account.
• The firewall has been turned off, or the necessary ports have been configured.
Important Note
REMEMBER: If you are migrating to a cloud service, check how ports are
configured. Even if the firewall is off on the server, the cloud service still
requires you to configure a firewall outside the server, which then allows you
to open the ports necessary for the farm to communicate locally and over the
internet. Do this configuration at the beginning so that you have no issues
when setting up services at the end!
The SQL outbound ports are now set, so let's set our inbound ports.
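As an illustration, an inbound rule for the default SQL Server port follows the same netsh pattern used elsewhere in this chapter; the rule name is just an example, and the port should match your instance configuration:

```powershell
# Allow inbound SQL Server traffic on the default port (example rule name)
netsh advfirewall firewall add rule name="SQL Server Inbound Port 1433" dir=in action=allow protocol=TCP localport=1433
```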
Now that you have created your VM for SQL Server, we need to install the Windows
features that support SQL Server 2017. Open a PowerShell window and run the following
command as an administrator:
Install-WindowsFeature NET-HTTP-Activation, NET-Non-HTTP-Activ, `
NET-WCF-Pipe-Activation45, NET-WCF-HTTP-Activation45, Web-Server, `
Web-WebServer, Web-Common-Http, Web-Static-Content, Web-Default-Doc, `
Web-Dir-Browsing, Web-Http-Errors, Web-App-Dev, Web-Asp-Net, `
Web-Asp-Net45, Web-Net-Ext, Web-Net-Ext45, Web-ISAPI-Ext, `
Web-ISAPI-Filter, Web-Health, Web-Http-Logging, Web-Log-Libraries, `
Web-Request-Monitor, Web-Http-Tracing, Web-Security, Web-Basic-Auth, `
Web-Windows-Auth, Web-Filtering, Web-Performance, Web-Stat-Compression, `
Web-Dyn-Compression, Web-Mgmt-Tools, Web-Mgmt-Console, WAS, `
WAS-Process-Model, WAS-NET-Environment, WAS-Config-APIs, `
Windows-Identity-Foundation, Xps-Viewer `
-IncludeManagementTools -Verbose -Source (windows server installation location\sxs)
In the past, we had to install features manually using Server Manager, which could take
a lot of time depending on how many servers you had to prepare for installation.
To get started with the SQL installation, follow the steps given here:
1. Log in with your SQL admin account only; do not use your personal account to
install SQL Server. Also, have the media available on a DVD or USB drive to get
started. After you start the installation, a window called SQL Server Installation
Center opens, as shown in the following screenshot:
2. Click Installation on the left main navigation list and click the New SQL Server
stand-alone installation or add features to an existing installation link in the
main area of the screen at the top:
6. The SQL Server installation finds any updates and installs them. Your server must
be connected to the internet to use Windows Update. Click Next to continue the
installation:
12. Service account configuration has not changed. Add your service accounts and
select the services you would like them to run. In my case, I am logged in with my
SQL admin account while running setup, and I use dedicated accounts as service
accounts. Earlier in this chapter, I mentioned a service account for SQL; you can
name this account whatever you want, but in my case, I named it SQL Service and
will use it to configure the services. In some cases, admins run separate services
under separate accounts. Click the Collation tab to continue the installation:
Important Note
As you select the accounts, make sure to click and choose them from the
people picker; if you do not, you will receive an error saying that the account
cannot be found.
13. The collation at this point on the server must be configured to be case-insensitive.
There have been many blogs talking about this, and from a Microsoft standpoint,
the server does not need any specific collation as long as it is case-insensitive.
The collations of the databases that are created, however, must match the SharePoint
default collation, which is Latin1_General_CI_AS_KS_WS. Click Next to
continue the installation.
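If you want to verify the server collation after installation, a quick check from PowerShell could look like the following; this assumes the SqlServer module is installed and that you are running it on the database server:

```powershell
# Query the instance-level collation (expects the SqlServer module)
Invoke-Sqlcmd -ServerInstance "localhost" `
    -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation"
```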
14. Set authentication for administrators: the account you are currently logged in with
is added as a given, and you should add any other accounts that will be deemed
administrators of the SQL instance. We do not want to select Mixed Mode for
our SQL Server. As a best practice, we should only use Windows Authentication
Mode to support SharePoint environments. If you plan to use Mixed Mode, in most
cases it means you plan to house other databases on the server as well, which goes
against Microsoft's best practices: a SQL Server acting as the data tier for SharePoint
should support only that SharePoint farm. Click Next to continue the installation.
15. This screen has not changed much in the SQL Server installation process; it still
asks for the locations in which to hold the files for the configuration. Update the
locations as needed in this window.
Ideally, we should have several locations for these files and should not create them
all on the same drive. As part of my installation process, I used separate disks to
hold my database files, separated by configuration type, so this out-of-the-box
configuration does not work for me, which is why I pre-create my databases.
I pre-create my databases because I use separate drive space to house my config
databases, content databases, service databases, search databases, TempDB
databases, and any other databases that are part of my installation outside of
SharePoint; Workflow Manager, for example, would be housed on its own drive
space. I do this type of setup to get the best performance out of my databases by
giving each its own space. So, on this screen, I would set up my targeted locations
for these areas, but the split will also happen later when more databases are created.
16. Click TempDB to see the setting available for configuration and continue the
installation:
18. FILESTREAM is used for Remote Blob Storage, also known as RBS. If you are
planning to configure that service, you need to click the checkboxes associated with
this feature so that it can be installed. Click Next to continue the installation:
21. Now that all the configurations have been set, we will click on Install and start the
installation process for the SQL server.
22. The installation progresses slowly, so be patient and you will start seeing results.
23. Installation is complete. Please click Close to finish the install and afterward, reboot
your server. Your server is now ready for further configuration, which we will tackle
in Chapter 5, Farm and Services Configuration.
We are now finished installing our SQL database server. If you need to add any other
custom utilities or updates to your database server, please do so now. Once we get our
SharePoint farm configured, we will set up maintenance plans to back up our content
and services.
Note that the Office 2019 client cannot be installed on the same server as SharePoint
Server 2019.
Preparing the server is the same as we did for SQL Server 2017 in the Configuring SQL
Server 2017 section of this chapter. Repeat these steps on all SQL and SharePoint servers:
Install-WindowsFeature NET-HTTP-Activation, NET-Non-HTTP-Activ, `
NET-WCF-Pipe-Activation45, NET-WCF-HTTP-Activation45, Web-Server, `
Web-WebServer, Web-Common-Http, Web-Static-Content, Web-Default-Doc, `
Web-Dir-Browsing, Web-Http-Errors, Web-App-Dev, Web-Asp-Net, `
Web-Asp-Net45, Web-Net-Ext, Web-Net-Ext45, Web-ISAPI-Ext, `
Web-ISAPI-Filter, Web-Health, Web-Http-Logging, Web-Log-Libraries, `
Web-Request-Monitor, Web-Http-Tracing, Web-Security, Web-Basic-Auth, `
Web-Windows-Auth, Web-Filtering, Web-Performance, Web-Stat-Compression, `
Web-Dyn-Compression, Web-Mgmt-Tools, Web-Mgmt-Console, WAS, `
WAS-Process-Model, WAS-NET-Environment, WAS-Config-APIs, `
Windows-Identity-Foundation, Xps-Viewer `
-IncludeManagementTools -Verbose -Source (windows server installation location\sxs)
Important Note
Make sure to include the Windows Server media SXS location in the
-Source parameter of the script.
Once you have run the feature installation script on the SharePoint server, you will see the
confirmation that the installation succeeded, like so:
The local policy settings for SharePoint service accounts are as follows:
• Cache accounts
• Workflow Manager accounts
• Office Online Server accounts
• SharePoint crawl account (only needed if you want to separate at that level for
security)
• Business Connectivity service (accounts may be needed to connect to outside
data sources)
The local policy is very important, as the rights given to each service account are reflected
in the local policy settings. If a service account is not given the proper rights, you will see
errors in your event logs pertaining to that particular service, and these errors can be
misleading because, in some cases, they do not tell you specifically what the issue is.
Domain policies come into play as well, as they can overwrite the local policies that
SharePoint sets automatically. This usually happens only when the server is rebooted, so
you could think you have a great configuration until you reboot one day and the service
is down. Be very careful with how you use local and domain policies in a SharePoint and
SQL Server configuration, and talk to your AD group to make sure these areas in the
domain and local server policies have been covered.
netsh advfirewall firewall add rule name="SharePoint Open Port
12290-12291" dir=out action=allow protocol=TCP localport=12290-12291
netsh advfirewall firewall add rule name="SharePoint Open Port
5725" dir=out action=allow protocol=TCP localport=5725
netsh advfirewall firewall add rule name="SharePoint Open Port
389" dir=out action=allow protocol=TCP localport=389
netsh advfirewall firewall add rule name="SharePoint Open Port
389" dir=out action=allow protocol=UDP localport=389
netsh advfirewall firewall add rule name="SharePoint Open Port
88" dir=out action=allow protocol=TCP localport=88
netsh advfirewall firewall add rule name="SharePoint Open Port
88" dir=out action=allow protocol=UDP localport=88
netsh advfirewall firewall add rule name="SharePoint Open Port
53" dir=out action=allow protocol=TCP localport=53
netsh advfirewall firewall add rule name="SharePoint Open Port
53" dir=out action=allow protocol=UDP localport=53
netsh advfirewall firewall add rule name="SharePoint Open Port
464" dir=out action=allow protocol=UDP localport=464
netsh advfirewall firewall add rule name="SharePoint Open Port
809" dir=out action=allow protocol=TCP localport=809
Once we have completed the outbound ports, let's configure the inbound ports on your
SharePoint server next.
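For example, an inbound rule allowing web traffic to the SharePoint web applications would follow the same pattern; the rule name is an example, and you should adjust the port list to match your web application bindings:

```powershell
# Allow inbound HTTP/HTTPS to the SharePoint web applications (example)
netsh advfirewall firewall add rule name="SharePoint Web Inbound 80,443" dir=in action=allow protocol=TCP localport=80,443
```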
Now that we have completed our outbound and inbound port configuration, let's learn
about the preparation toolkit.
New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name "DisableLoopbackCheck" -Value "1" -PropertyType DWORD
The following are optional software installations that support SharePoint 2019. These are
in support of business intelligence service capabilities and may be required to support
these services:
• .NET Framework Data Provider for SQL Server (part of Microsoft .NET
Framework).
• .NET Framework Data Provider for OLE DB (part of Microsoft .NET Framework).
• SharePoint Workflow Manager: You can install SharePoint Workflow Manager on
a dedicated computer.
• Microsoft SQL Server 2008 R2 Reporting Services Add-in for Microsoft SharePoint
Technologies: This add-on is used by Access Services for SharePoint Server 2019.
• Microsoft SQL Server 2012 Data-Tier Application (DAC) Framework 64-bit
edition
• Microsoft SQL Server 2012 Transact-SQL ScriptDom 64-bit edition
• Microsoft System CLR Types for Microsoft SQL Server 2012 64-bit edition
• Microsoft SQL Server 2012 with SP1 LocalDB 64-bit edition
• Microsoft Data Services for .NET Framework 4 and Silverlight 4 (formerly
ADO.NET Data Services)
• Exchange Web Services Managed API, version 1.2
The prerequisites can be installed in two ways: from the splash screen or from the
command line. I will explain both and provide details on how to install them. There is
also a new way to install via PowerShell, which I will point out as an option.
When installing from our SharePoint DVD, ISO, or USB installation media, we will see
that Microsoft has included a new choice to install SharePoint Server 2019. Now, instead
of needing to launch the prerequisite installer from the file manager, we can do this from
the splash menu. Add the prerequisite files to a folder within the installation, which means
you need to copy the installation media to the server you are installing on so that these
files can be associated with the install.
Important Note
Before we start the installation preparation using the tools provided by
SharePoint Server 2019, we need to copy our installation media to a hard
drive location on the server. We need to do this so that we can provide any
updates, such as cumulative updates you may want to include during the
installation, and all the prerequisite files needed to finish the preparation for
the server installation.
Once you have added all the updates and prerequisite files to a local installation folder,
follow the steps given here:
1. Click on the Install software prerequisites link on the splash page to get started:
3. Check the box to accept the license agreement and click Next:
5. The server will reboot automatically during the installation and will continue after
the server comes back up. You will be prompted with the following screen when the
installation is complete; just click Finish:
Please run the following script to install the prerequisites using Command Prompt:
.\prerequisiteinstaller.exe
/SQLNCli:c:\(Folder)\sqlncli.msi
/Sync:c:\(Folder)\Synchronization.msi
/AppFabric:c:\(Folder)\WindowsServerAppFabricSetup_x64.exe
/IDFX11:c:\(Folder)\MicrosoftIdentityExtensions-64.msi
/MSIPCClient:c:\(Folder)\setup_msipc_x64.exe
/WCFDataServices56:c:\(Folder)\WcfDataServices56.exe
/MSVCRT11:c:\(Folder)\vcredist_x64.exe
/MSVCRT141:c:\(Folder)\vc_redist.x64.exe
/KB3092423:c:\(Folder)\AppFabric-KB3092423-x64-ENU.exe
/DotNet472:c:\(Folder)\NDP472-KB4054530-x86-x64-AllOS-ENU.exe
In the preceding command, /MSVCRT11:<file> installs the Visual C++ Redistributable
Package for Visual Studio 2012 from <file>, and /MSVCRT141:<file> installs the Visual
C++ Redistributable Package for Visual Studio 2017 from <file>.
1. Once you encounter the splash screen for SharePoint 2019, choose Install
SharePoint Server under the Install menu:
Tip
Make sure to change the data location, as I stated earlier: have at least two
drives available, one for the operating system and another for data, which will
house your logs and search index. Also, make sure to create the logging and
data drive large enough to hold your logs and search data, both now and over
the next 2 to 3 years.
We will use SQL aliases for connectivity from SharePoint to the SQL Server. If we lose
our SQL Server, we can recreate it and use the same alias to connect the farm from our
SharePoint servers, so the database server name, as seen by SharePoint, never changes.
You cannot do this when a named SQL Server instance is used directly as the connecting
SQL Server name in your SharePoint farm.
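A SQL alias can be created with cliconfg.exe or directly in the registry. The following sketch creates a 64-bit TCP/IP alias named SPSQLCONNECT; the target server name and port are examples that you must replace with your own:

```powershell
# Ensure the ConnectTo key exists, then create the alias
# Value format: DBMSSOCN (TCP/IP), target server, port
New-Item -Path "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" -Force
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" `
    -Name "SPSQLCONNECT" -Value "DBMSSOCN,SQL01.corp.contoso.com,1433" `
    -PropertyType String
```

Run this on every SharePoint server in the farm so that they all resolve the alias the same way.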
Other areas of the configuration, such as logging, monitoring, and services, will be
explained in Chapter 5, Farm and Services Configuration. These will be the SharePoint
configurations needed to get ready to install service applications, use databases, and
set server locations further. The following steps show the configuration for database
connectivity settings so that we can complete the installation of SharePoint Server 2019:
3. Enable TCP/IP:
Important Note
You can dynamically determine the port by keeping the checkbox checked,
which sets the port on its own, or you can set the port manually to a different
port other than 1433, which is the default for SQL connectivity.
8. Click Next to continue, which will test connectivity to your SQL server using the
alias name:
Note
With the new SharePoint 2016 and SharePoint 2019 servers, we now need to
use a new parameter: $ServerRole. This determines the MinRole that will
be used on this server resource.
1. Copy the script from GitHub called FarmCreation.ps1 and run this script to
create your farm. Remember to change the fields where needed:
Add-PSSnapin "Microsoft.SharePoint.PowerShell"

#Configuration Settings
$DatabaseServer = "SPSQLCONNECT"
$ConfigDatabase = "2019_Farm_Config"
$AdminContentDB = "2019_Farm_Content_Admin"
$Passphrase = "ENTER A PHRASE"
$FarmAccountName = "Domain\SP_Farm"
$ServerRole = "APPLICATION"

#Get the Farm Account Credentials
$FarmAccount = Get-Credential $FarmAccountName
$Passphrase = (ConvertTo-SecureString $Passphrase -AsPlainText -Force)

#Create SharePoint Farm
Write-Host "Creating Configuration Database and Central Admin Content Database..."
New-SPConfigurationDatabase -DatabaseServer $DatabaseServer `
    -DatabaseName $ConfigDatabase `
    -AdministrationContentDatabaseName $AdminContentDB `
    -Passphrase $Passphrase -FarmCredentials $FarmAccount `
    -LocalServerRole $ServerRole -SkipRegisterAsDistributedCacheHost

$Farm = Get-SPFarm -ErrorAction SilentlyContinue -ErrorVariable err
if ($Farm -ne $null)
{
    Write-Host "Installing SharePoint Resources..."
    Initialize-SPResourceSecurity
    Write-Host "Installing Farm Services ..."
    Install-SPService
    Write-Host "Installing SharePoint Features..."
    Install-SPFeature -AllExistingFeatures
    Write-Host "Creating Central Administration..."
    New-SPCentralAdministration -Port 2019 -WindowsAuthProvider NTLM
    Write-Host "Installing Application Content..."
    Install-SPApplicationContent
    Write-Host "SharePoint 2019 Farm Created Successfully!"
}
Important Note
Install-SPHelpCollection is no longer needed in our script as part of
the SharePoint 2019 configuration. Also, if -LocalServerRole
$ServerRole is not specified, the server will be created with the Custom
role. We also do not want to create a Distributed Cache service on this initial
app server, so we include -SkipRegisterAsDistributedCacheHost.
2. Monitor progress: as you wait for the script to run, check the SQL Server by
refreshing the list of databases to see whether they have started to be created:
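One way to watch this from the SQL side, assuming the SqlServer PowerShell module is installed, is to list the databases as they appear, newest first:

```powershell
# List databases on the aliased SQL instance, most recently created first
Invoke-Sqlcmd -ServerInstance "SPSQLCONNECT" `
    -Query "SELECT name, create_date FROM sys.databases ORDER BY create_date DESC"
```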
Remote installations
You can install using remote installations, where you can use AutoSPInstaller to
install SharePoint on multiple servers using one script from one server. The script first
installs SharePoint locally to establish a baseline installation on the local server where the
script is being executed. The script then installs SharePoint remotely using PowerShell
Remoting and Windows Remote Management (WinRM) on the other servers you have
configured in the script. These installations can be done all at once in a parallel or serial
process based on the configuration file. WinRM must be enabled on the servers where
you want SharePoint to be installed remotely. To learn more about remote installation,
find AutoSPInstaller at the following link or review GitHub for more information:
https://autospinstaller.com/.
Summary
If you are familiar with SharePoint, you can see that not a lot has changed in the
installation process. There have been some nice additions to help with the process, but
overall, if you know how to install SharePoint, you can get through this pretty easily.
The key things to remember are to use MinRoles, set logging locations correctly, review
your scripts, and install all the prerequisites for the server, SQL Server, and SharePoint
before installing.
In the next chapter, we will go through more configurations and understand how to put
this farm together. There are many steps and variations to this configuration that we
cannot cover in this book. We condensed as much as possible into the scope of the book.
Although we are very clear about setting some areas of the configuration, you will see that
some areas can be customized, which we will state in the following chapters.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
1. Why should we use a script to create our farm and not the configuration wizard?
2. If we lost our SQL server due to a disaster, we could recover our databases on
a named SQL Server instance and reconnect the farm with no issues. True or False?
3. When installing our SharePoint servers, which firewall port supports Office
Online Server?
4. Local policy is needed when installing SharePoint. Why?
5. Domain policies can interfere with the configuration of SharePoint and SQL Server.
True or False?
6. If I wanted to install my farm and other server resources all in one executed script,
can I use PowerShell to do so?
7. What parameter in our farm creation script is needed only for SharePoint Server
2016 and 2019?
5
Farm and Services
Configuration
SharePoint's key to success is the services it provides. In this chapter, we'll learn how
to make all the necessary performance tweaks, as well as how to configure the services
related to logging, monitoring, and integrating services that support content, plus other
application integrations. SharePoint, as a product, changes with every version, so expect
to see some changes in terms of how these services are configured and supported.
Security will also be reviewed in this chapter: besides the services SharePoint provides,
we need to look at the service accounts that support them. Application pool best
practices and other areas that can be configured will also be covered; there has always
been some speculation on the best practices surrounding these areas of security and
application pools.
Knowing which services support which resources of the platform will also be mentioned
in this chapter. We need to understand how to use our MinRoles as we add more servers
to the farm, as well as what services those applications support. We also need to know
how to make the best choices when adding these server roles and resources to the
environment. To do this, we need to follow the design for the farm we finished and follow
best practices.
The following topics will be covered in this chapter:
Technical requirements
For you to understand this chapter and the knowledge provided, you will need the
following:
SQL Server is the main component and the foundation of the SharePoint Server Farm.
Without it, we cannot support SharePoint in any way. So, since this server supports all the
data that will be used for configuration, services, content, and security, we really need to
make sure that this part of our farm is performing up to par and has been configured to
support the farm through growth, redundancy, and performance.
We have compiled a list of areas we want to change and/or update as part of our SQL
Server configuration that will help us create databases and improve the performance of
existing databases. These changes must be made to better support the SharePoint farm.
This is because SharePoint uses SQL Server as its central repository for configuration,
services, and content. Please make the changes mentioned in the following subsections
while ensuring they support the servers you have built as part of your farm.
SQL properties
Now that we have successfully installed SQL Server, we need to check its configuration.
Start by right-clicking the server's name:
Selecting Properties in SQL Server Management Studio will present the property settings
for the server:
Use the following commands to change the max degree of parallelism (MAXDOP) value of
SQL Server once it has been installed. In newer versions of SharePoint, this setting is set
automatically, but if you see that it is not set to 1, please do so using the following SQL query:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
Within the configuration, please make sure that the following areas have been updated
and set to support SharePoint Server 2019 and the performance of SQL Server:
Once you have run this SQL command, you will see the following message, stating that
you have successfully applied the setting:
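To confirm the setting has taken effect, you can also query the current value directly; a value_in_use of 1 means MAXDOP is configured as SharePoint requires:

```sql
-- Verify the current MAXDOP value
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism';
```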
• IIS
• Diagnostic Logging
• Usage Logging
• Index (see Search configuration for details)
IIS is the service that runs SharePoint websites on the server and creates logs as part of the
service. When you log into the IIS Manager, you will see the sites and application pools
that are available and have been created, as shown in the following screenshot:
To get to the logging area of the IIS service, please click on the server's name and then
select Logging from the list of available configuration areas in the middle of the screen:
While we're configuring logging, we want to make sure that all the logs are collected by
Site, which is the default option. The Directory location needs to be changed here. As
shown in the preceding screenshot, I have changed mine to reflect the second drive on my
server. This should be done on every server in the farm, including SQL, to make sure you
set the location to a drive that has the capacity to hold the log. After setting the location
of the file, you can set your preferences in terms of Log Event Destination and Schedule.
You can also customize the fields you would like to see in your logs since you may want to
add more detail to your log captures.
Select the information you would like to collect, as well as the schedule, from the
following configuration parameters:
Figure 5.10 – Log Event Destination and Log File Rollover settings
I have set Schedule to Daily so that I get a daily log of the site for troubleshooting if
needed. The daily log file's size depends on user traffic, so the more users you have,
the bigger these files become. Also, remember that these logs are kept per site. You can
always change this if you'd like to place them in one server log file; however, this is not
something I recommend.
If you click on the Select Fields button on the main screen, you will be brought to the
following selection screen:
Always run PowerShell using the run as administrator option so that you are using your
administrative privileges while running commands and scripts. If you do not, your
process will fail.
Let's install the logging services within SharePoint by performing the following steps:
1. Open Central Administration as an administrator: click the Start button, right-click
the SharePoint Central Administration icon, and select Run as administrator.
Never open this area without running as administrator. You can also pin it to your
taskbar and run it as administrator from there. You will see the following screen,
which is your Central Administration site:
A menu will appear for working with the monitoring aspects of the farm's
configuration.
3. Click on Configure diagnostic logging. In this area, we want to select All
Categories and click the box next to the selection we want to hone in on during our
log capture. Find out more about these categories and figure out the areas you are
looking to target as part of your troubleshooting efforts:
4. If you have specific areas you would like to monitor, you can select them
individually as well:
Restrict the space used by the trace logs using the next configuration tab. I have set this
to 1 GB. This is enough to capture a good chunk of data without the system
straining to open the file on the server or transfer it to others.
7. We will now configure usage logging. Click on Configure usage and health data
collection under the Reporting section:
With that, we have finished setting up our server logging locations. Next, we will move on
to the antivirus and security configurations within SharePoint Server 2019.
These will help protect our farm from viruses and vulnerabilities.
• Antivirus settings
• Web Part Security
• Blocked File Types
Antivirus settings
The antivirus settings are configured at the farm level, while Web Part Security and Blocked
File Types are set at the web application level. These settings should be taken very
seriously. Since antivirus is usually thought of as relating only to the server and its
protection, it is typically overlooked during the installation process.
Another reason it gets missed is that almost every admin I have met
believes that having antivirus on all their PCs and servers secures their content from
viruses. From my experience, there have been instances where I have seen infected files
get into a SharePoint site.
We should always configure the farm so that it is not vulnerable to the
many different types of attacks that can be performed, and secure it from all types of
vulnerabilities. Most incidents are caused by overlooking steps in the configuration process
or deeming them minimal. These vulnerabilities arise easily, especially when you are
hosting a SharePoint environment where users do not understand security and you have
no control over their desktops.
As shown in the following screenshot, antivirus can be set in a few different ways, but
again, you must have a product that integrates fully with SharePoint for these settings
to work:
SharePoint antivirus protection also occurs at the file level. Here, we need to make sure we
do not disrupt SharePoint and its file management process, so some folders must be
excluded. Configure the antivirus software on all SharePoint servers so that the following
folders and subfolders are excluded from antivirus scanning:
• Drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions
If you do not want to exclude the whole Web Server Extensions folder from antivirus
scanning, you can just exclude the following folders:
• Drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16
• Drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\Logs
• Drive:\Program Files\Microsoft Office Servers\16.0\Data\Office Server\Applications
• Drive:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files
• Drive:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config
• Drive:\Users\ServiceAccount\AppData\Local\Temp\WebTempDir
Note: The WebTempDir folder is a replacement for the FrontPageTempDir
folder.
• Drive:\ProgramData\Microsoft\SharePoint
• Drive:\Users\account that the search service is running as\AppData\Local\Temp
Note: The search account creates a folder in the Gthrsvc_spsearch4 Temp
folder, which must be periodically written to.
• Drive:\WINDOWS\System32\LogFiles
• Drive:\Windows\Syswow64\LogFiles
Note that if you use a specific account for SharePoint services or application pools
identities, you may also have to exclude the following folders:
• Drive:\Users\ServiceAccount\AppData\Local\Temp
• Drive:\Users\Default\AppData\Local\Temp
• Any location where you decided to store the disk-based binary large object
(BLOB) cache (for example, C:\Blobcache).
• Drive:\inetpub\wwwroot\wss\VirtualDirectories and all the folders
inside it
• Drive:\inetpub\temp\IIS Temporary Compressed Files
SQL Server's antivirus settings will need to be configured in the same way on each SQL
Server being used within the environment; if you have three nodes, all three
servers need this configuration. The exclusions needed for these servers are listed here.
As a best practice and to avoid downtime, please configure your SQL Server antivirus
software settings so that the following file locations are not scanned. Doing this will help
improve the performance of the server. This will also alleviate any files being locked
while the SQL Server service is working. If some of these file types become infected, the
antivirus software will not be able to detect this:
The following are the processes you must exclude from virus scanning:
• %ProgramFiles%\Microsoft SQL Server\<Version>.<Instance Name>\MSSQL\Binn\SQLServr.exe
• %ProgramFiles%\Microsoft SQL Server\<Version>.<Instance Name>\Reporting Services\ReportServer\Bin\ReportingServicesService.exe
• %ProgramFiles%\Microsoft SQL Server\MSAS13.<Instance Name>\OLAP\Bin\MSMDSrv.exe
Setting the antivirus on our server is very important and you will face issues if you do not
set these areas correctly. There are many areas that need to be excluded, but it is worth
configuring these areas so that they support your farm and its services. They will help the
server run as it should without hindering performance and availability.
There are also other ways to protect data, including using security methods such as Data
Compliance Protection in the cloud and Data Compliance Site Templates, both of which
are available within SharePoint. You can find out more by researching these topics online.
We will go over some of the aspects of these features later in this book.
In this section, you will learn how to configure services and find out what services you
really need. In our planning session in Chapter 2, Planning and Architecture, I mentioned
that this was something we should have looked at while planning. The reason why we
must review those areas at the planning stage is because during our assessment, we looked
at our old farm. When looking at our old farm, we found out what services were already
running and what services will support the farm going forward. So, we have already
started our list.
Now, we need to understand what those new services are and how they will be configured
to support the farm. If you have not thought about this already, please make sure you do
so before starting this section. In Chapter 2, Planning and Architecture, we went through
this in detail, so you should be ready to start. If not, you will need to understand the
following:
In this section, we will create a matrix of services that can be individually dispersed across
these web applications or shared between them. Whatever the case, we want to make
sure we are managing our services so that we support the server resources that have been
allocated for the farm, as well as the users who need these services within their sites. If
security is a concern, then this will also play into how many services we create and what
web applications they will support.
The following screenshot is our first glance at our Services on Server since we just created
our new farm:
To do this via SharePoint's Central Administration site, perform the following steps:
Add the account to the User name area of the form. The account must be in the
following format: domain\username. You can enable automatic password
changes via SharePoint. I have used this in the past to update passwords for my
service accounts, and it worked like a charm. The only issue arises when you
need to sign in with that account as part of troubleshooting. So, if you
want to know the account's password, change it yourself as part of your monthly
maintenance process and document the change instead of using
the randomly automated password updates.
3. To do this using PowerShell, use the following commands:
$Password = "Ki90@T887"
$Account = "XYZ\WebAppPool"
$pass = ConvertTo-SecureString $Password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential $Account, $pass
$res = New-SPManagedAccount -Credential $cred
Make sure that you include a password that is up to your security standards. Remember
that user accounts cannot be more than 20 characters in length; if they are, you will never
see them added to the list of registered accounts. Now that we have added our accounts,
we can use them in our service application and web application configurations.
Under the Configurable drop-down, you can view all the accounts that have been created
and registered for use with the service applications you will be creating in this section:
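You can also confirm the registered accounts from the SharePoint Management Shell; a quick sketch:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List every managed account registered in the farm, with its automatic password change setting
Get-SPManagedAccount | Select-Object UserName, AutomaticChange
```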
To ensure we do not include any redundant steps, let's explain the process of adding a
service application.
You will find yourself going back to the main menu within the Central Administration
Service Application menu to create a new service application once you've created one.
Perform the following steps to create a service application. We will be using these steps for
all our services since most require the same inputs for creation:
1. From the Central Administration site, to get to the service application creation
area, click on Application Management:
4. Upon clicking on the New button in the top-left corner, you will see a list of service
applications that can be created:
1. Create a name for your service. Remember your naming conventions for service
applications. Choose a service account that will manage this service and click the
checkbox shown in the following screenshot to add this service application to your
list of proxies:
2. Provide the SQL Server name for the database server, which is our alias name, and
the database name for the database that will be hosting the data for this service:
Important Note
Naming conventions are very important. Remember that service applications
can be related to the overall company, a department, or custom segments of
your company. Name it so that you understand who this service belongs to.
This also applies when you're naming databases. If the service and/or database
is just going to be used farm-wide, then name it similarly to how I
named it so that it reflects a global scope. Any specifics in your naming are
welcome, but try not to make the database names too long.
We can also use PowerShell to create the service. To open the SharePoint Management
Shell, type in SharePoint on your desktop; you will see it listed as an application. When
using the SharePoint Management Shell, always click Run As Administrator. Right-click
the application link and choose Run As Administrator:
Add-PSSnapin Microsoft.SharePoint.PowerShell
New-SPServiceApplicationPool -Name "Word Conversion Services Application Pool" -Account <<service application account>>
Get-SPServiceApplicationPool -Identity <<application pool name>> | New-SPWordConversionServiceApplication -Name "Word Conversion Services"
Now, we can create our next application service: Visio Graphics Service.
Note
The SharePoint Web Services Default Application Pool is an application pool
that's created by SharePoint when the application is installed. This can be used
to support new services you create, though you can create another service
application pool to support these services. Please be aware that the Farm
Account will run this application pool by default.
There are best practices you need to follow since we do not want an
overwhelming number of application pools. In some cases, you can use
separate application pools to isolate the service you are creating.
This would use a service account other than the farm account to limit
the damage if a service account is compromised. If you can, limit the pools to
10, as that is a best practice.
I have seen some bad configurations in my career. I had one local government
client call me to come onsite to take a look at their configuration. The servers
they had for the web tier were running out of memory. The client had 34
path-based web applications running with different applications pools and
service accounts. This was the shortest call I had while working at Microsoft;
I resolved the issue by adding memory and CPU to the servers. This is not
a great configuration, but it worked. They could have used Host Named Site
Collections but chose not to do so. The moral of the story is that best practices
are there for a reason. Choose your path and understand the consequences that
may result from your actions.
Important Note
In my examples, I placed the name of my main service at the end of all the
names of my service databases. I did this to categorize them in the list of
databases in SQL Server. This way, all my services databases will be in one area
and all my content databases will be in another.
New-SPBusinessDataCatalogServiceApplication -ApplicationPool "SharePoint Web Services Default" -DatabaseName "NewBdcDB" -DatabaseServer "YourDomain\SharePoint" -Name "YourServiceApp"
You also have the option to create this service application using PowerShell:
New-SPTranslationServiceApplication -Name TranslationService -ApplicationPool 'SharePoint Web Services Default' -DatabaseServer Server1 -DatabaseName TranslationServiceDatabase
1. Add a name for the Managed Metadata Service application, along with the SQL alias
and Database Name you need for the service application:
2. You can also use the same services application pool or create a new one for this
service application. Click OK to create the service application:
To create the Managed Metadata Service with a Content Type Hub, use the
following code:
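The listing below is a sketch of that PowerShell; the service name, application pool, database name, and hub URL shown are placeholders you should replace with your own values:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Create the Managed Metadata Service application, pointing it at a Content Type Hub site collection
$mms = New-SPMetadataServiceApplication -Name "Managed Metadata Service" `
    -ApplicationPool "SharePoint Web Services Default" `
    -DatabaseName "ManagedMetadataDB" `
    -HubUri "http://sharepoint/sites/contenttypehub"

# Create the proxy and add it to the default proxy group so web applications can consume it
New-SPMetadataServiceApplicationProxy -Name "Managed Metadata Service Proxy" `
    -ServiceApplication $mms -DefaultProxyGroup
```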
There are other authentication methods available, such as Kerberos and SAML, that we
will explain later in this book.
Starting this service can be done using a simple PowerShell command that can be used
to start any service in SharePoint. We can use this command for all the services we have
created once the service application has been created. Search and User Profiles have
different processes that need to be followed.
This service, C2WTS, does not require us to create a service application. We only need to
start the service. Open SharePoint Management Shell to run PowerShell commands:
When you wish to find the ServiceGUID for the service you want to start, run the
following command:
Get-SPServiceInstance
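Once you have the Id from that output, you can start the instance. As a sketch, you can either pass the GUID to Start-SPServiceInstance or filter by type name, as shown here for C2WTS:

```powershell
# Start the Claims to Windows Token Service (C2WTS) on this server by filtering on its type name
Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Claims to Windows Token Service" } |
    Start-SPServiceInstance

# Or, using the GUID returned by Get-SPServiceInstance:
# Start-SPServiceInstance -Identity <ServiceGUID>
```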
New-SPSecureStoreServiceApplication -ApplicationPool 'SharePoint Web Services Default' -AuditingEnabled:$false -DatabaseName 'Secure Store' -Name 'Secure Store Service Application'
Make sure that you only create the services you need. You do not have to follow my
creation plan in this book. Creating services you don't need is not a good practice. Finalize
the services you think you will need and create them.
Once these service applications have been added, it does not hurt to run PSconfig to
solidify the farm and make sure all configurations are working as they should. You can do
this using the Configuration Wizard, which can be found by typing in SharePoint on your
desktop or by using PowerShell and running the following script:
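A sketch of a typical PSConfig command for this step is shown below; the exact parameters you need may vary with your patch level, so review Microsoft's PSConfig command-line reference before running it:

```powershell
PSConfig.exe -cmd upgrade -inplace b2b -wait -cmd applicationcontent -install -cmd installfeatures
```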
Once you've done this, restart your servers, shutting down SharePoint first and then SQL.
When bringing them back up, make sure SQL Server is running before the SharePoint
servers so that you don't get any connectivity errors. Once completed, we can start adding
servers to the farm. Always check your event logs after rebooting to see whether there are
any new errors.
Important Note
Running PSConfig.exe with no parameters does not upgrade the farm's content or
do it any good; without parameters, the command is useless. Make sure that you add
parameters to apply upgrades when running a full upgrade command, not just the initial
executable. When you add content to the farm, you will add more parameters to upgrade
content and features.
There are two different ways to add a new server to our farm:
# "Custom","WebFrontEnd","Application","DistributedCache","SingleServerFarm","Search","ApplicationWithSearch","WebFrontEndWithDistributedCache"
Connect-SPConfigurationDatabase -DatabaseServer $DBServer -DatabaseName $DBName -PassPhrase $Passphrase -LocalServerRole $ServerRole
Always remember to update the parameters of this script, as well as the MinRole or server
role in your script, to make sure you choose the right server roles for your farm. In my
farm, I will be adding the following roles:
Once you've run the script and/or added the servers to the farm using either method, you
will see the servers in the System Settings area of the Central Administration site, under
Servers in the Farm.
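You can also verify each server's assigned role from PowerShell:

```powershell
# List every server in the farm with its MinRole and status
Get-SPServer | Select-Object Name, Role, Status
```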
• Authentication
• Newsfeeds, micro blogging, and conversations
• OneNote client access
• Security trimming (Search)
• Page load performance
This service provides caching functionalities that allow users to quickly retrieve data
without any dependency on the database. All this information is stored in memory on the
servers or MinRoles you have deemed Distributed Cache servers. Distributed Cache spans
across these servers as a cache cluster so that each host can save data without duplicating
or copying data from other servers running the service.
This service is very important, and in some cases, I have seen where this service was not
recoverable. You want to be very careful when working with this service and monitor
it as much as possible. Do not administer your AppFabric Caching Service from the
services console. It is recommended that you leave this service as-is from your initial
configuration. I have seen farms having to be rebuilt due to this service and its parameters
being changed.
To check your Distributed Cache, run the following command using PowerShell:
Get-CacheHost
At the time of installation, 5% of the total physical memory will be assigned to the
Distributed Cache service. This is known as the cache's size and can be altered if needed.
Since we are setting up a farm, we must have at least one cache host running in the farm.
If you have two or more servers that you want to run this service, just know that the size
of each cache host is 5% of your physical memory. The maximum size you can make your
cache is 16 GB per cache host, so you want to make sure you set aside memory for this
service. You should have thought about this when you were planning. We also want to
make sure we are aware that we can only have 16 servers running the Distributed Cache
service in a farm.
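If you need to change the cache size from the default allocation, Microsoft provides a cmdlet for this. A sketch, assuming a 2 GB cache per host (plan a maintenance window, as the cache service is reconfigured when this runs):

```powershell
# Resize the Distributed Cache; the size applies per cache host (maximum 16 GB each)
Update-SPDistributedCacheSize -CacheSizeInMB 2048
```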
We will learn more about this service and how to make our servers perform well in
Chapter 9, Managing Performance and Troubleshooting.
Summary
This chapter was the meat of the installation process for SharePoint. Although there is
more to come, this chapter laid the foundation for the farm. Without covering these areas,
we would not have a base installation to move forward with, nor a complete server
infrastructure that supports a SharePoint and SQL Server environment.
Be mindful of your settings when you're configuring the performance of SQL Server,
especially the MAXDOP setting. If it is changed from 1, the SharePoint environment will
not run correctly and you will see performance hits on your SQL servers, because
SharePoint is built for MAXDOP to be set to 1. Building redundancy into your SQL
Server platform is also key to supporting the environment. As we mentioned previously,
SQL is the key to the data that feeds SharePoint from a configuration, service, and content
point of view. Without a fully performant SQL Server, great performance will not exist in
your environment.
Make sure that you add servers as needed that support the SharePoint infrastructure
that's required for your environment. Separating services as needed and adding user web
frontends to support your user community is key to building a farm that can support
many users and services. Scaling out your farm is easy to do, and there are many ways to
isolate and combine traffic for all your services and user traffic.
We also learned that MinRoles are the key to this separation and service compliance.
Never add a server where you do not know what the resource is being used for. If you use
a custom MinRole, always make sure to configure it so that it only does what you need
it to do and uses the resources it has to only run the services it needs to run. The reason
we use MinRoles is so that we can control where services are processed and keep our
resources free from running unnecessary processes.
As you can see, paying attention to small details and documenting your configuration
is key to updating your farm. It also allows you to ensure you have covered all your
requirements, which is the key to success. Configuration management plays a part in
all this because you want to make sure the servers match how you want the service to
perform. Again, SQL is a big part of SharePoint, and we cannot run SharePoint without
it. However, if it's configured incorrectly, we will see performance issues. Follow the
guidelines and best practices at hand to make sure you are building the best farm possible.
Although this is a lot of work to document, it will pay off in the end.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
1. Why should we separate application pools for different services within our
configuration?
2. Antivirus exclusions don't need to be completed before installation. True or False?
3. Which service application is optional to use a separate database server as a best
practice?
4. What setting is required on SQL Server to install SharePoint? What is the setting for
that configuration?
5. When setting up Blocked File Types, I can set these file types per site collection.
True or False?
6. If running psconfig, do I need to run the command using extended parameters?
If so, when? Why?
6
Finalizing the
Farm – Going Live
Finalizing the farm to "go live" is the last step of the process of supporting the release of
your new farm. Within this book, you will find two chapters about the "go live" aspects
of your implementation. This is due to the amount of instruction needed to prepare
you for your "go live" date. There are a few steps that we still need to complete related to
installation, the configuration of services, and overall configuration, which we will cover
in this chapter; but what you will also see in this chapter is that there are many areas that
we still need to configure to complete the core system.
You will see, while dealing with those lingering configurations, that migration is not the
last thing we need to do. There are other areas of preparedness that need to be addressed,
as is the case with the setup of development and testing environments. We need to handle
any custom configurations that may have been missed, out-of-the-box workflows and
custom workflows, backup and restore solution integration, and any operating areas
that we need to check, update, and pass on to others who will support the new farm and
overall environment.
We will also talk a little about stress testing the environment and the overall testing of
the new sites, as well as covering other integrations that need to be checked and installed
to support sites as they were supported in the older environment (or schedule them for
deployment in our new environment).
The checklist is vast, and we need to make sure that we are bringing a solid environment
to the company before release. This chapter will follow that checklist and get you prepared
for the release of your new SharePoint farm!
The following topics will be covered in this chapter:
Technical requirements
To follow along with this chapter, the following requirements must be met:
You can find the code files for this chapter on GitHub at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019.
Here, as a precaution, it's a good idea to go back and assess your users and departments
to make sure you have accounted for all developed solutions, out-of-the-box solutions,
workflows, retired web parts, and anything else that could be an area of concern for
functionality and could be missed during this change. I can't stress this enough: the more
you miss, the more you will pay for it after users start to work on the new farm.
You need to know what these users are doing in their site collections and
sites. Don't sleep until you get all the answers. At the end of the day, you, your
department, and SharePoint look bad when you don't migrate everything or when
functionality that users have relied on for years is suddenly gone. If
that functionality has changed, make sure you let the department know and give them
a new way to make it work in SharePoint Server 2019.
Believe me, I have been caught with my pants down on a couple of migration projects.
One time, there was a tool that I used, and I will not mention the name of this tool, but
it malfunctioned as it was not adhering to the migration parameters I had chosen. If I
chose to move only the top-level site and not subsites, it would just move all sites anyway.
This actually made my contract go over hours because I had to go back and delete all
the subsites that were not on the list to move. We will talk more about tools later in this
chapter, and I will give my opinions and recommendations on tools and how they can play
into your migration and help in other ways to support your infrastructure.
Another situation I was in a few years ago came from not getting enough information from
users and admins. After the migration and the move over to the new farm, two critical
components were missing. One was a script used for identity management that worked
with the User Profile service to clean up users no longer working at the company,
and the other was a custom solution that was developed for a site but was not
identified during the migration to the new farm. This caused a lot of problems
post-implementation, as some functionality was missing, and it was very stressful.
So, please be careful: your career and reputation can be shattered when you are not
attentive to areas that make a difference in a successful migration. Everyone remembers
the bad things, and even if it is one little thing, it will outweigh the good things.
Let's look at some things to remember during the final actions before releasing the farm to
the community:
• Workflows keep items locked in lists and libraries if they are in progress on the
source site.
• Migration tools will not move locked items or custom solutions.
• Release items by having the department finalize the workflow process or stop the
workflow on the item.
• Make a list of those items in progress so the department can restart the workflow in
the new environment.
• Workflows and solutions may have to be recreated, especially those coming from
2007 and 2010.
• Workflows and solutions created in Visual Studio from earlier versions will have to
be updated to support SharePoint 2019's code structure.
• The workflow history does not migrate with the content database or a migration tool.
If needed, go to the workflow history list (for example, http://sharepoint/
mysite/lists/Workflow%20History/AllItems.aspx) and make copies
of the lists for your records and for the department. You can slice and dice the
columns to get the information sorted for printing or saving.
• Remember that the workflow history can be captured as part of the default list or
a new history list and can be associated with the workflow.
• Make sure to check the settings of lists moved with tools, as the required advanced
and versioning settings may not be carried over. This can cause issues with approval
processes in the task list and in the list itself.
• Check your old servers for any PowerShell or custom code used to do any server-
side functionality, such as backups, or as in my situation, user profile cleanup.
• Check the Global Assembly Cache (GAC) on the old server to make sure there are
no custom solutions hiding there that do not show up in the farm solutions list.
There are other areas you can check that might not be included in my list of last-minute
checks, but the best thing to do is communicate. Double-check with your admins and site
collection admins before making any final migrations. We will talk about testing in this
chapter, as well as the pre-migration tasks, test migrations, and final migrations that we
need to implement. I will show you what these tasks are all about and how they are used to
finalize the movement of content.
Finalizing loose ends
Implementing the Search service with redundancy, once again, cannot be done within the GUI
in Central Administration. There are also other restrictions in the GUI that will come
up during configuration, such as database name fields being grayed out so that you cannot
update the names. If you want consistent, custom database names for all
service databases, you will have to create this service application using PowerShell. So,
let's open the SharePoint 2019 Management Shell as administrator and run the following
script from a file location on our primary Search server. The script is available on
the GitHub site for this book:
Note
Download the script and instructions on the GitHub link at the top of the
chapter. Please make sure to set the parameters in the script to fit your server
names, index locations, and any other areas you need to customize for your
environment.
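Since the full script lives on GitHub, here is a minimal hedged sketch of what it does. The account, application pool, server, and database names are assumptions; replace them with your own:

```powershell
# Sketch: create a Search service application with a custom admin database name.
# All names below (service account, pool, server, database) are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$poolAccount = Get-SPManagedAccount "XYZINC\spsearchapp"
$pool = New-SPServiceApplicationPool -Name "Search_App_Pool" -Account $poolAccount

# Start the Search service instance on the primary Search server
$instance = Get-SPEnterpriseSearchServiceInstance -Identity "SP-SRCH1"
Start-SPEnterpriseSearchServiceInstance -Identity $instance

# Create the service application and proxy with a custom database name
$ssa = New-SPEnterpriseSearchServiceApplication -Name "Search Service Application" `
    -ApplicationPool $pool -DatabaseName "SP2019_Search_Admin"
New-SPEnterpriseSearchServiceApplicationProxy -Name "Search Service Application Proxy" `
    -SearchApplication $ssa
```

From here, the full script on GitHub clones the default topology to place redundant components on a second server.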
One of the key things to remember is that users use Search to find information, so
content sources must be crawled for that information to come up in the search results.
The index you create is what the users will search against, so timing is everything,
which is why there are a couple of ways to create crawl schedules.
You can use continuous crawling or provide a schedule for full or incremental crawls.
Most companies I have worked for have used continuous crawling as their way to get
information in the index, but this doesn't mean that the other crawling types are not
useful. A full crawl is still useful because it processes and indexes all the items in a content
source. It doesn't matter whether the previous crawl did that or not. This keeps your index
clean and up to date.
Tip
Remember, it's always good to run a full crawl after you migrate content from
another farm or source so that the information is indexed.
The incremental crawl only crawls items that have been newly created or modified since
the last crawl. These crawls don't take a lot of time and usually are pretty quick, depending
on how much new content has been added. These modifications would include the
following updates:
• Content
• Metadata (remember the Managed Metadata service)
• Permissions
• Deletions from the content
The continuous crawl is similar to the incremental crawl. The difference, though,
is that the continuous crawl checks the SharePoint change logs about every 15 minutes. If
there is an item in the change log, the crawler finds the content and processes the updates
to the index. It's good to have a mixture of continuous and incremental crawls because
the continuous crawl does not fix any errors. Create a schedule to run these as needed;
an incremental crawl every 4 hours, for example, helps to clean up any errors, and you
can run a full crawl as needed.
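The schedule mix described above can be set with PowerShell. A hedged sketch follows, assuming the default Search service application and a content source named "Local SharePoint Sites"; note that a source with continuous crawls enabled does not also take an incremental schedule, so the incremental example applies to a source not using continuous crawl:

```powershell
# Enable continuous crawls on one content source, and set a repeating
# incremental schedule on a source that does not use continuous crawl.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa = Get-SPEnterpriseSearchServiceApplication
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
    -Identity "Local SharePoint Sites"

# Continuous crawl picks up change-log updates roughly every 15 minutes
$cs | Set-SPEnterpriseSearchCrawlContentSource -EnableContinuousCrawls $true

# For a non-continuous source: incremental crawl daily, repeating every 240 minutes
$fileShares = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
    -Identity "File Shares"
$fileShares | Set-SPEnterpriseSearchCrawlContentSource -ScheduleType Incremental `
    -DailyCrawlSchedule -CrawlScheduleRepeatInterval 240 -CrawlScheduleRepeatDuration 1440
```

The "File Shares" source name is also a placeholder; use whatever content sources exist in your farm.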
Users also have tools to make their search results as good as they can be by tagging
content. It's best for users to tag their information so that content can be found easily.
This leads to relevant search results, and there are tools for users to tune how content
is found as well. They can also create silos of results based on content using the Search
web part. This can help them home in on any content, such as content in a specific site
or library, and provide search results from that specific content only.
SharePoint 2019 Server provides the same modern search experience as Microsoft 365.
You will see a big difference, as the search box is placed at the top of the SharePoint site
in the header bar. The search experience in SharePoint 2019 is personal: the results for
one user will most likely be different from another user's results in this new version of
SharePoint. We talk more about this in Chapter 12, SharePoint Framework.
One Search consideration to be aware of is that if you get close to having 10
million items indexed, you may want to configure your servers as I did in this book, using
cloning. This spreads the service over two servers, which processes items
faster and brings better performance for your Search service. The Search service may
also benefit from being assigned as a MinRole service in environments where you want
to isolate the service to its own server resources for support.
Always keep your Search configuration clean and separate for processing within
the farm, and plan where the Search components will be located in a
cloned configuration. In the Search configuration for my farm, you can see that I have
redundancy on my app servers and redundancy on my web frontend servers. My web
frontend servers in this configuration provide query processing and index partitions.
My index partitions are updated from the application servers that run the crawl, which
propagates over the network after every crawl to my web frontend servers. The results
from user queries are compiled on the web frontend from the index, so search results
come back quickly because the user is already on that server resource at the time of the
search and does not have to go over the network to bring back the results. Make sure
the crawl servers' IIS configuration and hosts file resolve the website locally so that
the crawler has no need to hit a WFE to crawl the website:
Tip
Always configure your crawl server to crawl sites on the local server using
the hosts file rather than going across the network to a web frontend to connect to
the sites for crawling. Otherwise, crawl performance will be very slow, your users
will be affected by the crawl load on the sites they use, and propagation of the
index will also be slowed down if the network cannot push data from server to
server quickly. Remember to check your network connections and configure them
as full duplex, not auto-negotiate.
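A hedged sketch of what that hosts file entry looks like, assuming the crawl server hosts the web application locally; the URL and IP address are placeholders:

```powershell
# Run elevated on the crawl server: point the web application URL at this server
# so crawl traffic never leaves the box. 10.0.0.15 is a placeholder for the
# local server's IP, and portal.xyzinc.local is a placeholder URL.
$entry = "10.0.0.15`tportal.xyzinc.local"
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value $entry
```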
Now that we have covered the Search service, we can work on setting up the User Profile
service. User Profile requires that the Search configuration be completed because there are
settings within User Profile that refer to a configured Search service.
Important note
Make sure you have added the farm account to the local Administrators group
of the server you choose to run this service! This is part of the configuration
process, as you will see in the following steps.
Now, you could do this the old way in Central Administration, but I ran into an
error creating the User Profile service application through the UI: the field
for the social database was grayed out, so I could not name it or change the
SQL Server name in the form. We will create the service using PowerShell; you
should get used to using PowerShell. The reason I included the GUI in this book
is to give beginners a way to understand what we are doing. Code
sometimes looks foreign, and I would rather everyone get something out
of this book than just add a bunch of scripts and have people be confused.
3. So, in this case, we will use the following script to create our User Profile service.
Please remember to right-click the PowerShell icon and choose to run as
administrator.
4. We must create the service application pool first to associate with the User Profile
service. Make sure you have added the application pool service account as a
managed account before moving forward:
$AppPoolAccount = "XYZINC\spuserprofapp"
$AppPoolName = "UPS_App_Pool"
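Building on the variables above, the rest of the script can be sketched as follows. The database names are assumptions; adjust them to your naming standard:

```powershell
# Continuation sketch: create the pool, the User Profile service application,
# and its proxy. Database names below are placeholders.
$AppPool = New-SPServiceApplicationPool -Name $AppPoolName `
    -Account (Get-SPManagedAccount $AppPoolAccount)

$ups = New-SPProfileServiceApplication -Name "User Profile Service Application" `
    -ApplicationPool $AppPool `
    -ProfileDBName "SP2019_UPS_Profile" `
    -SocialDBName "SP2019_UPS_Social" `
    -ProfileSyncDBName "SP2019_UPS_Sync"

New-SPProfileServiceApplicationProxy -Name "User Profile Service Application Proxy" `
    -ServiceApplication $ups -DefaultProxyGroup
```

Naming the social and sync databases here is exactly what the grayed-out UI form would not allow.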
6. As you can see in the following screenshot, when we run the PowerShell script, we
get a new service application for the User Profile service:
8. As you can see in the following screenshot, we now have a proxy for our User Profile
service application:
Now that you have that information, let's get started on connecting to AD!
Important Note
Please make sure you have accounted for a replication account to connect to
AD, as this service account needs to be used as part of the connection settings
in this configuration.
There have always been special needs for each customer I have worked with, so there
could be a need for more web applications. Repeat the following steps to add
more web applications as needed. During this process, we will need IP addresses and SSL
certificates to use during the configuration of the site. Please have those
ready, as they are prerequisites for this section.
The association of services to web applications is something we need to discuss as well as
what constitutes a web application that needs a separate service association. We will also
look at server and service compliance for how we can keep our servers running efficiently
using the MinRole topology we chose to configure our farms with. We will see how server/
service compliance works and how it really helps us to manage resources more efficiently.
Web applications
Web applications define how content in SharePoint is accessed from the network through
the web browser. If there is no web application, there is no content. So, in this section,
we want to focus on how to create web applications and what needs to be thought through
before you start creating them. We discussed web applications and the reasons for them
in our planning section, but we need to make sure now that we really have thought
everything through, and make any changes that are necessary due to new developments
as well.
When starting to think about web applications, make sure you have done the following:
• Captured all areas of content you would like to categorize on this URL level
• Handled the zoning of the web application and how it will be accessed
• Associated service applications
• Managed the service accounts and app pools that will run this web application
• Handled the isolation and security of the web application on a server resource
When looking at our company and the departments and functional areas we need to
support, one of the reasons why we would split content into separate web applications
within SharePoint is to secure it. Securing content within a web application, which is the
top-level access point, can be an area of concern. Making this our focus for security gives
us a clear security break from other app pool service accounts, users, and groups accessing
the web application and service associations at this level. The key to this split at this level is
to make sure that access is not given to someone by accident.
Content can also be kept separate for performance reasons or because of customization.
Customized sites with code bring a different level of processing and support to a web
application. The same can be true of web applications that host a lot of data, such as
PowerPivot or Excel data and other reporting data. Such a web application
would be more resource-intensive than web applications with no code that are
used for normal, out-of-the-box functionality.
Another reason for separation would be to separate groups of content and use separate
URLs within the company for departments within the organization. For example, we can
have a subdomain such as hr.xyzinc.com to represent the human resources division
and another subdomain of it.xyzinc.com to represent the information technology
division. This will give a clear separation of sites and content for the company department
structure. This type of configuration will make it hard to share information, but I have
seen these configurations help in organizing and protecting sensitive information.
To create a web application, follow the steps given here:
4. Complete the form presented to configure your web application. Name your web
application, set the port, and use a host header if needed:
5. Input the URL and choose a new application pool name. Update the service account
you will use for the application pool. Input a Database Server name and a database
name for the content database supporting the initial creation of the web application:
6. Complete this section using the values given in the following screenshot:
Assigning service applications is a part of the web application. If you have separate
service applications, you can create new services and assign them to the appropriate web
application using Central Administration.
To create a web application using PowerShell, use the following command:
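As the command itself is not reproduced here, this is a hedged sketch of what it might look like; the names, URL, port, and service account are all assumptions:

```powershell
# Sketch: create a web application with its own app pool, managed account,
# and content database. All names and URLs below are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

New-SPWebApplication -Name "XYZ Inc Portal" -Port 443 -SecureSocketsLayer `
    -HostHeader "portal.xyzinc.local" -Url "https://portal.xyzinc.local" `
    -ApplicationPool "Portal_App_Pool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "XYZINC\spwebapp") `
    -DatabaseName "SP2019_Content_Portal"
```

The account passed to -ApplicationPoolAccount must already be registered as a managed account in the farm.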
Configuring host-named site collections using PowerShell is very similar, as there are few
differences in the approach for this type of site hierarchy. The top-level site is a
path-based site, and the site collections below it are host-named sites. You create those
using the following PowerShell command:
New-SPSite 'http://portal.contoso.com' -HostHeaderWebApplication (Get-SPWebApplication 'Contoso Sites') -Name 'Portal' -Description 'Customer root' -OwnerAlias 'contoso\administrator' -Language 1033 -Template 'STS#0'
There are some other configurations needed within the web application you have created.
There are feature selections within the web application and within site collections that
need to be set. We look at those web application features in the next section.
When you have finished creating web applications in your environment, be sure to look at
a load-balancing strategy to support redundancy within the web tier. We talk about load
balancers and those technologies in Chapter 7, Finalizing the Farm to Go Live – Part II.
Also, make sure to define site quotas as part of your web application configuration and
at the site collection level to keep your site collections within best-practice capacity limits.
Managing features
There are features that we can activate from Central Administration at the web
application level. As stated, we can separate services and features based on
need within the web application. For example, let's say video processing is not needed
in the MySite web application based on company policy. Using these feature activation
settings, I can make sure that the feature is deactivated at the web application level:
This brings another level of feature separation to content and users for the sake of
resources and company policies.
Here is how to turn on features using PowerShell:
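A hedged sketch of the pattern follows; the feature identity and URL are placeholders, since the actual feature name depends on what you are enabling:

```powershell
# Enable a feature at a given scope, then disable one that policy forbids.
# "MyFeatureName" and the URL are placeholders; -Identity also accepts a GUID.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Enable-SPFeature -Identity "MyFeatureName" -Url "https://mysites.xyzinc.local"
Disable-SPFeature -Identity "MyFeatureName" -Url "https://mysites.xyzinc.local" `
    -Confirm:$false
```

You can list what is available first with Get-SPFeature to find the exact identity and its scope.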
Knowing the features you want to be enabled on the web application level is key. As you
can see, there are some choices. Make sure to choose what level of features you want based
on the URL or web application. Now that we have our main web application completed,
let's move on to create a MySite web application that will create a host for personal sites
used by users in the farm.
1. Create a new web application and name it. Add a host header if needed:
2. Update the application pool and assign a service account for the web application:
3. Update the Database Name field with the name of the database that will hold the
site collection. Scrolling down, you will see the next section of the form:
4. Assign service applications as part of the web application. If you have separate
service applications, we will go over how to create those later in the chapter.
5. Add a default site collection under the MySite web application:
Important Note
You can create quota templates at the beginning or end of your creation of a
site collection. They can be applied to web applications using the tools within
Central Administration under Application Management | Site Collection
Quotas and Locks. Again, these can be created prior to the creation of site
collections so they can be applied individually as well.
9. Setting up quotas for your site collections is key to limiting site collection
growth. If you followed my recommendation of having individual site
collections nested in their own content databases, then quotas will work well for you
from a management perspective. Make sure to set the quotas per Microsoft's best-practice
limits of 2–4 GB. If it is a records center site collection you are creating,
then you can set this to unlimited, as best practice has only a limited
number of resources accessing that data:
If you need to do some creation based on departments or other criteria, you need to
create these site collections by using PowerShell or by taking the database offline in the
SharePoint Content Database utility. This will allow you to place the site collections in the
databases you want.
Important Note
MySites should have a set quota due to situations where users may use a MySite
to keep all their files. By setting a limit, users will understand how much
storage space they have and will only keep relevant files in SharePoint and not
keep every file on their desktop in the MySite. Quotas can be set for different
types of site collections using the Application Management menu in Central
Administration.
Quotas are very helpful for identifying the growth of content and creating plans around
data size. This also helps when you have those plans identified and made into quota
templates so that you can apply them quickly and easily.
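There is no dedicated cmdlet for creating quota templates, so a hedged sketch uses the server object model; the template name and limits are assumptions:

```powershell
# Create a 2 GB quota template with a warning at 1.8 GB via the server object model.
# "Department_2GB" and the limits are placeholders; storage values are in bytes.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$template = New-Object Microsoft.SharePoint.Administration.SPQuotaTemplate
$template.Name = "Department_2GB"
$template.StorageMaximumLevel = 2GB
$template.StorageWarningLevel = 1.8GB

$service = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$service.QuotaTemplates.Add($template)
$service.Update()
```

Once created, the template appears in Central Administration and can be applied to site collections individually or through Site Collection Quotas and Locks.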
Now let's look at self-service policies and how they work in the farm:
$w = Get-SPWebApplication http://spweb.xyzinc.local
$w.SelfServiceSiteCreationEnabled = $true
$w.Update()
Setting self-service site creation to On for MySites means that users will be able to
create their own MySite from the SharePoint site. This only happens once, but this
setting gives them that ability. We can also set it to Off so that they cannot create
a site. This is where I see admins getting scared, but you do not have to use MySites
in SharePoint; things only need to be configured so that the searching of profiles
works in the environment.
Now that we have finished with the MySites web application settings, let's go back and
finish up our User Profile service and MySites configuration using the following form:
Now let's configure our managed paths, if there are any you need as part of
your web application. Look at your old farm to make sure you understand how
these paths need to be in place during the migration process, so that the paths present in
your prior environment match up with the content you are migrating:
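Managed paths can also be created with PowerShell; a hedged sketch, where the path names and web application URL are assumptions:

```powershell
# Add a wildcard managed path ("sites") and an explicit one ("help")
# to a web application. Path names and the URL are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

New-SPManagedPath -RelativeURL "sites" -WebApplication "https://portal.xyzinc.local"
New-SPManagedPath -RelativeURL "help" -WebApplication "https://portal.xyzinc.local" `
    -Explicit
```

Without -Explicit, the path is created as a wildcard inclusion, which is what site collections such as /sites/* use.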
2. Find inetpub, which is where your websites for SharePoint are running, and find
the site you are looking for. Open the wwwroot folder:
1. Set the time zone for the web application and other parameters:
The user settings provider refers to the configured User Profile service
provider; you can have more than one User Profile service application in your
environment, and this setting lets you choose which one to use.
2. Set Browser File Handling to what you believe it needs to be in your
environment. I have mostly used the Permissive setting, as this opens files within
document libraries rather than downloading them to the desktop:
4. Notice the default maximum upload size has changed. Make a change if necessary:
Figure 6.45 – Recycle bin, maximum upload size, and cookie settings
Important Note
Remember that you only have a certain amount of storage designated for this
environment. Set the document upload size to a level you know you can
handle, as documents really add up at larger sizes. Also, remember
that Shredded Storage is used to make sure that documents are not
duplicated and that only the changed subset is captured in versioning.
5. The following is the PowerShell example for setting parameters within the general
settings of a web application:
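Since the example itself is not reproduced here, this is a hedged sketch of the pattern; the URL and values are assumptions:

```powershell
# Sketch: adjust general settings on a web application via its object properties.
# The URL and all values below are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webapp = Get-SPWebApplication "http://spweb.xyzinc.local"
$webapp.MaximumFileSize = 250              # maximum upload size, in MB
$webapp.RecycleBinEnabled = $true
$webapp.RecycleBinRetentionPeriod = 30     # days
$webapp.BrowserFileHandling = "Permissive"
$webapp.Update()
```

Changes made this way take effect for the whole web application, the same as the general settings form in Central Administration.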
There are many other parameters you can set through $webapp and its related areas
of focus. The parameters for this command can be found on GitHub.
Zones
When setting up web applications in SharePoint, we have a couple of things to think
about. One of them is zoning. Zoning helps us define web application access from
different URLs. Adding a URL to a zone gives the users in that zone a way to access the
same content as the default zone but from another URL; for example, hr.xyzinc.
local could be accessed through our internet zone using hr.xyzinc.com. Separate
access pipelines could be used from a router or an ISA server perspective to have users
access a site through a certain URL. This method also creates a separate IIS site, so
you can configure it as you would any individual site in IIS.
The different zones available to create these access points are as follows:
• Default
• Intranet
• Internet
• Extranet
• Custom
Use the zones within the configuration to indicate what the URL for the alternate access
mapping is being used for. This helps users understand why URLs were created and also
puts URLs into a category of some type.
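An alternate access mapping for the internet-zone example above could be added with PowerShell; a hedged sketch with assumed URLs:

```powershell
# Map an internet-zone URL onto the web application that answers
# at the default-zone URL. Both URLs are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

New-SPAlternateURL -Url "https://hr.xyzinc.com" `
    -WebApplication "https://hr.xyzinc.local" -Zone Internet
```

Remember that the corresponding binding and certificate still have to be configured in IIS for the new URL to answer.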
There is another method you can use to create separate URLs, which is using extended
web applications. Let's look at that method in the next section.
Figure 6.47 – A diagram of two web applications using the same content database
2. Choose your authentication method and the zone that this extended web
application should belong to:
Now that we have finished configuring the web application, let's learn about service
associations, which we discussed earlier in this chapter.
Associating services
Service association is very important, yet it is a configuration that I do not see many
companies using, even when they have many web applications running in a farm.
Default service associations are fine if you have one web application, but if you have
many, we should use this configuration method to help isolate services. Some web
application content may not use all the services configured in a farm, so why
associate services that it will not use?
Let's associate services in the following steps so that you can see how this can help
you in managing your server resources and make better choices in configuring your
environment. As you can see, there are two web applications in the following figure:
1. Create your new service application to support your new web application:
3. Choosing [custom] now allows us to see that we have two separate service
association configurations:
Use this PowerShell command to set cache accounts for your web applications:
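The object cache accounts referred to here are usually set as web application properties; a hedged sketch, with placeholder account names:

```powershell
# Set the object cache super user and super reader accounts on a web application.
# These accounts must also be granted Full Control and Full Read via a web
# application user policy for the cache to work. The claims-encoded names
# and the URL below are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$wa = Get-SPWebApplication "https://portal.xyzinc.local"
$wa.Properties["portalsuperuseraccount"] = "i:0#.w|XYZINC\spsuperuser"
$wa.Properties["portalsuperreaderaccount"] = "i:0#.w|XYZINC\spsuperreader"
$wa.Update()
```

Repeat for each web application in the farm; never use the same account for both properties.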
These settings are key to the performance of the farm. Without them, you could see a
slowdown or no response at all. Make sure to complete this step before going live for the
new sites in your new farm.
Summary
This chapter showed that SharePoint configuration is not as easy as people might believe.
It takes documentation, design, and thought to make it happen. We didn't even
cover everything we need to do; there are varying configurations and other integrations
that could have happened here. Our aim in this book is to cover the basics and add enough
detail to get you thinking about how you can make things work in your environment.
We covered a lot in this chapter, but the main point of this chapter was to make sure you
have covered all your areas of focus. In the end, you do not want to release a product that
is not complete. Make sure you have documented your backup and restore process and
that you follow best practices for all your configurations. Double-check things and work
with the departments to resolve any issues upfront and make sure to tell them about any
foreseen issues beforehand. The more communication before going live, the better.
In the next chapter, we will talk about more final configurations and other integrations
needed within the farm. Workflow Manager is a big part of the integrations needed
to make this farm work successfully, give the farm some useful tools for the users to
collaborate, and automate business processes.
Other topics covered will include authentication and more on migration. We have
mentioned migration many times in this book, but in the next chapter, we will cover more
on this topic to help you choose the best tool for you or use a content database and other
methods that are available.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
1. When creating web applications for users, which web application zone must be
created first before we can start extending web applications and adding alternate
access mappings?
2. Which method for migration could be used best if there were no custom solutions
involved within the content?
3. When crawling content, how does the crawler connect to the content?
4. How many items must be available in the farm's search item count before we need
to upgrade the Search service to a cloned service?
5. When configuring the User Profile service, what account must be added to the
administrators group of the server running the service?
6. True or false: A MySite is a mandatory web application that needs to be created as
part of the SharePoint farm.
7
Finalizing the Farm to Go Live – Part II
As you will see in this chapter, we still have a lot of areas to account for. These areas
are very important to talk about as they relate to finalizing the farm and the content
to be presented within our sites. Load balancing is another area we will talk about;
this process is usually planned in advance but typically saved for last. We will also
cover migration, as you will need to have your environment set up to migrate your
content from the prior farm.
We will also install Workflow Manager in this chapter and show you how to set up the
application step by step. We will provide tips on what's important to understand as part
of this solution integration. Workflow Manager is pretty easy to install but has some
requirements you may need to be aware of before you start the configuration process.
We'll address these in this chapter.
Finally, we want to make sure we have covered all our bases. I can't stress this enough,
which is why you will hear me say this many times during these last few chapters: we must
look at our old environment and make sure we have accounted for everything so that we
can move forward gracefully.
Technical requirements
To understand this chapter and the knowledge provided within, you must meet the
following requirements:
You can find the code files present in this chapter in this book's GitHub repository at
https://github.com/PacktPublishing/Implementing-Microsoft-
SharePoint-2019.
You will also notice that, in Microsoft 365, SharePoint Designer 2010 workflows have
been discontinued. In SharePoint 2019, however, these workflows are still supported.
SharePoint 2019 Server gives you the option to move sites back to on-premises if need be.
This gives you time to figure out a more suitable way to overcome the discontinuance of
2010 workflows in the cloud. You can also use Power Automate to redevelop them, which is
a learning process. I mention this here because you should not confuse Microsoft 365
and SharePoint on-premises. These platforms are different and abide by different rules
and guidance. You control your on-premises environment, and as long as the workflow
platform can be configured, you can use it with SharePoint 2019 Server, which I believe
will never go away.
One of the main issues I see most customers having during migrations is that they
don't know how many workflows they have in their environment. If you want to find
out where your workflows are located from a web application perspective, there are
PowerShell scripts you can use to find workflows in a web application and even a site
collection, but your best bet is to combine these with manual spot checks. I have seen
errors where PowerShell could not find all workflows, so manual checks alongside
PowerShell work well in this case. You can also make this a task for your site collection
admins, as they know their processes and can give you a good count. This is why spreading
out responsibility is very important; a farm admin will not know everything. One benefit
that can come out of this exercise is identifying unused workflows, which can be flagged
for deletion. This helps clean up stale content before migration.
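If you want a starting point for this inventory, the following sketch (run from the SharePoint Management Shell) walks a web application and reports legacy workflow associations on each list. The web application URL is an assumption you must replace with your own:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Hypothetical web application URL - replace with your own
$webApp = Get-SPWebApplication "http://portal.contoso.local"

foreach ($site in $webApp.Sites) {
    foreach ($web in $site.AllWebs) {
        foreach ($list in $web.Lists) {
            foreach ($assoc in $list.WorkflowAssociations) {
                # Report each association so admins can review usage
                [PSCustomObject]@{
                    Site     = $site.Url
                    Web      = $web.Url
                    List     = $list.Title
                    Workflow = $assoc.Name
                    Running  = $assoc.RunningInstances
                }
            }
        }
        $web.Dispose()
    }
    $site.Dispose()
}
```

Use the output as a checklist for your site collection admins, and spot check the results manually as well.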
To create a farm of Workflow Manager servers, you need to log in as the SharePoint
Setup/Admin account on all servers. The configuration of this farm does not use built-in
accounts for the services. Three servers are needed to support high availability. Please use
a separate server for this installation; although Workflow Manager can be installed
alongside a SharePoint server, we highly recommend that you do not do this.
Remember to create a separate service account and log in with that account to run
Workflow Manager. You can run the installation using the Web Platform Installer with
the Workflow Manager 2013 executable, following the steps given here:
Figure 7.1 – Accepting the License Agreement for Web Platform Installer
2. Click Yes to allow this app to make changes to your device:
9. Enter the SQL Server name, which in my case is a SQL alias, and input the farm
account and password. If you want to run Workflow Manager over HTTP, you
must check the Allow Workflow management over HTTP on this computer box.
Input a certificate generation key, which acts as a secret password for the certificates;
this key allows you to add new servers to the Workflow Manager farm:
10. Once you've input all the configuration details, you have the opportunity to go
back and make changes using the back arrow at the bottom-right corner. If you
are satisfied with the configuration, just click the tick button to start the
configuration process:
11. As the configuration process starts to work, you will notice that it goes through a
series of processes that need to be completed:
For the client to work, you will need to export the self-signed certificate from the Workflow
Manager server and install it on the SharePoint servers, in the trusted certificate store in
Certificate Manager.
This is a requirement and completes the installation process. Please click the checkbox
to close the installation screen.
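On the SharePoint side, one way to establish this trust is to register the exported certificate as a trusted root authority. The following is a sketch; the certificate path and the authority name are assumptions:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Path to the certificate exported from the Workflow Manager server (assumed)
$certPath = "C:\Certs\WorkflowFarm.cer"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($certPath)

# Register the certificate so SharePoint trusts the Workflow Manager farm
New-SPTrustedRootAuthority -Name "WorkflowManagerFarm" -Certificate $cert
```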
If you didn't use the right service account for Workflow Manager, you can update it using
the following PowerShell commands:
Stop-SBFarm
Set-SBFarm -RunAsAccount Domain\Username
$RunAsPassword = ConvertTo-SecureString -AsPlainText -Force '<password>'
# Run the next command on all nodes:
Update-SBHost -RunAsPassword $RunAsPassword
Start-SBFarm
If the Workflow Manager PowerShell module is not loaded, you will get the
following error:
So, before you try to update the account, make sure you have opened the Workflow
Manager PowerShell console. To do this, go to Start | All Programs | Workflow Manager |
Workflow Manager PowerShell.
Then, use the following command to set the Workflow Manager Service Account:
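The sequence mirrors the Service Bus commands shown previously, but uses the Workflow Manager cmdlets. The following is a sketch; verify the cmdlet names in your Workflow Manager PowerShell console before running it, and replace the account name and password with your own:

```powershell
# Run inside the Workflow Manager PowerShell console
Stop-WFHost
Set-WFFarm -RunAsAccount 'CONTOSO\svc-workflow'   # assumed account name

$RunAsPassword = ConvertTo-SecureString -AsPlainText -Force '<password>'
# Run on every Workflow Manager node:
Update-WFHost -RunAsPassword $RunAsPassword
Start-WFHost
```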
To check your installation using the Workflow Manager Shell, use the following
command:
Get-WFFarm
You can also use the following command, which checks the status of your farm:
Get-WFFarmStatus
Once Workflow Manager has been registered with a web application for use with
out-of-the-box SharePoint workflows and SharePoint Designer workflows, you can
confirm the workflow service address for a site with the following commands:
Add-PSSnapin Microsoft.SharePoint.PowerShell
$wfProxy = Get-SPWorkflowServiceApplicationProxy
$wfProxy.GetWorkflowServiceAddress((Get-SPSite -Limit 1 -WarningAction SilentlyContinue))
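The registration itself is performed with Register-SPWorkflowService. The following is a sketch, and both URLs are assumptions for your environment:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Pair a site collection with the Workflow Manager endpoint (HTTPS uses port 12290).
# Add -AllowOAuthHttp (and port 12291) only if you enabled HTTP during configuration.
Register-SPWorkflowService `
    -SPSite "https://portal.contoso.local" `
    -WorkflowHostUri "https://wfm.contoso.local:12290"
```

Registering against any site collection in a web application enables the workflow service for that web application.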
This completes the installation of Workflow Manager for integration with SharePoint
2019. Please remember to install this integration on a separate server. It is better to keep
all integrations, even Office Web Apps and other server integrations, separate when
possible, so that if the product fails or has complications, it does not affect your farm as
well. How much harm is one more server with a few resources going to do, compared to
a product you cannot cleanly uninstall and that may damage your farm? Separate
everything you can, and it will save you from issues in the long run.
Authentication
Authentication is used in SharePoint 2019 to validate a user for the purpose of using
SharePoint sites, lists, libraries, and documents within the farm. SharePoint then
authorizes use after verifying the user has rights to a particular site within the farm.
You will see this process when you use Fiddler to troubleshoot sign-in issues from a
desktop. You will notice a 401 response right from the start, before the user actually
enters their username and password; this is the server challenging the client to
authenticate. The user must be authenticated against a security source before access
to the site can be given.
Claims authentication
There are many ways to integrate claims authentication methods. Out of the box,
SharePoint does this automatically using Active Directory (AD) and claims tokens,
converting Windows accounts from Active Directory into claims identities. As we saw
in Chapter 6, Finalizing the Farm – Going Live, the User Profile Service requires a service
account for SharePoint to connect to AD. This account is given replication rights in AD
so that SharePoint can essentially store a copy of the user list you choose. You would then
also set up the User Profile Service to pull user identity profiles from AD into SharePoint.
Based on those identities, accounts can be added to sites and content within SharePoint.
If they are not listed, they will not be able to authenticate, which is why you should run
this import on a schedule.
Within Okta, a cloud-based claims provider, there is a two-fold process for adding users
to what Okta calls an Application. You can add a user to a SharePoint site, but they
will not get access to the content until that user has been mapped to the Application in
Okta, where the Application is the URL of a host-named site collection or path-based
web application. This helps with managing users: if someone does not have an Okta
account mapped to the application, they won't get access to the site.
Again, there are some third-party companies that do offer cloud authentication
integration with SharePoint. Most companies I have seen, such as Okta for on-premises
integration, use farm solutions that have been deployed on the farm to support claims
authentication. This is a great way to integrate a solution for authentication, but what
happens when a solution breaks or gets retracted by accident? That's when you have an
issue. This is my only reservation about using farm solutions, because too many things
can happen that can cause big problems in an on-premises environment. Some things you
can do to control changes are as follows:
• Control the Central Administration site by only adding admins to the Farm
Admin Group.
• Implement a change management process to manage changes in the environment.
• Clean up all old identities within your AD, Farm Admin group, and Cloud Provider
Admin group.
• Always use Service Accounts to run your services on cloud solutions.
• Never use a personal account for your farm installation or any service accounts used
in configuration.
Please follow these recommendations to deflect any mishaps within your environment.
Windows authentication
Windows authentication is available in SharePoint 2019. If you are coming from an
older version of SharePoint, such as SharePoint 2007, which uses classic-mode
authentication, you will need to update your web applications so that they use Windows
authentication, which is really just claims authentication. Claims has been in place since
SharePoint 2010, and the conversion needs to happen now; there is no way around this,
and there is a process you need to follow to migrate your accounts. Classic-mode
authentication was deprecated in SharePoint 2013 and is no longer supported. If you
are migrating from one of the older versions of SharePoint, you will want to convert
your identities into claims identities using PowerShell.
Note
Refer to the following link for more information on this process:
https://docs.microsoft.com/en-us/sharepoint/
upgrade-and-update/migrate-from-classic-mode-to-
claims-based-authentication-in-sharepoint-2013
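The conversion itself can be sketched as follows; the web application URL is an assumption, and you should test this against a dev farm first:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Convert a classic-mode web application to claims, keeping existing permissions
Convert-SPWebApplication -Identity "http://portal.contoso.local" `
    -From Legacy -To Claims -RetainPermissions
```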
There are other protocols that integrate with Windows authentication as well. Both NTLM
and Kerberos help users authenticate without repeated credential prompts, which can
otherwise appear many times before a user is fully authenticated. This happens when there
are hops within the authentication process, where the user must be verified against
different content within your site. This could be an image or even a library with broken
inheritance, but SharePoint will not let you through until you have been verified against
all the content within the site.
NTLM
In most cases, no setup or configuration is needed for this authentication method,
because it is most likely integrated already if you're using a Windows server or client.
When you set up your web applications, you will see the option for this protocol, which
you can select from the list provided.
Kerberos
This is another selection you will see when creating web applications in SharePoint. This
is known as a ticketing authentication process and requires some configuration in Active
Directory. The client and server computers used within a configuration must have a
trusted connection to the domain's Key Distribution Center (KDC). When configuring
Kerberos, you need to set up Service Principal Names (SPNs) in AD DS. This must be
completed before you install SharePoint because these configurations must be available to
SharePoint before it is up and running.
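As an illustration, SPNs for a web application are registered with the setspn utility against the account that runs its application pool; the host and account names below are assumptions:

```powershell
# Run as a domain admin; -S checks for duplicates before adding
setspn -S HTTP/portal.contoso.local CONTOSO\svc-sp-apppool
setspn -S HTTP/portal CONTOSO\svc-sp-apppool

# List what is registered for the account
setspn -L CONTOSO\svc-sp-apppool
```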
Consider the use cases for Kerberos to see how it can be used to help secure your
environment; it offers stronger protection than the out-of-the-box authentication
methods.
Figure out if Kerberos is the method you want to use for your implementation. This
should have been discussed back in the planning stage as well. There is a lot to learn when
building SharePoint farms, so make sure you investigate and make the right choice for
your environment.
SAML authentication
SAML token-based providers are very popular these days because most companies
have moved to federated authentication methods. You can use single sign-on
and smart cards to complete this authentication process for convenience within the
environment. This brings a seamless experience, letting users authenticate once to access
many resources and sites within an environment without repeated logins.
Many domains can also be authenticated using this method. It can even be used in
collaboration with other companies that may be partners since the authentication to other
federated domains can be included in this implementation. This gives users many options
when it comes to collaboration, as well as other areas of the business where you may want
to share resources.
As you can see, SAML authentication is a good choice, as it is an easy and secure way to
manage authentication across multiple enterprise domains and other resource domains.
One point to mention is that you lose some control of the people picker when using this
method in SharePoint: the people picker will search for and return entries, but it cannot
validate them against a known user, group, or claim.
Zones are very important in a SharePoint web application configuration, as they provide
logical divisions of a web application, each with its own authentication configuration.
Within the web application, you can establish separate URLs across the zones to access
the content of that one web application.
You can also implement multiple authentication methods in a single zone, which gives
you some different ways to create and establish authentication, depending on the method
or who you want to access the zone. I have seen cases where there has been code within
the site to determine what you see based on your authentication method. If you want to
control an anonymous site, then you can control what content users see within the site
settings of the site using this method as well.
Some other things to note: Windows authentication and forms-based authentication are
limited to one instance each per zone, although you can combine more than one form
of authentication in a single zone. (Remember that you must have Windows
authentication established on the Default zone of the web application for search
crawling to take place!)
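Combining providers in a zone is done when the web application is created or extended. The following sketch pairs Windows claims with a trusted SAML provider; all names and URLs are hypothetical, and the token issuer must already exist:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Windows (NTLM) claims plus a trusted SAML provider in a single zone
$windows = New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication
$saml = New-SPAuthenticationProvider -TrustedIdentityTokenIssuer `
    (Get-SPTrustedIdentityTokenIssuer "ADFS Provider")

New-SPWebApplication -Name "Portal" -Port 443 -SecureSocketsLayer `
    -Url "https://portal.contoso.local" `
    -ApplicationPool "PortalAppPool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\svc-sp-apppool") `
    -AuthenticationProvider $windows, $saml
```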
To architect a SAML token-based solution, you must put several supporting components
in place. You can use multiple user repositories within a SAML claims architecture;
these all communicate through the Security Token Service. As you will see when
configuring web applications, you will be required to add an entry in the Trusted
Identity Provider area of the form to establish trust. Once that has been established,
the token-signing certificate will need to be copied and/or pulled into SharePoint via
Central Administration so that these security validations have a verifiable handshake.
There are other configurations that may also need to take place, such as PowerShell
scripting and, in some cases, hardware that supports the configuration. The following
link provides help with using SAML and AD FS; you can read more about the process
of implementing federated authentication at https://docs.microsoft.com/en-us/sharepoint/security-for-sharepoint-server/implement-saml-based-authentication-in-sharepoint-server.
As you can see, authentication has changed a bit, but there is a way to get what you want
using a third-party tool or even new Microsoft technologies. The methods provided at
this link are only supported by SharePoint 2013 farms or newer. The important thing to
remember is that you always want to set up for the future. I recommend a cloud-based
authentication method so that even if you are thinking about moving to the cloud, it
doesn't become a hassle because your cloud provider can be authenticated on-premises or
in a cloud environment. It is the best way to be ready for any transitions in the future.
In Azure, there are software load balancers that work well; because they are cloud
services, they run separately from your web servers' resources and perform consistently.
Realistically, in an on-premises environment, it's best to use a hardware load balancer.
The same can be true for the cloud in a hybrid scenario.
Load balancers can play a big role in your network in terms of how the network is
configured, what protocols are used to authenticate it, and other configurations we have
not mentioned in this book, such as session stickiness. The thing we wanted to point out
is that you should make sure you bring this to the table early and map out your process of
load balancing your SharePoint sites beforehand. Test and make sure things work before
going live.
When assessing your source content before migration, check the following areas:
• File sizes
• Individual permissions
• URLs (file paths) and file names
• Character limitations
• Custom solutions
• Branding
• InfoPath
• Workflow state and history
• Permissions (do you have access to all the files?)
• Folders with more than 5,000 items
• Unsupported site templates
• Unsupported list templates
• Orphaned users
• Checked-out files
• File extensions
We must also remember that content on mapped drives can be migrated to OneDrive
as well. Security and various options for folder organization will help the customer
manage those files. It's not like SharePoint, but if a customer wants to implement this
type of storage location, be ready to answer questions. This is contingent on being in
a hybrid environment where you already have Microsoft 365 set up.
If you're working with SharePoint, you will also need to look at versioning settings,
column types, and anything else that changes when moving from one version of
SharePoint to another. Help users understand that moving to this new location will give
them more control over the data from a security standpoint, instead of waiting for an
admin to update security for new and existing users on mapped drives.
This sometimes gets them excited about moving and helps them relax, which will
make your users feel as though everything is going to be OK. Show them that you have
investigated everything and build confidence in the migration by having meetings about
this migration. You can even suggest the use of retention libraries for holding documents
online either with OneDrive or SharePoint so that the customer understands there are
choices when it comes to how old documents can be handled. Processes can be put in
place for handling new files that need to be moved to a separate location for safe keeping
as well.
As we mentioned previously, this is not an easy task. You are in the hot seat to figure out
how to get 100% of your content identified from one version to the other. We will talk
about some of the areas to focus on and which migration method will work for your
migration scenario. I have also identified some migration tools within the method to give
you a breakdown of my experiences with those tools.
We will also talk about creating schedules for your departments and testing with users,
since we want to cover both preliminary migrations and final migration testing. If you
have not already done so (and I will say this again), create new dev and test environments
so that users have places within the environment to test their content and get used to
new ways of accessing it.
As part of our environment, we should have dev, test, and production environments,
which gives us a completely functional SharePoint environment. This means we have
a place where we can test our migrations and have users test them too. If you do not have
a development environment set up, you are not ready for this chapter. The last thing you
want to do is experiment directly in production; always test in the development
environment first. If you have these environments set up,
then we can move on and find out what we need to do in order to start moving content
from our old farm to the new one. Make sure all your farms have been configured
correctly and are the same so that the results of the tests will be the same across the board.
Configuration management is important!
During migrations, you also want to make sure you put source content in read-only mode
when doing final migrations so that nothing is changed during this process. There are
many things you need to make sure are in place before we start this migration. Take full
backups of content before the migration, as well as once you have got the content over to
the new farm. This backup should be taken once you have settled all issues and confirmed
that all the errors have been fixed with the customer. This helps create a baseline that acts
as a safeguard from the migration of both the source and target farms.
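Locking the source read-only and taking site collection backups can be scripted; the URLs and paths below are assumptions:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Lock a source site collection read-only for the final migration pass
Set-SPSite -Identity "http://portal.contoso.local/sites/finance" -LockState "ReadOnly"

# Back up the site collection before (and again after) the migration
Backup-SPSite -Identity "http://portal.contoso.local/sites/finance" `
    -Path "E:\Backups\finance-premigration.bak"

# Once the migration is verified, unlock the source if it must stay available:
# Set-SPSite -Identity "http://portal.contoso.local/sites/finance" -LockState "Unlock"
```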
Exploring SharePoint migration
The biggest risk you face in this part of the project is failure. Earlier in the project, failing
in front of IT staff is OK; you can recover, since only a small audience is affected. At the
migration stage, however, failure affects the company because everyone is watching. It
also puts a spotlight on why some users do not like SharePoint. Do not give them any
ammunition to criticize SharePoint, because that criticism can linger and it's something
you will have to live with for a while. Believe me, success is the goal here.
Some of the things you can do to be more in tune with your customers or department
include understanding what they do. Knowing what they do and how they use SharePoint
can go a long way into supporting the department during the migration process. If
possible and if you have time, find out who used to work in those departments that might
have data that is still valid in the source content. Knowing some of those user accounts
that may not be in Active Directory now will help to troubleshoot errors going forward.
Some tools used for migrating lists and libraries do not handle versioned items well
when a people and group column references a user account that no longer exists; when
such an item is migrated, it fails. Account for what you can beforehand and use the
tool's mapping component to map users and groups if needed. By doing this, accounts
that no longer exist can be masked with the admin account or another account so that
the item does not fail during the migration.
Find out what your customer or department needs from you. Whether it is handholding
or more information for them to feel comfortable, make sure you provide it. There might
be multiple questions you wish to ask, some of which are as follows:
This may add more to your workload, which is why you want to make sure you account
for any scheduling hold-ups now by interviewing all departments before you finalize your
schedule, as stated previously. If there is a person on the staff that would like to help or be
that go-to person, find that out now so that they can be your liaison between you and the
rest of the team.
Find out the immediate or future goals of the department. This is crucial so that you can
get an understanding of where they would like to go with SharePoint. This also gives you
the opportunity to discuss how SharePoint can help them manage data more efficiently.
Having a demo ready for some of the new features within SharePoint, as discussed earlier
in this book, can show you are prepared and that these features are working as they should
in the new version. This gives the business analyst some information on what could be
done to help them in the future. This is the time to start being proactive and not reactive
with your customers. Start fresh and make this transition useful for everyone.
One thing I do to speed up the migration process is create as many server-named sites
as possible to extend my web applications. This gives you many URL entry points for
content migration, so you can push content through separate servers instead of using a
DNS name where all traffic hits one or two servers. The server resources then stay
separate from one another, and you can have different admins pointing different tool
installations at different servers. You will see no bottlenecks pushing content through,
as long as your SQL Server can keep up with the influx of data.
Migration tools also let you connect to other Microsoft cloud services and third-party
repositories such as Google Drive, Dropbox, and OneDrive. If you have a tool that can
connect to those types of services and you want to migrate content from those
repositories, this is the way to do it. A migration tool provides such integrations and can
be helpful in moving these documents over to a new platform. It will also leave behind
database errors found in the content, such as errors from solutions left over from other
versions, plus other content that may have been damaged by database issues.
You can also combine the tool with content database migration: migrate your databases
first, then use the tool on the weekend to push incremental changes to the content. This
cuts down how long it takes to move content and saves you money since, with most
tools, you must pay per GB. Unfortunately, you cannot use this method when moving
to the cloud.
Important Note
Remember: your logging files will grow while you're completing this process,
and large amounts of logging data will be created on your SQL and SharePoint
servers. Make sure that all your logging services for IIS and SharePoint are
pointed to the Data or Log drive on your machine and that it has a lot of space
during this process!
PowerShell migration
PowerShell migration is another way we can migrate content. It is more tedious and
requires you to be skilled in PowerShell. I do not recommend this method for newbies
or admins who do not already have this skill set. The chances of errors occurring are
high, and you do not want that when you are on a schedule. This is not the time to
practice PowerShell.
Tip
Some examples of PowerShell being used as a migration tool can be found on
Microsoft's website at https://docs.microsoft.com/en-us/
sharepointmigration/overview-spmt-ps-cmdlets.
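For reference, the SharePoint Migration Tool (SPMT) cmdlets documented at that link follow this general pattern; this is a sketch, and the URLs and credentials are placeholders:

```powershell
# Requires the SPMT PowerShell module, installed with the tool
Import-Module Microsoft.SharePoint.MigrationTool.PowerShell

$spoCred = Get-Credential   # target Microsoft 365 account
$spCred = Get-Credential    # source on-premises account

# Create a migration session against the target tenant
Register-SPMTMigration -SPOCredential $spoCred -Force

# Queue a site migration task from on-premises to SharePoint Online
Add-SPMTTask -SharePointSourceCredential $spCred `
    -SharePointSourceSiteUrl "http://portal.contoso.local/sites/finance" `
    -TargetSiteUrl "https://contoso.sharepoint.com/sites/finance" `
    -MigrateAll

Start-SPMTMigration
```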
Manual migrations
As an example, let's say I have a folder on a server that is being used as a shared folder.
This folder has documents in it that need to be migrated to a SharePoint library. I can
take these files from the shared folder and copy them directly into the library using the
Windows Explorer functionality within the SharePoint library. This is an easy way to
migrate documents from desktops, but the limitation is that you can only copy up to 100
documents at a time. You also do not have the opportunity to add column information or
metadata to these documents as they are being uploaded.
Important Note
Beware! Do not make the mistake of having a live site on a destination farm
where you have not finished configuring your services. Make sure you have
completed all configurations before you put any new "live" sites in your new
production environment. If you need to work with some of the departments
and have sites that are live while migrating, ensure your dev and test
environments are set up so that you can test any migrated content before it's
put into production as a live site. You can then make configuration changes
there and not in production to test the farm.
There are other ways we can use manual processes to move data as well. Microsoft Office
tools such as Microsoft Access and Microsoft Excel allow us to move content from one
list to another, or from one farm to another; lists export cleanly to Excel and can be
linked to Access databases. So, capturing columns and pulling data back into another list
is fairly straightforward using PowerShell, or even just Access itself. Look around and see
what you can find. In some instances, this works well when you are moving small items
or concentrating on one document library and don't want to purchase anything, or you
don't want users bothering the admin to move the list through Central Administration.
The Central Administration tool, which can be used to export lists and libraries, can be
seen in the following screenshot:
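The same export and import that Central Administration performs can be scripted with Export-SPWeb and Import-SPWeb; the list URL and file paths below are assumptions:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Export a single list, including version history, to a .cmp file
Export-SPWeb -Identity "http://portal.contoso.local/sites/finance" `
    -ItemUrl "/sites/finance/Lists/Invoices" `
    -Path "E:\Exports\invoices.cmp" -IncludeVersions All

# Import it into a site on another farm running the same version
Import-SPWeb -Identity "http://newportal.contoso.local/sites/finance" `
    -Path "E:\Exports\invoices.cmp" -IncludeUserSecurity
```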
Types of migration
In this section, we will cover different types of migration. This will provide you with a use
case to follow once you start the migration process. The types of migration that can be
achieved are as follows:
• Version to version
• Server to server
• Service to service
• Cross-web applications
• Cloud migrations from on-premises
• Third-party to on-premises and cloud
• File shares
Say, for instance, you chose path-based web applications as your standard. In terms of
migration, all content in such a web application needs to be migrated at once, regardless
of how many databases you have in that web application. If you do not move all the
content databases at once, your users will have no way of getting to the content, because
there is no way to split the URL so that it talks to two separate farms.
However, you can migrate Host Named Site Collections one at a time or all at once. This
is because the Host Name is the key to getting to the content, and it is associated with
the site collection and not the web application. This is great when it comes to these types
of situations where content can be very hard to move to another service or farm. Plan
accordingly so that you do not get caught in a situation like this, especially when you're
managing the sizes of your content databases.
Cross-web applications (internal farm): These migrations also happen during the
production use of a SharePoint farm. You need to understand how or why you need to use
any other migration methods mentioned at the beginning of this section. The approach
you should use for list and library migrations would be to use a tool or use the area within
Central Administration to extract the site, list, or library and then move the content from
one site to another. Some tweaks may be involved here, but this is something you should
explore, depending on your requirements.
Migration to the cloud: This can only be achieved using a tool or a very time-consuming
manual method. Of course, a tool is preferred in this scenario due to the amount of time
you would have to spend going through files and uploading them to SharePoint Online.
The manual process would be to download all files to a local system and then upload them
back into the new farm libraries. This would mean reapplying all permissions, as well as
losing version history. We can see why a tool is very important when migrating to the
cloud. Most tools charge you per GB, so please budget for this!
I remember migrating from DocuShare a few years ago, when the only tool available was
the command line. During that process, you could export all the files, but a lot of
information was lost. At the time of writing, there are scripted approaches, such as the
client-side object model (JSOM/CSOM), that can help capture these files and permissions.
They are exported in a flat file structure so that the information can be pulled into
SharePoint, along with some metadata from the old system, which in some cases is
not SharePoint.
You can also use tools to migrate many different types of information to SharePoint or
OneDrive. Some tools give you different capabilities with different applications; some
connect to Google Docs, Dropbox, and other applications. Some tools, such as Saketa,
offer Microsoft Teams migrations as part of their toolset.
Microsoft also has a free tool that can be used to perform migrations from an on-premises
environment to the cloud. I have only used this tool once and it works as it should.
However, at the time, it didn't have a lot of bells and whistles compared to what other
tools offer. I believe they have made updates to this tool, so if you are looking for one, try
it and see if it will work for your project.
Important Note
Microsoft has its own migration tool that only migrates to the cloud. You can
find more information at https://docs.microsoft.com/en-us/
sharepointmigration/introducing-the-sharepoint-
migration-tool.
File shares: File shares are often forgotten, but the return on investment in migrating
these files to SharePoint is huge. When you leave these files on file shares, you require
one or more servers with extended storage space to host them. These servers cost money
to run and always need disk space upgrades. Instead, pull these files into SharePoint and
use its databases to share and manage them.
OneDrive can also be used to hold personal files, replacing MySites, and the storage
comes free with your plan. The SharePoint 2019 Server must be set up to use a hybrid
connection to the cloud, which we talk about in Chapter 10, SharePoint Advanced
Reporting and Features.
The users win as well because they gain functionality when using SharePoint, and your
help desk will love you for it. You get the benefit of versioning and much more control
over your files. You can also place the files into segregated site collections and document
libraries so that, unlike with file shares, there are no slips in security. Users will no longer
call the help desk to be added to certain folders, which cuts down on administration as
well. Users can also use the files in workflow processes, versioning, and other collaborative
scenarios, and they can link to these files throughout sites to share the latest version of
a file via Manage Copies.
Migration tools
Most tools run on any OS platform and provide a lot of options when it comes to
migrating using mappings, PowerShell, and scheduling. My experience with ShareGate
has been great, but it does not offer an array of solutions as it seems to concentrate on the
migration and governance space. As far as the tool goes, it is great and provides everything
you need to prepare for a smooth migration process. I have not had many issues using the
product, and I would say that it is first class!
326 Finalizing the Farm to Go Live – Part II
ShareGate has been the tool of choice for my migrations. I really like both ShareGate and
Saketa. Unlike other tools in this space, ShareGate gives you a great desktop experience
and functionality you will not see in other tools. The tool is inexpensive and runs on any
Windows platform. Again, pricing is tiered, so you can find a comfortable spot that fits
your budget based on your data requirements.
Important Note
If you are performing an on-premises migration, make sure to try out content
database migrations first. This will save you money in the long run. You can
then use ShareGate to push the deltas needed to finalize the updates to the
user's content when performing on-premises migrations.
Here are some of ShareGate's highlights:
• Auto-generates PowerShell scripts for all types of migrations with copy options.
• You can pre-test your migration to figure out errors before performing a real-time
migration.
• Governance reporting features for pinpointing issues with governance.
• Gives you the option to import user and group mappings from Excel files.
• ShareGate Shell can be used to migrate on schedules and allows you to use
PowerShell within the tool.
• Advanced options for managing more complex migration strategies.
• Provides a connection manager for performing SharePoint and OneDrive
migrations.
• Performance improves when using Insane Mode for migrations.
• General bug fixes and updates are provided regularly.
• Easy to use.
If you are looking for a planning and assessment tool, Microsoft provides the SharePoint
Migration Assessment Tool (SMAT), a command-line tool that scans SharePoint farms
to identify issues before you migrate. It gives you a report so that you can go back
and fix any issues before migration. This tool can also be found via the link in the
preceding note.
Managed metadata: This is a more complex migration. Some of the available migration
tools let you migrate metadata using the methods the tool provides. The issue with
managed metadata is that every term set and term within the service has a GUID, and
those GUIDs only stay the same if you migrate the service database itself. So, if you are
doing a service and content database migration, migrating the databases works because
there is no GUID change. If you cannot get these methods working and you must do a
new installation, you will have to recreate the service and the terms within it.
When performing a content database migration, these details for the managed metadata
can be brought over by backing up and restoring the managed metadata service database.
When you're creating your managed metadata service, you will use that restored backup
to create the new service on the new farm. If you are using a content type hub, update that
link using the following PowerShell script:
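A sketch of such a script, which creates the service application from the restored database and sets the hub link, might look like the following; the service application name, application pool, database name, and URL are all placeholders to replace with your own:

```powershell
# Create the Managed Metadata Service application from the restored
# service database and point it at the content type hub.
# All names, the database, and the URL below are placeholders.
$app = New-SPMetadataServiceApplication -Name "Managed Metadata Service" `
    -ApplicationPool "SharePoint Services App Pool" `
    -DatabaseName "ManagedMetadataDB_Restored" `
    -HubUri "https://portal.contoso.com/sites/contenttypehub"

# Create the proxy so web applications can consume the service
New-SPMetadataServiceApplicationProxy -Name "Managed Metadata Service Proxy" `
    -ServiceApplication $app -DefaultProxyGroup
```

Because the cmdlet points at the restored database, the existing term GUIDs are preserved, which is exactly why this method works for content database migrations.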
If this is not the method you would like to use to restore the service database, you can use
the export-import method. When you migrate the service in this way, you will have to
create the service first and then import the information using Excel. You can do that using
the following PowerShell script:
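There is no built-in cmdlet that imports term sets directly from an Excel file, so a common approach is to save the spreadsheet as a CSV and walk it with the taxonomy API. The following is a minimal sketch only, assuming a CSV with Group, TermSet, and Term columns; all names, paths, and URLs are placeholders:

```powershell
# Minimal sketch: import terms from a CSV (Group,TermSet,Term columns)
# into the term store. Paths, URLs, and names are placeholders.
$site = Get-SPSite "https://portal.contoso.com"
$session = Get-SPTaxonomySession -Site $site
$termStore = $session.TermStores["Managed Metadata Service"]

Import-Csv "C:\Migration\terms.csv" | ForEach-Object {
    # Create the group and term set if they do not exist yet
    $group = $termStore.Groups[$_.Group]
    if ($group -eq $null) { $group = $termStore.CreateGroup($_.Group) }
    $termSet = $group.TermSets[$_.TermSet]
    if ($termSet -eq $null) { $termSet = $group.CreateTermSet($_.TermSet) }
    # 1033 is the English (US) locale ID
    $termSet.CreateTerm($_.Term, 1033) | Out-Null
}
$termStore.CommitAll()
```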
Important Note
Make sure you do not have any duplicates in your file before moving forward.
If you have duplicates, the process will error on import.
Remember that if you were using managed metadata previously in your farm, you need
to migrate this service database and create a service application first, before migrating
any content. The migrated content will check whether this data is available so that it can
populate the fields that use metadata columns. Depending on the tool you are using, it
may also give you the ability to migrate this metadata.
When performing content database migrations, you will have to move the metadata first.
You also need to check whether there was a content type hub in your previous farm. If so,
you will want to configure one in the new SharePoint 2019 farm. You can do this by
setting up a new site collection solely for this purpose and adding its URL to the Content
Type Hub field when editing your Managed Metadata Service application. You can also
set the Content Type Hub using PowerShell when you create the service application. If
you have already created the service application without it, use this script:
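For an existing service application, a sketch along these lines can set the hub link; the service application name and URL are placeholders:

```powershell
# Point an existing Managed Metadata Service application at the
# content type hub site collection (name and URL are placeholders).
Set-SPMetadataServiceApplication -Identity "Managed Metadata Service" `
    -HubUri "https://portal.contoso.com/sites/contenttypehub"
```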
As you can see, there is a sequence you must follow when performing migrations, and
scheduling is a big part of this process. Next, we will learn how and when to schedule
migrations.
Scheduling migrations
At this point, everyone knows they are moving to the new SharePoint environment.
Everyone is on edge because, at the end of the day, most people are afraid of change.
So, how do we schedule migrations? My take is that you start with the departments
that are most eager to jump in. The reason you want to do this is that you will get the
most help with the process from resources within that department. This creates a team
effort that you might not get from another department, which could defeat the migration
before it starts.
If you don't have any eager departments, which I doubt will be the case, take one
department that you have some camaraderie with, or one of the hardest to move due
to complexity, and test its migration first. You can at least run a test migration with this
first department and have the users test the migrated content, comparing it between the
source and the destination. You should be able to give them an error report from any
migration type and work with them on how you will mitigate the errors when performing
the final migration of the content.
In the Types of migration section, we discussed that migration tools have scheduling
automation capabilities that you can use to configure a migration and create a scheduled
task for when a particular content migration should run. You can also migrate using
PowerShell by scheduling a command to execute on the server at a certain time, but most
tools have this integrated into the product.
When running scheduled migrations, you do not have to be present. You could migrate
several departments at a time if the content size is reasonable. This also depends on what
your time frame is because you want to make sure your users have access to the content
you're migrating, maybe the next morning or after the weekend on Monday morning.
You will need to support the migration from time to time to monitor the progress of the
migrations you are scheduling. Believe me, there are always errors. To help with such
errors, you may need to assign tasks to people in your department to help monitor their
progress. If you have tested your migrations and are pretty sure you have done the testing
well or if you know there is not much complexity in the site you're migrating, then you
may be able to let the scheduled migration run on its own.
If your testing used the most difficult customer, based on the complexity of the content,
then it will have given you a realistic look at the tool and any errors that come up with
this content. Always take this on with all resources on the ground. If it is just you, get
support from Microsoft, a third-party company, or the company that owns the tools you
purchased. This can be done successfully if you plan out the migration properly and
identify and mitigate errors in a test migration beforehand.
Again, this may take a couple of test migrations, but mitigating the errors that surface,
and how quickly you can turn them around, is the key to success. Make sure you always
account for the time you may have to spend on the back end recreating content in a
site. Again, if a solution is not available in the next version of the farm, you have
to make sure you recreate it using a new tool and communicate the difficulty and/or
discontinuation of the product to the customer.
Exploring farm solutions 331
Important Note
If you have a couple or even a few web applications, NEVER EVER migrate one
web application if the services on the new farm are not stable. If you are still in
configuration mode and making updates to services, DO NOT migrate anyone
to that unstable farm. Once you have a stable service platform and you have
tested those services, you can migrate your first set of user content.
Success is the best way to build confidence in the company as you make these migrations.
Choose the right department to work with and test your migration. Review your errors
and figure out how to fix them before the final migrations. This may take another test
migration, but do not let the first migration fail. Do your best to be successful and have
the department that is using the new environment brag about how easy it was to migrate
so that others will be at ease and want to move immediately.
In the next section, we will look at farm solutions. They play a big part in recreating
content in sites. You may have some issues with this, depending on whether you chose
a customized or third-party solution.
If the company is no longer in business or has discontinued the solution, have we found
a new one to replace it? What can we do to find a replacement? Well, the first thing we
can do is find out what the solution does. There could be another company with a similar
solution you can replace it with. The issues you will encounter will be from a content
perspective and will depend on how that content was captured within the solution.
You may have to rebuild the content within the solution in the new environment. This
will have to be accounted for on the back end of the migration as time spent on this
process. Going into this in detail is beyond the scope of this book, but identify the content
and how it is presented, and try to do the same in the new environment using the new
solution.
Test your third-party solution as well, especially any new solutions you may purchase. You
want to make sure it works as it should. Document how the users can use it by coming
up with use cases and showcasing them in a demo or in a PowerPoint. As always, keeping
communication with your users active is the most important thing you can do during this
time. The more you can communicate good and bad things, the better off you will be.
We should also be moving away from the Farm Solution model for custom solutions.
SharePoint 2019 supports the SharePoint Add-in Model, which allows us to get away
from Farm Solutions and helps with migrating solutions to the cloud later. This requires
transforming your farm solutions into SharePoint Add-in Model solutions.
Note
To deploy newly created custom web parts, you must create an App Catalog.
Please review Chapter 12, SharePoint Framework, and follow the instructions
provided to set up an environment for developers.
The biggest thing to be aware of while doing a process like this is downtime and how it
will impact your business. If the solutions you are transforming are key to the business,
then you will need to be very careful about how you implement this change. You want
to make sure your users are aware of, and understand, the change you are making. You
can also run Farm Solutions and Add-ins in parallel. Sharing the time needed to plan
and complete the change with the user community would be great, so that they have
documentation of what is going to happen.
There are two ways we can deploy the new add-in to the farm. One is to do it in place,
where you deploy the add-in and then, after making sure the site is using the SharePoint
Add-in, retract the farm solution feature.
The other way, during a cutover, is to swing content: extract the content from the existing
site collection where the farm solutions are currently deployed and migrate it to a new
site collection that uses the newly configured SharePoint Add-in.
There are advantages and disadvantages to either of these migration strategies, but the
big thing is to find out what you believe will work for you, test it, and then implement
that process during your migration. Farm Solutions can also be tricky, so I believe the
cleanest model is swing content. I have seen cases where farm solutions throw errors once
they've been retracted, and it takes time to figure out how to get the solution removed
from the farm. This would be a terrible thing to happen, but it has happened to me many
times. So, we can't get too comfortable; it's always best to build on clean environments.
There is also a process we need to follow to review the site in general, because some
elements of farm solutions interface with users on a page. There are UI-related steps
you can follow to ensure you have done everything to replace your solutions properly.
Please research those areas to figure out what you need to do to complete this process
of moving toward a cloud-ready, on-premises platform. Remember to use a naming
convention for the new solutions and features you migrate to the new farm.
User testing
One of the best tools for testing workstations or desktops and troubleshooting errors in
your environment is Fiddler. Fiddler acts as a local web proxy, showing you exactly how
the device is connecting to SharePoint. It will also give you error messages if something is
not working correctly from a protocol/port perspective. This is one of the best tools I have
seen on the market because of how thorough its reporting is, and also how easy it is to
install and run.
Workstation configurations with DHCP are also something to remember. The users'
workstations must be configured with the proper network configuration to reach the
SharePoint site. If you are using DHCP, take a look at what has been configured for the
users' workstations and make sure it is correct. The last thing you want is a user who is
unable to work due to a misconfigured workstation.
Mobile devices work well with SharePoint too. As you already know, SharePoint can
be rendered on a mobile device. You want to make sure you have configured the sites
so that the mobile view is activated when sites are accessed from mobile devices. This is
activated by default, but if you have more than one administrator, check these views to
make sure they are still active.
User testing should be documented, and there should be some type of testing script for the
users to follow. This is generally a walkthrough of SharePoint functionality that ensures
the site is working as it should. It can also cover the department's workload.
As an example, you may wish to upload documents, but there are other tasks that make
sense to test as well. Uploading different types of documents would be a great test. This
would mean uploading the most popular document types, such as .docx, .pdf, .xlsx,
and others. This ensures everything on the farm, as far as uploads are concerned, is
working properly.
So, as part of your testing, you could go a step further and try uploading documents
you do not want captured in SharePoint libraries. This means that if you have
a stipulation that Access files should not be uploaded to SharePoint, and you have
set the blocked file types accordingly, then you want to verify this as part of your
testing process.
Make sure you step through all the SharePoint default testing steps, as well as the user's
department steps, to create your test plan. This also gets users more acquainted with
the new SharePoint look and feel and the changes that have been made to the new UI.
In Chapter 5, Farm and Services, I listed all the areas that should be focused on when it
comes to user testing.
Important Note
Run Get-SPProduct -Local on each of the machines in your farm.
This command ensures that the product version information for your server is
current in the configuration database.
So, to sanity check and solidify our environment as part of testing, we are going to do
some server configuration reviews. We have already done a stress test and user testing.
You should take this seriously and review what you have learned from the stress tests
to see where your servers are falling short. I have a list of performance monitoring
parameters we can monitor that can be downloaded from this book's GitHub repository.
Take those performance areas and run performance monitoring on them while testing the
system until you have fine-tuned all your server resources. Yes, there are Microsoft best
practices for minimum server configurations for resources, but as you know, things can
be incorrect in our configuration as well. In this chapter, we're looking at those areas and
fine-tuning our environment to make our server resources as responsive as possible. We
will look at performance monitoring in more detail in the chapters to follow.
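As a starting point, a few of the common counters can be sampled with PowerShell's Get-Counter; the counters and output path below are illustrative placeholders, not the full list from the repository:

```powershell
# Sample a few common pressure points every 15 seconds, 20 times,
# and save the results for later review (path is a placeholder).
$counters = @(
    "\Processor(_Total)\% Processor Time",
    "\Memory\Available MBytes",
    "\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    "\ASP.NET Applications(__Total__)\Requests/Sec"
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
    Export-Counter -Path "C:\PerfLogs\spfarm.blg" -FileFormat BLG
```

The resulting .blg file can be opened in Performance Monitor to compare samples taken during stress testing against a quiet baseline.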
Application creep
Sometimes, in the world of IT, we make the mistake of adding too many applications to
a server or database server without even realizing it. One of the biggest no-nos in
SharePoint is adding a database from another application to a SQL Server that supports
SharePoint. Keeping that SQL Server dedicated is a well-known best practice and should
not be taken lightly.
In my experience, I have had two different scenarios occur with applications:
• Non-SharePoint databases being added to a SQL Server instance that
supports SharePoint
• Custom applications being developed that overran server resources at given times
of the day
This is not to say there are not more scenarios out there that fall into this category. What I
learned from these scenarios is the following:
• Database additions from other applications can kill SQL Server performance. You
could run every day with no issue, but once there is a problem with that application,
you have problems with SharePoint. When a database from another application runs
on a SQL Server supporting SharePoint, that application can slow down the
performance of your SharePoint farm, depending on how heavily it is used in
the environment.
• We must also consider one of the key best practices for SharePoint: MAXDOP.
A MAXDOP setting of 1 is not a usual setting for other applications, so adding
a database that will not run well on a database server with that setting just
compounds the problem. Make sure no other outside application databases are
running on the SQL Server that supports SharePoint.
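To verify or correct the setting, a sketch such as the following can be run against the SharePoint SQL instance; the server name is a placeholder, and the SqlServer module's Invoke-Sqlcmd cmdlet is assumed to be available:

```powershell
# Set MAXDOP = 1 on the SharePoint SQL instance (SharePoint requires it).
# "SQL01" is a placeholder; requires the SqlServer PowerShell module.
$sql = @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance "SQL01" -Query $sql
```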
In our configuration, we also need to make sure we have selected the proper servers that
support our integrations, such as SQL Reporting Services and other BI applications. In
most cases, you want to justify having servers that support just these services. Even PWA
and Search need to be evaluated for this isolation. This all depends on the size of your user
community and how many users are using the service.
Another item to be aware of is stress testing custom developed solutions. When I worked
at Microsoft as a PFE, I had a customer at a bank where they were having issues with
a process every afternoon. Everyone using SharePoint would complain about slow
performance and the system would just slow down dramatically. I witnessed this firsthand
the first day I accessed the customer's site. That evening, I set up performance monitoring
on all the servers and the next day, we captured data from the incident.
After reviewing the results, it came down to a custom application that had been developed
in-house to reconcile some list data captured during the day. The process was very
resource-intensive and slowed everything down. The reason this happened every day is
that the reconciliation ran at the same time each day, when the company closed at 3 p.m.
So, from 3 p.m. to 5 p.m., SharePoint was dead due to this custom application.
Performing the final checks 339
So, make sure you check all your servers and confirm there are no excess applications,
databases, or processes running that are unnecessary, or that are located on SharePoint
servers that do not need them. This also goes for system-support applications, such as
antivirus and other server applications required by your IT management team. You must
make the judgment call on these applications, but know that some can destroy the
performance of your server.
NIC settings
Network Interface Cards (NICs) can also be a bottleneck. NICs have many parameters
and we need to make sure we set our network interfaces properly:
• Updating your NIC firmware and drivers is critical to supporting the server
properly. Please make sure you are on the latest version of the firmware and drivers
for all your NICs.
• NIC teaming can affect the performance of the server on the network if it has not
been configured properly. Make sure you confirm your configurations during your
server testing.
Next, we will look at MinRoles and service compliance. The Services on Server page
lists all the services based on the MinRole type, while the Services in Farm page shows
whether each service is enabled or disabled.
If you have been using SharePoint Server through different versions over the years,
you will have seen that the Outgoing E-Mail Settings page has changed significantly.
You can verify the outgoing mail configuration with the following test script:
#Parameters (values are placeholders - replace with your own)
$EmailTo = "yourname@company.com"
$EmailFrom = "sharepoint@company.com"
$EmailSubject = "Test Email from your company SharePoint Farm"
$EmailBody = "This is a test message from the SharePoint farm."
$SmtpServer = "smtp.company.com"
#Send-MailMessage:
Send-MailMessage -To $EmailTo -From $EmailFrom -Subject $EmailSubject -Body $EmailBody -BodyAsHtml -SmtpServer $SmtpServer
Be careful when you're configuring your environment. Just because you set these values
using the GUI or PowerShell does not mean they work. There may be some intermediate
configuration with Exchange or other SMTP servers that needs to be completed before
this works correctly. Test all incoming and outgoing mail configurations and the
functionality within lists and libraries. Notifications for alerts should also be included in
your testing to ensure this functionality works as it should.
I want to mention the data compliance features that are available within SharePoint
Server 2019, Microsoft 365, and Azure. We do not want to omit that, in the cloud, there
are other features that support our data and how it is accessed and managed. We have not
covered these areas thoroughly within this book, but we want to mention them as part of
the backup and data protection strategy, because these features can be easily implemented
in your SharePoint environment with some planning:
• Data Retention: This controls the life cycle of content within a SharePoint site. It can
be a scheduled or in-place action from a user. Files, documents, folders, and content
types can be moved, deleted, and managed.
• Records Management: This helps with declaring records within libraries in
SharePoint sites. These documents cannot be deleted once declared a record.
• Data Loss Prevention: This helps to protect documents from malicious or accidental
sharing. Details within the document can be scanned and identified, such as a social
security number to prevent a user from sharing the information, or to limit sharing
to only a subset of users.
• eDiscovery: This provides a way to legally hold content within SharePoint sites. You
can provide this feature to regulate email and documents to help with legal disputes
and other issues that might arise within your company, such as sharing intellectual
property.
• Rights Management: This allows admins to set up policies on information within
SharePoint and Exchange to protect against sharing outside the organization. It also
provides deeper security on documents, and the policies remain valid externally.
Azure also offers information protection, which can be used from the cloud
and within SharePoint farms as well. The configuration for this integration is not
complex, but it only allows you to create new policies within document libraries
on-premises; there is no way to centralize policies when integrating with the cloud.
Please take a look at these areas as part of your backup and restore plan because they can
help as you look at creating a plan for supporting the farm. There are strategies with all
of these features that will help you understand how they fit into your backup and restore
(Disaster Recovery) planning.
Backup and restore 345
How do you plan to back up and restore your environment? The first area I would
target would be SQL Server. When looking at SharePoint, you will see that SQL Server
captures all your data in a database based on configuration, content, and services.
So, in this case, we know what we need to concentrate on to successfully restore our
environment if needed.
Storage is the first thing that comes to mind when we start looking at backups. Do you
have sufficient storage, at the start of the planning process, to hold the content from the
SQL database server? Did you make adjustments along the way, due to migrated data,
that changed how much data is planned for the environment? These are questions you
need to answer now, before going live.
Processes change, especially when you're doing migrations. You never know what may
come up during the process; for example, a department may want to bring over a file
share, or they may have documents on another system that they would like to move to
a SharePoint library. Things change, and you must roll with the punches in this type of
scenario. There is no way to really plan for some of the challenges you may face when
it comes to personalities, department managers, relationships, and the overall data that
can be left out of the mix when dealing with migrations.
Since we have planned for migration of content, we need to circle back and figure out if
we have captured everything. Again, our first stop in solving the restoration issue we have
with the environment is to back up our SQL Server databases. Next, to really find a sweet
spot for total server restoration, we need to back up our servers. While doing this, we need
to make sure we do a full server backup, including server state. Next, any components that
are third-party need to be fully backed up.
SQL backups are easy to set up as a maintenance plan, which can be scheduled and can
perform maintenance on the databases as it runs. There are a few ways to do this,
depending on the size of your environment. For small environments, this is a no-brainer:
create a maintenance plan and follow some best practices. If you do not have a lot of
processes running on the server or in the environment, nothing should really step on the
toes of this process. Schedule it daily, with various times for log backups during the day,
and you are done. You can also use PowerShell backups, which I will mention later in this
section, to manage this process.
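A farm-level PowerShell backup can be as simple as the following sketch; the UNC path is a placeholder, and it must be writable by both the farm and SQL Server service accounts:

```powershell
# Full farm backup to a shared folder (path is a placeholder).
Backup-SPFarm -Directory "\\backupserver\SPBackups" -BackupMethod Full

# Review the backup/restore history for that folder
Get-SPBackupHistory -Directory "\\backupserver\SPBackups"
```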
Larger environments require some thought and detail due to other application processes,
storage space, responsibility, and the overall management of the backup from creation to
off-site storage. So, if your environment is larger than a two- or three-server farm, you
need to define some of these areas of support. With larger environments, you need some
type of application to support backing up the farm. Many companies offer tools that back
up servers and integrate with SharePoint as well. Find a tool that works for you or, if you
already have one, see if it supports integration with SharePoint.
As we mentioned previously, SharePoint should be the sole occupant of any SQL Server
you have deemed a dedicated SharePoint SQL Server, holding the databases that support
the configuration of the environment. So, in turn, we should not have any issues with
outside application processes residing on the database server. If you do, you need to
figure out how to move those processes to the servers they support, or be ready for some
heavy troubleshooting and complexity when something happens to the performance of
the farm.
Make sure to test your management tools for successful backups and restores. I have had
issues a few times where third-party tools lock the databases so that SharePoint cannot
function at the same time. There have also been issues where, after restoring, the database
was in a different format than what SharePoint expects. Do not take the vendor's word
for it; make sure these processes work before going live.
Since we know that SQL Server is our main focal point, let's look at storage space. We
defined this earlier, but we need to make sure we have enough space after all the
migrations have taken place and as new sites with new content come on board later. Let's
go back, define how much space each will need, and assess the situation so that we can
increase the space if needed. Please make sure to define your file sizes when you perform
migrations, as these play a big part in how much space is needed. Sometimes we make the
mistake of not setting limits on file sizes, which can eat up our space quickly. This should
be discussed as part of your department interview process.
Who is responsible for the backups? Is it the SQL admin, the SharePoint admin, or the
server admin? Or is there a backup administrator? At this point, you need to figure out
who is responsible for what. In the best scenario, one group should be responsible for the
backups of the farm. So, if you have a backup team, let them be responsible. However,
note that they must take on any best practices the SharePoint or SQL Server teams have
to support the process. The more clearly you can divide the responsibility, the better your
teams will function. This takes teamwork and cross-training from a requirements
standpoint so that everyone understands the needs of the environment.
Backup scheduling can cause some conflicts as well. Yes, we have scheduled backups, but
which backups go offsite, and which backups come back in a certain amount of time? We
need to make sure the management of our backups is documented and always monitored
so that if we are sending media offsite, we can ensure they are labeled correctly and we
know what's on the media leaving the building. Build your process on a spreadsheet or
in a database if you like. Just make sure it is always downloaded so that, in the case of
a disaster, you always have access to the latest copy. Make sure you have some type of
chain-of-custody sign-off within the process as well, so that there is a record of who
handled the media and when. This is a very important part of the backup and
restore process.
Backup management also plays into how we restore data. Do we have the right media?
Was the media labeled correctly? Restoring data from a SQL perspective is a process
that has been documented many times on Microsoft's website and other blogs. We will
not go into that here, but just know that SQL Server is your friend. It is what helps you
restore any SharePoint component. Services, content, and configuration all have databases
associated with the farm. Use those backups to restore as needed.
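As a sketch of that flow (the database, server, and URL names are examples, not from the book), once the DBA has restored a content database in SQL Server, it is reattached to the farm with Mount-SPContentDatabase:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# After the DBA restores the database in SQL Server, attach it to the farm;
# the names and URL here are examples only
Mount-SPContentDatabase -Name "WSS_Content_Finance" `
                        -DatabaseServer "SQL01" `
                        -WebApplication "http://sharepoint.contoso.com"
```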
You can also use the Central Administration Backup and Restore area for restoring
content from an unattached content database. The word "unattached" in this case means
that the content database is part of your SQL Server list of live databases, but it is not part
of the farm list of active databases. In this case, you have not attached the database yet.
This is a powerful tool for recovering content, but it takes a little longer than a recycle bin
or a migration tool due to the steps involved. However, this can be done while users are
working on the system. Unattached Content Database Data Recovery is located in the
Backup and Restore section of Central Administration.
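The same unattached-database recovery can be driven from PowerShell. As a hedged sketch (the database and server names are examples), you can connect to a database that is live in SQL Server but not attached to the farm, and browse what it contains before deciding what to recover:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Connect to a database that exists in SQL Server but is not attached
# to the farm; names here are examples only
$db = Get-SPContentDatabase -ConnectAsUnattachedDatabase `
                            -DatabaseName "WSS_Content_Restore" `
                            -DatabaseServer "SQL01"

# Browse the site collections it contains before recovering content
$db.Sites | Select-Object Url
```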
There is also the recycle bin that works for files and other components that have been
created in SharePoint. Here, you can find deleted components and documents and restore
them back to their original locations. The content is retained for as long as you specify;
by default, as shown in the General Settings section of any web application, this is
30 days. Once that period is up, the content is transferred to the second-stage (site
collection) Recycle Bin. So, anyone who deletes content can restore it themselves during
the retention period, after which a site collection administrator will have to restore it for them.
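These retention settings can be checked or changed per web application. As a sketch (the URL and the 90-day value are examples, not from the text):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Example web application URL; inspect, then adjust the Recycle Bin settings
$webApp = Get-SPWebApplication "http://sharepoint.contoso.com"
$webApp.RecycleBinEnabled          # is the Recycle Bin turned on?
$webApp.RecycleBinRetentionPeriod  # days deleted items are retained
$webApp.RecycleBinRetentionPeriod = 90   # example: extend retention to 90 days
$webApp.Update()
```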
As we mentioned previously, there are different ways we can use the PowerShell backup
tool to capture content and services. However, this is based on scripting, which can
be a little clunky to use and requires your admin to be on their toes. These types of
backups need to be monitored and checked daily because they are not self-monitoring
applications; they rely on server scheduling and other processes to run. So, do not let this
slip past you; check the scripts and errors on your server.
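A minimal sketch of such a scheduled backup script, with the kind of error logging that daily review needs (the backup share path is an example), might look like this:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Example backup share; run this from Task Scheduler and review the log daily
$backupPath = "\\backupserver\sharepoint\farm"
try {
    Backup-SPFarm -Directory $backupPath -BackupMethod Full -ErrorAction Stop
    Add-Content "$backupPath\backup.log" "$(Get-Date) Full farm backup succeeded"
}
catch {
    Add-Content "$backupPath\backup.log" "$(Get-Date) Backup FAILED: $_"
}
```

Writing both success and failure lines to the same log gives the daily check a single place to look.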
Disaster Recovery is not as complex as you may think. If you already have a DR site, the
easiest thing to do is set up a new farm in that location and figure out how that farm
should be configured as a standby; there are Hot, Warm, and Cold scenarios for the
off-site location. As far as content goes, using SQL Server log shipping to replicate the
content databases to the SQL Server in the DR location is the best way to keep the content
updated, either daily or hourly. The timing is up to you.
Having the bandwidth to move the content in big chunks is also very important. If you
don't have the bandwidth, you can expect many delays in terms of how
fast the data is moved from one site to another. Make sure to replicate the farm's resources
as much as possible. If you have three WFE servers and two app servers, make sure you
have that same configuration on the offsite farm. If you have changed and added a service
application to the production farm, make sure to add that to the off-site farm. Managing
these processes is the key to a successful Disaster Recovery plan for SharePoint.
As a recommendation, AvePoint cannot be beaten for the solutions they bring to the table
in terms of overall farm management. If you are looking for a product that can help you
manage servers, disaster recovery, content, and retention, look no further. Call them today
because this is where you want to invest your time and money to support the environment
– especially large environments that have more than three servers. You want to make sure
you are covered so that you can sleep soundly at night.
Do not forget to investigate solutions, regardless of whether it is the one I am
recommending or something different. I have seen too many one-man teams managing
a farm and other resources in the network and pulling their hair out. Do not pass this
opportunity up at this point, even if you must push your delivery date back for your users.
This is everything to your organization at this point, especially if they plan on building
tools and other applications within the product.
Summary
This chapter has shown you how SharePoint configuration is not as easy as people would
believe. It takes some documentation, design, and thought process to make this happen.
We covered a lot in this chapter, but the main point was to make sure you have covered all
your areas of focus.
In the end, you do not want to release a product that is not complete. Make sure you have
your backup and restore process documented and that you follow best practices for all
your configuration. Double-check and work with your departments to resolve any issues
upfront and make sure to tell them about any foreseen issues beforehand. The better your
communication before going live, the better everything will be.
In the next chapter, Chapter 8, Post-Implementation Operations and Maintenance, we
will discover what to expect on the scheduled cutover day. You will be provided with
some tips on how to handle this, as well as some same-day things to think about. We will
also go over security, responsibilities, and the tools we can use for troubleshooting and
maintenance, which will come in handy when we wish to support the environment.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
As part of the migration, we also finalize those errors and updates that need to be
documented after the users start using the farm. We need to provide
guidance on how they will be handled and how those incidents are pushed through our
help desk process as part of the support for the farm. Where do we document those
errors? How do we assign those tasks?
As far as the farm, we will look at the system errors and updates that may be needed to
the farm configuration. We will look at some ways to troubleshoot and how to document
those changes made to correct those errors for future reference. There also could be
situations that are unforeseen, for which I will share some of my experiences with you to
prepare you for what could happen.
You will ask yourself several questions as part of this process.
We will tackle these questions and more in this chapter and make sure to give you
guidance and recommendations on how to handle all situations that can come up in this
new farm release. The main goal of this release is to be successful, so your conduct and
technical expertise will come into play to support your deployment.
Remember that maintenance has also started on your farm as well, as this is your first day
of release. We will investigate how to keep up your maintenance of the farm and what
things are crucial on day one to make sure maintenance is working and any scheduled
jobs or manual processes are documented and in place. Maintenance also includes the
other environments, DEV and TEST, if you have set them up. We should be monitoring
these areas as well to make sure teams using these environments are happy and there are
no issues.
This is also a big and exciting day because of all your hard work. Be prepared
and do not be on edge. Stay calm and collected, as you will have issues no matter how
big or small the migration or the deployed farm was. If you
have followed this book from a high level and have collected documentation and done
due diligence, you are more than 90% there, with some caveats depending on your
environment. Now let's go through a day in the life of management, SharePoint admins,
and supporting teams on the first day of release.
Technical requirements
For the reader to understand this chapter and the knowledge shared, the following
requirements must be met:
Post-implementation
In the world of SharePoint, you hear so many horror stories about migrations and
implementations that go wrong. These stories are very true. On my way to a customer site,
I met a CTO who was doing a SharePoint 2010 implementation. He told me about the
issues he faced while using SharePoint and I explained to him that SharePoint is all about
planning and understanding what you want from it. He also mentioned he was unaware
of some of the best practices I had mentioned, as were his team, and he said they did not
use many service accounts in the configuration. They also had slow-running sites and the
users were confused about how to use the platform.
So on the first day, please do not be surprised by comments or issues you run into, even
in areas you think you have covered thoroughly, because in my experience something
always happens in SharePoint that cannot be explained. But you should
be able to alleviate most issues that plague implementations if you follow this book,
implement my recommendations, and dig deep in the areas mentioned. It is a lot of work
in the beginning, but it pays off in the end.
Looking back at the chapters I have written, I have given you as much information as
I have noted and gathered over the years to help you prepare for this kick-off. It is
up to you what you use and what you consider important or trivial, but from my
experience, you have to take everything into consideration when standing up a new
SharePoint environment or migration. The more thorough you are, the better your
deployment. It takes a lot of handholding in some cases and a lot of patience to move
forward with these types of releases.
Please make sure you have documented all your configurations and designs for your
farm prior to releasing the farm. The more you document, the more information you
have to find resolutions to those issues that may come up. Documentation is the key to
a successful deployment as I have stated. You need a design document and an installation
guide to really make the best of securing the details of your environment. This is necessary
and not trivial. I have been to many companies that have documented nothing about their
SharePoint environments. Make sure your documentation is up to date with any new
changes and make sure to understand what you have built in your farm.
Now that we have that out of the way and have verified that we have documentation, have
done testing of the new farm, and trained our users on the new features in the new version
of SharePoint, we are now ready to release the farm. We should have updated all required
information in all documents and stored these documents in a safe place, with roles
assigned for who can access, edit, and delete them; this is the number-one priority the day
before day one. Some of these documents may change during the next few days, so we
need to be prepared to update any information necessary.
Do not change anything without documenting what you are changing, no matter how
rushed you are or how small you may think the change will be.
Our teams should also be prepared and ready to manage whatever situations may
arise from this release of SharePoint. This should be their top priority for at least the
first two weeks to a month. The teams should be briefed and this should include the
following groups:
• Server administrators
• Management
If you are a one-man shop, then you and your manager should be prepared to handle any
calls that come directly to you; calls related to the new environment should be filtered as
priority calls. If your team is large, you should already have had meetings on how to handle
calls and a process flow covering how help desk calls should be routed. We also need to
make sure you have guidelines on how these calls should be documented. We will get into
these areas later in this chapter.
As an example, you can see in the following figure a guide for how to provide separation
of SharePoint duties:
As you can see, communications with users at this time are very crucial for the success of
your implementation. Going back to the preceding figure, we have segregated control of
the environment based on roles; there are other roles involved as well that are outside
of this example. The example also shows the duties of the farm admin role, who will make
sure to check the servers, Central Administration, and any other server areas related to
those resources that present the platform and services.
Then, as stated in Chapter 2, Planning and Architecture, we should have designated users
specific to the departments or site collections that could be technically able to handle the
management of the site collection. Again, as stated, this helps minimize calls to
the help desk, as tickets local to a site collection can be tracked and handled by an admin
who understands the department's layout, needs, and security.
One of the other project tasks we talked about was to hire a third-party vendor to help
with the deployment and help with tasks in the post-implementation. This would be good
for any company to provide an objective, second set of eyes on the process to make sure
all things have been captured and make sure you are on the right track. This is especially
needed for one-man shops that may not have all the skill sets needed to capture all details
as they may not be as experienced in some technical and project functions.
As part of this process, we should build a quick SharePoint site that everyone can access
to see posts about common issues. This gives the help desk and site collection admins a
way to deflect calls for issues that already have a published fix the user can apply
themselves, and a way to keep track of the issues that have been called in.
The blog post template would be the best way to show this information as it could be
a set of step-by-step instructions posted in SharePoint. You can then have the user follow
up after they went through that process or go through it with them while on the phone
if necessary. The good thing about them having the link to this site is that they will know
where to go before calling you for help, alleviating unnecessary phone calls and duplicate
tickets. They can also pass this information along to other employees they know have the
same issue.
Another way to be successful during post-implementation is to divide and conquer your
tasks with the teams you have available. We should also dedicate a page or two on the
same site as the common-issues area we just described, giving the other teams an isolated
place to manage help desk calls. This could be ServiceNow, another service platform, or
SharePoint; handle it whichever way you prefer. Sometimes isolating this work from the
normal help desk makes sense, as it brings a more cohesive and collaborative way to make
sure things are being corrected, documented, and seen by everyone on the teams working
through the fallout from the implementation.
The following is my list of out-of-the-box and superuser functionality that will need to
be tested:
• Create a list
• Create a library
• Upload a document
• Create a column
• Create a content type
• Create a 2010 and a 2013 workflow
• Test existing workflows
• InfoPath Forms
• Create views
• InfoPath publishing
• Create a subsite
• Create a site with a template
• Test site settings (Search settings and Term Store)
• Test Search
• Incoming and outgoing mail
• Notifications in sites and SharePoint Designer workflows
• Find a file (List Search)
• Test People Picker
• Test advanced permissions
• Managed Metadata
• Navigation from Quick Launch and Top Navigation
• Test farm solutions deployed to the site collection
• Verify web parts and other areas in the site are functioning
• Test content and structure
• Syncing tasks
• Open in Windows Explorer
• Recycle Bin
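Parts of this checklist can be scripted as a quick smoke test. As a sketch (the site URL is an example, not from the book), creating and then removing a throwaway list proves basic list functionality on a migrated site:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Example site URL; creates a throwaway list to prove the site is writable
$web = Get-SPWeb "http://sharepoint.contoso.com/sites/finance"
$listId = $web.Lists.Add("MigrationSmokeTest", "Post-migration check",
          [Microsoft.SharePoint.SPListTemplateType]::GenericList)
$web.Lists[$listId].Delete()   # clean up after the check
$web.Dispose()
```

Running this in a loop over the site list from your tracking spreadsheet gives a fast first pass before the manual checks.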
As part of this exercise, you will want to document the functionality for each site tested
using a spreadsheet or SharePoint list. This could be a spot check or even a full site
list check throughout the farm. It all depends on whether you have enough time and
resources to do this. Again, this is very important and will reflect on the support needed
on Monday when your users come back to work. This gives you a heads-up on errors,
which then gives you a heads-up on fixing issues and documenting how you fixed those
issues. I have had some sites not work after migration using out-of-the-box functionality,
so please be sure to test these functionalities before releasing.
This exercise with testing also gives you a heads-up on those sites that are working
as they should and those that are not. You also can communicate with the department
(the site collection admin) that owns the site to have them look through the site as well
just to check whether there are some things they do that are more customized so they can
verify those specifically and report on them.
Also, capturing images, technical errors, functional errors, and time and date information
related to those errors and details on the site where the error happened is also helpful
for the tech support team when troubleshooting. There is nothing like having all the
information you need so you can really dig deep to find a solution immediately. This
should be communicated to the site collection admins, so they know what information
they need to supply if this is farm-related.
Mind you, you could also have some changes in the content structure that you would
need to communicate. When migrating and making changes to any environment, it is
a convenient time to make content changes as well; it could be a new site or even some
newly developed functionality. However, go-live day is not the time to first tell your users
about these types of changes. Site mappings and structural changes should have been
discussed earlier by the users' site collection admin, who should relay these types of events.
These changes should have been done way earlier, during your design phase, and any
changes documented at that time. A lot of times admins may change the way sites are
mapped during migration because it is a convenient time to make the change, and will
move content based on the company's structure. If we have done these types of changes we
should have communicated those changes to our users long ago.
Take for instance an acquisition – you may have many sites or changes that need to be
merged into your site map. Content and structure could be a big issue as part of your
design. This is a good example of when these types of changes would come into play.
When keeping old URLs, we also need to make sure that our old farm stays online, in
read-only mode, as we migrate content. Verify that this has been done and is functioning,
because we do not want any data changing, which would force incremental migrations
using ShareGate or other migration tools. The worst thing that could happen is a user
reaching the old site and making changes there, not knowing they are in the
wrong location.
These types of issues can also be handled with changes to site colors and messages on
the front page of the site to make sure people know this is the old site. The reason for
locking these sites down is that the data must not change while we migrate the latest and
greatest content from the old farm to the new farm.
We also need to change the URL for the old farm and update the DNS entries for these
URLs as well after the cutover. If we need to keep the old farm online and in read-only
mode for a period, we need to make sure that this task is completed during this weekend
of change. This farm could be up and running for another month or more depending
on your requirements. Once that point is reached, these servers can be backed up and
removed or repurposed as resources for other applications.
Use the following script to set all site collections in a web application to read-only (the
web application URL here is an example; substitute your own):
$WebApp = Get-SPWebApplication "http://sharepoint.contoso.com"
foreach ($SPsite in $WebApp.Sites) {
    $SPsite.ReadOnly = $true
    # Or: Set-SPSiteAdministration -LockState "ReadOnly" -Identity $SPsite.Url
}
These changes could take time, as they need to propagate over the network, so all users
get the new changes when they log in through the network. These changes consist of IP
addresses, network segmentation, routing changes, firewall changes, load balancers, and
other areas that may need updating to successfully connect the environment to this
new URL.
We must also consider the old location where, in this configuration, we may be updating
the name of the URL and associating it with a new IP address. These changes can also take
time to be propagated successfully over your network. The bigger your network is, the
longer it may take for these changes to take effect.
When using path-based site collections, you will see that there is a one-to-one
relationship between the site and the DNS entry. Path-based sites append the site
collection path to the end of the web application URL. Preparing your web application is
easy in this case, and it starts when you create the web application: make sure you have
the right URL and the correct choices for the host header, authentication type, and ports,
all of which are in the form we used to create the web application. These are very
important to get correct because, if not, you will have to recreate these web applications,
in some cases with information that may not match what you had previously. Naming
your web application well is also important so that you know exactly which web
application you are working with.
DNS cutover could mean something totally different to those using host-named site
collections. This type of implementation takes a whole other process to complete.
To start, each site collection has its own URL, which is a management nightmare if you
look at it at a high level. Other configurations are involved with host-named site
collections as well: you have to use realms, many DNS entries (one per site collection),
and HOST file entries on all servers in the farm. If you are integrating a cloud-based
authentication solution, this can mean extra steps in creating and securing your web
application, as these also require setup and installation.
In Figure 8.3, you will notice that when keeping the same URL, there are not many
changes needed. You could add a server or two to the new environment but under the
surface there are many things that need to be configured:
Figure 8.4 – An error rendering the site (first day, same URL)
As you can see in this example of a user going to a migrated site for the first time,
when migrated from cloud service to cloud service, the sites could come out garbled or
dismantled, and some of the sites may not load fully and look to be in a loop depending
on the type of authentication used. Authentication can be messy as well because if you
have a cloud provider, this authentication must be in sync with your old farm and your
new farm as well. So, there are potential risks with this implementation that will be
unforeseen, but most can be corrected using the cloud provider's administration
interface, as we could with Okta. This situation happened to me on a huge migration
project, and after six migrations, it suddenly reared its ugly head.
I have seen this happen only with cloud-based authentication solutions. You will see
a Signing in to SharePoint loop, as the user is trying to authenticate but cannot due to
the authentication process looping.
To resolve this issue, have the user clear their browser cache, the SSL state cache, and
the local DNS cache (FlushDNS), and even the Windows cache, so that they start fresh
when accessing the new URL location. I have had this happen, and it is not fun when you
have over 90,000 users hitting the issue and all needing this information. It is best to
include this in your planning, so everyone is aware prior to the migration, by testing sites
within the production environment before users gain access. In my case, we were not
afforded the time to test before migration.
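The client-side cleanup steps can be scripted for the help desk. As a sketch (assuming Windows 8 / Server 2012 or later for the DNS cmdlet, and integrated Windows authentication for the Kerberos step):

```powershell
# Clear the local DNS resolver cache (equivalent to ipconfig /flushdns)
Clear-DnsClientCache

# Purge cached Kerberos tickets (only relevant for integrated authentication)
klist purge
```

The SSL state cache is cleared from Internet Options (Content tab, Clear SSL state) rather than from the command line.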
You might say, well, why wouldn't you test this in DEV or TEST? Well, you cannot test
in these environments unless you set up the exact same URL in another environment
that has its own Active Directory and network structure. The URL can be recreated in
an environment where you have a DNS entry for this web application without causing
conflicts. Most likely, looking at my experience, not too many people have the luxury
of having a whole separate network environment for DEV and TEST. They are usually
servers on the same network using the same Active Directory structure.
URL changes using alternate access mappings (AAMs) are an alternative to creating
additional web applications; as I stated, you would use one IIS site for both the default
URL and the alternate access mapping if there were two URLs you wanted to use for
access to a site. You can have up to five URLs pointing to one IIS site, and there are five
zones you can use when adding alternate access mappings.
Make sure to catch any of these mappings on the farm you are migrating from, so you can
include them in the configurations on the new farm. They are useful when you want to
just have an alternate URL to bind against a main web application with a different URL
name, maybe using it for a specific department or group of users.
You may also need to add new alternate access mappings on the new farm; remember
that you may have several web applications to update. This applies to traditional,
path-based web applications as well.
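As a hedged sketch (the URLs and zone are example values, not from the book), adding an alternate access mapping with the SharePoint cmdlets might look like this:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Example values; substitute your own web application, alternate URL, and zone
New-SPAlternateURL -Url "http://intranet.contoso.com" `
                   -WebApplication "http://sharepoint.contoso.com" `
                   -Zone "Intranet"

# List existing mappings so you can carry them over from the old farm
Get-SPAlternateURL | Format-Table IncomingUrl, Zone, PublicUrl
```

Run the Get-SPAlternateURL step on the farm you are migrating from first, so every mapping is captured before cutover.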
In the case of Figure 8.3, we are looking at an internal on-premise SharePoint farm
and a migration from one set of servers to another. When using the server-named web
applications, this also gives us a way to test our content by connecting to it, testing
authentication, and looking through content to make sure all content is migrated and
services are working correctly before we even start the DNS change process. During
testing, the URL does not matter much. This helps with time management, as we can
start days or weeks before the final migration; the URL can be changed later when
we need it.
This method of server-named web applications is an easy way to set up a web application
quickly without the need for networking teams to get involved. The server already has a
DNS name and IP, and is configured on the network, so utilizing this makes sense just to
do some testing. It also gives us an opportunity to have users go in and test and review
content, workflows, and other areas of their sites we may not know about. They also have
the opportunity to do a spot check before the sites have fully been migrated.
In my experience, this is not done often enough and the communication from the
SharePoint teams is usually nonexistent for this type of sanity check. In most cases, it
happens because of time constraints and a lack of understanding of how necessary these
tests really are before going live. Therefore, it is very important to plan and not rush
during an implementation like this where it affects almost everyone in your company and
even external users in some cases.
We also need to remember that if we are using DEV and TEST environments, we can
migrate to those environments first before bringing content into production.
As part of our migration process, this should be number one on our list of tasks as this is
so important. Please make sure to set up these environments even if they are just one or
two servers.
In my experience, most SharePoint admins do not have these environments set up to
brainstorm in, test configurations, and develop processes. Even if you just have a small
DEV environment, it helps to be able to test something before bringing it to the
production farm. If you do not follow this process, you may end up releasing a farm
riddled with errors.
Note
One of my pet peeves is migrating directly to production and potentially
having errors in production on services that have not been released yet. Be
wise and create DEV and TEST environments to make sure your content
and services are working as they should before migrating to production,
documenting the configuration for each environment. When you get to PROD,
it should be pretty much perfect. This also gives you a way to test with users'
content prior to releasing to production, using different URLs to do so.
Production should be perfect, or close to it!
So, in our current on-premise enterprise, we would make the DNS changes on the day
you tell the users the systems will be down. If you schedule the system to be down on
Friday at 6 p.m. and you are expecting the system back up at 5 a.m. on Monday morning,
then that window of time is used to migrate your DNS. This gives the URLs a chance to
propagate to the new IP locations they are moving to. The best time to migrate is a
holiday weekend or over Christmas through
New Year's, where there are not many people online due to the holidays.
This also gives you time to make any updates to your firewalls and other areas in your
network that could be a concern, especially when it comes to external access. This also
gives you time to configure (if you haven't already) and test your load balancer using the
URL. Propagation also has time to finish and update all network changes, so users do not
get errors when trying to connect to content.
As we are talking about internal enterprise changes, we also need to remember that we
also need to test our connectivity to our new URLs and even the old URLs if they are
staying in place. This should also be done during this downtime. We want to make sure we
do not have any missed DNS entries or weird errors, especially when using ADFS or other
intermediate applications for authentication. If you are leaving your old farm in place, we
need to make sure to test whether the old farm is working with the current or new URL
assigned to it. This is only preserved as a read-only site for verification.
When looking at this scenario from path-based web applications and host-named site
collections, we will see that there is a big difference in how we would utilize our time
during the cutover. If you have a path-based web application and you are migrating that
application, it all needs to come over that weekend. Everything in the content databases
associated with the web application must be migrated all at once.
With host-named site collections, you can look at this very differently. The path-based
web application at the top of the URL does not have to be migrated; only the site
collections with URLs below the web application do, and they do not all have to go
at one time. I have seen instances where there was a mixture of both in a host-named
site collection's web application. In that case, you can move your host-named site
collections as you see fit, but any path-based site collections must move when the
top-level web application is moved. I have seen top-level web applications that had
path-based site collections added by accident, so please check before moving this content.
There are more content strategies for on-premise migrations in Chapter 9, Managing
Performance and Troubleshooting. There are some strategies outlined within the cloud
migration details in the following section that can also be used for on-premise migrations
when it comes to content structuring.
When choosing a cloud provider, you really want to explore what services the company
offers and make sure they fit what you are looking for. Here are a few things you can
pay attention to when hunting for a new cloud provider, to help you find the right
provider for your new SharePoint environment.
From an overall contract position, most cloud providers will send you a long and
complicated SLA. This should be read in its entirety by your management and even
a company lawyer. This agreement will solidify your relationship with your new service
provider, so this SLA is very important. You may have some unique requirements this
provider may not understand or even offer support for. Having a clear understanding at
the start should be your focus at this point before signing any contracts. If you are looking
for compliance at a high level due to your industry, make sure to choose a provider that
meets those requirements.
When the cloud first came on the scene, security was a big concern. This was due to
horror stories people heard from other companies, and because the services had not
matured to meet the needs of high-profile government agencies. Those worries are mostly
gone now, except for maybe a few small gray areas; overall, you can now pretty much
trust your providers with the security of your presence in the cloud.
There are a few things to look for, but one of the most important is how data is
transmitted between your business and the cloud. Is it secure? Does your provider use
some of the best technology to secure your data? These are the questions you need to
investigate, by asking your provider about these areas of concern and by researching
blogs and other companies that may be using this provider's services. Blogs sometimes
give you hints about what shortcomings your provider may have.
Other areas that affect data are transfer speeds and how we get our data from one
platform to another. Ask yourself: will this take me weeks or months to complete? How
much storage do I need on my new cloud platform? Questions like these are key to
discovering areas of concern around connectivity. This even weighs on how your users
will connect to the new services. Does the platform meet your needs? Most likely, yes,
but you must really test drive this platform to see what you uncover. In most cases, it
is going to take you trusting your provider and testing your systems and processes on
the platform after migration. At that point, you will be able to tweak as needed to get
the most out of the platform. I will explain more later in this section about my
experience of moving from RackSpace to AWS.
You also want to see how your new provider handles physical security. Where are they
located? Do they have multiple locations? Are these locations dispersed around the world?
These are just a few things to think about with physical location. Even with those
physical locations, how does the provider handle weather disasters, security breaches,
and damage that could happen due to accidents or robbery? These should be the least of
your concerns: the provider should make you feel at ease when it comes to the
protection of your data, should have disaster recovery in mind, and should be able to
explain how their failover secures your data from disaster. Services could be affected
by these incidents as well.
Providers offer a vast range of services, and in some cases, you may find one that
provides 80% of what you are looking for and another that provides only 50%. Even the
80% provider may be missing 20% of the services you need out of the box. So do your
investigation and choose the best fit for your organization wisely. The best fit may be
two providers, but that comes at a cost.
The thing to keep in mind is that when moving to the cloud, the first thing you want is
room to expand and exploit new capabilities. The goal in all this is to provide services
that expand your business and even bring more continuity to it, using applications and
enhancing business processes. A provider's future goals are another area to look at.
What are they planning to bring to the table in the future? These are all things to
think about during this time.
As stated, when looking at cloud providers, you will come across a new platform where
your current IT professionals may not have the expertise. This is where you look to your
provider to see what professional services may be included in your subscription for their
cloud and how you can get your current employees trained on the platform through other
training providers. This brings up questions such as: where do we start? Does the
provider have these training services available? How long will it take to get the
employees trained? Talk to the provider to get more details on a plan moving forward.
In most cases, they have done this all before and can provide recommendations.
Post-implementation 369
When looking at training and services, we arrive at another issue that brings all of this
together at the bottom line: cost. What is it going to cost me to move from my current
platform to this new platform? How is this going to affect my business and customers?
These are some of the things you need to think about as you walk through the process of
choosing a provider and what can transpire as part of this migration that may affect your
business.
Choosing a cloud provider and a subscription that fits will give you a total sum for
the bottom line, which you can then adjust as you see fit. The big thing to consider is
that, yes, you are moving to the cloud, but moving to the cloud is going to enhance
your systems, streamline your processes, and give you more to offer your customers. So,
the ROI could make a difference in how you see the cloud playing into your migration as
well. Think about these things.
Again, in most cases, you will do an incremental migration, migrating areas within your
environment separately when possible and moving the services to the new platform. The
thing is, I am only talking about SharePoint in this book, but we need to consider other
services that are provided in your current environment. Create a plan of action once you
have chosen a provider. Move your systems as you see fit and create a plan that works for
you and the resources you need to get the job done. If you need help, call a third-party
company or some consultants to help you manage the migration.
Reflecting on this section, we covered the following topics:
• Environmental differences
• Training
• Cost
Please make sure to review and ask questions before you make a move to the cloud. This
is a pivotal move that can make or break your company. One mistake could leave you with
a long, drawn-out migration where a platform like SharePoint is way behind schedule and
you are paying an engineer to migrate the platform: you have paid for their service and
nothing has been migrated. Now you are paying for servers on your current platform and
servers on the new platform. This is not cost-effective at all and can lead to you
spending a lot more than planned.
This exact situation occurred on a project I was on recently. After I analyzed what
happened, I realized I could have saved the company over a million dollars in costs if
I had been on the project from the start. There was more to this story but, at a high
level, I was brought in later in the project to help the current consultant.
I made recommendations the first week I was there, and they were ignored. Fast-forward
four months, and I was on the project with a third-party vendor, just starting to
migrate content. The other consultant had been let go two months prior.
This brings up some great points, but we will not dive into this. I have touched upon a lot
of them in the book already but the points I am trying to make here are the following:
• Planning is everything.
• Choosing the right people is everything.
• Experience matters in most cases.
• Constantly evaluate your position.
• Never overlook a recommendation.
Looking at these points, one stands out to me: planning for cloud migrations with
SharePoint. I have focused a lot on planning in this book because it is what matters
most. For instance, if you have a large amount of data, you already know that the
connection speed could slow down database transfers if you are doing a content database
migration, even when using a tool such as ShareGate.
Depending on the speed of the connection to your current provider's network, you could
see slow transfers of content databases and slow migration of content using a tool. It is
best to let your provider know what you are doing and see if they have other pipelines you
can use to transfer data. As part of that setup, you would want to include secure tunneling
to move the data from point A to point B as well.
In this scenario with SharePoint, you would want to look at your databases as well. You
could come across large databases that need to be reduced to the best practice sizing
of 100 GB to 200 GB, which is a reasonable size to get the fastest transfers from one
provider to another. On premises, this will not have much bearing on your migration,
but over the internet between separate cloud providers, it could be an immediate cause
for concern, especially when you are talking about costs and the ability to move the
data as you planned. As stated, the goal is not to pay for your old system any longer
than you have to. You want to move and be done with the former services, so you do not
have to pay both companies at the same time for very long.
There are services that some cloud providers offer to help with the migration of large
data. Amazon offers a few ways of moving these large databases, called Snowball and
Snowmobile, which give you a way to get those extra-large data files moved faster but
require some physical logistics to make this happen.
These services are hardware-related and you basically get a piece of hardware and copy the
data from your server to this hardware. Once the data is copied, you send it over to your
new service provider and they use it to copy the data to your new cloud instance.
Amazon S3 also offers Transfer Acceleration, which has some requirements you must meet
as well. I used this to do a 15 TB content database migration and it worked like a
charm. Please look for these services online and find out which you can take advantage
of in cases where you just cannot, or do not have time to, downsize the content in your
databases. If your databases are more than 600 GB, you will have an issue getting them
copied over in a decent amount of time. There is an upload process and then a download
process on the other end.
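As a rough sketch of what an accelerated upload can look like from the command line,
the following assumes the AWS CLI is installed, and the bucket name and file path are
example assumptions; Transfer Acceleration must already be enabled on the bucket:

```powershell
# Sketch only: assumes the AWS CLI is installed and configured, and that
# Transfer Acceleration has already been enabled on the example bucket.

# Tell the AWS CLI to use the S3 accelerate endpoint for transfers
aws configure set default.s3.use_accelerate_endpoint true

# Upload the backup file; the transfer is routed through AWS edge locations
aws s3 cp "E:\SQLBackups\WSS_Content_HR.bak" s3://contoso-migration-bucket/
```

On the other end, the same file is pulled down with `aws s3 cp` before being restored
to SQL Server, which is the download process mentioned previously.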
To alleviate some of the stress around content size, there are a few things to
remember. Again, the best practice size for content databases, as mentioned previously,
is 100 GB to 200 GB, but investigating the content to see what you can do to mitigate
issues with content size can help to soften the blow of the time required. This
exercise really helps you to balance the content across databases and reduce their
size. You can even create new databases to help you manage which sites you want to
move, since many sites may not need to move at a specific time. There could also be
plans you need to follow for moving sites by department or even by priority.
With the cloud, we also have hybrid configurations that come into play when you want to
keep your servers on premises or in a cloud environment and connect to a cloud service
such as Microsoft 365. This service, along with Azure, really has some cool tools you
can take advantage of. We will talk about hybrid configurations and how they come into
play in Chapter 10, SharePoint Advanced Reporting and Features.
Let's look at a couple of things you can do to make size less of an issue when copying
a database from one service to another:
Site collection cleanup: This is the easiest way to delete old content, and you should
have done this in the planning stages of your implementation. Verifying that a site is
still being used and is still valid is something you should do right away. If you have
not done this, try to get it done one evening before your migration weekend as you
migrate incrementally. The first step of the process is validating the site for
deletion with the site collection admin and then backing it up so it can be retrieved
if something goes wrong. At that point, you can delete the site collection. The backup
is kept in case there is a need for this site later.
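As a minimal sketch of that backup-then-delete step, assuming the SharePoint Management
Shell on a farm server and example values for the site URL and backup path:

```powershell
# Sketch only - the site URL and backup path are example assumptions.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$siteUrl = "https://portal.contoso.com/sites/oldproject"
$backupFile = "E:\SiteBackups\oldproject.bak"

# Back up the site collection first so it can be restored later if needed
Backup-SPSite -Identity $siteUrl -Path $backupFile

# Once the backup is verified, delete the site collection
Remove-SPSite -Identity $siteUrl -Confirm:$false
```

If the site is ever needed again, Restore-SPSite can bring it back from the
.bak file into the same or a different content database.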
Site collection moves: This is the next option we have. It is not hard to do, but it
takes time and patience to make sure you are validating which sites need to be moved
and which database they are moving to. It does require some updates to your current
documentation so that we know where each site now resides, in case there is a need to
move sites again in the future. To do this, you need to use PowerShell. Before we do,
though, we need to see what site collections are available within the content
databases. We can do that either by looking through Central Administration, using View
all site collections and clicking on each site collection to see which content database
it is associated with and making a list, or by using the following PowerShell scripts:
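The following is a sketch of what those two scripts can look like; the content database
name in the second command is an example assumption:

```powershell
# Sketch only - assumes the SharePoint snap-in is loaded on a farm server.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Script 1: walk each content database and list the site collections under it
Get-SPContentDatabase | ForEach-Object {
    Write-Host "Content database: $($_.Name)"
    $_.Sites | ForEach-Object { Write-Host "  $($_.Url)" }
}

# Script 2: list the site collections in one database with the storage they use
Get-SPSite -ContentDatabase "WSS_Content_Dept" -Limit All |
    Select-Object Url, @{Name = "StorageMB"; Expression = { $_.Usage.Storage / 1MB }}
```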
The difference between the two is that the first script goes through each content
database and lists all the site collections under it, while the second command lists
the site collections and storage used within a single database. Choose how you want to
do this; both can be of help in any situation.
Once you have created the new content database using your naming convention, use the
Move-SPSite PowerShell command to move the site collection from its current location to
the new location (the new content database). See the following code for this:
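A minimal sketch, assuming example names for the web application, site collection, and
new content database:

```powershell
# Sketch only - the database name and site URLs here are example assumptions.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Create the new, empty content database on the web application
New-SPContentDatabase -Name "WSS_Content_HR" -WebApplication "https://portal.contoso.com"

# Move one site collection into the new database
Move-SPSite "https://portal.contoso.com/sites/hr" -DestinationDatabase "WSS_Content_HR"
```

Note that an IIS reset on the farm's web servers is required after Move-SPSite for the
change to take full effect.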
To show you how this would work, take for example a content database named "A", which
holds three site collections, and a new content database named "B", which was just
created and is empty. The total size of content database "A" is 100 GB, and one of its
site collections (shown in red in Figure 8.5) is very large, at 75 GB. We would then
move that site collection from the current content database to the new one to alleviate
the size of the current database. Now that we have created a new database "B", where
this site collection is able to grow, content database "A" has room to grow as well.
This is a good practice to follow, especially when you are moving content from one site
to another. It also helps with incremental pushes of content if you are using a tool
once you get your content database moved to the new location, as those transfers are
smaller, so they do not take as long to move.
One of the things I noticed on AWS, for example, is that just because you set firewall
settings on a server doesn't mean that port will be open for your servers to
communicate through. Other layers, such as security groups, also control this portion
of the AWS network. You do not want to learn this at this stage of the game; it should
have been something you learned during installation and configuration.
Before I install SharePoint, I make sure all firewall ports are open and local policies
are set correctly for all the service accounts needed for a successful deployment. That
way, I do not run into failed services later, or find myself troubleshooting problems
after the release of the environment. Make sure you know your platforms before you
start, or get help from the service provider immediately for any issues or problems.
Not understanding the environment also affects your plans in other ways. There can be
unexpected pitfalls and a need for more training. It can also lead to unexpected
service costs, or migration tools charged at a cost per GB; several things could pop up
at this time that you did not expect. The biggest is having to keep the old servers
online for a longer period than you expected.
Migrating from cloud to cloud comes with costs, even though you may not be using the
service actively. Your provider will understand you're moving, but they will continue to
bill you until you have finished moving or deleting your objects or have asked them to
shut down your subscription areas. So, while preparing your new environment for use,
you need to be preparing your old environment for deletion as well.
This means you need to figure out everything that needs to move, whether it's
databases, folders with files in them, mapped drives with content on servers you may
not know about, or anything else you may need going forward, to protect yourself from
data loss.
As far as migration tool costs go, you must look at this from a migration perspective.
These tools charge per gigabyte, so we must be careful. We can sit here all day and say
all is going to go well, but there is a chance that you start the tool and it doesn't
do what you expect, and you have to restart the process of incrementally migrating that
content to the new sites. This can add a lot of extra cost, depending on how much
content you migrated by mistake. Test within your internal environment using trial
software; the tool vendor will usually give you a certain number of gigabytes to test
with. Whether it's on-premise or in the cloud, make sure you understand the settings
within the migration tool so you do not have to redo any work. The trial time will also
help you figure out how the tool works.
Old content often sits around on SQL servers and, in some cases, is still running on
the database server but not attached to the SharePoint farm. There may also be content
on mapped drives that has been planned as part of the migration. These files need to be
vetted and cleaned up as part of this process.
These areas should have been reviewed, cleaned up, and taken into consideration in your
planning phase, but if you have not done this, you will need to finalize the data you
want to move now. You never know when someone is going to ask for information from a
few years back that may not be online or in backups. This way, you keep databases and
data that may go unused for a long time, but if they are asked for in the future, you
will have them.
With this old content, we need to be able to document the content as well. The following
questions might pop up in your mind:
The answers to those questions need to be investigated, and you need to make sure they
are documented. This goes for service application databases as well. If you have an old
service database, you should keep it and just label it correctly on a backup drive or
other media for review.
If you are in this mode of finding data, you want to make sure you do not have to go
through this again, so applying a fresh start and a clean slate to this new farm
environment is the key to a healthy start. Do not go back to bad habits and do things
the way you used to. If you need help with that, and have resources that can be used to
check behind you, make sure to use them at this time. It is time for a change, and the
beginning of managing this environment properly.
So, after you have found the files you are looking for and copied them to a central
location, what do you do now? You can have your old cloud provider give you backup
space, or provide them with a USB drive so the data can be copied to a backup location.
There are not many choices, and it all depends on how you want to move the data. If you
do not want to use the service any longer, then the easiest way is to have the USB or
storage added to a server or storage hardware and copy the files there. You can have
them send the hardware back to you through the mail, or use OneDrive or another cloud
storage service to move the files, but these all come with a cost depending on the
storage needed.
If you use a cloud location, you then have to take the time to copy the files up and
then copy them down at the other end, which could mean several days or weeks spent
monitoring, the risk of losing your connection, and, as I stated, cost. Timeouts can
happen if the connection is broken, which adds to costs because you must start over,
and that also lengthens the time required to move those files. The choice of method is
best left to you; I can make recommendations, but you know your situation and time
frames at this point in your migration. Just be aware of stumbling blocks and make sure
you understand your cloud environment's policies.
In Chapter 2, Planning and Architecture, we talked about disaster recovery and how to
implement a plan for backing up your SharePoint farm. There are many ways to build
redundancy into your new farm. If you plan to use our outline to develop a Disaster
Recovery Plan, make sure you put some of these options in place to help you build a
good backup and recovery plan. As part of the plan, we also want to take into
consideration all the locations and files you have gathered that SharePoint used or was
involved with in the old farm. These files are important, and we should be backing up
those locations as well.
As far as databases are concerned, we can use SQL mirroring or SQL Server Always On to
build a redundant system for failover. This will help to prevent any data loss. Of
course, this all depends on where your other SQL server is located. If you do not have
a good data pipeline built between those two locations, either locally or externally,
the connectivity will fail. There are requirements around how these systems communicate
externally and the speed needed to sustain connectivity between the SQL servers. Please
investigate these configurations before starting to build a second or third server for
geographical purposes.
Database mirroring and replication work in much the same way. I believe I mentioned
a product from AvePoint, called Replicator, that helps with these scenarios. Please
investigate third-party solutions as well, which may give you more in-depth control
over the information being pushed to another location or server on your network.
Out-of-the-box mirroring works, and so do transfers such as log shipping, but how you
build the systems to support backup and recovery all depends on your requirements.
Check out all the available options before proceeding, and look at the configuration
best practices for these areas before starting any work, as some options may not work
out depending on your needs.
As far as restoration goes, you will need to follow some basic rules around your
database sizes, which should stay around the 200 GB limit. The smaller your databases,
the faster the recovery, so please stick to that as much as possible. When a site
collection is getting close to that limit, the answer is to add a new site collection,
not more disk space. You can add links to the new site collection within the original,
and use the Portal Connection in site settings to make them related.
Labeling your databases properly also helps if there is prioritization during the
recovery, where you have an ordered list of which sites are to come online first, as
this could come into play in some company structures. This has been mentioned
throughout the book: naming is everything, and it stops you from deleting something
that you need. When restoring, do not leave a restored database with a name such as
Restored_11_10_2019. This is a no-no!
Drive selection for your backups will also come into play. Please use a local drive to
provide storage for your SQL backups. Once those backups have completed, copy them to
cloud storage or another server. Of course, this process needs to be orchestrated and
tested, because backups should be captured and then removed so that the next batch has
the space to be captured the next day. If you want to keep more than one day's worth of
backups, make sure you have enough storage to hold that number of days multiplied by
the database sizes being stored.
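As a sketch of that local-backup-then-offload flow, assuming example server, database,
and path names, and the Backup-SqlDatabase cmdlet from the SqlServer module:

```powershell
# Sketch only - server, path, and database names are example assumptions.
# Requires the SqlServer module (Install-Module SqlServer).
Import-Module SqlServer

$localPath   = "E:\SQLBackups"
$offloadPath = "\\backupserver\SharePoint\SQLBackups"

# Back up each content database to fast local disk first
foreach ($db in @("WSS_Content_HR", "WSS_Content_Dept")) {
    Backup-SqlDatabase -ServerInstance "SQL01" -Database $db `
        -BackupFile (Join-Path $localPath "$db.bak")
}

# Then copy the completed backups to cheaper remote storage
Copy-Item -Path (Join-Path $localPath "*.bak") -Destination $offloadPath

# Remove the local copies so tomorrow's batch has room
Remove-Item -Path (Join-Path $localPath "*.bak")
```

A script like this would typically run from a scheduled task after the backup window,
and only after the remote copy has been verified.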
Using less costly disk space for backups is fine if there is not a lot of activity and
your databases are not too large, but you need the right amount of space to store these
files. Using RAID 10 will help to increase the speed of your backups, as it does not
manage parity; this type of disk configuration reads and writes faster. It comes at a
cost, as it is considered a higher-performing disk configuration.
Avoid running backups when other processes are running or when there is a lot of user
traffic on the systems. This could include a process built by a developer that runs at
night, or other maintenance running against SharePoint or SQL Server, such as
PowerShell scheduled jobs, SharePoint custom timer jobs, and other customizations that
may take time to process.
First-day blues
Although I tried to get as much into this book as possible on the configuration,
migration, and management of your SharePoint project, based on my experiences and
lessons learned, there could still be issues that I have not come across. There could
also be things you did not ask about during this process, or that others did not
mention while in meetings with departments or stakeholders; I have been in that
situation before as well. This book provides a list of everything I could think of and
note down, to help eliminate anything being missed during this process.
If you have done those things and understand where you are and what to expect, then
good for you. Either way, the first day is very stressful because you do not know what
is coming. If you have worked with SharePoint long enough, you will understand what I
am saying. You could have triple-checked everything, and still something will happen to
blindside you. So, on the first day, be ready for the unexpected and make sure you are
notified immediately of anything that goes wrong.
You should also have a Release Management Process in place to make sure that you
understand which versions of customizations have been changed during this process, or
updated due to changes in the newer version of SharePoint. The developers should have
already tackled this, but the process should help us understand which versions of
customizations should be deployed in the farm. If you are using Team Foundation Server,
the versions of these solutions should be captured there. If not, make sure the
developers have something in place that supports this. We talk about this more in
Chapter 12, SharePoint Framework.
Think over your steps thoroughly as you go through this process, and if you worked over
the weekend, as you most likely have, make sure you prepare for release day. Get some
rest on the Sunday, or the day before, to prepare for your morning. If your management
gets in at 6 a.m., you need to be in at 4-5 a.m. This is to make sure things are still
working correctly, as you will need to do some spot testing and make sure that any
early users do not have issues connecting to the site. Using a warm-up script helps
keep the site fresh so that the first users to connect do not have to wait for the app
pool to spin up. There are warm-up scripts on the Microsoft Office site that are good
and easy to use. We talk about these more in Chapter 9, Managing Performance and
Troubleshooting.
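A minimal warm-up sketch along those lines, with example URLs as assumptions;
Microsoft's published scripts do considerably more:

```powershell
# Sketch only - the site URLs are example assumptions. Schedule this with
# Task Scheduler to run after the nightly app pool recycle.
$urls = @(
    "https://portal.contoso.com",
    "https://portal.contoso.com/sites/hr"
)

foreach ($url in $urls) {
    try {
        # -UseDefaultCredentials lets the request authenticate as the task account
        $response = Invoke-WebRequest -Uri $url -UseDefaultCredentials -UseBasicParsing
        Write-Host "$url warmed up (status $($response.StatusCode))"
    }
    catch {
        Write-Warning "Failed to warm up $url : $_"
    }
}
```

Requesting each site once forces IIS to compile and cache the pages, so the first real
user of the day gets a fast load instead of triggering the app pool spin-up themselves.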
By getting in early, you can also mitigate any other issues in those first few days of
the release before the first users get into the office. It is always good to be in
early, especially for the first couple of weeks, to make sure things are working before
the users arrive. I have noticed that SharePoint has a way of making the top list of
applications but is never deemed a Tier 1 application. So be ready to be treated like
Tier 1, and expect a busy week or month.
As those first users connect to the system, you will also want to look over Chapter 9,
Managing Performance and Troubleshooting; as stated, warm-up scripts play a big part in
ensuring that the site does not take long to load for the first user of the day. There
are other things you can do to make content render quickly, using database maintenance
and other areas we will cover in that chapter as well.
We also want to check the servers and make sure they are all in place, functioning,
haven't blue-screened, and are performing well, with no error messages in our Windows
logs or ULS logs from overnight. A lot of the time, after the SharePoint timer jobs run
through their schedules (which could be hourly, daily, weekly, or monthly), something
will come up in the SharePoint Health Analyzer.
We should also check the backups and any SharePoint server-side jobs you may have
running. You should have a backup of the systems and sites at the start so that the
content can be baselined before being made available to the user community. If you
notice errors in your backups that were not there before, make sure to document them
and start with the ones that need to be resolved immediately.
Logging errors will be part of your day. You want to make sure you have a site or list
dedicated to this release, and you want to tag all errors you encounter as high,
medium, or low. These errors also need to be categorized and, if they are duplicated,
combined, so you don't have to report a long list of errors to management that includes
duplicates from different users. Show related errors as one error, even if they are not
identical. Set priorities on these errors as well, so that attention is focused on the
errors and issues you record, as appropriate.
As stated in this section, we want to have a site and pages dedicated to our issues.
We should categorize all the issues that are reported and use a system like the one in
the following figure:
There are many team players, as I have pointed out in this chapter, and you will need
time from all of them at some point. Make sure you have met with the teams and gone
over your expectations for each of them. They need to understand what you want from
them during this time, as they will also be supporting other areas within the company.
You do not want to overburden them to the point where they are unable to help you, or
are slow to respond because they did not know the plans for this new environment or
migration project.
In creating the best support effort for your environment, communication and collaboration are key to a successful implementation. The more you can get the team working together and understanding their jobs in this process, the better off you will be. For example, handing off errors to the technical team is part of the process, but handing over user training issues, trivial issues, or duplicated errors is not a good use of their time. Sorting and managing this list of issues is the key to success, and it gets you a quicker turnaround on resolving the errors.
As part of this process, centralizing the incident list is another area of concern. I have seen companies handle implementation and migration cutover days in many different ways, and some of their plans did not work due to a lack of communication between groups and shortcomings in the tools they used to manage the list of incidents.
When the process is split across different solutions hosting the incidents, communication gets lost or misunderstood. It is best to centralize on a system where people from different teams can view and sort the incidents. The reason is that another team may have the answer, as SharePoint Server touches many areas of Microsoft technology. We must be aware of that and make sure we do not leave any eyes or ears out of the loop.
In my experience, SharePoint is the best tool to use here because of its collaborative nature and the way it centralizes information. It also does not take much time to set up a good prototype to support this process. Most people who support SharePoint already know how it works, and they can manage the views and manipulate the data. If you have time, the process can be automated so that when you assign an issue to someone, they are notified and can document the trail of notes and resolutions.
You can also prioritize items and make sure some take higher priority than others in the queue. As shown in the following diagram, your team intersects with many other teams:
Once everything has been completed and you have successfully moved into your new environment, another resource comes into play to help guide the development process: the Business Analyst. In my experience, this role has been mostly nonexistent; I have seen it implemented at only two of the more than 100 customer sites I have visited over the years.
This is a key role in development and in interfacing with the user community. Compared to a site collection admin, or anyone else maintaining the interface between the farm admin and the user community, this is the role needed to guide those communications. The site collection admin still has a key communication role, but the Business Analyst feeds information back to the developers and the farm admin so that the coordination of newly developed solutions is guided and documented correctly.
Their key responsibilities are the following:
• Interfacing with the user community
• Feeding requirements and feedback back to the developers and the farm admin
• Guiding and documenting newly developed solutions
• Helping users and site collection admins design the right solution for the community
The reason I did not include this role in the image is that the Business Analyst is not really part of the technical staff supporting the implementation. The role exists to guide new and in-progress solutions. Business Analysts should have skills to help with the implementation, but their focus is not administrative tasks at the farm level. They help users and site collection admins work through problems and build the right solution to support the community. This is a key role, and more companies should be utilizing this skill set in their collaboration space.
The technical team, again, consists of SharePoint Admins, SQL Admins, Network Resources, Storage Resources, and others who may have a part in this wide-ranging system. As stated, collaboration between these teams needs to be fluid, and responses need to be almost immediate, especially for incidents like the preceding one.
For technical issues like the one described previously, with a total outage of services, we must be careful. We must respond quickly and try to fix the problem as fast as possible. This includes logging the incident in the centralized system and relaying it to the team and management right away. For major outages, I would always include management so that they are not blindsided. In the end, you will save your manager some grief.
Next, I would work the incident to see whether it is an easy fix or a major outage that will take time, so I can communicate that back to the team and management. Once you have figured out the issue and see that you can resolve it, follow your process to fix it. For a major outage like the one described, that could mean acting right away; if the issue only affects a certain department, assess how it affects the farm, and if fixing it means taking down resources, it may be able to wait until after hours. This is all part of your company's internal process, which could include a change management process as well.
At this point, all teams should be on standby and processes should be streamlined. I know that in some companies I have worked with, it takes time to get approval to call outside vendors, vendor customer numbers and/or login information is not readily available, and other procurement factors come into play. At this stage of the game, you do not want these types of hiccups. Make sure your teams are ready and have the information needed to make support calls on the spot.
That help could include outside vendors such as Microsoft when the issue is a little more involved and you need an expert looking over your shoulder. This requires time, especially when dealing with Microsoft, because it takes time to log the call and, depending on your service agreement, you could be waiting up to 4 hours for a response by contract. So, the quicker you find the issue and what you need to fix it, the better positioned you are to give management a solid update.
Again, make sure to report successes and failures immediately. Even if you are working with an outside vendor, keep the updates flowing. To me, it is a bad sign when your manager has to call you for an update instead of you providing one. Proactive updates, even over Teams or email, keep managers off your back and fully aware of the status.
Post-implementation 385
When should you meet with and include management? Throughout the process. A dedicated decision meeting with management, however, should take place if there is a failure your vendor could not fix. That is when you need management to make a decision, as it could mean reworking the configuration, finding ways to work around the farm so that at least some people can continue working, or finding some way to be partially up and running on the new platform.
• Netmon: This is a tool used to monitor communication between the App and Web servers, Active Directory, DNS, SQL, and the client, which in this case would be your end users.
• Event Viewer: These logs are the go-to for SharePoint admins. These logs, which
are located on the server, will point out issues quickly within the OS that is
supporting the application, which in this case is SharePoint. I have used these
logs to troubleshoot issues, but they do not always give you the answer to an
underlying problem.
• Perfmon: This tool is built into the Windows Server and desktop operating systems and can be used to find core performance issues on server and desktop resources. Monitoring all aspects of a SharePoint server farm is important when you are troubleshooting, so do not forget SQL. Counters for the specific server types and services are listed on Microsoft's website.
• PAL: The Performance Analysis of Logs tool is an extension of Perfmon, as it helps you identify issues quickly from a performance monitor capture. It was created by a PFE at Microsoft and is used widely for assessments and other SharePoint reports. It makes your life easier by converting performance files into something you can read easily. Originally distributed via CodePlex, it can now be found on GitHub since CodePlex was archived.
• ULS Viewer: This is a great tool provided by Microsoft, and I would not recommend any of the other tools that provide similar functionality. You can copy logs and send them to others to review (if they are small enough), and the product does not have to be loaded on the server. It is a great tool for finding the source of problems in SharePoint.
• Fiddler: This is a web debugging proxy that I have used a lot in the past, alongside Netmon. I prefer Fiddler because I am more familiar with it, but I have heard great things about Netmon as well.
• PowerShell: PowerShell has many commands that people may be unaware of, like Merge-SPLogFile, which combines trace log entries from all the servers in the farm into a single file on the local computer. Merge-SPUsageLog is also available for merging usage logs.
• SharePoint Designer: This tool is very helpful when looking at site-related issues and changes you may need to make to sites and related files. I have used SharePoint Designer to change the look and feel of a site, rename files, adjust permissions, and much more. It makes life easier when managing sites within SharePoint.
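The log-merging commands mentioned above can be sketched as follows. The output paths are placeholders, and the correlation ID shown in the comment is whatever GUID an error page gives your users:

```powershell
# Sketch: merge the last hour of ULS trace entries from every server in
# the farm into one local file. Paths are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Merge-SPLogFile -Path "C:\Logs\FarmMerged.log" `
    -StartTime (Get-Date).AddHours(-1) -Overwrite

# To chase a single error page, narrow the merge to its correlation ID:
# Merge-SPLogFile -Path "C:\Logs\Correlation.log" -Correlation "<GUID>" -Overwrite

# Merge-SPUsageLog works similarly for usage logs; see
# Get-Help Merge-SPUsageLog for its parameters.
```

Having one merged file means you can open it in ULS Viewer instead of hopping between servers.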
These tools will get you started on troubleshooting issues and finding solutions to problems within your environments. Most issues come from not following best practices, as I have mentioned several times in this book. Microsoft tests SharePoint using these best practices, and the published lists of practices and software limits can really save you when implementing SharePoint. We talk more about how to use a few of these tools in Chapter 9, Managing Performance and Troubleshooting.
If you have all three environments, GREAT! If not, it is time to do things differently. Do not make the same mistakes you made in your last farm environment. Work towards keeping production clean and healthy by eliminating the excess left over from failed development projects and other changes made to server resources without testing first. This goes for Microsoft updates as well, as we apply those updates to the product, the OS, and the SQL servers. You want to test these updates before applying them to your production farm. This is a best practice! Your SharePoint environments should, at the least, conceptually mimic the following image:
If you want to use the same AD domain, make sure to use separate Organizational Units (OUs) and separate servers, with groups for admin, test user, and service accounts, so that the accounts are kept separate. Include something in the naming convention that indicates the environment each account is used for, such as Domain\DevSPAdmin. This makes it easy to tell where a service account should reside, and if it is not in the right place, you can see so immediately and act on it.
To add to that, make sure not to install SharePoint using an account you do not want to own your environment. There is no way to change this afterwards, so using a personal account, or, for instance, a DEV farm account in the PROD environment, is not a situation you want to be in. All accounts should be defined for specific uses and environments. If a personal account was used as the farm admin account to install the service, then once that person leaves the organization, you are stuck keeping that account active for the life of the farm.
Again, this is only to help you make sense of what these environments are used for. This is
something you must do to maintain your sanity when you are managing three SharePoint
farms and any little slip-up could cause you an outage.
Change management processes are an ongoing effort and help to alleviate issues caused by changes to your environment(s). The change process can involve controlling when changes are allowed, setting up approval for the change, and scheduling the resources to implement the change request. Implementing a change management process also allows you to document how the change went in terms of its success and to attach the documents that show how the change will be implemented. Keeping track of changes within each environment also helps with backtracking: you can see what has happened and go back through the history to understand where a mistake could have been made. That history is priceless if you have documentation showing what was done during each change request, especially concerning code and developers.
SharePoint can help you with documenting change requests, and I have put together
simple change request processes that can be used to manage updates to systems,
applications, and services. There are other tools out there as well that do the same thing.
I prefer SharePoint because it centralizes your information and makes it available to those
that you want to see it. So, requests can be made, and the process can be started and
finished using a simple workflow created by your team. Having this process in place is
priceless but you must use it.
Maintaining the environments 389
Zero downtime patching is a way of keeping your environment up and running, without interruptions to your user base, through Windows updates and maintenance processes. One of the big recurring areas of work over the life of your farm is monthly maintenance, which is done to implement changes and updates to features and services within the farm. These updates are published on what is called Patch Tuesday, the second Tuesday of each month.
Note
For more detail on zero downtime patching, consult the Microsoft documentation at https://docs.microsoft.com/en-us/sharepoint/upgrade-and-update/sharepoint-server-2016-zero-downtime-patching-steps.
As stated in prior chapters, you should have already come up with a baseline and schedule
for patches and have implemented the following areas for inclusion:
• Identifying the patches needed (do not forget Project Server, AppFabric, and third-party integrations)
• Leaving time for testing in the DEV environment
• Leaving time for testing in the TEST environment
• Implementing the patches in the production environment
• During the implementation, making sure all servers have received each update
When doing a fresh install, as we discussed in prior chapters, you must be aware of changes in the environment. You will see errors in the Health Analyzer that may point to databases being out of sync or servers missing patches. This is the time to upgrade all servers and all databases to make sure the farm is cohesive. This should already have been handled during your installation by applying the latest patches at that time. If a newer patch has since come out, you can apply that as well if you want, but the farm needs to be baselined at some point.
Important note
This is so important, and I cannot stress enough the importance of making sure
your farm is in good standing. Checking the statuses of the database, product,
patch, and your Health Analyzer after changes have been applied is critical.
Remember to wait a day before checking, or run all the timer jobs that update
the status of these components so you can see the effects.
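Rather than waiting a day, the relevant timer jobs can be kicked off on demand from PowerShell. This is a sketch: the wildcard name filters are assumptions, so verify the exact job names in your own farm with Get-SPTimerJob before relying on them.

```powershell
# Sketch: run the Health Analyzer jobs and the product/version status job
# on demand so the status pages reflect the changes you just applied.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Kick off the Health Analyzer timer jobs
Get-SPTimerJob | Where-Object { $_.Name -like "*health*" } |
    ForEach-Object { Start-SPTimerJob $_ }

# Kick off the job that refreshes product/patch version status.
# The name filter is an assumption - confirm it with Get-SPTimerJob.
Get-SPTimerJob | Where-Object { $_.Name -like "*version*" } |
    ForEach-Object { Start-SPTimerJob $_ }
```

Once the jobs finish, re-check the Health Analyzer and the patch status pages in Central Administration.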
Having a baseline reduces the amount of work you must put in as an admin. It means you do not have to go back and check the farm again, because it is understood that you will be releasing the farm at a baseline you determine. A mistake most admins make is going back to apply updates to a farm and not remembering which servers they applied them to. Documenting and paying attention to detail can be a pain, I know, but you can avoid mistakes by updating all documentation and knowing where you left off if you had to break away before finishing the update task. Updates can also slip through via the Windows admins who control Windows Server Update Services (WSUS), so make sure WSUS is set to only download SharePoint patches, not install them automatically.
When you look at your farm from Central Administration, in the Upgrade and Migration area, you will see a Check product and patch installation status option:
Get-SPProduct -Local
The preceding command shows the version of SharePoint you are currently running. Run after patching, it also refreshes the configuration database with the latest product and patch information from the local server, ensuring the recorded information is accurate. It is the equivalent of running the Product Version timer job; I have had issues where servers were updated but still threw a version error when trying to connect to the farm.
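A post-patching status check along these lines can be scripted. This is a sketch of the common checks; it assumes you run it on a farm server under an account with farm access:

```powershell
# Sketch: refresh and review patch status after an update run.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Refresh the local server's product/patch info in the config database
Get-SPProduct -Local

# Farm build number - compare against the build of the patch you applied
(Get-SPFarm).BuildVersion

# Servers still waiting on the configuration wizard / psconfig
Get-SPServer | Select-Object Name, NeedsUpgrade
```

Any server showing NeedsUpgrade as True still needs the configuration wizard run before the farm is back at a single baseline.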
Note
Again, for any content database that needs an update, you should use Upgrade-SPContentDatabase "database name". Updating services' databases requires different commands for different services, depending on the error given. Please do some research on Microsoft's website.
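The content database upgrade pass can be sketched as a short loop. This is an illustrative sketch, assuming you want to upgrade every flagged database in one sitting; in a large farm you may prefer to run them one at a time during a window:

```powershell
# Sketch: find content databases flagged as needing an upgrade after
# patching, then run the build-to-build upgrade on each.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPContentDatabase | Where-Object { $_.NeedsUpgrade } |
    ForEach-Object {
        Write-Host "Upgrading $($_.Name)..."
        Upgrade-SPContentDatabase -Identity $_ -Confirm:$false
    }
```

After the loop completes, re-check the database status page in Central Administration to confirm nothing is still out of sync.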
Configuration management is a term you do not hear much anymore. I used to hear this
a lot back in the mainframe days in the 1980s and 90s. Today I believe it is still important,
but things have changed so much with technology that I do not see a lot of people doing
this and managing it correctly. It is important to remember we need to make sure to keep
our farms configured the same in each environment. If we are making changes to those
environments, the changes to Central Administration would need to be implemented on
the other environments as well. If I make a feature change, I should change it on my other
environments as well – and document it!
Updates to our network, our servers, and every other area we depend on need to be tested in an environment where change is safe, and all of these updates need to be documented. Pushing changes through the environments can be a challenge if the high-level services a process depends on are not configured in each environment before you implement it.
Say you have not implemented Project Server in your lower environment, and you install
it and never document the steps for production. The next person who implements it in
a higher environment implements it differently than you did. Now we have issues in the
higher environment due to not following the process of documenting, testing, and passing
the working implementation documentation on to the next environment.
Change management and configuration management are very important, and are
here for you to work with to make your life easier. Yes, it takes some extra time but
in the long run, this will help you sleep at night and not be awakened by calls from
developers complaining that the systems are not working correctly. Make sure to take this
recommendation seriously for SharePoint implementations. SharePoint can be a massive
pain if you do not manage it correctly.
Tools are also available to push content from one environment to another. Replication can keep data fresh and keep each environment's content the same as production. This can be a lifesaver in instances where having production content helps you determine the best development process. Here, configuration management is the key to success, because if services running in production key in on certain content, and those same services are not running in a lower environment, you will see errors and the content will not be in a good state, which could cause issues that you cannot recover from.
The tools you can use range from SQL log shipping to AvePoint replication services, and I am sure other services and third-party tools can come into play here as well. My experience has been with AvePoint, and although it works in general, you should check whether it will work for your specific situation and requirements. AvePoint makes great tools that fill in features missing out of the box, such as replication, so please ask them for a demo. A couple of my past customers used this product and things went well. Cost might also be a factor at this point, if you have not thought about it already. Talk with your management and see whether this is something they can provide to help you with migrating content.
The bottom line is to apply only proven concepts to this environment. Even this method is not always foolproof. You may have developers who have developed and pushed code through testing in other environments, yet it still fails in production. In these cases, you need to figure out what happened and, if you cannot find a way to correct it, back out your code. This situation can really complicate things.
The difficulty with this approach is that developers are not perfect, and a lot of the time they expect to be able to rebuild an environment from code that was pushed through testing, but there are gaps. Say you added a new AD group to a site and gave that group permissions to content within the site during a Sprint. Later, the way that content is used changes; perhaps what started as a list is moving to its own subsite because the group needs more functionality. Suddenly you need to make that change – how does the code capture this across Sprints? It will apply everything as it was during the original Sprint, so there could be confusion, for example, over whether to delete the library the group used. Any number of things can go wrong when rebuilding from code.
From that example, we can see there is no reliable way to recreate an environment from code. So, the best approach is not to rely on code alone, and never to implement code that has not been tested. The last thing you need is a major slip-up in a Sprint that changes the way the site was designed. My point here is that you need the lower environments to do your due diligence and deliver the best code and updates to your production farm. Once you have a backup, you are golden, with some caveats for updates from deployed code, but do not get hooked on your code being your backup.
Summary
Going live with your new farm takes a lot of planning and attention to detail. As you have seen, this chapter made many references to prior chapters to make you aware of things you may have missed. A lot of this information was mentioned way back at the beginning, but if you did not take those areas of concern into consideration, you can see how they can come back to haunt you right before deployment. This happens a lot, and if you ran into this issue, you are one of many.
Questions
You can find the answers on GitHub under Assessments at https://github.com/PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/master/Assessments.docx
1. When making the decision to release your farm to the users in production, what are
some of the things you want to check before the cutover?
2. Issues may arise after the cutover. The help desk should have had training to handle
these new help desk calls. True or false?
3. What are some of the basic site functionalities that should be tested before
the cutover?
4. Why do you need a test environment for your SharePoint farm?
5. When using the zero downtime process to update your environment, what is the
key to a successful update process?
6. A user calls and says the SharePoint site looks like it's been dismantled. What
do you suggest they do, as the site was migrated using the same URLs from
another farm?
9
Managing Performance and Troubleshooting
Managing performance for SharePoint Server has not changed much from prior versions.
The same tools and processes you used in the past can be used in this version, though
there are some caveats. Some of the things we did with other SharePoint versions have
become integrated into the configuration of the platform, which gives us a baseline
to make even more performance improvements. SQL and SharePoint settings are
automatically configured in some cases, whereas in the past, we had to do it ourselves.
In this chapter, we will talk about ways to get a baseline configuration documented on
your environment and what tools there are to help you get there. This includes tools that
will help you configure your baseline, as well as things admins forget as part of their
environment. There are many ways to slice and dice this and I will not be able to cover
all possible areas. However, I will point you in the right direction so that you can move
forward with getting your server resources figured out and set up for success.
In this chapter, we will cover the following topics:
• Performance overview
• Troubleshooting tips
• SQL Server performance
Technical requirements
The technical requirements for this chapter are as follows:
• Experience using SharePoint Administration versions 2007, 2010, 2013, and 2016
• Light coding using Visual Studio
You can find the code present in this chapter in this book's GitHub repository at https://github.com/PacktPublishing/Implementing-Microsoft-SharePoint-2019.
Performance overview
All users want speed and reliability from the services provided by SharePoint and your team. I cannot tell you how many times users and management have confronted me about why something is performing slowly. Tuning performance ensures that the platform is solid and working as it should, while also giving users the ability to do their work quickly.
When SharePoint runs slowly or is not reliable, you run the risk of users not being able
to complete work – there could be major work being done in SharePoint you may know
nothing about. I have been on site at a bank where a process ran every day at the same
time. No one knew what was going on. We did some performance checks and figured out
it was coming from a specific department. It ended up being a reconciliation process that
was custom developed for one of the departments.
This type of scenario is common because, as farm administrators, we do not commonly get into the weeds with developers or talk with user groups regularly, but we should. In the previous chapter, I mentioned a hierarchical support model for the SharePoint service. Within that diagram, there were relationships built between the support staff through brown-bag sessions, as well as information sharing between farm admins and users.
This tells us what people are doing, but it also helps us figure out what teams are going
to do in the future. This makes everyone aware of those future processes because they
could come with modifications that are needed for the systems to perform those duties
successfully. This also gives you time to change things as needed to support any content
structures, such as web applications, site collections, and other areas, where you can split
off a new process to isolation, which also helps with performance.
A lot of the time, as admins, we pay too much attention to the high-level requirements, such as Microsoft's recommendations on minimum RAM, CPU, and disk space, but there is a lot more to performance that we seem to forget. Sometimes, even with those minimum recommendations in hand, admins still undercut the server resources, dismissing the stated minimums and running servers on less than the minimum specifications, which is a bad choice.
I am here to tell you to make sure you use those minimum specifications. Microsoft tests
SharePoint Server builds before releasing the product to give us best practices. Those
minimum recommendations are captured for a reason. This is done to set the expectation
level of SharePoint running in an environment where you can be comfortable that the
product will work as it should.
The performance of your farm all depends on how you created it. Here, again, you use
best practices and software boundaries to figure out what is supported. If you are not
separating processes, understanding where and how processes run, and figuring out the
overhead of related processes that may run in the environment, then you are going to
lose the battle. There are also other areas where you can isolate some processes at the web
application level, thus giving a site its own IIS space, or even a site collection level, where
the site collection is in its own content database.
When systems are slowed by poorly performing configurations, depending on the user
or group who encountered the issues while the system was running slowly, you could
either have an angry mob or a single user who just puts in a help desk ticket. A single user
is not that noticeable in the overall company because it is a single incident that is isolated.
There is the factor of who that user is, of course, which depends on their rank and position
in the company.
In the case of an angry mob having an issue that is consistent across departments, this
will get you and your boss's attention. These incidents are the ones you want to avoid if
possible. That is why, in this book, I have been basically preaching about best practices
and making sure you have covered every area of configuration. This will save you in the
long run.
Client configurations come into play as well, as the user's desktop can be a bottleneck
in the environment. The big thing now is that everyone is working from home, so there
is a dependency on connectivity from the home to the office. Then, you have a VPN
where the users connect to the office from home, which could also slow them down.
Understanding these areas as an admin will help you detect what the problem is in
most cases.
When site collection admins or developers develop solutions for SharePoint, they can also
be bottlenecks for performance. There are best practices they should follow as well. We
will touch on them in this chapter, but Chapter 12, SharePoint Framework, will explain
more about how we support our developers and what we need to ask about in meetings
with them.
There are a lot of areas covered in this section of the book that were not mentioned in other chapters. This is because they involve special configurations, or because we need to go a little deeper on these topics. You will see why I chose to keep them separate; I wanted room to expand on them.
There is a lot to cover in this chapter as we dive into configurations for caching, which will give you some insight into how caching can help your users get faster performance from SharePoint. Distributed Cache also helps here, as the configuration for this service is usually left untouched. Alongside this service, there are other caching methods that can really speed up your farm's performance, especially when you are using large files.
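As a taste of what is ahead, one of the simplest caching wins for large files is the BLOB cache, enabled per web application in web.config. The snippet below is a sketch: the drive path and file-type pattern are assumptions to adapt, so compare the element against the one already present in your farm's web.config before editing, and back the file up first.

```xml
<!-- In the web application's web.config, under <configuration>/<SharePoint> -->
<!-- location: a fast local drive with room for cached files (assumed: D:) -->
<BlobCache location="D:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js|woff)$"
           maxSize="10"
           enabled="true" />
```

The change must be made on every web server hosting the web application so that each server caches its own copies.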
So, let's dive into performance for SharePoint!
Client machines can be laptops, desktops, tablets, or mobile phones. The operating
systems that run on them can be Linux, macOS, or Windows, in all different versions
and flavors. I believe that, as a company, we should be issuing those systems rather than
having users buy their own systems to use on our networks. The reason we want more
control over these clients comes down to governance, speed, and performance.
Compromising those areas will cost you in the long run.
We also need to know about these so that we can support the environment, which makes
us a special breed of admin. The reason why I say that is that SharePoint admins deal with
identity, networks, servers, the web, and databases, and that is on top of managing services
and content within the platform. We need to have a vast amount of knowledge to support
our farms.
When supporting connections across the internet, there are many client hardware
configurations that can make a desktop respond quickly or slowly. When your desktop
team is configuring clients for Microsoft 365, for example, and connecting to the cloud,
you want a fast network speed. There are many things we could examine on this topic,
but I am only going to focus on a few areas in this section.
A user's desktop in a large enterprise – and even a small enterprise – should have the
resources available to support the workload and connectivity speeds needed for the user
to complete their daily job. The network team should have enough equipment and local
locations to support the number of users that will be connecting to the network. This
would mean that technical resources need to come up with configurations for networking
components, servers, and PCs and/or laptops that fit the person's need based on their job
description.
If you want to have users test their connectivity to the Microsoft 365 cloud, there is also
a Microsoft 365 connectivity test. The reason I am mentioning this is that you could be
standing up your SharePoint 2019 server in a hybrid configuration. These tests need to
be carried out to ensure connectivity to the cloud works as well. As far as SharePoint
2019 On-Premises goes, this all depends on the data pipes open to the user. To learn
more about this tool and this connectivity strategy, please go to https://docs.
microsoft.com/en-us/microsoft-365/enterprise/assessing-network-
connectivity?view=o365-worldwide.
Note
To test client speed, have the users type in the words Speed Test in Bing;
they will be prompted to start a test from their location.
When evaluating network connectivity, the equipment being used matters. Creating
different configurations of machines can mean different behaviors occurring when they're
being used. So, if someone is a developer and needs to run VMs on their machine, you
may want to give them a machine that has more RAM and CPU than a normal user would
need. This could mean having a four-CPU machine with 32 GB of RAM, as an example.
This configuration could also include other areas of concern, such as disk space. We
should choose the right disk space for the user, be it for a laptop, desktop, or Windows
Surface machine, which are very good as well. With that, we should also have the
maximum speed possible from the NIC card on the devices and/or a speedy wireless
network connection on the hardware. When the client machine connects to the network,
we want the fastest responses to the network and, in turn, we want to have a network that
supports fast speeds for the best response back to the client machine.
When you are hosting SharePoint or using Microsoft 365, you should post hardware
requirements to the users of your service. One issue that comes into play is that you
may not be able to force someone to use a certain type of laptop or desktop, or even
a mobile device in some cases. On top of that, the internet service speed that comes with
the customer's mobile carrier also makes a difference, as does where the mobile device
is located.
So, in this case, not much can be done because you can't force your customers to buy
a certain machine or a certain internet service. At this point, we just have to support
the users with what they bring to the table, and if they do not use the minimum
required hardware, we need to explain that to them when they call in for support. You
must set expectations for connectivity, and especially for antivirus protection on those
client machines.
There are some things we need to be aware of when configuring a desktop, especially for
SharePoint. SharePoint is accessed through the browser, but there are client applications
we need to remember. The following is a list of applications you could set up for the
employees of your company:
• Microsoft Project
• Internet Explorer and other browsers
Client resources support the user but also run applications that developers create. So, let's
talk about developers and their responsibility in the SharePoint environment.
Some of these have been mentioned over the course of this book, but I will explain them
in a little more detail in this section. There are many ways to configure this environment
and you really do not want to miss anything. So, please take the time to review the
expected results so that you can find the solutions and configurations that work for you in
your environment.
Load testing
If you have constructed and run a load test with Visual Studio before, you will have
noticed that it gives you some much-needed verification of how the farm will
support your user base. It also gives you the opportunity to see how your server resources
respond as if users in your company were using the farm on a regular basis. These load
tests are essential for finding performance issues before they happen. These tests
help you tune your farm and server resources so that the farm responds to the needs
of your users. Always remember to run these tools after hours and/or in a segregated
environment.
There are two types of Visual Studio load testing toolkits. One is Visual Studio Online,
while the other is Visual Studio 2013 running behind your firewall, which you can
use for on-premises environments. You can use the online tool with your on-premises
environment, but the farm must be accessible from the internet.
Using the load test as an example, you can easily get your farm set up and configured. You
can migrate or create new content within sites so that the load testing tool can imitate
a user by recording your actions. The recording then reproduces these actions to create
sessions within the environment that simulate as many users as you like. We can add
a user count to simulate as many sessions as we need for testing and then bring them
online in certain scenarios we may find helpful for testing.
When recording your steps, you can go to different sites, download documents, upload
documents, and even test solutions and workflows if you like. Once you've recorded your
steps, you can load as many users as you want so that you can perform those same steps as
many times as you have users. This gives you a real-time test of your environment based
on server resources, using solutions that your company may use every day. You can also
configure options to capture URL criteria and validate error patterns.
There is also an onboarding process you can add to your steps at the beginning, which
brings users on board at a set increment of time. So, if you want to have the first
user onboard and start the load testing script, you can stagger the other users by a set
time interval, such as 1 minute or even a few seconds, at which point new users join the
process and run your load test script. This also tests how many concurrent users the farm
will support, as well as how many users can be running scripts at the same time.
I wanted to make sure I shared this tool because many people do not know about it. This
process helps you baseline your servers and gives you reports on the different processes
running in the environment, as well as information about how the server resources are
responding. At this point, once this load test has been successfully run, you can analyze
the data you've collected and tweak those areas of concern before running the test again.
This will save you so much time trying to tweak performance and getting ready for Go
Live day. There is no need to guess how the servers will perform because you are proving
how they will perform with these tests. However, you really want to use content from
your environment, and even some developed solutions in your environment, to get a real
feel of how these servers will perform. Doing this early in your build process once you've
gathered content to test will give you a head start on your server resource configuration. If
you want to run against your dev, test, and production environments, then do that. At this
point, production is what you really want to understand because that is where the bulk of
your users will be using SharePoint the most.
Load testing link: https://marketplace.visualstudio.com/
items?itemName=SharePointTemplates.SharePointLoadGenerationTool
Warm-up scripts
There is something about being the first person to come into the office in the morning,
especially when you're hitting a SharePoint site. Users want instant results, but with
SharePoint, if you are the first user, then sorry, you must wait. Application pools take
time to spin up. It's like using the sleep mode on your computer: even when you hit
the power button or the keyboard, the computer still takes time to come back to a point
where you can use it.
This is where warm-up scripts come into play. These scripts run on a schedule defined by
the person who wrote the script; this could be a certain time in the morning, or a trigger
based on server resources or IIS components. They are used to keep sites in SharePoint
warm by keeping the application pool fresh and ready to render the site for a user.
Usually, the first person to hit a site in the morning bears the brunt of the application
pool being slow to spin up, which creates wait time before the site comes up for the
user. Instead of having them wait, we can add a warm-up script to keep the application
pool fresh and readily available so that no one ever has to wait for the site to come to life.
This requires either scripting and/or Windows Server scheduling to keep this script active.
To learn more about warm-up scripts, please go to https://gallery.technet.
microsoft.com/office/SPBestWarmUp-Warmup-e7c77527.
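As a minimal sketch of the idea (the SPBestWarmUp script linked above does much more), a warm-up script can simply request each site so IIS keeps the application pools spun up. The URLs here are placeholders for your own web applications, and the script assumes it runs under a scheduled task account with read access to the sites:

```powershell
# Minimal warm-up sketch. Schedule this with Windows Task Scheduler to run
# shortly after the nightly IIS application pool recycle.
$sites = @(
    "https://intranet.contoso.local",   # placeholder URLs - use your own
    "https://mysites.contoso.local"
)

foreach ($url in $sites) {
    try {
        # -UseDefaultCredentials authenticates as the scheduled task account
        $response = Invoke-WebRequest -Uri $url -UseDefaultCredentials -UseBasicParsing
        Write-Output "$url warmed up (HTTP $($response.StatusCode))"
    }
    catch {
        Write-Warning "Failed to warm up $url : $_"
    }
}
```

Requesting the page is enough to make IIS spin up the application pool, so the first real user of the day gets a warm site.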
Storage
Storage, better known as disk space, is next in line, as disks can cause slow responses
when you're reading or writing data to SharePoint. When we say reading and writing
to SharePoint, we really mean SQL Server, because that is where all our content,
configuration, and services live. We are more concerned about this on a SQL Server
database resource than any other server role. The faster your disk can read and write, the
faster information can be relayed back to the user.
We need to choose the proper storage for the proper server resources and, in the case
of SQL, the proper components to house our databases. When looking at Web and App
servers, you should use a disk with good performance since our operating system, logging,
and search indexes will play a big part in how this server responds in the farm. Also, you
should take blobs and other files that will be added to the server into consideration, as
well as the amount of disk space needed to support the servers.
As far as SQL is concerned, we really need to be mindful of what types of databases
SQL Server will support, their size, and how they communicate with each other. If we
break this down at a high level, we have configuration, search, services, and content.
You can spread just these four areas across separate LUNs or dig deeper to narrow
these databases down into more LUNs. Believe it or not, this will bring about greater
performance for the databases and reduce the resources the server consumes to support
the farm.
Disk storage comes in many forms. There are companies that specifically make disk arrays
with proprietary software to help us manage disks and configurations that support our
server and application needs. NetApp is one that I have used in the past to help design
storage that supports a SharePoint server farm. Please check out storage companies that
may have something to offer you that can bring about better solutions for support.
Performance overview 411
Disk types are a great way to gain performance boosts as well, with options for SATA, SAS,
SCSI, and SSD available. There are many choices, all of which bring different methods
of high performance and specific ways to bring stability to your server. We also have
options for RAID, which brings data protection options into play. There are a few different
options here, including ones for RAID 0, 1, 5, 6, 10, 50, and 60. For more information,
please go to https://www.dell.com/support/article/en-us/sln129581/
understanding-hard-drive-types-raid-and-raid-controllers-on-
dell-poweredge-and-blade-chassis-servers?lang=en.
More information on Azure disk types for cloud configurations can be found here;
however, they come at a cost: https://docs.microsoft.com/en-us/azure/
virtual-machines/disks-types.
As you configure your servers, you also need to make sure ALL the logs within SharePoint
and SQL Server are moved to a drive other than your C drive; a full C drive can cause
servers to stop working altogether. Move IIS logs, especially when you have a large user
base. SQL Server Reporting Services and other integrated applications also have logs. Move
them to a separate drive and make sure the drive is big enough to hold your logs and the
search index for the farm. Size is important!
The following link talks about Microsoft's recommendations for storage:
https://docs.microsoft.com/en-us/sharepoint/administration/
storage-and-sql-server-capacity-planning-and-configuration.
Quotas
As we set goals to keep our content at a certain limit, we need to make sure we implement
quotas. Site Collection Quotas play a part in the storage's configuration as well. Quotas
help you set limits on how much content a user can have in a site collection. There are
thresholds you can set here that will help warn users using that site collection that they
may be running out of space.
Microsoft has limits on what you can set, which range from a 200 GB minimum to a 400 GB
maximum. The reason for these limits is to help the environment recover quickly if you
have to restore or perform a migration backup and copy these databases to another server
or cloud service. The smaller you can keep these site collections, the better off you will be
in the long run when it comes to avoiding issues with backups, restores, and performance
due to the size of content databases.
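Quotas can be created and applied with PowerShell. The following is a minimal sketch run from the SharePoint Management Shell on a farm server; the template name, sizes, and site URL are placeholder examples:

```powershell
# Sketch: create a quota template and apply it to a site collection
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$template = New-Object Microsoft.SharePoint.Administration.SPQuotaTemplate
$template.Name = "TeamSites-100GB"          # placeholder template name
$template.StorageMaximumLevel = 100GB       # hard limit (bytes)
$template.StorageWarningLevel = 90GB        # warning threshold (bytes)

# Register the template with the farm's content service
$service = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$service.QuotaTemplates.Add($template)
$service.Update()

# Apply the template to an existing site collection (placeholder URL)
Set-SPSite -Identity "https://intranet.contoso.local/sites/teamA" `
    -QuotaTemplate "TeamSites-100GB"
```

Using templates rather than per-site limits keeps your quota policy consistent, and a change to the template flows to every site collection that uses it.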
When you need to delete a large site collection, keep the following in mind:
• Run the deletion over a weekend, which would alleviate the issue of other users
being affected.
• When setting up the farm, make a rule that all site collections must have a one-to-
one relationship with a content database (this helps with restoring processes).
• Move the other site collections in the content database to a new content database
if there's more than one site collection per database.
• To recapture the disk space after deletion, run a shrink on the database and log files.
Deleting subsites is done within your site collection. Sometimes, these sites get very big
and take up a lot of space within a site collection. At this point, you can see if the site is
really needed, move content from this site to another site, or upgrade the subsite to its
own site collection. This helps reduce the storage in a site collection as well.
As a site collection admin, you now have the option to create site collections. This comes
from a feature made available in SharePoint 2016 called Self-Service Site Creation. This
may be different from what you are thinking, because we have long been able to enable
this feature on web applications; however, the feature now goes further and lets site
collection admins create their own site collections. This does take coordination,
documentation, and database management, so make sure to communicate when and if you
have these features activated.
RAM
As you set up your server resources, RAM is another area of concern as you will want to
make sure you have enough RAM to support the server resources for the farm. Do not use
minimums as your point of reference. It is better to test the load on the servers to see how
they respond, as there are many dynamics that come into play here. There are probably
more things you could consider, but these points have been raised to get you thinking
about how to scale your environment with memory so that you do not have to revisit this
right away.
Virtual environments
When setting up my environment, I used VMs to run all the server resources. One thing
you want to make sure of is that you do not allocate RAM dynamically. This causes lots
of issues with the servers and is not supported by Microsoft. They will basically stop
working with you if you are running your farm with this configuration.
Another thing to be mindful of is the introduction of Distributed Cache. Running
dynamic RAM on a dedicated Distributed Cache MinRole would negate the use of this
service, as its memory has to be configured statically; the RAM cannot fluctuate.
Remember to set your Distributed Cache size to an optimal setting. You can set
Distributed Cache to a maximum of 16 GB.
More information on this can be found at https://docs.microsoft.com/en-us/
sharepoint/install/deploy-sharepoint-virtual-machines.
PowerShell Jobs
New to the scene, or somewhat new, are PowerShell Jobs. If you come from a Unix
or Linux background, then you will be familiar with these types of processes. With
PowerShell, there are two types of code that can be executed:
• Synchronous
• Asynchronous
When using synchronous execution, PowerShell executes the code in a script one line
at a time, completing each line before it starts the next. This is usually how I see admins
run code, as it keeps the code in order so that it can complete successfully. It's an easy
way to write something quickly and think out the strategy in some kind of order, while
not really taking advantage of the performance of the server's resources.
When creating scripts, you may want to consider using asynchronous execution and a
concept called Jobs. Back in the day, I used a product called WinCron for a lot of SQL
Server and Windows Server automated processes, and this type of processing in
PowerShell seems similar. Jobs are great for performance, but only when a script does not
depend on the results of a prior execution in the code. They run in the background and
don't interact with the current session.
There are several parameters you can use when creating jobs.
To learn more about PowerShell Jobs, take a look at the following links:
• https://docs.microsoft.com/en-us/powershell/
module/microsoft.powershell.core/about/about_
jobs?view=powershell-7
• https://docs.microsoft.com/en-us/powershell/module/
microsoft.powershell.core/about/about_remote_
jobs?view=powershell-7
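The asynchronous pattern described above can be sketched with the core job cmdlets. In this example, two independent housekeeping tasks run in parallel background jobs; the folder paths are arbitrary placeholders, and neither job depends on the other's output, which is exactly when jobs pay off:

```powershell
# Sketch: run two independent tasks asynchronously as background jobs
$jobA = Start-Job -Name "IisLogSweep" -ScriptBlock {
    # Total size of IIS logs (placeholder path)
    Get-ChildItem "C:\inetpub\logs\LogFiles" -Recurse -File |
        Measure-Object -Property Length -Sum
}
$jobB = Start-Job -Name "TempSweep" -ScriptBlock {
    # Total size of the temp folder
    Get-ChildItem $env:TEMP -File |
        Measure-Object -Property Length -Sum
}

# Wait for both jobs to finish, collect their output, then clean up
Wait-Job -Job $jobA, $jobB | Out-Null
Receive-Job -Job $jobA, $jobB
Remove-Job -Job $jobA, $jobB
```

Compare this with a synchronous script, which would run the two sweeps one after the other and take roughly the sum of both runtimes instead of the longer of the two.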
Distributed Cache
The Distributed Cache service improves performance for features such as the following:
• Claims-based authentication
• Newsfeeds, micro blogging, and conversations
• OneNote client access
• Security trimming
• Page load performance
This service can be run on any server and can be run in dedicated server mode or in
collocated server mode. In dedicated mode, all services other than Distributed Cache are
stopped on the servers that run Distributed Cache. In collocated mode, the service can
run together with other SharePoint services, and it's up to you to manage which servers
the service was started on. Microsoft recommends using dedicated mode when you
deploy the service.
If this service is not configured properly, you could experience a serious performance hit.
We need to determine the capacity of the server that will support the service, as well as
how we want to install the service so that it runs well within the environment. The
number of users that will be supported determines how Microsoft classifies your
deployment size. The basic minimum memory for a SharePoint server in a farm is 16 GB.
We want to make sure that the size of the server is correct so that we do not have any
misconfiguration issues with this service.
If you have less than 10,000 users, then your farm is considered small by Microsoft's
standards. If you are running up to 100,000 users, then you are looking at a medium-sized
farm. Large farms contain up to 500,000 users. You should look at how much RAM should
be configured for your service based on size. 1 GB would be plenty for a small farm, 2.5
GB would be great for a medium-sized farm, and 12 GB would be fine for a large farm.
In a small farm, you have the option to use a dedicated or collocated server
configuration. Medium-sized farms are best run using a dedicated configuration, while
large farms need two cache hosts per farm. When running the Distributed Cache service
in a collocated configuration, you cannot run Search services, Excel Services (2013), or
Project Server services on the same server.
Configuring the Distributed Cache service is easy, but you really need to understand how
to configure it. To change the service's configuration with any command, for example, to
change the memory allocation, you need to stop the service first, complete the changes,
and then start the service again.
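That stop/change/start sequence can be sketched in PowerShell as follows. This is a sketch to be run on a cache host from the SharePoint Management Shell, and the 2048 MB value is just an example; check current farm documentation before running anything like this in production:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# 1. Gracefully stop the Distributed Cache service on this host,
#    transferring cached data to other cache hosts first
Stop-SPDistributedCacheServiceInstance -Graceful

# 2. Change the cache size (in MB) for all cache hosts in the farm
Update-SPDistributedCacheSize -CacheSizeInMB 2048

# 3. Start the service instance back up on this host
$instance = Get-SPServiceInstance | Where-Object {
    $_.TypeName -eq "Distributed Cache" -and
    $_.Server.Name -eq $env:COMPUTERNAME
}
$instance.Provision()
```

Skipping the graceful stop is a common cause of the cache cluster ending up in an unhealthy state, which is why the sequence matters as much as the size value itself.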
From my experience, the best configuration for performance is to run the Distributed
Cache service in dedicated server mode. This takes the processing burden off the web
frontends and is the recommended approach. You can run in collocated mode if you want
to, but again, there are some services that cannot run on the same server when you're
using that method.
To find out more, please go to https://docs.microsoft.com/en-us/
sharepoint/administration/manage-the-distributed-cache-service.
Request Management
The Request Management service lets SharePoint control how incoming requests are
routed. As an example, if a request came in for a web application and some of your web
frontend servers were busy, the service would choose the best performing server at that
time and route the traffic to it. The request manager provides three functional
components:
• Request routing
• Request throttling and prioritizing
• Request load balancing
The service also provides manageability, accountability, and capacity planning support
for specific types of services, such as Search, User Profile, and Office Online Server, so
that the load balancer alone doesn't determine where a request needs to be routed. This
makes routing less complex, and the service can identify servers that may be failing or
running slowly.
The Request Management Service can also be scaled out as needed so that as an admin
creates and implements those routing associations in the servers in the farm, the load
balancer can quickly determine and load balance at the network level.
The service can be started within Central Administration or by using PowerShell. You can
use the Set-SPRequestManagementSettings cmdlet with parameters such as the
following to change the properties of the Request Management service:
• RoutingEnabled
• ThrottlingEnabled
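A minimal sketch of enabling these settings on a web application follows. The URL is a placeholder, and the exact parameter values accepted by the cmdlet should be checked against the cmdlet reference for your SharePoint version:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Retrieve the Request Management settings for a web application
# (placeholder URL) and enable routing and throttling on it
Get-SPWebApplication "https://intranet.contoso.local" |
    Get-SPRequestManagementSettings |
    Set-SPRequestManagementSettings -RoutingEnabled $true -ThrottlingEnabled $true

# Review the resulting settings
Get-SPWebApplication "https://intranet.contoso.local" |
    Get-SPRequestManagementSettings
```

Routing rules and machine pools are configured on the same settings object once the feature is enabled.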
The service can be run in two different modes. The first is in dedicated mode, where the
web frontends are dedicated exclusively for managing requests. These servers would be
created in their own farm, which would be located between the SharePoint farm and the
hardware load balancers.
The service can also run in integrated mode, where all the web frontends run the request
manager. All web frontends means all servers, as in APP and WFE. All the web frontends
receive requests from the hardware load balancers and then determine where the request
needs to be routed.
Using this service takes a lot of planning and configuration. You do not want to use this
service on smaller farms, but rather on large farms where you need this type of dedicated
request routing. Once the service has been started, it adds information to your content
databases; moving such a content database to a new farm without the service can cause
problems.
I have seen some weird behavior when moving a content database from one farm with
request management to another farm without the service. You must be careful where you
start because it may impact you once you migrate. We noticed that requests that were
made to sites that had site collections from a farm with the service running were trying
to route to different web applications, even when we entered the right URL in a new farm
without the service.
When I worked on a project where the engineer was using this service, we ran into some
issues. I was told by a well-known SharePoint architect to not use this service at all. He
said he had not seen it used before and that there was no legitimate rationale for using it
right from the get-go. He also said that there had been a lot of bad implementations when
using this service, though it is an option you can use for performance if you dare.
To find out more, please go to https://docs.microsoft.com/en-us/
SharePoint/security-for-sharepoint-server/configure-request-
manager-in-sharepoint-server.
Document size
When determining document sizes that could potentially be used within your
environment, note that they affect many areas. One of these areas is performance, which
we will talk about in more detail in the next subsection. These documents can also be
affected by search settings. I believe lots of people overlook this setting and then notice
they are limited by the results they get from the search.
The maximum document size in SharePoint 2019 Server is 15 GB. The maximum
download size setting within the Search component, which matters especially when large
Word and Excel files are used, is set out of the box to 64 MB. This setting can be raised to
a maximum of 1 GB if needed for both Excel and Word documents. You will capture more
metadata on these large files, and the search results will produce more relationships for
user search results.
This setting can only be changed using PowerShell, via the following search service
application properties:
• MaxDownloadSize
• MaxDownloadSizeExcel
Use these properties to raise the maximum document crawl size. This is explained in more
detail on the Software Boundaries Microsoft page: https://docs.microsoft.com/
en-us/sharepoint/install/software-boundaries-and-limits-0.
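A sketch of raising both properties to their 1 GB (1024 MB) maximum follows. It assumes the SharePoint Management Shell on a farm server, and the service name in the final line is my assumption for the SharePoint 2019 search service; verify it on your servers before restarting anything:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa = Get-SPEnterpriseSearchServiceApplication

# Check the current values (in MB)
$ssa.GetProperty("MaxDownloadSize")
$ssa.GetProperty("MaxDownloadSizeExcel")

# Raise both limits to 1 GB
$ssa.SetProperty("MaxDownloadSize", 1024)
$ssa.SetProperty("MaxDownloadSizeExcel", 1024)
$ssa.Update()

# Restart the search service so the change takes effect
# (OSearch16 is assumed to be the SharePoint 2019 service name)
Restart-Service OSearch16
```

Only content crawled after the change benefits, so plan a full crawl once the new limits are in place.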
Blob Cache
One of the things you should do when planning your farm is look at the file sizes of the
documents that you will be supporting within the farm. Meetings with users should have
given you some clue of what is being used now and what is being planned. If your users
plan to upload large files, first, you need to make sure your network can support it. You do
not want to have your users uploading 1 GB files over a 100 Mbps network.
Blob Cache is one of the easiest ways to bring performance gains to your farm, especially
for farms serving large files within sites. The maximum file upload size for document
libraries is now 10 GB, while list item attachments can be a maximum of 50 MB in
SharePoint On-Premises. In Microsoft 365, the limits are 15 GB for files in document
libraries and 250 MB for list attachments. Reviewing those size limits, you can see that
10 GB is a pretty large file. Processing such files for a user to view in the browser is
process-intensive and heavily dependent on the user's connectivity to the farm.
Using Blob Cache helps by providing a separate flat drive space (not a database) to hold
copies of those large files so that when users request them, they are pulled from the drive
space instead of the content database. The blob storage space is located on the SharePoint
servers, so you need to make sure you have storage to support those files. With this
method, the content database still holds the site collection, but documents and images
that meet certain criteria for file type and size are cached on disk. Users request that
content through the same user interface, but the content is rerouted so that it is rendered
from disk from the Blob Cache location.
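The criteria mentioned above are defined in the web application's web.config file via the BlobCache element. The following is a sketch; the drive location, file-extension pattern, and size are examples you should adapt to your own farm (maxSize is in GB):

```xml
<!-- Sketch of the BlobCache element in a web application's web.config.
     location: flat drive space on the SharePoint server (placeholder path)
     path:     regex of file extensions to cache
     maxSize:  cache size in GB -->
<BlobCache location="E:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js|mp4|pdf|docx|xlsx)$"
           maxSize="10"
           enabled="true" />
```

The element ships disabled; flipping enabled to true (and pointing location at a drive with enough space) is what activates the cache for that web application.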
There are many third-party companies that offer Blob Cache solutions, such as AvePoint.
StorSimple is a product from Microsoft that also works with on-premises SharePoint
environments. You can find out more about this solution at https://docs.
microsoft.com/en-us/azure/storsimple/storsimple-adapter-for-
sharepoint.
In SharePoint Server 2019, Microsoft added a user interface for configuring this feature,
which can be found within the Manage Web Application area of Central Administration:
https://thesharepointfarm.com/2019/05/sharepoint-2019-blobcache/.
Object Cache
As you saw in the installation setup for SharePoint, we have dedicated cache accounts
called SuperUser and SuperReader. These accounts support the Object Cache, which
stores properties about items in SharePoint Server. You cannot use the out-of-the-box
accounts because of check-in/check-out and claims authentication issues that will arise.
The cache is used as part of the publishing feature and helps reduce the load on SQL
Server to improve performance. These accounts must be set up; otherwise, your
SharePoint farm can and will come to a complete stop one day with no warning.
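Setting these accounts up can be sketched as follows; the web application URL and the domain account names are placeholders for your own dedicated service accounts:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$wa = Get-SPWebApplication "https://intranet.contoso.local"   # placeholder
$superUser   = "i:0#.w|contoso\spSuperUser"    # claims-encoded placeholders
$superReader = "i:0#.w|contoso\spSuperReader"

# Tell the web application which accounts to use for the Object Cache
$wa.Properties["portalsuperuseraccount"]   = $superUser
$wa.Properties["portalsuperreaderaccount"] = $superReader

# Grant the matching web application policies so the cache can read content
$roleFullControl = $wa.PolicyRoles.GetSpecialRole(
    [Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$roleFullRead = $wa.PolicyRoles.GetSpecialRole(
    [Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)

$policyUser = $wa.Policies.Add($superUser, "Object Cache Super User")
$policyUser.PolicyRoleBindings.Add($roleFullControl)
$policyReader = $wa.Policies.Add($superReader, "Object Cache Super Reader")
$policyReader.PolicyRoleBindings.Add($roleFullRead)
$wa.Update()
```

Note that the SuperUser account gets a Full Control policy while the SuperReader account gets Full Read; mixing these up is a common cause of access-denied errors on publishing sites.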
Search Services
As part of SharePoint Server, there are services that provide search results based on the
content that's been crawled. This is called Search Services, and we talked about it
briefly in Chapter 6, Finalizing the Farm – Going Live. Some of this information may be
redundant, but it's good to recap at this time. To make content available based on certain
criteria, a user associates metadata or column information with content that is stored
in SharePoint sites. The crawler function within Search Services runs the crawl process
and finds all the metadata needed to index that information so users can search for and
find results. Be sure to stagger your search crawls across sites so that they finish
processing before the next scheduled crawl begins.
Choosing physical or virtual servers for this service is key as this server can be either. If
you have a large amount of data and are planning to expand even more, then you may
want to consider using a physical server for this service. Cloning, as we talked about
previously, will help in defining the components of Search Services for specific server
resources, which creates better support for this service.
Isolating Search Services would be yet another way of helping the performance of
a SharePoint farm. This is because this service is very intensive on RAM and CPU.
Search Services can take over your server because it requires the resources to crawl
through all your content. Remembering to divide the search components into separate
servers is key as well since they can be spread out among server resources.
You should isolate these processes either on several servers or a couple of servers.
However, most importantly, you should configure these services so that they only run
on application servers and not the web frontends where users request access to web
applications. The services can be split up, especially when you need redundancy.
If you have more than 10 million items, you must clone your search services, which would
make them redundant and boost performance for the service as well. This is the most
intense service you have, and it can make or break your farm. The faster you can crawl and
gather information, the better the results for your users. I haven't covered a lot on search
in this book, but please research further if you wish to make this service stable.
To find out more, please go to https://docs.microsoft.com/en-us/
sharepoint/install/software-boundaries-and-limits-0.
Excel Online
As you may have noticed, Excel Services is not listed in our services for the farm anymore.
This service has been moved to Office Online Server and is now called Excel Online. This
is a nice move on Microsoft's part. This service, especially when used heavily, needs some
isolation as it can weigh heavily on the server's resources.
Another thing I used to see all the time is that no one took advantage of segregating
Excel Services throughout the farm. Where this service is used heavily, you want to
confine it to designated locations of a web application, or even segregate it into a separate
web application purely for big data and reporting. This takes the service out of the most
used areas of the farm and isolates it in one web application, which then has its own
IIS site and databases associated with content and resources.
The reason I would separate at the web application level is heavy use of the service
and/or related services. These could be SSRS or other third-party integrations that need
their own processing and relationships to services in SQL and SharePoint. These can
be separated by server as well, since you can integrate SharePoint with several SQL
servers if you like. You can also run certain web applications only on designated web
frontends to get good separation of processes.
Some things to think about when configuring Excel Services are as follows:
• Use trusted file locations to segregate the use of this service within web applications.
You must specify HTTP or HTTPS in your URL, and the location limits where Excel
can be used within your farm.
• Control the location type, which can be a SharePoint Foundation location or
another type such as UNC or HTTP.
422 Managing Performance and Troubleshooting
• Configure whether child libraries and directories are trusted within the
trusted file location.
• Set session management options for timeout settings and request duration.
• Choose the calculation mode for Excel data in a file, which can be manual
or automatic (except on data tables).
• Allow or disallow external data connectivity, for libraries only or for
embedded connections.
• Set a refresh rate for the data; you have many options here.
If you are looking to use Excel Online heavily, look at some of my recommendations to
spread out your processes so that they do not collide and create a performance bottleneck.
We will talk more about Excel Online's features and its installation in Chapter 10,
SharePoint Advanced Reporting and Features.
InfoPath
If you are still using InfoPath in your farm for users, this brings up another farm
service that gets overlooked as a performance bottleneck. There are areas in Central
Administration that control these services and are sort of hidden. I believe most admins
just keep the default settings and never look at these controls.
The issue I have seen with this service is the settings that are used to support it. I have seen
some InfoPath users or developers, especially those new to InfoPath, create forms that are
just one-page forms with many form inputs that collect a lot of data. In the real world, you
really want to split your form up and make views so that the data settings in the InfoPath
service do not have to be made larger for session data.
In some cases, I can see when you may have to do this, but if you are keying in on
performance, you really want to limit that data per session from each user. You also
want to watch what connections to services you are using as these can slow down forms,
especially when you're querying data while the form is open. As an example, interfacing
with User Profile Services can help you auto-fill forms, though this can produce issues,
depending on how much data you are autoloading. You can also connect to SQL Server
databases, but again, you have to look at how much data you are pulling into
the form.
Doing this can delay the form from opening due to the service trying to connect to get
the data or user you requested, or even connecting to other resources within the domain.
These services can be slow sometimes and depending on how many users are using that
form at once, this could be a big burden on the farm and server resources.
Depending on how you create your form, InfoPath may fetch data when the form is
opened, such as a user's manager or phone number, or anything else related to the user
profile, a separate list, or a database. This process must go out to that data source,
gather the required information, and then load it inside the form.
This process can be quick but also a little cumbersome. So, it is better to load as little data
as possible so that you do not have to change your settings and put a burden on the server
resources. These are just a couple of tips about InfoPath to think about as you review
the settings within Central Administration for InfoPath.
Now that our logs have been set up, we need to proactively monitor our SharePoint
farm. There are different ways to monitor SharePoint. As a starting point, you can use
the Central Administration website, which provides health and reporting features for
SharePoint. There are also the System Center Management Packs for SharePoint, which
provide out-of-the-box insights for monitoring SharePoint natively.
Then, there is PowerShell, which may take you some time to get set up. However, it is
a very powerful way of making sure your farm and its services are running well. There
are also SQL monitoring tools that you can purchase to monitor the tasks that are being
processed on this server, but SQL includes an activity monitor that will feed processes
in real time:
• The Health Analyzer: How to mitigate issues that have been captured
• Reporting: Using the reports within Central Administration to find solutions
to issues
• Performing regular checks to inspect the current state of the environment
• Defining what is important to your business and what to monitor more frequently
The configurations for logging we set up in the installation are native to SharePoint only.
SQL Server is also monitored from a configuration standpoint to help you promote
a configuration that supports performance and best practices. SharePoint uses databases
that capture logging, as well as diagnostics and usage. There are still server components
outside of SharePoint that we use to support troubleshooting and to monitor our services.
All of the services we use for SharePoint run on the servers we provision, and how we
provision them makes a difference in terms of how the farm performs.
In SharePoint 2016 and 2019, MinRoles come into play. They help us manage roles within
our environment so that we can support our server resources, because MinRoles limit
what services can run on each server. The preconfigured and dedicated MinRoles that can
be added to our environment are Front-end, Application, Distributed Cache, Search,
Custom, and Single-Server Farm, along with the shared roles Front-end with Distributed
Cache and Application with Search.
These were covered in Chapter 2, Planning and Architecture. We need to make sure we
choose the right MinRole for the server resource. We also have server role compliance,
which reflects the role of each server in the farm. This Central Administration function
monitors which services are running on the specific MinRole or server resource. If a
server is running a service that is not accounted for by the restrictions within the
MinRole, the server will be deemed non-compliant.
Diagnostic logging collects data that is used to troubleshoot SharePoint farm
environments. When you set up logging, you will see there are various ways to capture
data. These will be provided in a drop-down list. You can select from a list for the least
critical events to monitor in the event log and do the same for the trace log. There are
also categories you can select from where there's a list of areas you can choose to report
on. Since we configured our logging in Chapter 4, Installation Concepts, we will give some
background on logging here, as well as what to look for in SharePoint that can help with
performance and troubleshooting:
• Update the drive location where the logs are being captured.
• Restrict the trace log disk space's usage (set a quota).
• Back up the logs regularly as part of your server backup.
• Enable event log flooding protection (limits repeating events in the event log).
• Only use the Verbose setting when troubleshooting (more logs will be created and
there will be storage issues).
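Most of these settings can be scripted with the standard Set-SPDiagnosticConfig cmdlet; the following is a sketch with example values (adjust the path and quotas for your farm):

```powershell
# Move trace logs off the OS drive, cap their disk usage, keep two weeks of
# history, and enable event log flood protection
Set-SPDiagnosticConfig -LogLocation "D:\SPLogs" `
    -LogMaxDiskSpaceUsageEnabled:$true `
    -LogDiskSpaceUsageGB 20 `
    -DaysToKeepLogs 14 `
    -EventLogFloodProtectionEnabled:$true
```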
The levels you can set within the diagnostic logging feature limit the types and amounts
of information that will be gathered within each log. The event log levels are None,
Critical, Error, Warning, Information, and Verbose; the trace log levels are None,
Unexpected, Monitorable, High, Medium, and Verbose, with VerboseEx available
through PowerShell.
Usage and Health Data Collection is a logging method based on how SharePoint is
used from an overall perspective. The information that's gathered here is assembled
into a database and placed in the logging folder on the server. This data is used to create
the health and administrative reports that are shown on the Central Administration
Monitoring page. This includes search usage and performance counter data.
You can change the settings for these jobs so that they run more or less frequently.
Disabling a job is only recommended if you really understand what that timer
job is used for. In some cases, timer jobs are not needed, and we do disable them.
As an example, Dead Site Deletion may be disabled because you may not want SharePoint
to delete sites automatically. This timer job sends notifications asking whether an unused
site is still needed and, if there is no response, eventually deletes the site. You may
not want this type of automatic service on your farm, so you would disable this
timer job.
It is also important to run Health Analysis jobs after maintenance so that you can see the
effects it has on your health report. The Health Analysis rules verify and update health
collections so that if anything is not a best practice, it will post that to the health report
page, which is called Review Problems and Solutions. By doing this, you can get the status
of the environment and clean up any loose ends.
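Rather than waiting for their schedules, you can start the Health Analysis timer jobs from PowerShell after maintenance; the job title filter below is an assumption based on the default job names:

```powershell
# Run every Health Analysis timer job immediately
Get-SPTimerJob | Where-Object { $_.Title -like "Health Analysis Job*" } |
    ForEach-Object { Start-SPTimerJob -Identity $_ }
```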
As part of timer job management in Central Administration, you can check job status and history, review job definitions, modify schedules, and run, enable, or disable individual jobs.
Always remember to run PSConfig after maintenance to make sure your farm is
configured cohesively and successfully. Do not just run the minimum PSConfig
command; instead, run the full command so that the configuration is fully updated; for
example:
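The full command referred to here is the commonly documented form; run it from an elevated command prompt on each server in the farm:

```powershell
PSConfig.exe -cmd upgrade -inplace b2b -wait -cmd applicationcontent -install -cmd installfeatures -cmd secureresources -cmd services -install
```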
PSConfig finalizes the configurations for the servers and databases, which then get
updated. You may see changes in the GUI of Central Administration in some cases or
even sites. Next, we will look at monitoring in Central Administration and how that helps
us manage the farm proactively.
You can also change the way the rules are viewed and add more information using
a custom view. You can also export this list to Excel or use the RSS Feed for the list.
There is also a Reanalyze Now button on each rule, and some rules can be repaired
automatically through the Health Analyzer. Not all repairs are successful every time,
and some may require manual intervention. The big thing is that the Health Analyzer
can help identify potential issues.
• Keep your content under 200 GB unless you're told otherwise by your Farm
Administrator.
• Split subsites across site collections when needed to load balance your storage.
• Organize your subsites across site collections using department or project names.
• Do not turn on features you do not need in your site collection.
• Always uninstall third-party features you do not use or that have been abandoned.
• Tag content and come up with metadata that makes sense to describe the content.
• Use Active Directory groups within the security model for your site collection.
SQL Server performance 433
These are just a few things site collection administrators can do to support their
sites based on best practices. This team really helps Farm Admins do their jobs more
efficiently and helps mitigate issues within the environment that are site-related. If a farm
admin is administering a farm with over 2,500 sites, you do not want them managing
the user community as well. Site Collection admins are a plus in this scenario and
a recommendation.
To keep SQL Server performing well, focus on the following areas:
• Database maintenance
• Dual NICs to separate traffic
• Isolating search indexing
When creating a maintenance plan, one of the key responsibilities is to make sure
you have enough disk space to hold backups of the content you are managing in the
SharePoint farm. If you do not have enough space, you will not be able to hold backups
for all your databases. You can use cheaper storage for this location, depending on
the size of your content and services; if you have a large amount of data, you may
want faster-writing disks so that backups complete in a reasonable window.
When looking at the size of our databases, what could cause them to grow? What
processes would start making them larger? The answer has many credible sources:
new documents and list items, version history, recycle bin contents, and auditing
data all add up over time.
I have managed small and large farms where I had up to 3,000 site collections. When
you look at space, you may need the same amount of content you have at the moment,
or double or triple, depending on how much data you want to retain for direct access
if something was to go wrong.
When creating a maintenance plan, the options we can run against our databases to keep
them healthy include shrinking them to clean up unallocated space. This helps bring the
databases back to size after site collections or many documents are deleted. Deletions
can also leave indexes fragmented, which can cause performance issues. We can handle
fragmented indexes by scheduling index defragmentation as part of our
maintenance plan.
You can also add a cleanup step to the maintenance plan to delete old backups.
This ensures that your drive space is free of old data that has already been moved
to offsite storage. Make sure to keep backups off your C drive where the OS is located.
I have often seen SQL implemented with one huge drive, and lots of things
can go wrong in that configuration.
Another way to help with performance is to separate service and user traffic.
Web frontend-to-SQL traffic can be routed through one NIC, while the other
NIC carries user traffic. This dedicates each network to supporting its own
processes and can be implemented as a VLAN or even a physical network.
Choosing a physical server rather than a virtual one for the SQL database server is
another way of boosting performance. Physical servers perform better because they
do not contend with other VMs or host processes running on the same hardware.
Troubleshooting
Troubleshooting SharePoint is a task you will fail at many times, but the good news is that
there are others who have experienced the same frustration. It is such a big platform and
it touches on so many other applications and systems within the enterprise that it takes
dedication to support this platform. You must really understand SharePoint under the
hood to take the right steps in fixing issues. As you may have noticed, we keep harping on
about best practices. If you can keep those in mind as you go through the implementation,
you will be able to configure a stable farm.
This section will be light due to the vast arena of issues we could come across and because
this could be a whole other book in itself. If you wish to understand troubleshooting in
SharePoint, we recommend the book Troubleshooting SharePoint by Stacy Simpkins. It
goes through many scenarios to show you how to deeply analyze SharePoint issues. It also
provides tips on how to figure out common issues within SharePoint.
Troubleshooting 435
So, let's start this section by providing a list of areas you should check when you're
monitoring your server daily. Performing daily checks and even consistent checks
throughout the day, if you're not using a monitoring tool, is recommended. SharePoint
can change at any moment, even when you think things are OK. Just make sure you look
at these areas on each server and SQL Server at least once a day to verify you have a good
standing of the farm, especially after changes and updates have been made:
• Health Analyzer
• Windows Event Logs
• Windows Application Logs
• Task Manager
• Task Manager – Performance
• Scan a recent diagnostic log
• View Administrative Reports
Finding the issue is the tough part of working with and administering SharePoint.
In some cases, issues are easy to fix, but in others, you will find yourself surfing the net
for what others have done to fix the same problem. In the SharePoint 2003 and 2007
days, this type of information was scarce: the product did not really take off until
MOSS, and once that happened, Microsoft had to catch up and post information.
Back then, there were not a lot of blogs or support to help you; you had to work
through the issues and fix them yourself.
Now, there is a lot of readily available information and you have many resources to choose
from. However, be aware that some fixes published in blog posts may not be correct.
Always test the scripts you see in posts and alter them as needed. Test each script
in a lower-level environment first to see its effects. Do not trust every blog containing
scripts and other fixes that, in your gut, do not seem correct, and beware of posts
that go against best practices or ask you to run a script that updates or changes
a database directly.
We mentioned some great tools in the previous chapter but did not really dive into how
they are used. We will show you the real-world scenarios of how we found the answers
to solving issues within the farm. The following tools can help you troubleshoot issues
in SharePoint:
• Fiddler: Client-side logging software used to capture events with clients connecting
to SharePoint
• Process Monitor: Logs all processes running on the server and what the process
is processing
• Network Monitor: Logs all the network activity of the resource
• ULS Viewer: Gathers information from the ULS logs and color codes information
that may be critical
• PowerShell: Can be used to help manage the ULS files by merging logs, starting
a new log, setting the log level, clearing log levels that have been changed for
VerboseEx, and so on
When troubleshooting, the first area I always check is the Windows event logs, either
on the client or the server. Most of the time, you will find the issue, or a related one,
in these logs right away. Sometimes it is hidden among many entries, so you must be
thorough in your search. If the issue is sporadic, it could be one line in hundreds of
entries, so it is often best to recreate the issue while logging to see whether anything
comes up in the Windows logs, using the other tools mentioned as well.
When capturing logs, you want to set your ULS log level to VerboseEx. The first thing you
must do before starting VerboseEx is check your drive space. Depending on the size of
your farm, you can generate some big logs if you are not careful. Therefore, we have other
commands to help us manage these logs instantly.
So, let's say we have an issue we are about to recreate. The first thing we must do is set the
log level to VerboseEx, as shown here:
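The command that belongs here is the standard Set-SPLogLevel cmdlet. Setting the trace severity to VerboseEx captures the maximum detail:

```powershell
# Raise ULS tracing to the most detailed level (remember to clear it afterward)
Set-SPLogLevel -TraceSeverity VerboseEx
```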
This will help us capture everything SharePoint spits out so that we can gather details
about anything happening within the environment. Once we start the log at that level, we
must immediately recreate our issue. The reason we must recreate the issue is that we want
our log to be as small as possible when capturing information about the error. So, let's
perform a quick recreation and then start a new log:
New-SPLogFile
Now, you will need to clear the log level, which means setting the log level back to the
default and not capturing verbose logging anymore:
Clear-SPLogLevel
If you are unsure of what server the error is happening on due to it being a service or
something you just don't understand, then you can also merge the logs to find errors for
the same period of time:
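This is done with the Merge-SPLogFile cmdlet, which collects ULS entries from every server in the farm into one file; the path and time window below are examples:

```powershell
# Merge the last 15 minutes of ULS logs from all farm servers into one file
Merge-SPLogFile -Path "D:\Logs\FarmMerged.log" -Overwrite `
    -StartTime (Get-Date).AddMinutes(-15) -EndTime (Get-Date)
```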
Always remember to put all the output from PowerShell and other growing files (logs) on
your D drive or a drive that is always separate from the C or OS location. We do not want
to fill up our C drive with this type of information – especially growing files that can fill
up drive space quickly and stop the server from running. I have seen many horror stories
just on this simple use of drive space.
We could really go into a lot of areas here while going through all the methods and
applications we could use to troubleshoot issues, but I believe the best use of your time
would be to look at some of the scenarios I have come across in my career and give
you those fixes. So, in the next section, we will go through some of the issues I have
encountered in the past and tell you how to get past them.
Behavior:
Users see issues when connecting to the site: the page is blank, never connects, or spins
as if it is trying to connect but cannot. You may also encounter broken user interface
issues, such as the page not loading fully.
Fix:
Clearing the cache on the client machine is the answer to this issue. Clear everything in
the browser cache in IE since this seems to be the most vulnerable cache in this scenario.
Google Chrome and Firefox seem to work well in most cases, but you may have one or
two users who need to do the same in those browsers as well.
You also want to clear your machine cache and profile cache. Some information may be
stored that could be corrupted as well. Then, if all else fails, clear your SSL cache in the
browser. This seems to resolve the issue for those with stubborn desktop configurations.
Please follow these instructions:
Note
If you are not an admin on your computer, please contact an admin so that
they can do this for you.
1. Clear all browser caches completely (if you're using Chrome, make sure you choose
all the caches and not just the last 24 hours).
2. In Windows 10, open Internet Explorer.
3. Click on the gear icon (tools) and click on Internet Options.
4. Click on the General tab.
5. In Browsing History, do the following:
• Delete Browsing History (select the check boxes in the popup for deleting all types
of information).
• Click on the Browsing history settings and open View Files (there were still
a bunch there when I did this).
• Delete all those files.
1. Open My Computer.
2. Click the address bar and paste in %USERPROFILE%\AppData\Local\
Microsoft\WebsiteCache.
3. Delete everything within this location.
4. Click the address bar and paste in %APPDATA%\Microsoft\Web Server
Extensions\Cache.
5. Delete everything in this location.
6. Then, from the command line, run ipconfig /flushdns.
7. Reboot your computer.
Behavior:
When using PowerShell to mount the content database or the GUI within Central
Administration, you get an error when you try to add content databases to web
applications.
Fix:
Use PowerShell to add the content database to the web application and use a new
database ID:
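A sketch of the command, using Mount-SPContentDatabase with the -AssignNewDatabaseId switch; the database, server, and URL names are examples:

```powershell
# -AssignNewDatabaseId generates a fresh GUID for the database,
# avoiding conflicts with an ID the farm has already seen
Mount-SPContentDatabase -Name "WSS_Content_Restore" `
    -DatabaseServer "SQL01" `
    -WebApplication "https://portal.contoso.com" `
    -AssignNewDatabaseId
```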
Fix:
Uninstalling and reinstalling the application is the first test you should do to see if there is
just something wrong with the application. If this doesn't work, then you should delete all
SharePoint Designer caches. You can do this by following these steps:
1. Open My Computer.
2. Click the address bar and paste in %USERPROFILE%\AppData\Local\
Microsoft\WebsiteCache.
3. Delete everything within this location.
4. Click the address bar and paste in %APPDATA%\Microsoft\Web Server
Extensions\Cache.
5. Delete everything in this location.
6. Then, from the command line, run ipconfig /flushdns.
7. Reboot your computer.
To move your site collections, create a new content database using Central Administration
(make sure you document your moves during this process). Move the select site
collections to one or many new content databases to ensure they can be moved quickly
and easily:
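The move itself is done with the Move-SPSite cmdlet; a sketch with example names follows (the destination database must already be attached to the same web application):

```powershell
# Move one site collection to the new content database
Move-SPSite "https://portal.contoso.com/sites/projects" `
    -DestinationDatabase "WSS_Content_New"
# Run iisreset on the web frontends afterward so the change is picked up
```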
Notice that the second entry is for search index files. This is how we set our location for
the search index and Temp space for the crawler. If you choose the default option, it will
locate everything on the C drive. However, when you set up your search service, you can
change the index location but not the Temp space for the crawler. To update the Temp
space, we need to do the following using the registry:
HKLM:\Software\Microsoft\Office Server\16.0\Search\Global\
Gathering Manager
Update the registry with the new location for the Gathering Manager and reboot your
machine. You will see that information will gather in the new path and that the OS drive
will be free from clutter and growing index files.
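A sketch of that registry change in PowerShell. The value name DefaultApplicationsPath is an assumption based on common guidance for the Gathering Manager key; verify it on your server before changing anything:

```powershell
$key = "HKLM:\Software\Microsoft\Office Server\16.0\Search\Global\Gathering Manager"

# Point the crawler's temp space at a non-OS drive (example path);
# the value name below is assumed -- confirm it exists under this key first
Set-ItemProperty -Path $key -Name "DefaultApplicationsPath" -Value "D:\SearchTemp"

# Reboot afterward so the Gathering Manager picks up the new location
```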
Behavior:
Naming conventions matter for the databases that support SharePoint. It is easy to
see how someone could delete the wrong database because of the way it was named.
I have seen names such as Temporary Database and Restore_12-11-2015 that turned
out to be live databases, WSS_Content_Test, which was a live database in production,
and a host of other names that were deceiving when I read them.
Fix:
Before restoring any database, come up with a naming convention. When you have your
naming convention, restore the database from a backup and in the properties, put in the
correct name you would like for the database. Once you've restored the database, all MDF
and LDF files will be named with the name you specified.
For MDF and LDF updates, if you are moving files from one server to another (which I
do not like to do because if the file gets corrupted, you have nothing to fall back on), you
would just rename both files with the name you want and attach them to the new server.
If the content database is already attached to the farm and you need to update the name,
just disconnect it from the SharePoint farm (make sure you find out the location of the
MDF and LDF before detaching them), detach the database from SQL Server, rename the
MDF and LDF files, and then reattach the database to SQL Server. Once you've done that,
you can right-click and rename the database to what you named the MDF and LDF files,
unless you want someone to be confused.
I've had several users who had profiles from the Active Directory (AD) domain and
from a third-party cloud authentication solution in their user profiles. The problem was
that none of the AD domain accounts had email addresses associated with the profiles,
only the third-party solution. So, the user was creating workflows and using workflows
that were created by the AD domain profile that had no email addresses. None of the
workflows would work consistently. In the end, all the caches, including the SSL and
SharePoint Designer caches, had to be deleted for this user to authenticate correctly.
This also presents an issue when users log in to their device using an AD account and
when using a federated account to log in to SharePoint. Sometimes, the system gets
confused. I have seen situations where a user has created workflows using their Active
Directory account and have workflows with the federated account listed in the workflow
view of SharePoint Designer.
$site = Get-SPSite "https://portal.contoso.com"   # example URL; use your own site
$web = $site.OpenWeb()
$mail = [Microsoft.SharePoint.Utilities.SPUtility]::SendEmail($web, 0, 0,
"lewin@xyz.com", "Testing from PowerShell",
"This is a test from PowerShell and server APP1")
In my case, the SMTP mail was working. All ports were working correctly, even with
a custom port, 587. All was working well but no emails were getting to the users from any
workflow, even though Alerts were working. I was scratching my head because I did not
understand what was happening.
So, I started investigating further and noticed that there were some changes to the server.
Upon accessing Control Panel and Uninstall Software, I noticed that three applications
had been installed over the weekend. My users noticed the change in mail delivery
over the weekend, and even more so on Monday morning.
So, I screenshotted the applications that were installed and used Process Monitor to
log processes on the Central Administration server, where outgoing mail is processed.
In doing so, I noticed that one of the new applications was blocking all SMTP traffic.
I uninstalled the application and voila – the SMTP mail started being delivered again.
I also notified the security team to make them aware that this was the cause of my issue.
Coordination between teams is very important so that everyone is aware of changes.
When you want to create a migration plan, you need to know more information.
PowerShell comes in handy for finding out how big a site collection is, as well
as a content database's total size.
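A sketch of those checks, using properties from the SharePoint object model (run this in the SharePoint Management Shell):

```powershell
# Size of every site collection, in GB
Get-SPSite -Limit All | Select-Object Url,
    @{ Name = "StorageGB"; Expression = { [math]::Round($_.Usage.Storage / 1GB, 2) } }

# Approximate total size of each content database, in GB
Get-SPContentDatabase | Select-Object Name,
    @{ Name = "SizeGB"; Expression = { [math]::Round($_.DiskSizeRequired / 1GB, 2) } }
```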
Fix:
Use the PowerShell Move-SPUser cmdlet to move the account from one domain
to the new domain. This will give the user rights to access the server. Most of the time,
I have found that this is due to the account not migrating properly to the new domain.
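A sketch of the move, with example account names (for claims-based web applications, the identity would carry the claims prefix, for example i:0#.w|OLDDOMAIN\jsmith):

```powershell
# Migrate a user's identity from the old domain to the new one
$user = Get-SPUser -Web "https://portal.contoso.com" -Identity "OLDDOMAIN\jsmith"
Move-SPUser -Identity $user -NewAlias "NEWDOMAIN\jsmith" -IgnoreSid
```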
That's it in terms of scenarios and fixes. Now, we will conclude this chapter.
Summary
Performance is one of the most important things you can implement as part of your
SharePoint farm and environment. Without it, you will hear from your user community.
I have done my best to provide as much information as possible within the confines of this
book. Make sure to follow the guidelines we set here and always look at how you can tune
your environment based on the ups and downs of concurrent users and services, which
could peak at a certain time of the day.
Monitor your environment daily and understand how it is performing. You should also
pay attention when tickets for issues come in. This can tell you a lot about your farm and
when it has the worst performance. Home in on those time frames and see what is going
on; for example, you could have peaked on CPU or memory. Investigating issues like these
is crucial to getting in front of problems instead of waiting for them to happen.
Troubleshoot issues with the tools listed in this chapter and make sure to look at obvious
locations first before gathering logs and searching through them. Following best practices
during configuration saves you from having issues in the first place, so the more you do in
the beginning, the less you have to fix in the backend.
If you have delivered the environment and are still having issues, this is the worst situation
you can be in. Make sure to test and know your services work before you deliver them.
Another thing to watch for is coming behind someone else's work. If someone else set up
the farm, make sure to follow along or review all their configurations before delivering the
farm. You can get yourself in trouble if you do not because if just one service doesn't work,
everyone will be looking at you, not the technician that left after they did the installation.
In the next chapter, we will review reporting and the other features and integrations
that are available to support it. We'll also talk about what cloud services are available for
reporting via hybrid connectivity.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
1. When you're planning for performance, what are some areas you can key in on that
support physical or virtual servers?
2. What can be a key issue when a user connects to the company VPN?
3. Developers can contribute to performance issues. What tool can they use to help
mitigate issues with memory leaks?
4. What is the name of the tool that's used to onboard users using a recorded script
and receive performance reports before we can go live?
5. Which SharePoint service supports OneNote client access?
6. When large files are kept outside of the content database, what caching solution
should you use?
7. If my application pool is slow to respond every morning, how can I keep the
application pool live?
8. What are some specific disk types we can use to support the physical performance
of our physical server configuration?
10
SharePoint Advanced Reporting and Features
SharePoint 2019 has several advanced reporting features. These advanced reporting
features include Excel Online, Power Apps, Power Automate, Power BI, SQL reporting
services, and Visio and Visio services. In this chapter, we will explore some of these
advanced reporting features in SharePoint 2019. Some of these features also exist in
earlier versions of SharePoint, but you will see differences in how they are configured and
integrated into SharePoint 2019.
SharePoint Server 2019 brings the cloud closer to customers and vice versa. The cloud
features Power Apps, Power BI, and Power Automate, all of which are now available
locally and bring the power of the cloud to on-premises farms. SharePoint Server 2019
includes process automation and forms technologies such as Power Apps and Power
Automate to connect with your on-premises data. These features need to be configured via
an On-Premises Gateway, which we will talk about in this chapter.
We will also briefly touch on various Azure tools and Excel Online, both of which have
been built to integrate with SharePoint. We will then reference a few third-party reporting
applications that you may want to leverage in your SharePoint environment. This chapter is
going to provide a brief overview of these reporting features, so do not expect a deep dive
or an exhaustive exploration. If you want to do a deep dive into any of these topics, I have
added some links to the Further reading section that you can explore further on your own.
The following topics will be covered in this chapter:
Technical requirements
To be able to understand this chapter and the knowledge provided, you will need to meet
the following requirements:
You can find the code files present in this chapter in this book's GitHub repository
at https://github.com/PacktPublishing/Implementing-Microsoft-
SharePoint-2019.
It is not uncommon for medium- to large-sized enterprises to have their data spread
across multiple enterprise systems. These enterprises may have data residing in multiple
locations, both on-premises and in the cloud. Data residing in disparate systems across
enterprises that has not yet been consolidated can cause data compliance, performance,
and redundancy issues. Not to mention that keeping old database systems in play that are
no longer supported, because the company behind them no longer exists, is a very
high security risk.
As companies seek to trim their technical overhead by reducing the number of systems
that maintain their data, this would be a prime target on the list for consolidation.
Database architectures cost the most in any organization, be it cloud or on-premises. The
licensing and overall cost for virtual or physical hardware can be staggering, depending on
how much data and redundancy you build into the architecture.
The need for consolidation and integration becomes even more complex as companies
merge and incorporate other entities under one umbrella. We are now seeing a trend
where companies want to reduce the number of systems that they have and use one or two
systems to view and manipulate data enterprise wide.
We are also seeing that user subscriptions are being more scrutinized now than ever
because of companies spreading their applications, business processes, and storage over
many cloud providers. I had one customer paying $100 a month per user, and they
had over 10,000 users. Upon breaking down the subscriptions, they were paying $35 for
Microsoft 365 and $65 for a subscription to an enterprise cloud application that was used
by everyone. The application could have been built in Microsoft 365 using the tools
available, which could have saved them a lot of money.
So, in most cases, these subscriptions and applications being used in the cloud need to be
looked at thoroughly. It is not cost-effective, and you run the risk of putting too much
at stake by connecting many cloud architectures together. With the cloud being so new,
things are changing daily, and your IT staff must understand the risks and secure those
areas. Since managing all these cloud and on-premises environments can be a huge task,
advanced reporting will be the biggest area of concern due to the size and sensitive
nature of the data being exposed.
However, as we mentioned previously, there is the need to have our teams work smarter
and put the power in the hands of the users to a certain point. We must empower
them to create their own applications and their own reports, which trims the need for
development staff and the server footprint to support the database architecture. Doing this
helps put data into perspective. If you have a small database, change where you hold the
information. Put it into SharePoint, where it can be utilized by everyone. Access databases
are a perfect target for migrating to SharePoint, and you can also redo the process using
Power Tools.
Advanced reporting has always been a key integrated feature in SharePoint and is
now even more powerful. People really use SharePoint effectively when collaborating.
Collaboration means we are working together as a team to manage projects, tasks, data,
and documents. If we are working as a team, reporting should be easy, right? It should,
because if we are using a collaborative platform that can connect to other data
platforms, then all our data can be reached. This is exactly why SharePoint was created.
There are other features, such as Power Automate and Power BI, that also give you
easy ways to pull data and report on it – either from databases, Excel spreadsheets, or
SharePoint lists – using these new cloud features. Workflow processes can also be built
with these tools, which automate processes and capture information based on the
process at hand.
Since SharePoint 2019 is built on the same codebase as SharePoint Online,
you will see that modern sites are available in this 2019 version of SharePoint, which
makes it somewhat comparable to Microsoft 365. Microsoft 365 has more features on the
backend and is a lot more advanced, but you can accomplish some of the same things in
the on-premises version of SharePoint 2019.
The power of this platform is that you can put certain tools in the hands of users who need
reports and processes that they can create on their own. Using the tool sets described in
this chapter will help users make applications that can be shared across the enterprise.
Users can also take advantage of using out of the box views in SharePoint lists. This has
been a traditional way of reporting information in lists and libraries. We should not forget
that we have the out of the box capabilities and customized capabilities of JSON, which we
can use to format views from lists. Remember that although there are many new solutions
that can help show and process this data in many different ways, we still can use some of
our old customizations to do our work as well.
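As a minimal illustration of that JSON view/column formatting, the following hypothetical snippet colors a Status column's text based on its value. The field and values are assumptions for illustration, but the `$schema` shown is the published column-formatting schema:

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json",
  "elmType": "div",
  "txtContent": "@currentField",
  "style": {
    "color": "=if(@currentField == 'Done', '#107c10', '#a80000')"
  }
}
```

A snippet like this is pasted into a list column's Format this column pane, changing how values render in the view without deploying any code.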
In this chapter, we will show you how to gather data and how the tools that build
applications, business processes, and reports from specific areas of content are not
as hard to use as you may think. Using tools such as SSRS, Power BI, Power Apps, Power
Automate, and Excel Services, you can create forms and workflows and pull reports
together quickly.
Deprecated features
With SharePoint changing over the years, we have seen many features come and go that
were used in prior versions of the software. We saw a big change from 2010 to 2013, with
one of them being the focus on claims authentication. Classic Windows authentication
was still available, but this created some extra steps when migrating from that version of
SharePoint. It was a real eye opener and let us know to expect change.
The most memorable for me was the FAB 40 templates and the Productivity Hub, which
anyone still using SharePoint 2007 or 2010 needs to be aware of. These templates
will not work in 2013 or later versions of SharePoint. This was a big surprise for a lot of
people because Microsoft created them. Everyone was expecting them to be available in
the versions to come but, to our surprise, they did not make it.
With SharePoint 2016, we also saw a big shift, with Excel Services being moved from a
service in SharePoint to a service within Office Online Server, now known as Excel Online.
This change makes sense to me because these services do need their own server resources
to keep SharePoint running at peak performance. There are already enough services and
web applications that SharePoint server farm resources provide, so taking some services
and spreading them out over resources is great news, but it will cost you to add another
server. However, if this new server is a VM, this will not be that costly.
The following is a list of deprecated and removed reporting features for SharePoint 2019:
It was a very useful integration and provided users with another way to provide reporting
on data. Most users loved this feature because it was built into the desktop version of Excel
already. Now, users will need to take advantage of Power BI and Power BI Report Server to
get some of the same functionality.
PowerView and BISM file connections have been removed completely from SharePoint.
This type of connection was typically used by Excel and SSRS to connect PowerView reports
to SQL Server Analysis Services data sources. However, it is no longer included because
PowerView has been removed.
We will not go over InfoPath in this book, but we will mention what has taken place as
part of the big move to the cloud. InfoPath has been deprecated as well but still works as
it did in other versions of SharePoint. It currently works in SharePoint Online. It is still
available by default in the installation for SharePoint 2019 and is managed the same as
it was in prior versions. You will still need to use InfoPath Designer 2013 to create and
publish those forms to SharePoint 2019.
If I were you, I would start moving away from this product as soon as possible. Many
companies have used this product over the years and have many forms
published using InfoPath, along with workflows for creating business processes. I have
seen this with my own eyes: one of my latest customers had so many InfoPath forms that it
would take them years to recreate them.
The problem comes with Microsoft pulling the plug on old applications, as they did with
SharePoint Designer 2010 workflows. This means they may pull the plug on InfoPath in
the next couple of years. Is that enough time for you to recreate all the forms you have in
your current farm? Probably not. As I explained previously, there is a learning curve with
Power Apps and Power Automate, so you really need to start migrating and planning now.
You may be wondering why this is the case. Well, things change all the time. Your
company may change direction, skip SharePoint 2019, and go straight to
SharePoint Online. In 2021, you will be better off using cloud applications to create your
forms and workflows. This will be very helpful when you migrate to the cloud as the core
components will be in place, and training for those new tools can be achieved earlier. It is
always good to think ahead.
SQL Reporting Services Integrated Mode has also been removed from the released
version of SharePoint 2019. SSRS Integrated Mode was deprecated in 2016, but it could
still be used with SharePoint 2016. With SQL Server 2017, it was completely removed,
and it is not supported in SharePoint 2019.
Next, we will discuss what BI tools work in SharePoint 2019. This will give you
the guidance to move forward with integrating BI into your new SharePoint 2019
environment.
Important Note
The following link provides an overview of Azure Active Directory:
https://docs.microsoft.com/en-us/azure/
architecture/reference-architectures/identity/
azure-ad.
Formerly, Azure AD was solely a managed service but, due to high demand, Azure recently
added Azure ADDS trust. This is important for many organizations and enterprises that
want to keep all their identity management on-premises, but also wish to allow Azure
AD to utilize those identities to connect to the cloud-based apps that Microsoft 365
provides. I will not go too deep into the one-way outbound forest trust as it is beyond
the scope of this book. However, do explore this option, especially if your organization
is hesitant to adopt Azure AD. Azure ADDS's one-way trust relationship will allow
users and applications to authenticate against the on-premises Active Directory domain
environment from the Azure ADDS managed services domain. This may be key to
incentivizing hesitant management to begin adopting cloud technologies for use within
their on-premises environments.
With SharePoint integrated mode, SSRS was installed on a SharePoint Server. It still
required a SQL Server license. You could use SQL Server 2012 Standard Edition to access
many of the reporting features unless you had a need for PowerView, in which case you
would have needed to install SQL Server Enterprise Edition. For SharePoint 2019, however,
Standard Edition is no longer an option, and Enterprise Edition must be installed to
leverage SSRS. This is more costly than Standard Edition, so keep that in mind when you
are planning your deployment architecture and costs.
Also, since SSRS cannot be deployed on a SharePoint Server in SharePoint 2019, it must
run on another server that must be monitored and maintained. This can add to the admin
troubleshooting burden, as you will now have to troubleshoot reporting issues from SQL
Server instead of directly from your SharePoint Server.
The reporting capability now resides outside of SharePoint and is solely native to SSRS so
that it can be leveraged by SharePoint. Since Native mode is the only way to leverage SSRS
in SharePoint 2019, SharePoint connects to this external application server through the
SQL Reporting Services Report Viewer Web Part.
The Report Viewer web part enables you to embed paginated reports that are
stored in SSRS (Native mode) or in Power BI Report Server into SharePoint Server
web part pages. However, you cannot use this web part for
Power BI reports. The web part can only be used with classic pages in SharePoint; modern
pages are not supported.
To utilize the Report Viewer web part in your SharePoint farm, you need to download
the web part package. Then, you need to deploy the web part to your SharePoint
farm by running the SharePoint Management Shell on a SharePoint farm server.
Remember to run the SharePoint Management Shell as administrator. From there, you can
run Add-SPSolution to install the ReportViewerWebpart.wsp package.
You can download this package from https://www.microsoft.com/en-us/
download/details.aspx?id=55949&751be11f-ede8-5a0c-058c-
2ee190a24fa6=True.
To deploy the package, follow these steps:
1. Open SharePoint Management Shell on SharePoint 2019 Server and choose Run as
Administrator.
2. Run the Add-SPSolution command. This adds the farm solution to the farm.
The following is an example of this:
Add-SPSolution -LiteralPath "Drive:\*solutionfolder*\
ReportViewerWebPart.wsp"
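Adding the solution only stages it in the farm's solution store; it still needs to be deployed before the web part is usable. The following is only a sketch – the exact parameters depend on your farm, and the web application URL is a placeholder:

```powershell
# Deploy the solution that Add-SPSolution staged in the farm's solution store
Install-SPSolution -Identity "ReportViewerWebPart.wsp" -GACDeployment

# Or scope the deployment to a single web application (URL is a placeholder)
# Install-SPSolution -Identity "ReportViewerWebPart.wsp" -GACDeployment `
#     -WebApplication "https://sharepoint.contoso.com"
```

You can confirm the deployment state afterward with `Get-SPSolution`, which shows whether the solution is deployed and to which web applications.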
Note
You can deploy solutions to a specific web application if required by using the
-WebApplication parameter.
You need to activate this feature from the site collection where you want to view Power BI
paginated reports. To do so, follow these steps:
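If you prefer scripting, the feature activation can also be done from the SharePoint Management Shell. Treat the following as a sketch only: the feature name and site URL shown are assumptions, so list the installed features first to confirm the actual name in your farm:

```powershell
# Find the feature installed by the Report Viewer solution
# (the name pattern below is an assumption - confirm it in your farm)
Get-SPFeature | Where-Object { $_.DisplayName -like "*ReportViewer*" }

# Activate it on the target site collection (URL is a placeholder)
Enable-SPFeature -Identity "ReportViewerWebPart" `
    -Url "https://sharepoint.contoso.com/sites/reports"
```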
Migrating integrated reports from previous versions is supported through the use of a
script that Microsoft provides for moving these documents. Microsoft has stated that it
will continue to support SharePoint integrated mode in earlier versions of SharePoint.
However, if you have SSRS reports from previous versions of SharePoint and you want to
migrate them to SharePoint 2019, you will need to leverage the migration script that will
change the reports from integrated mode to Native mode.
To use the script, follow these steps:
Note that you will lose your passwords once you migrate your reports. These passwords
must be reentered. Make sure you document those passwords if you have not already.
Data sources can be located on various SQL servers in our enterprise. Just make sure you
have covered all your bases as they contain stored credentials.
Note
For more information, see the How to use the script section of this
article: https://github.com/Microsoft/sql-server-
samples/tree/master/samples/features/reporting-
services/ssrs-migration-rss.
Now that we have installed the solution into the farm, let's configure the sites where we
want to use the web part by adding it to the pages within the site:
1. In your SharePoint site, select the gear icon in the top-left corner and select
Add a page.
2. Within the page designer, select the Insert tab from the ribbon. Then, select Web
Part within the Parts section.
3. When editing the SharePoint page, select the down arrow in the top-right of the web
part and select Edit Web Part.
As you can see, Microsoft thought about those customers who use a lot of reporting
features. At one point, I did not think they were going to give us any tools for SSRS so
that we could migrate to the cloud. I thought there would have been a conversion or
something using a new tool they created. This really confirmed that Microsoft takes
everything into consideration and gives us options to move forward.
Using Report Builder, you can determine what data sources you will pull into your report,
the datasets you want to utilize, and how you want that data displayed. Report Builder
allows you to create reports using existing datasets and report parts, which are parts of
already created reports that you have access to pull into your new report. It has a great
drag and drop feature that allows you to customize your report fields and layouts.
Note
You can download Report Builder by going to https://www.
microsoft.com/en-us/download/details.aspx?id=53613.
You can install Report Builder by going to https://docs.microsoft.
com/en-us/sql/reporting-services/install-windows/
install-report-builder?view=sql-server-ver15.
To create a simple report in SharePoint using Report Builder, you must have SSRS
configured and Report Builder installed. Then, you are on your way to building a report
with this tool. It is a good idea to try out the tools that you will be rolling out
to your users and document any issues so that you can cover them during training. We will
discuss this later. The following steps show you how to build a very simple report:
1. Open Report Builder; on the choose a dataset screen, the default choice will be to
create a dataset.
2. Click Next.
3. On the Data source property screen, click the Build button.
4. Next, you will need to enter the SQL Server name that you want to connect to.
5. Enter the database name under Select and select the name of your database.
6. Click OK.
7. In the Data source property window, click Credentials and enter your Username
and Password.
8. You will see Use as Windows credentials. Check this box and click OK.
9. Select the following from the list and click Next:
a) NameCustomer
b) NameLocation
c) FactLoanAmount
10. From the fields listed, drag FactLoanAmount under Values.
11. From the available fields, drag the United States location under Rows and click
Next.
12. Choose the layout you want and click Next. Then, choose the style you want
and click Next.
There are some cosmetic changes that you can make to update reports so that they are
more attractive and easy to read. Using Report Builder, you can add titles to your report
and resize your columns and rows. However, this is beyond the scope of this simple
tutorial. More advanced tutorials are available on Microsoft's website: https://docs.
microsoft.com/en-us/sql/reporting-services/report-builder-
tutorials?view=sql-server-ver15. There are even third-party classes available
that provide support.
As we can see, we have not really lost a lot of functionality with SSRS, but things have
changed. We can migrate reports and even show those reports in our sites using web
parts. The interfaces have stayed somewhat the same, but the locations of tools and
administration have changed, which adds a new twist to maintaining the farm and
what you need to be aware of in SQL Server.
Hybrid configurations
Since SharePoint 2013, we have seen how we can interface our on-premises SharePoint
farms with the Microsoft 365 platform. There is no reason not to have a mixture of both
cloud and on-premises functionality nowadays. To be quite honest, my recommendation
is to stay in this configuration. This gives you control over your data and any content that
you may not want to have hosted on systems in the cloud. These configurations also give
you bridges to move to the cloud as fast as you like instead of all at once. Hybrid also
allows you to use tools and services in the cloud, which can be very useful in helping you
collaborate further and bring about a broader range of reports, data gathering, and
automation.
We are going to touch on those areas in this chapter so that you know what is offered
mainly on-premises, but we must remember that these services play both ways. So, what
I can connect to in the cloud I can also connect back to my on-premises environment.
There are endless possibilities that can help you manage the cost of your services and
analyze the needs of moving to the cloud instead of moving blindly.
The hybrid features available include the following:
• Hybrid Sites: Allows users to follow sites on-premises and in the cloud.
• Hybrid OneDrive for Business: Creates a user's OneDrive for Business profile in
the cloud instead of on-premises.
• Hybrid Self-Service Site Creation: Redirects site creation so that new sites are created in SharePoint Online only.
• Hybrid Auditing: All audit logging is pushed from on-premises to the cloud.
• Hybrid Taxonomy and Content Types: Allows shared taxonomy and content types
between the cloud and on-premises environments.
• Hybrid Business Connectivity Services: Allows data to be securely displayed from
external systems within the cloud or on-premises.
• Hybrid Search: Allows you to search both on-premises and cloud content, with
separate indexes supported on each system individually.
When taking advantage of this configuration, we must plan what we expect to gain from
this hybrid connectivity. Some questions you need to ask before you go down the road of
configuring this connectivity are as follows:
These are just some of the questions that need to be answered at this point. You want to
make sure you have a plan in place and that the plan is followed and documented.
Your risks at this point could be the answers to one of these questions. Please understand
what you are creating, providing, and managing at this point – a connection to the cloud
that you want to be able to leverage. Before including this as a solution for your company,
make sure you understand what it brings to the table.
Although there is not a lot of space in this book to write about this subject, I do want to
touch on some of the topologies that can be configured when using this Hybrid Gateway
for SharePoint 2019:
To create the connection, we need to know the requirements for these features to exist:
To set up the Hybrid Picker on your SharePoint Server, you need to meet some other
requirements, as follows:
Note
More information about the configuration you need to do can be found
here: https://docs.microsoft.com/en-us/sharepoint/
hybrid/configure-server-to-server-authentication.
As you can see, there are many configuration steps and areas you need to tend to when
creating this connection between your on-premises and Microsoft 365 environment. To
install the Hybrid Picker, execute the following steps:
1. Log into the console of a SharePoint Server farm server as a farm admin.
2. From the farm server, connect to Microsoft 365 as a global admin.
3. Navigate to https://go.microsoft.com/fwlink/?linkid=867176.
4. To configure your hybrid features, follow the prompts provided by the Hybrid
Picker.
There is a lot more to this configuration. You can find out more by taking a look at the
links provided in the Further reading section. In this chapter, I did not want to focus much
on the configuration but expand on the capabilities and applications in the cloud that can
bring benefits to your on-premises environment. Unfortunately, I do not have the room in
this book to publish the whole configuration.
The configurations for hybrid can be removed if needed but from my experience, this is
not easy to do. You may find some residue from these configurations, which is why you
should think through this process before moving forward. This can be a great experience,
but you really have to plan and make sure your architecture is correct before following
through with it.
The following cloud services can leverage this on-premises connectivity:
• Power BI
• Power Apps
• Power Automate
• Azure Analysis Services
• Azure Logic Apps
When utilizing this method of connecting on-premises enterprise systems and cloud
services, organizations can keep their databases and other data sources on their enterprise
networks. This process was built to be secure so that customers can take advantage of the
features that are only available in the cloud.
There are two types of gateways available:
• On-premises data gateway for multiple users: This allows multiple people to
connect to on-premises data sources. It opens a single gateway for this purpose
and can support complex scenarios with multiple people accessing multiple data
sources.
• On-premises data gateway for a single user: This is a personal mode that allows
one user to connect to sources that cannot be shared with anyone else. This mode
can only be used with the Power BI cloud service. If your company only has one
person creating reports, this would be the gateway to use for your company's
needs.
Installing the gateway is fairly easy unless you run into issues. There is a troubleshooting
page that can help you if you have problems getting this set up on your server: https://
docs.microsoft.com/en-us/data-integration/gateway/service-
gateway-tshoot.
The requirements for installation are as follows:
• Minimum requirements:
.NET Framework 4.6 (Gateway release August 2019 and earlier)
.NET Framework 4.7.2 (Gateway release September 2019 and later)
A 64-bit version of Windows 8 or a 64-bit version of Windows Server 2012 R2
• Recommended hardware:
An 8-core CPU server or workstation
8 GB of memory minimum
A 64-bit version of Windows Server 2012 R2 or later
Solid-state drive storage for spooling
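Before running the installer, you can verify the .NET Framework requirement from PowerShell. This is a sketch that reads the documented registry release key; 461808 is the minimum release number corresponding to .NET Framework 4.7.2:

```powershell
# Check the installed .NET Framework version via the registry
# (.NET Framework 4.7.2 corresponds to release number 461808 or higher)
$release = (Get-ItemProperty `
    'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release

if ($release -ge 461808) {
    "OK: .NET Framework 4.7.2 or later is installed"
} else {
    "Upgrade required before installing the September 2019 or later gateway"
}
```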
Next, we will look at the best practices for configuring an on-premises gateway.
• Gateways, as you may have guessed, are not supported on Server Core installations.
• The user installing the gateway must be the admin of the gateway, so using a
personal account may not be the best idea. You don't want to tie up the services
with an account that could be disabled one day.
• As you know, we do not want to install anything on a domain controller that has
connectivity to something outside the organization.
• Remember that if you are planning to use Windows authentication, you must install
the gateway on a server that is connected to the domain.
• Do not install the gateway on a computer that may get turned off, such as a personal
computer.
• You may want to shy away from installing other applications on this server as those
services could stop the gateway or create a slow response.
• Installing one personal mode and one multi-user mode gateway is supported on the
same server.
To install the gateway, download the installation files from Microsoft by going to
https://go.microsoft.com/fwlink/?LinkId=2116849&clcid=0x409.
You can configure an on-premises gateway by following these steps:
1. Provide a location for the installation files and accept the terms of use.
2. Once the installation is successful, enter an email address to use with this gateway.
The email address needs to be a work or school account that's used to sign into
Microsoft 365. This should be your organization's account.
3. You will now be fully signed into your gateway on this computer.
4. Now, you have the option to migrate or register the gateway. As part of the
migration process, you can migrate the gateway for one of the following reasons:
5. Insert a name for the gateway and create a recovery key for it. Once completed, click
Configure.
6. Now that the gateway has been created, click Close to exit the setup.
Once you have finished with the setup, you have the option to restart the gateway, give
the gateway a service account, perform diagnostics and network setting configurations,
and more. You can play around with these once you have this set up in your environment.
At the moment, we can use the gateway to create connections to data in our on-premises
environment.
Now, let's talk about the tools we will be using to connect to our data sources within the
enterprise, starting with Power BI.
Power BI
Power BI is a cloud platform that gives everyday users a way to connect to and visualize
the many data sources in their enterprise. The interface will be familiar to users who
have worked with Excel, and it has deep integration with other Microsoft products. This
gives our users the ability to build robust reports using a self-service application that gives
them the tools to analyze and aggregate data. The platform is easy to use and can help a
company gain better knowledge of the data they are managing with this self-service tool.
Power BI can help a company make decisions based on data since the product allows you
to connect to multiple data sources and aggregate data. So, depending on the data and
reports generated, this can prevent the company from making bad decisions. With Power
BI, you will see that the data is made available through charts, graphs, and other visual
ways by modeling data how you want to see it. You can build KPIs and other advanced
reports using this tool. The data can also be exported in other formats, such as PDF, Excel,
and PowerPoint.
The advantages of Power BI are that you can connect to data sources individually
or combine multiple data sources to create more advanced reports. There are over 120
free data connectors that support many other database platforms. With these connectors, you
can take advantage of the big data investments your company has put a lot of time and
money into to provide insights across your organization. By using Power BI, you can start
working smarter. The tool allows you to analyze data and share it while maintaining the
consistency and security of your data. Your data is protected even if you share information
outside your organization.
Power BI was built to put the manipulation of data in the end users' hands. In the past,
users had to rely on the IT department to build reports from the data that resided in
database systems. As we have seen, in recent years, Microsoft has decided to put more
functionality into the end user's hands so that they do not have to rely on a database
programmer, or other development teams, to give them reports. This can be seen in
several aspects of Microsoft's strategy, including self-service site creation.
SharePoint is one of the systems that leverages Power BI to accommodate data flowing
from various sources. When you are implementing SharePoint within an organization,
it is important that you understand not only your current challenges but also your
future goals, as well as the vision and direction of the company when it comes to data
management.
Next, we will briefly look at Power BI and the tools available for managing and
manipulating data.
Power BI admin
This tool allows you to manage organization-wide tools that control how Power BI works.
The portal provides settings that allow you to manage and monitor the usage as well
as manage tenant settings. Users that have been assigned admin rights can configure,
monitor, and provision organizational resources. Power BI admins should be familiar with
other related tools and admin centers within Azure and Microsoft 365.
There are a few admin roles to consider when using Power BI. These roles are used to
spread administrative responsibilities across the admin center.
To get to the admin portal, you must have been provisioned with the Power BI service
administrator role or a Global Admin role. These roles can be assigned via the web
interface or PowerShell. The limitations of this role are that it does not include the following:
• The ability to modify users and licenses within the Microsoft 365 Admin Center
• Access to all the audit logs
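The role assignment just mentioned can be scripted. The following is a sketch using the MSOnline PowerShell module; the email address is a placeholder for one of your own accounts:

```powershell
# Connect to Microsoft 365, then add a user to the Power BI service administrator role
Connect-MsolService
Add-MsolRoleMember -RoleName "Power BI Service Administrator" `
    -RoleMemberEmailAddress "user@yourdomain.com"
```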
Using the Power BI admin role, you can only see a limited amount of information in
the user audit logs. The log only keeps activity data that has been collected over a 30-day
window. Currently, there is no user interface for searching the activity log.
Admins can download activity logs using the Power BI REST API and management
cmdlets.
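For example, using the MicrosoftPowerBIMgmt management cmdlets, a day's worth of activity events can be pulled from that 30-day window; the date range shown here is purely illustrative:

```powershell
# Sign in with a Power BI admin account (MicrosoftPowerBIMgmt module)
Connect-PowerBIServiceAccount
# Retrieve activity events for a single UTC day within the 30-day retention window
Get-PowerBIActivityEvent -StartDateTime '2020-08-01T00:00:00' -EndDateTime '2020-08-01T23:59:59'
```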
Power BI Desktop
There is also a free application called Power BI Desktop. This application runs on
Windows 10, and companion mobile apps are available for Android and iOS devices. There
is also a Power BI Report Server that can be used on-premises to support companies that
want to maintain their data and reports. This version of the software is called Power BI
Desktop for Power BI Report Server.
There are several components that help users get the most out of the platform.
As stated previously, there are more than 120 connectors available, which gives you the
ability to connect to most types of databases and data sources. You can also connect to a
SharePoint list, a SharePoint Excel file, or a SharePoint folder. From there, you can view
the data contained within that list or folder.
To create a Power BI report, you must download the Power BI Desktop application.
Once you open the application, you have the option to select Get Data and choose your
resource. You then query the data to create the reports you need based on the
data provided.
Once you have created your report, you can share this report in a secured manner to users
both internally and externally. Mobile and cloud users can also see the reports you've
created, and the users can interact with the report using these platforms. Security can be
set as well since you can give users the ability to read or edit reports based on groups or
individual use.
There are three levels of usage that Power BI provides; note that these pricing tiers may
change.
There is also a web part that is available within the SharePoint Online Microsoft 365
offering. Unfortunately, the dynamic Power BI web part is only available for SharePoint
Online and cannot be used with SharePoint 2019 On-Premises. With the Report Viewer
web part, you can view paginated reports that are available on the Power BI report server
directly in SharePoint.
Since this book is primarily about on-premises resources, I wanted to touch a little more
on Power BI Desktop for Power BI Report Server.
Access management
Remember that embedding a Power BI report into SharePoint does not automatically give
the user access to your data. Power BI reports are still subject to the security trimming
within SharePoint. So, if you are using this function to allow users to view data that is
being pulled from a source, you must make sure that they have the access to do so. This
access management consideration is important for all aspects of your migration, upgrade,
or new solution configuration. However, this is something that the average user may not
think about before a feature goes live. You must think about this.
Many companies nowadays are moving toward a centralized Access Management Control
Model where access may be configured through the Identity Access Management team
and provisioned through Active Directory groups. This may add more of a lead time
requirement to your report rollout, so please plan for this ahead of time.
If you have not moved to the cloud yet, the first step you will take is to look at your Active
Directory structure and the email you wish to migrate. Once that has been established,
you can build from there. If you do have these resources in the cloud, you can still use
Active Directory from the cloud to manage resources in your on-premises environment.
SharePoint can use these security resources to secure content.
Performance Analyzer
The Performance Analyzer is a tool that can be used to examine report element
performance within your environment. The analyzer records logs and measures the
different reports you have created, looking specifically at the report elements and
capturing how they perform when users interact with the reports. The analyzer can also
show which areas of the report are resource intensive. This gives you insight into how
these reports impact your environment.
The Performance Analyzer works similarly to the Load Testing product in Visual Studio.
You can record your session so that you can interact with the report. When changing
the way the report criteria interacts with data, you will see the analyzer recording those
performance changes. The analyzer will then collect and display the results of your
interactions when you run queries that pertain to the report.
As we've already mentioned in this book, performance is everything, especially when
it comes to the support of your users. Always do testing and analyze things before they
go into production. The better you can plan for these added reports, especially reports
that deal with data, the better off you will be when these reports make it to production
resources.
Power BI brings reporting to the forefront in the cloud and really changes the game for
on-premises environments. With older server-based reporting platforms changing or
being removed, cloud tools give us hope that we can continue reporting robustly. Other
changes that have been made to the cloud platform can also help us start thinking about
the cloud's direction. With SharePoint Designer 2010 going out of commission in the
cloud, we must think about SharePoint Designer 2013, whose retirement will become
more prominent soon. We will talk about replacements for these tools in the next section,
which is all about Power Apps.
Power Apps
To provide rapid development for apps, Microsoft came up with a cloud application
known as Power Apps. This application empowers users to create apps almost
immediately. If you come from an InfoPath background, then this is your new buddy.
Power Apps is not the same, nor does it have the same menus, so be prepared for a learning
curve. Training is available at very reasonable prices to kick-start your knowledge,
especially since COVID-19 and the push for virtual meetings.
Power Apps can be given to everyone or a subset of users in your company to help them
develop applications that work for your organization. You can extend these apps with
capabilities using Azure Functions and the use of custom connectors for on-premises
systems. This gives the company a business transformation that enables it to see ROI
quickly. It also improves employee productivity.
Again, Power Apps can be used with our on-premises environments through the
On-Premises Gateway. This brings the power back to our server farms, which can be
local to our server rooms or in the cloud. Remember that you must use modern sites and
modern lists or libraries to use the tools within SharePoint 2019. We will talk about this in
more detail in Chapter 12, SharePoint Framework.
When you log into Power Apps for the first time, you will see that there are a few ways
to get started. You will have to choose an environment for the app to run in for security
purposes. This could be internal for internal users or external if you plan to share
information outside the organization. Choosing this option does not limit you because
you can move apps between environments as well.
The options for creating new apps are as follows:
• Canvas Apps
• Model-driven Apps
• Portal Apps
Each option creates a different type of app. The most common is the Canvas App. You can
also start from data, which gives you the option to select the data source first and then
come back and create your app on top of the data source you've selected.
When you first open Power Apps, you will be prompted to create an app via a data
connection or via the Canvas, Model-driven, or Portal application models. If you choose
to start from data, you can select from sources such as the following:
• SharePoint
• SQL Server
• Excel Online
• Common Data Service
• Other Data Sources (this is where you will find the On-Premises Gateway)
When using an On-Premises Gateway, you must have set up the gateway first and have
the necessary credentials to connect to the resources that hold the data you need. Once
you have set up your On-Premises Gateway, you will be prompted to choose your data
source. If SQL Server is the data source you are planning to use, you will input its name,
a username, and a password, and then choose the gateway you plan to use to connect to
the data you need.
Power Automate
Power Automate is used as a workflow automation tool and is very useful for creating
workflow triggers in SharePoint 2019 and SharePoint Online. Power Automate was
known as Flow previously, and although the name has changed, its awesome functionality
persists. Due to the deprecation of previous workflow solutions, such as SharePoint
Designer and InfoPath, users and developers have started to rely more heavily on the new
workflow tools available to them.
Although there are a few third-party products that have gained popularity, such as
Nintex, if you want to stay within the Microsoft Suite for your workflow solutions, Power
Automate is a great option. Keep in mind that if you wish to utilize Power Automate's
tools, you will need a subscription. This is not a solution that is deployed directly into your
SharePoint farm; it is part of the Microsoft 365 suite of products, which means there are
fees associated with it.
There are some things you need to know when using Power Automate.
With the recent announcement that SharePoint 2010 workflows could no longer be
created after September 2020 and would not run at all after October 31, 2020, companies
have been scrambling to find solutions that can help recreate those workflows and even
InfoPath forms as well. Since it was such a surprise, many companies are utilizing Power
Automate to make sure their current workflows and business processes do not fail.
Since this announcement, you may be wondering when InfoPath and SharePoint 2013
workflows are going to be permanently retired from Microsoft 365. I believe it's coming as
soon as 2023. I would start preparing those business processes so that they can also be
converted. The issues that come into play when you have so many workflows are
overwhelming. This also creates a big issue for those who have not migrated yet and have
SharePoint 2010 workflows in their on-premises environments.
This brings up one of the big issues I saw, which was training the developers who have
been using SharePoint Designer so that they can transition to these new tools. They are
not the same by any means and you cannot expect developers to jump
into this from one day to the next, especially during a migration. It's best to plan this
upfront so that users can start playing with the tools early in the planning phases of the
implementation. Do not wait until the last minute and expect developers to run with these
tools like they have used them before.
I was recently working with a company that was just planning to move SharePoint farm
content from 2013 on-premises to Microsoft 365. In the on-premises farm, they had
well over 500 workflows within the web application. They had six web applications with
many developers. This creates a big stink when you want to migrate because you must
remember that 2010 workflows stop working almost as soon as you migrate them to
Microsoft 365.
Support for them is still available in SharePoint 2019, but if your goal is to migrate one day
and SharePoint 2019 is your bridge to that goal, make sure to plan early and understand
that none of those workflows will move and work in the cloud after the dates I noted
previously.
So, to solve this problem, the only thing you can do is recreate your workflows. The Power
Automate workflow tool can be utilized by SharePoint On-Premises and SharePoint
Online using connectors. In this context, a connector is a prepackaged API that is used to
communicate with and perform actions in the destination location. The following are
some simple step-by-step instructions that show you how to connect Power Automate to a
SharePoint list online and on-premises using the On-Premises Gateway.
When you log into Power Automate, you will see the following screen, which gives you
options to start creating flows right away. This page also gives you access to templates that
were pre-created by Microsoft to help you get started as well:
• Automated flow
• Instant flow
• Scheduled flow
• UI flow
• Business Process flow
• Start from a template
This should give you a basic overview of how to get started with Power Automate. Depending
on the solution you are trying to create, you could use any of these to start creating flows
to automate business processes. The goal of this book is to let you know about SharePoint
2019 and how we can leverage the cloud so that we can use our on-premises resources
with cloud applications. Due to this, I will not be going into these topics in more detail.
However, we will learn how to connect Power Automate to our on-premises data sources
and SharePoint resources. We will cover this in the next section.
1. The Power Automate tool must be launched from the Power Automate website
at https://flow.microsoft.com/en-us/. Log into the Power Automate
website.
2. In the top-right corner of the Power Automate website, you will see a gear icon just
like the one you are familiar with from SharePoint. Select it.
3. Once you've selected the gear icon, you will see a dropdown that provides a few
options. Select the Connections option.
4. Select Create connection.
5. You will see a list of available connections. In this instance, we want to select
SharePoint.
6. Select the Create Connection button. Then, click Set up your connection to
SharePoint. Once you have selected a connection, choose a site within the
SharePoint Online environment.
1. To connect to the SharePoint On-Premises environment, you need to log into the
Power Automate website at https://flow.microsoft.com/en-us/.
2. Select the gear icon and select the Connections option.
3. Click Create connection.
4. Power Automate will now be connected to your on-premises environment through
an on-premises data gateway. Power Automate supports SQL Server and SharePoint
Server On-Premises data gateways.
5. From the available connections list, select SQL and select the option to connect
through the on-premises data gateway or SharePoint On-Premises. This option will
be featured as a checkbox.
6. You must provide your connection credentials and then choose the gateway that
you want to connect to.
7. To check that the connection is successful, check My connections. You should see it
listed.
Once connected, you will be able to utilize the tool by connecting to SharePoint
On-Premises environments to create workflows that work specifically for your
on-premises content. This gives you the opportunity to update those old workflows and
build out with the newest tools from Microsoft.
Office Online Server
Office Online Server still allows external data connectivity and data refresh features, as
provided in past versions of SharePoint for services such as Excel Online. Office Online
Server only works with SharePoint web applications that use claims authentication. All
other authentication methods are not supported.
Office Online Server not only allows you to view content – it also allows you to edit
content. It provides support for PCs, tablets, smartphones, and many different mobile
device platforms. There may be differences due to the browsers supported by those device
platforms, and testing would need to be done to figure out what is supported by Microsoft.
Language packs are used to enable users to view files in multiple languages.
Office Online Server has a small footprint on a server and should be set up on its own
resource. I do not advise adding this component to a SharePoint server or installing any
other applications alongside it when running it on its own server. All integrated
products, including Workflow Manager, should be separated onto their own server
resources.
You can install the services on multiple instances of this server to support redundancy.
Providing that type of support for the product would depend on how big your
organization is. If you need to grow this server platform out, you have the option to do so
later as your organization grows as well.
If you are planning to provide redundancy, you will probably want to add a load balancer.
A Windows Server (IIS) solution using Application Request Routing is supported for this.
You can run this IIS role on one of the servers where Office Online Server is running. DNS will
come into play, as well as certificates, to help resolve to the IP address of the load balancer.
As part of the planning process for this product, you should use the same minimum
requirements for a server as you would for SharePoint Server 2019. The product is only
supported on Windows Server 2016 and Windows Server 2012 R2. So, when installing
this server, you cannot use Windows Server 2019 OS to do so. To support the server
correctly, you need to install the server OS as Server with Desktop Experience, which is an
option for Windows Server 2016 if you're using that OS.
As with any application server, there is maintenance that comes into play when you're
supporting this server installation.
Note
The following is a link to the Office Online Server release
schedule: https://docs.microsoft.com/en-us/
officeonlineserver/office-online-server-release-
schedule.
Some of the best practices to follow when running this server are as follows:
• Do not install any applications that need support for ports 80, 443, or 809 on these
servers.
• Do not install any versions of Office products on this server. If installed, uninstall
Office before you install Office Online Server.
• Do not install Office Online Server on a domain controller as the product will not
run on a server hosting Active Directory Services.
• Configure the product for HTTP or HTTPS requests.
To install the application, Office Online Server must be deployed on your network. Then,
you can configure the server so that it works with SharePoint 2019. If you are planning to
deploy Office Online Server, make sure that you have enough memory on the servers you
are deploying.
If you do not have enough memory on your farm's server resources, certain functions,
such as previewing documents, will fail to work. I had a failure when deploying a Skype
test server: I recommended the minimum memory requirements but was given only 4 GB
of RAM. When I tested Skype, I had issues with the scheduling process within Outlook
and with seeing the presence of users online. After a long 4-week process with Microsoft,
it was found that these processes were failing because the server did not have the
minimum amount of memory required as a best practice.
This all goes back to the earlier warnings and advice that you received in this book – you
must make sure you are aware of the best practices and what you are actually building
as it could impact the amount of server resources needed; that is, your CPU, RAM,
and storage. Make sure to pay close attention to these details and test all areas before
launching.
Also, it is highly advisable that your farm is configured over a secure port using HTTPS
and the latest TLS configuration. This is because Office Online Server uses OAuth tokens
to communicate with the SharePoint 2019 server. This can create an entry point for attack
into your environment if your environment is not configured on a secured port. To
configure Office Online Server, follow these instructions:
Install:
1. To install Office Online Server, you need to get the required license and download
it from your Volume Licensing Service Center. Office Online Server is a
component of Office, so it will be included under Product pages for Office in your
Service Center.
2. Once you have downloaded the product, run the Setup.exe file.
3. Read and accept the licensing terms and click Continue.
4. Choose a file location for the installation (it is recommended that you install it on
your system drive).
5. Click Close to complete the installation.
6. If you are using Kerberos Constrained Delegation with Excel Online, you must set the
Claims to Windows Token Service Windows service so that it starts automatically.
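This last step can be scripted in PowerShell. The sketch below assumes the service's short name, c2wts, which is how the Claims to Windows Token Service is registered on Windows Server:

```powershell
# Set the Claims to Windows Token Service to start automatically, then start it
Set-Service -Name "c2wts" -StartupType Automatic
Start-Service -Name "c2wts"
```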
If you are not familiar with Office Online Server and it is not available from your company
download location at Microsoft, please remember to get specifics on how to access the
download so that you can install and deploy the application.
Deploying Office Online Server:
Before you can use PowerShell to manipulate Office Online Server, you must import the
OfficeWebApps module into your server using the following command:
Import-Module -Name OfficeWebApps
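Once the module has been imported, a farm is created with New-OfficeWebAppsFarm. The following is a minimal single-server sketch; the URL and certificate friendly name are placeholders for your own values:

```powershell
# Create an HTTPS Office Online Server farm with editing enabled
# (editing requires the appropriate volume licensing)
New-OfficeWebAppsFarm -InternalUrl "https://oos.yourdomain.com" `
    -CertificateName "OfficeOnlineCert" -EditingEnabled
```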
Verify that the Office Online Server farm was successfully created:
Go to the following URL from a browser on your server: http://servername/
hosting/discovery.
If your farm uses HTTP, you can allow Secure Store connections over HTTP with the
following command:
Set-OfficeWebAppsFarm -AllowHttpSecureStoreConnections:$true
To verify that the server is running, follow the same steps we followed for the first server
we created.
1. Create the WOPI binding by running the following command, replacing
servername with the name of your Office Online Server:
New-SPWOPIBinding -ServerName servername
This command will allow SharePoint Server 2019 to receive information from Office
Online Server.
2. Check if WOPI is using the default internal HTTPS zone:
Get-SPWOPIZone
The result that we are looking for here is internal-http. If the shell results
show internal-https and not internal-http, then we must perform an
additional step. If your results are showing internal-http, you can skip the next
step.
3. Switch WOPI to internal-http:
Set-SPWOPIZone -zone "internal-http"
4. If your farm uses HTTP, allow OAuth over HTTP and then verify the setting:
$config = Get-SPSecurityTokenServiceConfig
$config.AllowOAuthOverHttp = $true
$config.Update()
(Get-SPSecurityTokenServiceConfig).AllowOAuthOverHttp
Once your deployment is completed, make sure that you have tested Office Online Server
in your SharePoint environment. Test the URL for Office Online Server to make sure
you can get to the URL successfully. Also, ensure you can connect to this URL from your
SharePoint servers: http://servername/hosting/discovery.
Make sure that your users are testing this thoroughly. You should set up your test scripts
and make sure that all the functionality of Office Online Server is working properly before
it is fully released to the user community.
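As a quick scripted check, the discovery endpoint can also be queried from PowerShell; servername is a placeholder for your Office Online Server host:

```powershell
# Request the WOPI discovery XML; a 200 status and an XML body
# indicate the farm is responding
Invoke-WebRequest -Uri "http://servername/hosting/discovery" -UseBasicParsing
```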
Once you have installed Office Online Server, you can configure the different components
within the server. In the next section, we will cover Excel Online to give you a short
overview of what this server can provide.
Excel Online
Excel Services has changed immensely and if you offer these services as part of your
SharePoint 2019 deployments, there are many changes you need to know about and
understand. It was moved outside of SharePoint in the SharePoint 2016 version of the
product. Excel Online includes many of the same features Excel Services included in
SharePoint Server 2013, such as external data connectivity and data refresh features.
Data refresh in Excel Online is supported only on SQL Server and SQL Analysis Server.
There are steps you can follow to set up data refresh, from creating a workbook to setting
the workbook's refresh based on the data source. Refreshes can only be triggered in one of
two ways.
Office Online Server can communicate with SharePoint servers, Exchange servers,
and Skype for Business servers by using the HTTPS protocol. This should be the way
environments are set up in a production environment as it helps secure communication
between the servers. Certificates will need to be installed either on the server (IIS) or on a
load balancer if you're using multiple servers.
With this new version of Excel Online, you can adjust the resource usage of your Office
Online Server farm. These PowerShell commands give you the opportunity to enforce
governance over the services provided by this server or server farm. This helps tune the
farm to better support your users and maintain the performance and stability of the
services provided.
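As an illustration, a few of the Excel-related parameters on Set-OfficeWebAppsFarm can be used to cap workbook size and session lifetime; verify the parameter names and units against the documentation for your build before using them:

```powershell
# Cap the maximum workbook size and session timeout,
# and allow external data connections
Set-OfficeWebAppsFarm -ExcelWorkbookSizeMax 30 -ExcelSessionTimeout 450 `
    -ExcelAllowExternalData:$true
```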
Excel Online supports several types of connections when it comes to connecting to data
sources. Windows authentication and SQL authentication can be used to connect to external data
sources through Excel Online. Kerberos delegation can be used as well but requires that
the Claims to Windows Token Service is running. Office Online Servers must be allowed
to delegate to each backend data source, and the c2wtshost.exe.config file must be
updated.
When using Excel Online workbooks, you can use two types of connections.
Note
Link to Plan installation: https://docs.microsoft.com/en-us/
officeonlineserver/plan-office-online-server.
Applying updates to Office Online Server requires a maintenance window that should
be in sync with your SharePoint maintenance. Automatic updates cannot be used for
this product because the updates must be applied manually. If the updates are applied
automatically, this could result in you having to rebuild your Office Online Server farm.
Users may not be able to see documents at this point either. So, you want to make sure you
are updating appropriately because, as with other servers, a certain process must be
followed. PowerShell is used to apply updates to this server. Anything could come into
play here, so research and be careful.
With these offerings, we also have the option to report using services such as Analysis
Service and Logic App Service. These services allow you to analyze data and build apps
across platforms and clouds while you're integrating with your on-premises environment.
There are other services that cross the bounds of cloud and on-premises as well, such as
Azure Information Protection, which is an RMS. This is another service that's available for
security and data compliance. We will keep our focus on reporting in this section.
Azure Analysis Services
Azure Analysis Services is offered per region, and some plans support query replicas,
depending on the region you choose.
Tip
Please check the regions for availability at https://docs.microsoft.
com/en-us/azure/analysis-services/analysis-
services-overview.
Microsoft guarantees 99.9% availability and users can access this data from anywhere
in the world. This self-service platform for data discovery can help you define the data
in your company and find those areas where data still may need some cleanup. When I
say easy to use, I am talking about the interface of Microsoft Visual Studio, which is the
interface that's used to work with this product. This way, your developers do not have to
learn about any new interfaces in order to work with the data within your environment;
they should be familiar with this interface already.
The application is robust and can be scaled to match your business needs. It also provides
life cycle management capabilities for the following areas:
• Governance
• Deployment
• Testing
• Delivery
The easy-to-use platform is for those who have used Visual Studio in the past. The
application service is free to start, with no obligations, but once you begin to use it, you
will be charged for what you use. You will have to use SQL Server 2016 Enterprise to use
the product as this is the version of SQL Server that's supported. If your SQL Server is on
your on-premises network, the On-Premises Gateway helps you connect to the databases
that hold your data without having to fully expose them in the cloud. Azure Analysis
Services supports tabular models at the 1200 and higher compatibility levels. Both direct
query and in-memory are supported.
Azure is not well-known by some customers because they never planned to use its
services. The problem with being closed-minded and only looking at Microsoft 365 as a
target solution is that you miss out on some of the benefits Azure provides natively to the
cloud platform. It really can open a new world of supported applications and security to
your company that you may never actually investigate. Let's look at another option that
works with the On-Premises Gateway.
Azure Logic Apps
The interface for Azure Logic Apps reminds me of Power Automate. It has a similar
interface with deeper functionality than Power Automate, which helps you create robust
customized solutions. The process uses triggers that respond to events that happen in
the created business process, which could be a choice made in the logic of the app by the
user. By using the templates and recurrence triggers for more advanced scheduling, no
code is needed in most instances.
There are a few ways to interface with this product to create automation. You will have
to create a Logic Apps Custom Connector to use Logic Apps over an On-Premises
Gateway. Once the On-Premises Gateway has been created, it will be available in Azure
apps and Microsoft 365 apps such as Power Apps, Power Automate, and Power BI. Once
you have that resource available, just select it from the menu as your resource group.
Once you have selected the resource group, create your connection to the SQL server
hosting your data, or whatever connection your area is looking for, within your
on-premises environment.
This sums up the available options for creating reports and connecting to data from
on-premises resources. Again, you can do this using your company's own location, where
you have a data center, or through Azure, AWS, or another cloud service provider. There is no
end to the possibilities. It is all about what you want as an architecture to support your
users and the company's assets.
Summary
In this chapter, we have looked at the many changes that have been made to the BI and
application reporting structures in SharePoint 2019. Microsoft has moved in a new
direction and removed much of the BI functionality from its former integration into
SharePoint. Now, the services must be connected to our SharePoint 2019 farm from the
cloud or even from separate servers. This change also complicates your overall migration
strategy, as it must be planned for, adding a measure of complexity to the migration.
For instance, if you want to migrate SSRS reports from previous versions of SharePoint, you
would need to do this by utilizing the migration script referenced in this book. This script
is provided by Microsoft and is also linked and available on the Microsoft website. It is
best to inform your user community of the BI reporting changes that Microsoft has made
before migration, and then get your user community accustomed to the deprecated and
removed features and the changed functionality of some of the BI processes that they have
gotten used to. I am a firm believer that if this is planned for and laid out in the beginning
and surprises are minimized, you will greatly increase the satisfaction of your user group.
Microsoft's overall strategy of removing a lot of the BI integration from within SharePoint
is a good one from an enterprise standpoint. This perspective can be used as a great selling
point when dealing with prospective clients. Most enterprises utilize various applications
and if they are explained and demonstrated properly, the enterprise will welcome the
opportunity to have these various systems connected to a centralized business intelligence
reporting source. Highlight the positive aspect that these centralized business intelligence
data and reporting services will help consolidate data reporting within their organization
and could also help minimize the redundancy of these types of tools going forward.
In the next chapter, we will talk about the User Interface and Developer frameworks that
can be used within SharePoint 2019. Things have changed drastically, and we need to
make sure we explain how the modern and classic sites work, as well as how developers
can make changes to the sites. This is an important chapter since admins and users need
to understand the possibilities this brings to farms and where their skill sets must be
updated to support the new platform. The tools listed in the next chapter bring on a whole
new level of training and understanding as they are different from the legacy tools that
supported SharePoint in the past.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
5. Report Builder remains integrated with the latest version of SharePoint 2019
on-premises.
a) True
b) False
Further reading
Please visit the following links to learn more about Azure AD.
Hybrid identities and authentication:
• https://docs.microsoft.com/en-us/azure/active-directory/
hybrid/whatis-hybrid-identity
• https://docs.microsoft.com/en-us/azure/active-directory-
domain-services/tutorial-create-forest-trust
• https://docs.microsoft.com/en-us/azure/active-directory/
fundamentals/active-directory-access-create-new-tenant
More information about how to install the Hybrid Picker can be found at
https://docs.microsoft.com/en-us/sharepoint/hybrid/configure-
inbound-connectivity.
You can learn more about the Power Automate desktop offering here: https://flow.
microsoft.com/en-us/blog/jumpstart-your-business-with-power-
automates-new-desktop-rpa-solution/.
To create your first Azure Logic App, check out these links:
There are some features that SharePoint offers natively that we will also talk
about, as some of these features have been deprecated. Yammer has some integrations
with SharePoint 2019 as well, and there are also Microsoft mobile capabilities, native to
SharePoint on-premises and social collaboration, that we can walk through.
In this chapter, we will also look at the features of SharePoint 2019 and show how these
products can be bridged together. Although Teams is a key component in the cloud, there
are other features available that we can take advantage of to push the platform.
The following topics will be covered in this chapter:
Technical requirements
For you to understand this chapter and the knowledge shared, the following requirements
must be met:
You can find the code files present in this chapter on GitHub at https://github.
com/PacktPublishing/Implementing-Microsoft-SharePoint-2019.
All the social features you have seen in other versions of SharePoint, such as 2016 and
SharePoint Online, are available in SharePoint 2019. As we mentioned, where SharePoint
2019 enhances these features is with integration with other applications such as Yammer
that push content to SharePoint through web parts and other integrations. We will talk
more about this in this chapter but here is a short list of social features that are available in
SharePoint 2019:
• Community features
• Company feed
• Follow features
• Microblogging
• One-click sharing
• Personal sites
Developers can extend these features by building enhancements that make them do
more to support their users. With the use of APIs, and with Visual Studio 2012 or newer
as the development platform, developers can start building these enhancements; we will
talk more about this in Chapter 12, SharePoint Framework.
They can also use Power Apps, Power Automate, and other tools within Microsoft 365
to enhance the functionality of social features within SharePoint 2019 using hybrid
connectivity.
Now we can see how, with these new social applications, users can stay in one application
such as Microsoft Teams and do not have to hop around to find information. A bridge has
been created that pulls information from other apps, including SharePoint, into Microsoft
Teams, which makes the user experience much easier and saves time when users are
looking for information.
The user experience is further enhanced by Microsoft's mobile apps for SharePoint and
Microsoft Teams, which give users access to meetings and information from anywhere
on their mobile devices. Yammer also has a mobile app, and you could create your own
app using Power Apps or other development languages, implementing single sign-on to
protect access from the internet into the company tenant, with Intune and Azure hosting
the mobile connectivity.
We believe we will see more of this as some of the social networking tools get more
enhanced and bring more collaborative functionality to the table. It seems the goal of all
this is to be able to share anything with people internally or externally securely, which is
a great step in the right direction. Everyone seems to love these new features and cannot
wait to see what is yet to come.
Unfortunately, we do not have enough page space in this book to dig deeper and show
more ways that Yammer can be useful in on-premises environments and how easily it can
be integrated with SharePoint 2019. Do some research to find out how Yammer, along
with Microsoft Teams, can help your organization.
There is also another social tool that needs to be mentioned, and that is a mobile app
built and supported by Microsoft named Kaizala. Kaizala is a group communication and
work management tool used for secure mobile messaging. It provides a chat interface to
securely connect to colleagues, vendors, distributors, and other resources a person may
need in and out of your enterprise.
Kaizala gives you options to send invoices and use other Kaizala actions within the app.
You can schedule work, provide training materials, or even send attachments for review
by colleagues. Polls and surveys are also available, and these types of
communications can be sent to thousands of people. This app is an alternative to other
social media-type applications we can use within the Microsoft 365 cloud. In the case of
SharePoint 2019, you must be on a hybrid configuration to take advantage of Yammer,
Kaizala, and similar cloud apps; they are all supported through hybrid connectivity.
If you look at the other tools, Yammer and Kaizala, you will see that they both work for
large-scale communication, more like enterprise communication tools. Microsoft Teams
works better for projects and smaller teams of people within your organization, as you
will see in this section. Teams workspaces are created mostly to support projects or
departments within the organization. Because these are smaller groups of people,
responses tend to be quick, and you will mostly find yourself using Teams when working
on projects with these small groups.
When Microsoft Teams is rolled out in an organization, so is SharePoint Online,
technically, because behind every Microsoft Teams workspace is what we traditionally
know as a SharePoint team site. This is important to remember and drive home with your
users and management teams as they need to understand that with each team comes a
SharePoint site. You could end up with many sites that you know nothing about as an
administrator.
The other thing that admins need to be aware of is that Microsoft Teams only
authenticates using Microsoft Azure Active Directory (AD) authentication. So, when
using the product with an on-premises SharePoint farm, that farm also needs to use AD
in the cloud; the dependency for using Teams within your on-premises environment is
Azure AD.
508 Enterprise Social Networking and Collaboration
The reason why you want this integration is to get the presence feature working in your
on-premises environment seamlessly. This would require user profiles to import users
from AD in the cloud, and if you want to make an even tighter integration, use the hybrid
features within SharePoint 2019 to bring it all together, integrating OneDrive and other
great features.
While Microsoft Teams is an amazing interface that offers a lot of different collaboration
tools under a single moniker, it is not a replacement for SharePoint. I have seen some
confusion around this as management in certain organizations are solely focused on
reducing redundant applications and some may wrongly believe that if Microsoft Teams is
adopted, then Microsoft SharePoint disappears, yet that is simply not the case.
Use both collaboration tools for different types of collaborative work. SharePoint
provides a more advanced set of features, such as document sharing, departmental sites,
libraries and lists, and automated business processes on a large scale, creating
collaborative interactions around departmental technology.
Use Microsoft Teams for meetings, project calendars, messaging, and keeping files
associated with the project or team in the SharePoint site connected to that team. Used
this way, Teams is more project focused, though it also supports some organizational
areas, such as meeting technology.
Bridging these two applications together creates a full, powerful collaborative experience.
This is why you see more and more people flocking to the Microsoft cloud: the tools are
powerful and can make even a small company feel large, thanks to the technology
behind it.
Users can communicate through Microsoft Teams using the chat functions that Lync and
Skype historically provided. Businesses can also host meetings, both video calls and audio,
through the Microsoft Teams platform. This function was typically handled by Cisco
Webex functionality, GoToMeeting, and other virtual meeting platforms. Microsoft Teams
also allows users and groups of users to access, view, and edit SharePoint documents
right from the Teams interface. The Microsoft Teams platform can also be integrated with
Outlook so that your email messages can be viewed from this single interface. Many other
apps can be integrated as well, such as Salesforce, Power Apps, and so much more, which
brings a new meaning to collaboration.
As you can see, Microsoft Teams has very tight integration with SharePoint Online. When
dealing with files in Microsoft Teams, any changes made to a file are made directly within
the SharePoint document library. The file is not synchronized to SharePoint; it actually
lives in SharePoint. The user also has the option to open the document in SharePoint,
which takes them directly to the SharePoint Online team site that corresponds to the
team created in Microsoft Teams.
There is also the option to add a SharePoint page tab in your Teams channel. If a page
already exists within the SharePoint site, you can plug it in by adding it as a tab and view
the information without having to visit the SharePoint site.
We are all familiar with the failed newsfeed conversations feature launched in SharePoint
2013. When it was first launched, users played with it as a novelty, and those of us who
presented the features of SharePoint highlighted it as innovative to our customers.
However, adoption was very low, in our opinion and apparently in Microsoft's as well,
and this was due to the placement of the conversation and newsfeed option. It was placed
outside of the actual working content; you could see the conversations on a site, but this
placement made the conversations within the newsfeed very general. Now, conversations
can be placed alongside a specific document. I think that this is an amazing feature
because it allows collaborators on a document to have real-time conversations about that
document, thereby eliminating the need for emailing back and forth or Lync/Skype chat
sessions that may be interpreted out of context without having the document readily
available. The following is just a snapshot of some of the awesome features of Microsoft
Teams as it relates to SharePoint:
Understanding the differences between Microsoft Teams and SharePoint is critical because
we want to make sure that we use the correct technology for the right purpose. Please
make sure to review, document, and train users on how Microsoft Teams and SharePoint
differ, as users often find it confusing what they can and cannot do in each application.
This is key to rolling out the application across your enterprise; if it is not explained, you
can wind up with many SharePoint sites created by Teams that you know nothing about.
So, now you are asked to spin up Teams in your company's environment; what do you do?
Let's talk about the preparation and areas we need to be concerned about.
Governance
In this section, we will explore some of the governance features built into Microsoft
Teams. Due to the rapid pace of the Microsoft Teams rollouts in many organizations, the
governance of this platform became an afterthought. We highly recommend that if you
are in a position to roll out your Teams platform from the beginning and have time to
do this properly, you consider the governance strategy for this platform beforehand. If you
already have a governance strategy for your Microsoft 365 or SharePoint environment,
this plan can be augmented to fit your Teams environment.
Since Microsoft Teams is most likely new to a lot of admins, decisions must be made
about what type of identity model is supported in your organization, who will have what
type of administrative access, whether conditional access provisioning is needed, retention
and life cycle management, and a rollout communication strategy.
There are many areas to check but one of my pet peeves is not checking guest and external
user access. You want to know who has access to your company's user community within
the Microsoft Teams environment. The last thing you want is unwanted users contacting
your users or even users sharing files outside the organization that are deemed intellectual
property. There are other areas you need to make sure to check as well that concern how
the application is set up. These are listed as follows:
Make sure you do your due diligence on checking these governance areas. There are more
as well within the Teams user settings that may need to be looked into to make sure you
have complete control of what you want your users to have access to change.
When looking at storage, this could be for email, OneDrive, or SharePoint, where we
know that storage is key for documents and email. Applications that need storage may
come at a higher price based on how much content you plan to store per user. In some
cases, you may give some users a higher subscription and others a lower subscription,
depending on these two factors.
Microsoft Teams overview 513
If you feel that by application is the more appropriate way to provide governance
over subscriptions, then you would look at what applications are provided by which
subscription model. This would be the case when you look at, for example, Microsoft
365 Business Basic, which includes Teams, Exchange, OneDrive, and SharePoint, but
your users may need Microsoft Office as part of their subscription. In this case, they
would need to buy Microsoft 365 Business Standard, which includes the four applications
mentioned plus Microsoft Office apps at a higher price.
Please be aware of these pricing models as the subscriptions and pricing change; although
they have not changed at this level lately, you will find that other apps outside of the
subscription model, such as Stream, will cost you and those pricing models do change
sometimes. You must be aware of these changes so that you understand your monthly bill.
Third-party applications
Microsoft Teams also provides ways to integrate outside applications and pull in data from
those applications to present to a team within the Teams desktop. With those applications,
you want to make sure that data from those applications is governed, as well as knowing
what third-party apps are being used. Microsoft Teams gives you the option to control
what apps are integrated with your Teams rollout and what users can or cannot use within
the application.
Along with these areas, we must look at Teams access from a location standpoint, be it
internal (private) or external (public) users. Then, we must also look at the types of roles
we plan to include as part of the rollout of Microsoft Teams. Admins need these areas of
coverage to support the service:
There are also other types of roles that need to be assigned to support members of teams,
which we will talk about later in this chapter:
• Team creator
• Team owner
• Team member
• Guest
The life cycle follows a pattern, as stated, with a beginning, middle, and end to the
project's life. Let's look at these three important parts in detail:
• Beginning: The beginning of a team would be to create the team. This would
include several steps because you can also add an existing Microsoft group to a team
or many of them. You can also create a team from scratch or add an existing team
along with using APIs for Microsoft Graph teams to create teams programmatically.
These teams would be created based on global address book attributes. Within the
setup, we should also create channels and assign team members to those channels.
• Middle: The middle is used to describe the use of the Microsoft Teams product after
it is set up. This would be the management of the application based on the team.
Areas that need administration would be to update team members, update channels
as needed, manage guests using the Microsoft Teams mobile app, and a few other
areas of change and/or adoption with the product. This would also include users
being more active using the product, which we want so that they enhance their
knowledge of the product and build confidence with using the product successfully.
• End: After the use of a team has been completed, we must make decisions on how
we handle the team's content. You need to make sure to get confirmation from the
project leader or users before removing anything from the team. Closing these
ended projects or teams makes sure that users do not get access to old content that
is irrelevant to anything they are currently working on.
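For the programmatic creation mentioned in the Beginning stage, Microsoft Graph exposes a create-team endpoint based on a team template. The sketch below only builds the request shape (no call is made, and sending it would require an authenticated Graph client); the display name and description are hypothetical:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_create_team_request(display_name, description):
    # Shape of a Microsoft Graph 'create team' call using the standard template.
    # Payload only -- actually sending it requires an access token.
    return {
        "method": "POST",
        "url": f"{GRAPH_BASE}/teams",
        "body": {
            "template@odata.bind": f"{GRAPH_BASE}/teamsTemplates('standard')",
            "displayName": display_name,
            "description": description,
        },
    }

request = build_create_team_request("Project Falcon", "Falcon rollout workspace")
print(request["url"])
```

Scripting team creation this way, driven by directory attributes, is how large organizations keep team provisioning consistent instead of relying on ad hoc manual creation.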
You can delete teams that you do not need. All team deletions are soft deletes, which you,
as an admin, then have the option to reverse within 21 days. If the team is tied to a
Microsoft 365 group, the reverse option is available for 30 days.
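Because a deleted team is backed by a soft-deleted Microsoft 365 group, an admin can restore it through Microsoft Graph's deletedItems endpoint within the retention window. This sketch only constructs the request URL; the group ID is a placeholder:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def restore_deleted_group_url(group_id):
    # POSTing to this URL (with an authenticated Graph client) restores the
    # soft-deleted group -- and with it, the team and its SharePoint site.
    return f"{GRAPH_BASE}/directory/deletedItems/{group_id}/restore"

print(restore_deleted_group_url("00000000-0000-0000-0000-000000000000"))
```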
Retention policies can also be used to prohibit the deletion of teams. Policies can be added
to teams, or associated with SharePoint, so that information is retained for longer time
periods. Teams contain files and content that need to be examined by their members;
some teams may see value in only certain content and want only that content retained.
The same applies in SharePoint, where you may want content kept unchanged and in its
current state so that the information can be recalled if needed.
Some types of content that are captured by Microsoft Teams are as follows:
For all integrated apps, content is retained in the app itself, not in Microsoft Teams.
If you need that information, you should be able to either migrate it into SharePoint or
keep it in the current app and pull it from there when needed.
There are a host of new features in Microsoft Teams that you may want to look at for this
purpose, which are listed as follows:
Now that we have our governance vision, let's look at access roles and the types of access
we need to create.
Access
As we know, Microsoft Teams is a service that is only available in the cloud. However, this
does not mean that you must have a cloud-only identity to access Teams. All the identity
and access models allowed in Microsoft 365 are compatible with Teams. Even third-party
authentication providers such as Okta are available to integrate with Microsoft 365 and
your on-premises SharePoint 2019 environment for single sign-on.
However, if your identities reside on-premises, for example, in Windows AD, they must
be configured within Azure AD. Azure AD was discussed in Chapter 10, SharePoint
Advanced Reporting and Features; this chapter also briefly covers AD services, identity
security, and application access management.
The following are the types of identity models supported by Teams:
• Cloud identity: Cloud identity is when the user is created and managed within
Microsoft 365. Azure AD then stores this identity and manages password
verification and security, giving access to various applications – in this instance,
Microsoft Teams.
• Synchronized identity: The synchronized identity model is when the identity is
managed and stored on-premises and accounts and passwords are synchronized
to the cloud. This synchronization is done through Azure AD. This is the preferred
method of most organizations for security reasons.
• Federated identity: With federated identity, the identities are still synchronized;
however, the password is verified within the on-premises user store and/or through
an online provider such as Active Directory Federation Services (ADFS).
Most organizations go with the synchronized identity model for security reasons as stated.
Organizations are becoming more open to cloud offerings; however, many prefer, and
some are required by policy, to have their access identities remain on-premises and be
managed by an identity access management team within the organization that is separate
from the IT organization or the business users. Many organizations are even audited
by outside governmental agencies to ensure that access to systems and applications is
managed through identities on-premises and that segregation of duties is enforced.
By using Azure AD, these organizations can leverage powerful collaborative tools such as
Microsoft Teams. Once Azure AD is configured to sync with on-premises identities,
Microsoft Teams can use those identities to assign licenses, granting users access to Teams
features such as Exchange mailboxes and phone system licensing.
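License assignment itself can also be automated once identities are synced. Microsoft Graph's assignLicense action takes lists of SKUs to add and remove; the sketch below builds that payload (the SKU ID is a placeholder, and no call is made):

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_assign_license_request(user_principal_name, sku_id):
    # Shape of the Microsoft Graph assignLicense action for a single user.
    # removeLicenses is required by the action even when empty.
    return {
        "method": "POST",
        "url": f"{GRAPH_BASE}/users/{user_principal_name}/assignLicense",
        "body": {
            "addLicenses": [{"skuId": sku_id}],
            "removeLicenses": [],
        },
    }

req = build_assign_license_request(
    "user@contoso.com", "11111111-1111-1111-1111-111111111111"
)
print(req["url"])
```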
Teams uses the identities held in Azure AD. Groups can be created utilizing these Azure
identities and, from a governance perspective, different groups can be given access to
different features of Microsoft Teams in line with the function of the users within
the group.
Structuring these groups properly can create a one-to-one group-to-team relationship,
though in a lot of cases this will not happen. We want to avoid a lot of clean-up work
within Azure AD, so the closer you can relate groups to teams, the better. As you will see,
we can set retention on groups and archive information within Microsoft Teams, so we
could end up with many unused groups if we are not careful to deprovision these teams
and groups consistently.
• Teams service administrator: This role has access to manage everything within the
Microsoft Teams admin center. Along with managing the Teams service, it also can
manage and create Microsoft 365 groups.
• Teams communication administrator: This role has access to manage all the
calling features of Microsoft Teams, as well as the meetings feature within the
service.
• Teams communication support engineer: This role is tasked with troubleshooting
communications issues within Teams using the advanced troubleshooting tools
available. This troubleshooting is done through the Microsoft Teams admin center.
• Teams communication support specialist: This role has access to troubleshoot
communication issues using more basic tools. The scope of the troubleshooting
tools and information available for the Teams communication support specialists
is the individual affected user, unlike the Teams communication support engineer
who has access to view data from all of the Teams communication users.
• Teams device administrator: This role manages devices that utilize the Teams
service.
• Global administrator: This role has broad access to administer the entire
Microsoft 365 suite, including Microsoft Teams.
Setting up these high-level administrator identities is key to supporting Microsoft Teams.
Make sure to look at these areas specifically as you get started with Microsoft Teams so
that you make the right decisions about who has what admin access.
To learn more about Microsoft Teams administration, please follow this link to try your
hand at administration:
https://docs.microsoft.com/en-us/microsoftteams/manage-teams-
in-modern-portal
• Team owner: The team owner is the person who created the team.
• Team member: A team member is anyone inside the organization who is invited
by the team owner to join.
• Guest: Anyone outside of the organization that the team owner invites to join the
team. This option may or may not be allowed within an organization and can be
suppressed.
The following chart represents the tasks that Teams users can perform based on their
role in a particular team. Organizations may limit the out-of-the-box capabilities of each
role depending on governance specifications. Please consider whether any limitations are
required before deployment:
Note
Team admins can control what guests within the team can do. Team owners
can control some of what members and guests can do if certain functionality is
not limited by team admins.
For more information on Microsoft Teams guest access, check out the following link:
https://docs.microsoft.com/en-us/MicrosoftTeams/guest-access
Deployment
Microsoft recommends deploying Teams in stages and not rolling out all the Teams
features at once. We wholeheartedly agree and can speak from experience here as we
participated in a very rapid rollout of Teams due to the pandemic. The vast number of
features and lack of organizational communication (we will go into the need for a firm
communication plan in the next section) left users befuddled and overwhelmed and
hindered adoption due to confusion about what in fact Microsoft Teams is and what it
does. As in all other rollouts, migrations, and upgrades, clear communication and training
is key to satisfaction and adoption.
If you can deploy Teams in stages, pick the set of features you want to roll out first. If
you already have other technologies that are performing these functions, you can drive
adoption by letting the user community know that the older technologies will be phased
out and suggest that they begin their transition as soon as possible to Teams for the
features that will be replaced. For instance, if you are set to begin your Teams rollout
with the chat and meeting features, which is recommended if you plan on making
Teams the organization standard for these features, let the user community know when
their previous chat and meeting technology, such as Skype or Webex, will no longer be
accessible to them.
Training is also an immensely important step in your rollout. Training is made much
easier when you focus it on a small set of features that are newly available versus expecting
the user community to take Teams training on all the features that are rolled out at once.
Network considerations
Microsoft Teams has many features, including document collaboration, Skype phone
services, chat messaging, virtual meetings, and more; each of these components
requires network and bandwidth considerations. We will not do a deep dive on network
preparation for Microsoft Teams in this book; however, there are some requirements that
you must consider:
• Make sure that your organization's network is already optimized for Microsoft 365.
If it is not, you must follow the prerequisite steps from Microsoft's site to make sure
to prepare the network for this optimization.
• At a minimum, all locations where Microsoft Teams will be used must have
internet access, of course. For all locations that will access Teams, the following
ports must be opened and the following IP ranges allowed:
Ports: UDP ports 3478 through 3481
IP address ranges: 13.107.64.0/18, 52.112.0.0/14, and 52.120.0.0/14
• SharePoint Online and Exchange Online must be deployed within your
organization.
• Once Microsoft Teams is deployed, please test the following for network
optimization as further optimization may be required:
• Utilize the call quality dashboard to understand the quality of calls and meetings
within Teams. This will help to identify issues and plan remediation.
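The port and IP requirements above are easy to encode and sanity-check with Python's standard ipaddress module. The helper name below is our own, not a Microsoft tool:

```python
import ipaddress

# UDP ports and IP ranges published for Teams media traffic (from the list above)
TEAMS_UDP_PORTS = range(3478, 3482)          # 3478-3481 inclusive
TEAMS_IP_RANGES = [
    ipaddress.ip_network("13.107.64.0/18"),
    ipaddress.ip_network("52.112.0.0/14"),
    ipaddress.ip_network("52.120.0.0/14"),
]

def is_teams_endpoint(ip):
    """Check whether an IP address falls inside a published Teams media range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in TEAMS_IP_RANGES)

print(list(TEAMS_UDP_PORTS))            # [3478, 3479, 3480, 3481]
print(is_teams_endpoint("52.112.0.1"))  # True
```

A quick check like this against firewall logs can tell you whether blocked media traffic was actually destined for Teams before you start deeper troubleshooting.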
Communication
Having a firm communication plan is extremely important to the success of your
Microsoft Teams deployment. If you have chosen to roll out only certain features, make
sure that this is highlighted in the communication plan and the communications being
sent to users. The user community will undoubtedly begin doing research on their own
on the features of Microsoft Teams and may be confused as to why certain features are
lacking in the initial rollout.
Along with the communication, if there is a long-term strategy within the organization
to use Teams as their preferred method of collaboration and communication, make sure
this is emphasized in the plan. This emphasis can help drive adoption as users are often
hesitant to move to newer technologies when their older counterparts are still in use
within the organization. Let the user community know that the older technologies will be
phased out.
Also, within your communication, reference or link any training sessions that are available
to the user base. I believe it is important to have live overviews and step-by-step training
sessions from the beginning. These live sessions are especially important if you are doing
a phased rollout or rolling out to a smaller user base at first, because they allow you to
gather the questions users have and surface any confusion concerning Microsoft Teams.
Gathering this information at the beginning will allow you to tailor your environment,
communication, and training going forward.
Prerequisite steps
Before you embark on a Microsoft Teams rollout, there are a few steps you need to take
to review all your environmental dependencies. Just as we discussed governance for
SharePoint, we need to do the same for Microsoft Teams, and it should be taken even
more seriously, as Teams opens up large areas of concern and exposure to unwanted
incidents.
Microsoft Teams overview 523
When looking at these prerequisite steps, we are really going back to our section on
governance. Here, though, we will explain more about the high-level governance that
needs to be in place to make sure the service is covered and vetted by all stakeholders.
The areas that we want to concentrate on are the following:
• Project stakeholders
• Project scope
• Coexistence and interoperability
• The journey to success
There are also many things that need to be in place to get started, such as defined
workloads, a configured domain, Azure AD, Exchange configuration, Microsoft 365
Groups interaction, and other areas, including a public switched telephone network
if that is part of the configuration you are implementing. Please make sure to review
this link before you start your journey:
https://docs.microsoft.com/en-us/microsoftteams/upgrade-plan-
journey-prerequisites
When opening Microsoft Teams, you will see the following sign-on page come up on your
computer. The same page renders on your mobile app if using the app on a mobile device:
Creating a team
When Microsoft Teams is rolled out within an organization, users will have access to
any organizational team spaces they are automatically added to through membership
of an enterprise AD group that is synced to Azure AD. However, there is also an
option to create your own team within Microsoft Teams to collaborate with a specific
community of users in the organization. Follow these steps to create a new team:
1. Log in to Teams.
2. In the bottom-left corner, click Join or create a team:
For this example, I will create a team from a template. Select Manage a Project:
• Private: You will want to choose the Private option if you want to keep this group
to a limited audience. With this option, only those with permission to join will be
able to access this team and the content native to it.
• Public: The Public group option allows anyone in the organization to join this
group.
• Org-wide: The Org-wide option automatically gives everyone within the
organization access to this group.
A note on creating an org-wide team: org-wide teams can only be created by a
global administrator. This type of team is synchronized with the organization's AD
and is kept up to date as people join and leave the organization. Microsoft imposes
limitations on this option; only organizations with fewer than 5,000 users can create
an org-wide team, and this specific type of team is limited to five per tenant.
5. Name your new team:
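The Private/Public choice above maps to a visibility setting when teams are created programmatically through Microsoft Graph. As a hedged sketch (the helper function is my own and the field names should be verified against the Graph documentation before use), the request body for a standard team might be assembled like this:

```python
import json

def build_team_payload(name: str, description: str,
                       visibility: str = "private") -> dict:
    """Assemble a Microsoft Graph 'create team' request body.

    visibility corresponds to the Private/Public choice in the dialog above;
    org-wide teams are created through a different mechanism.
    """
    if visibility not in ("private", "public"):
        raise ValueError("visibility must be 'private' or 'public'")
    return {
        "template@odata.bind":
            "https://graph.microsoft.com/v1.0/teamsTemplates('standard')",
        "displayName": name,
        "description": description,
        "visibility": visibility,
    }

payload = build_team_payload("Project Falcon", "Tracking the Falcon rollout")
print(json.dumps(payload, indent=2))
```

This body would be POSTed to the `/teams` endpoint with an authenticated Graph client; the payload builder alone is shown here so the shape can be inspected and tested without credentials.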
1. Navigate to the Microsoft Teams site or desktop app and sign in:
2. Select the ellipses (…) to see the list of more options, then select Add channel:
4. Check on the left side of your workspace and you will see the newly created channel:
3. You will now see the SharePoint Online library behind Microsoft Teams:
Figure 11.20 – Start a conversation on the file with others in your team
4. See your message displayed in the pane:
5. Anyone who is a member of your team has access to view the document and the
conversations related to the document. If a team member misses the real-time
conversation on the document, they will receive a notification with the document
linked and the conversation displayed:
You can see on the Add a tab page that there are several default apps that can be added
to your Teams channel. Some of these are the standard Microsoft applications, such as
Word, PowerPoint, OneNote, and Excel. However, do take special note of the Power
BI app, which can be added as a tab, as well as the SharePoint Document Library app.
There is a SharePoint Pages app that will allow you to add a SharePoint page from your
SharePoint Online environment that is associated with this team's site. You can also
select the SharePoint app, which gives you a bit more flexibility in connecting to content
outside of the site that simply exists within your Teams channel. In addition to the
out-of-the-box apps that are displayed, there is also the option for a developer to create
custom applications within your organization, and based on access, those applications
will be displayed here as well and can be chosen as a tab.
Out of the box, team owners can create private channels at will. What administrators must
understand is that private channels are their own SharePoint site collections and should
be limited in number. You do not want team owners creating a lot of private channels that
correlate to SharePoint site collections. The team administrator can limit the ability of
team owners to create private channels; this option should be considered. Each team can
have up to 100 team owners; if they are all creating private channels, this will become a
problem.
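Because each private channel becomes its own site collection, it is worth auditing their numbers periodically. The sketch below is illustrative only: the inventory data and threshold are hypothetical, and in practice the per-team channel counts would come from the Graph API or a Teams admin center export:

```python
# Hypothetical inventory: team name -> number of private channels.
# In a real audit, this would come from the Graph API or an admin export.
inventory = {
    "Finance": 12,
    "HR": 2,
    "Engineering": 35,
}

PRIVATE_CHANNEL_THRESHOLD = 10  # governance limit chosen by the admin team

def flag_teams(channels_by_team: dict, threshold: int) -> list:
    """Return team names whose private-channel count exceeds the threshold."""
    return sorted(t for t, n in channels_by_team.items() if n > threshold)

print(flag_teams(inventory, PRIVATE_CHANNEL_THRESHOLD))  # ['Engineering', 'Finance']
```

Flagged teams can then be reviewed with their owners before private-channel sprawl turns into site-collection sprawl.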
Each team and each channel within a team should be created for a purpose, a goal, or a
project, and only those that are necessary to contribute to that purpose should be added to
that team. Channels and apps and connections within channels should be added solely for
the purpose of reaching the goal of that team.
Now that we have looked at some of the functional areas of Microsoft Teams, let's take
a look at one of the meeting features within Teams called live meetings.
Our event will look like this once we set up all the hardware components:
Since this is basically a small concert setting, we are going to start with the Behringer
X32 console as the first area of focus. The Behringer X32 is an intermediate digital
sound console as far as cost and notoriety in the music world go. It's not bad for
the price and will give you a great sound with some amazing features to go along with
it since the board is digital. The mixer supports many functions you will not see in
analog mixers:
This mixer also comes with a network connection for Cat 5 that enables it to be managed
on the network from a remote console such as an iPad. Mixing the sound on an iPad can
be very useful when there is no one available to manage the sound, or when the sound
person wants to mix from outside the room, even from home. For a streamed event, this
is often the best way to do the mix, because your online audience will hear what comes
out of the stream, not what you hear through the speakers in the auditorium, so some of
the mixing checks must happen outside the auditorium to make sure things are mixed well.
There is also a USB connection that takes the sound input and pushes it out digitally
to a laptop. In most cases, mixers will need a driver installed on the laptop to connect
the hardware to the computer. Once you have that driver installed, the X32 will appear
as a sound source in the Microsoft Teams device settings menu, which will then project
that sound through your Teams event.
The sound needs to be mixed well and you want to make sure you verify that the mix
you have created, called a scene, works for the outputs in the auditorium and for those
streaming online. Using two separate scenes in the console will provide that capability and
give you a way to change settings on the different mixes separately as needed.
This also brings up some other X32 capabilities, for example, you can set mixes for
different outputs in the mixer. So, if you have a house mix, which could be the main mix
on outputs 1 and 2, this can be set in the mixer and saved as a scene as well. So, when you
select that scene or channel configuration, the mix comes back to the board, where you
can change it and then save it again if needed. If you have a mix for the USB out and the
sound is just not working or you added an instrument, you can also go back and edit the
channels you have saved to be received by that output. Overall, the board gives you so
much control over everything, but if you come from an analog background like me, then
these new mixers will take some getting used to.
If you are having trouble with your X32 console, look at updating the firmware. Old
firmware can cause freezes on the board and other odd power issues where the board
resets with no warning. Upgrading the firmware will also provide the latest menus and
functionality for the board and fix many hardware issues you may encounter. We suggest
doing this as soon as you get your board, so you program your settings on a good
foundation, and download the compatible Windows drivers for that firmware as well.
One thing to remember is that the board settings are erased when the firmware is
upgraded, so if you have already started adding scenes, save them before you upgrade.
You should upgrade the firmware no matter what soundboard you buy!
546 Enterprise Social Networking and Collaboration
Headphone amps are also useful for stage sound instead of monitors. They help the
musicians and others who are part of the event hear the music clearly, and a dedicated
microphone used as an instruction feed for stage coordinators can also help in the
production of the event. As a musician, I have used these many times; they help me hear
the music more clearly, along with instructions from a bandleader. A headphone amp
requires a separate feed from the mixer, taken from the analog or digital snake, where
instead of using an input we use an output to send the feed to the headphone amp.
Separate mixes can be sent, or you can use one mix to feed all headphones.
The digital snake is a lifesaver in terms of cabling and getting a true digital sound from
the players, singers, or MCs performing at the event. The snake sends a signal from a
digital input/output box on the stage, with ¼-inch and XLR inputs and outputs, back
to the board so that the sounds from the instruments and vocals can be captured and
mixed on the console. A digital snake costs more than a traditional snake but provides
better sound quality. Avoiding that huge multicore cable from your console to your stage
is also a saving, as digital snakes can use Cat 5 or 6, depending on your mixing console,
or a special cable. The snake also helps feed the monitors on stage if needed, so that
everyone can hear themselves if a headphone amp is not the solution.
Wireless mics are what you see most of the time at events, but there are still people who
love cabled mics as well. The main thing to watch with wireless microphones is channel
separation: either buy different brands of wireless mics or make sure to upgrade the
firmware on the wireless microphones' base stations. In most cases, the newest models
provide channel separation even when several microphones of the same brand are on
stage; if you do not have the latest models, updating the firmware may be your answer.
It is important that the microphones do not stay on the same channel, because when they
clash, you lose the sound from the microphone, which means the audience will not hear
you and could miss something important you said.
Your video team should investigate what software will work with Microsoft Teams. In
some cases, Teams is all you need. When running a live event, the console changes from
the normal Microsoft Teams screen, where you see other people, to the live interface,
where you see a staging area and a live area for presenting the staged content. This is
good in some cases, but there is other software out there, such as Wirecast, OBS, and
others, that gives you better functionality for handling many types of feeds. Although
Teams does handle some feeds well, others are currently not handled very well. I tested
many different types of HDMI adapters to see which one could help with multiple
cameras and concluded that Teams is still in need of some upgrades.
NDI was not available when I did this installation, but I believe it will soon be available
to use with Teams, which will make Teams more compatible with the other video
management software out there. NDI is a video-over-network protocol that streams
video from different network resources so it can be captured and managed by a software
application on the producer's machine. It makes wireless cameras easy to connect to and
brings those feeds back to the video console. You can also connect wireless cameras over
IP by connecting them to the laptop and then pulling them into your software.
NDI also creates an environment where, if we had a second laptop with something we
wanted to stream, we could use it to bring that video back to the main producer console.
This can all be done wirelessly, or you can use Cat 5/Cat 6 to connect the hardware. That
said, I still like the Cat 5 or 6 option, because when a network protocol fails, it makes
troubleshooting tough, especially when you are live streaming; knowing a cable could be
the problem makes for a quick switch. Network cables also provide reliable streaming,
so you do not lose the video stream during a production.
Cameras come in so many shapes, sizes, and tech varieties. The main camera used for this
setup was an old Sony HVR-HD10000U camera that had HDMI and then I purchased
two Mevo wireless cameras. The Sony camera was used as the main camera and was
connected using HDMI. This camera did most of the main video captures for the event.
It was manned by a cameraman and used to pan and follow the main person on stage.
The wireless cameras were focused on other parts of the stage but with the Mevo, you can
also do some remote control to zoom in and out. This became very effective for the shots
during the event.
The Mevo is unique in that it will connect to your wireless network and make itself
available for streaming. It can also connect directly to some applications, such as
Facebook, which it was built to be used with. The camera is small but captures great
video, and you can buy a stand or mounting hardware to put it almost anywhere. It is
not a high-end camera with a lot of tech bells and whistles, but for the price, it is great,
small, and compact:
All video monitors were installed using TV risers and were 70 inches to present a good
view from the back of the auditorium. We connected them all through HDMI to a switch
and then back to an HDMI switcher in the booth. To get the monitors connected, we used
HDMI boosters that used Cat 6 to connect a TX and RX box on each end. Those then
convert to HDMI on the other end of the hardware, which is then connected to the TVs.
We ran our cables over 100 feet and the feed was solid with no delay or lack of clarity on
the screen.
So, let's now look at how to create a live event.
1. To create a live meeting, use your calendar in Microsoft Teams to create a new event
from the New meeting menu:
4. After clicking Next, we will get a menu to choose permissions for the event:
6. Once you click Schedule, you will see the event details screen. This screen shows
you the link for the attendees and gives you options to copy it so that you can share
it. You can also join the event or create a chat. You can cancel the meeting if you are
having trouble or need to reschedule:
10. If you need to display two content sources, such as video and content from another
app, Microsoft Teams gives you an interface to do that within the live event UI:
13. Click the gear button in the top-right menu to see the device settings:
15. To handle the auditorium, we can use the in-room audience feature to get reactions,
but the more interesting feature, configured on your X32 soundboard, is a mix-minus.
With a mix-minus, you create a routing on the board so that people who call in can ask
a question, which is then projected over the house speakers so that everyone can hear it.
The moderator or MC can then answer questions from those who called in or joined via
the Teams app. This is needed when you want to have interactions with your audience.
16. By clicking the first choice on the settings menu, you will find the health and
performance menu:
18. The following menu is for taking notes about the event. This is for the producers
and presenters:
As you can see, Microsoft Teams provides a great platform for hosting live events and
meetings, supporting up to 10,000 attendees depending on your subscription. I really
believe it is only going to get better, and I am digging deeper into this because of my
music background. I wanted to share, at a high level, what I did to make this work for my
customer, as it may shed light on the possibilities for your company if you are looking to
provide auditorium-sized events with outside callers. Teams is a great tool and there is so
much more you can do with it.
Summary
Microsoft Teams seemingly came out of nowhere as the hero of enterprise social
networking and collaboration. During the trying times that people and organizations
faced due to the COVID-19 pandemic, many turned to Microsoft Teams to solve their
social connectivity issues as workforces and education were driven into remote working
situations that many were facing for the first time. This left many admins scrambling
to get up to speed with the Microsoft Teams application and the opportunities for
extensibility within this platform.
Microsoft Teams offers phone communication, virtual meeting capabilities, Outlook
integration, document management, and chat and conversation features all in one place.
As amazing as Teams is from a business perspective, from an administrative perspective
it is important to understand that Microsoft Teams is a layer on top of SharePoint and not
a replacement for SharePoint. It is very important that, as an administrator, you drive this
point home to management and the decision-makers for your SharePoint project. It is
also very important to remember that a solid governance plan should be created prior to
rolling out Teams in your organization because, although having so many features in one
location is great, Microsoft Teams can grow unwieldy without proper governance and use
of best practices.
In our next chapter, on SharePoint Framework, we will cover how this all works from
a developer standpoint. Developers have a vast array of tools to use to create
customizations, and APIs are available to alter any Microsoft application offering.
Developers also have access to all SharePoint 2019 sites and content, as well as Microsoft
365 sites and content using data gateways, which brings this platform to another level.
This is powerful and really takes collaboration to the next level, especially with Microsoft
Teams, as pulling an app into Teams is very easy. The ideas for collaboration at this point
in time are unlimited. Why leave on-premises? You may ask yourself whether a hybrid
configuration is your best bet after reading this book.
Questions
You can find the answers on GitHub under Assessments at https://github.com/
PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/
master/Assessments.docx
6. Which of the following can be extended using apps, bots, and custom connectors?
a) User permissions
b) Channels
c) Azure AD
d) None of the above
7. In Microsoft Teams, live events can be created from which menu? Choose the best
answer.
a) New meeting menu
b) Privacy settings menu
c) Add an app menu
d) None of the above
Further reading
Please check out the following helpful links to further explore Microsoft Teams on
your own:
• https://docs.microsoft.com/en-us/microsoftteams/how-to-
roll-out-teams
• https://techcommunity.microsoft.com/t5/microsoft-
sharepoint-blog/sharepoint-and-teams-better-together/
ba-p/189593
• https://support.microsoft.com/en-us/office/create-a-
channel-in-teams-fda0b75e-5b90-4fb8-8857-7e102b014525
12
SharePoint
Framework
The method of SharePoint development has gone through many iterations since
SharePoint 2007. In SharePoint 2007, farm solutions were all the rage. Developers had free
rein to create what they wanted – the custom solutions they needed, with full integration
with services running on the farm. However, this did not come without risk, because farm
solutions could in fact bring the entire farm to a halt. Next, in SharePoint 2010, we saw
the rise of the sandboxed solution, which still allowed developers to create customized
solutions but limited the scope of the development and its impact to the site collection.
Now, of course, we have SharePoint's client-side development model, which includes the
add-in model and script injection, and the SharePoint Framework (SPFx), the newest of
all the SharePoint development models.
Although we have seen some very necessary changes throughout the versions of
SharePoint over the years, there are some updates within SharePoint 2019 that put this
platform on another level. As developers, there are tools and concepts we need to know
to understand the framework in the platform before we can go on to develop. In Chapter 10,
SharePoint Advanced Reporting and Features, we saw the no-code developer tools available
within the platform locally, most of which were accessed in on-premises environments
through gateways using hybrid configurations.
In this chapter, we will look at hardcore development and how it has changed for coders.
We will take a quick look at the things you need to know before you get started with
SharePoint 2019 development. The topics covered in this chapter could fill a book in
themselves, but we wanted to make sure to cover them, as administrators need to know
these areas, and if you are an aspiring developer, this may give you some information on
how to get started on your quest. It is a great time to start because things have changed.
The following topics will be covered in this chapter:
• Developer essentials
• Developer tools and languages
• Developer best practices
• SPFx
Technical requirements
To understand this chapter clearly, you must meet the following requirements:
You can find the code files present in this chapter on GitHub at https://github.
com/PacktPublishing/Implementing-Microsoft-SharePoint-2019.
Developer essentials
Since SharePoint 2019 is a big change from all other versions of SharePoint, there are
things you need to know to move forward as a developer. As stated, we have seen so many
changes over the years, especially between 2007, 2010, and 2013. These versions brought
big functionality changes, which then brought development changes and changes to
out-of-the-box functionality.
The same goes for this version; as you will see in this chapter, SharePoint 2019 brings
the cloud SharePoint Online experience to an on-premises environment. Although it is
not patched and updated to the standard of the servers used in the cloud, we do see most
features available for developers. Admins, however, do not get the same configuration
features that exist in the servers supporting SharePoint Online, and these features will
probably never surface in an on-premises build. If they did, they would be a great
addition for environments that need that administrative support.
The good thing, though, is that developers get almost everything SharePoint Online has
to offer in SharePoint 2019. Again, things have changed, so the things you need to know
and the skills you need to have add to the learning curve. If you have not been keeping up,
this can be detrimental to your job search, as most SharePoint on-premises work is slowly
fading. Do not make the mistake of settling into a SharePoint on-premises development
job and not aspiring to learn the new tools.
As developers, we need to have a certain skill set, especially when developing on the
SharePoint platform as this is a niche environment. There are certain things you need to
know, such as how to use and understand out-of-the-box features, lists, libraries, scripting,
design, and other tools that all together will make you a great developer. Business
knowledge is a big plus as your background can help you understand requirements when
developing solutions.
One thing I have noticed over the years is that many developers do not understand
out-of-the-box functionality. I have seen developers build functionality that already
exists out of the box in a document library. This is not a great way to develop; it is best
to use those out-of-the-box features and code around them using the API. Even some of
the best developers you meet may not fit the skill set needed for SharePoint because of
this need to know the product itself.
If you are new to SharePoint, focus on the functionality within the product first. Make
sure to look at all the settings within sites, lists, and libraries. Look into features such
as document sets, manage copy, versioning, and all the features available that support
SharePoint to make sure you do not recreate something that is already available.
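One practical way to confirm what is already available before coding a replacement is to query a list's settings through SharePoint's REST API. The following sketch only assembles the request URL; the site URL and list title are placeholders, and actually sending the request would require authentication appropriate to your farm or tenant:

```python
from urllib.parse import quote

def versioning_settings_url(site_url: str, list_title: str) -> str:
    """Build the REST endpoint that exposes a list's versioning settings."""
    base = site_url.rstrip("/")
    title = quote(list_title, safe="")  # URL-encode spaces and special characters
    return (f"{base}/_api/web/lists/getbytitle('{title}')"
            "?$select=EnableVersioning,EnableMinorVersions,MajorVersionLimit")

url = versioning_settings_url("https://intranet.contoso.com/sites/docs", "Documents")
print(url)
```

A quick GET against such an endpoint will tell you whether versioning is already switched on before you spend time building a custom version-tracking solution.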
The important thing to know is the functional differences between on-premises and
SharePoint Online, which are listed as follows:
As you can see, there is a lot here. SharePoint is vast and has a lot of twists and turns. You
must really dive in to be a developer on this platform. It takes skill and years of learning
on projects to really be a developer on this platform. With this knowledge of just the
out-of-the-box features, you will also need to know how to code.
Let's look at the coding languages that support SharePoint:
• .NET: The first foundation of the product is .NET, which is used with on-premises
environments when you want to develop custom web parts. This platform is very
powerful and supports a wide variety of features and functionality, so it must be
understood to program in SharePoint. .NET is a technology that supports Windows
apps and web services. It provides a way to build and run those apps and services
while providing a consistent, object-oriented programming environment. The code
is stored locally on the server and executed locally, or it can be executed remotely.
The latest version, .NET 5.0 or greater, is available for new development, and .NET
Framework continues to be serviced with monthly security and reliability bug fixes.
There are no plans to remove .NET Framework from the Microsoft platform.
• C#: Our next language is C#, which also gives you the basis to build SharePoint
solutions. It is a modern, object-oriented, and type-safe programming language.
You may find similarities with other languages, such as C, C++, Java, and JavaScript,
as the roots of these languages are very similar. Using C# and .NET together brings
the largest server-side library for building the most robust solutions. For more
information, refer to the following links:
Now, let's turn our attention to some of the helpful tools used in SharePoint development.
Helpful tools
Working with SharePoint, especially when in a development environment, you will
need to know some administration basics. Understanding the tools and the different
environments where the tools are supported is a big plus. Please make sure to learn more
about Azure and Microsoft 365 as you could be working in a hybrid environment where
the following tools will be useful:
• PnP: The PnP module, provided by the Patterns and Practices (PnP) community
and managed by Microsoft affiliates, helps developers and administrators. It is
available for many frameworks; there are libraries for SPFx, CSOM, C# CSOM for
SharePoint Online, and PnP PowerShell. PnP also gives you a way to provision
template features on any site with a predefined look and feel.
Link to resources: https://docs.microsoft.com/en-us/powershell/
module/sharepoint-pnp/?view=sharepoint-ps
• Office 365 CLI: The CLI for Microsoft 365 can help you manage your Microsoft
365 tenant and SPFx projects from any platform; Windows, macOS, and Linux are
supported. Using Bash, Cmder, or PowerShell with the CLI for Microsoft 365, you
can manage various configuration settings of Microsoft 365. This tool also helps you
manage SPFx and build automation scripts. The tool is provided by the PnP
community in collaboration with Microsoft.
Link to resources: https://pnp.github.io/office365-cli/
CLI blog: https://developer.microsoft.com/en-us/office/blogs/
new-version-of-office-365-cli-040/
• Azure CLI: The Azure CLI can be used across many Azure services and is provided
directly by Microsoft, which allows you to manage your Azure resources with an
emphasis on automation. The Bash scripting and CLI help you get working fast with
Azure. The Azure CLI offers the capability to load extensions provided by Microsoft.
Extensions provide access to experimental commands and give you the ability to
write your own CLI interfaces as well. It is available to install on Windows, macOS,
and Linux environments.
Link to resources: https://docs.microsoft.com/en-us/cli/
azure/?view=azure-cli-latest
Get started: https://docs.microsoft.com/en-us/cli/azure/
get-started-with-azure-cli
List of managed services: https://docs.microsoft.com/en-us/cli/
azure/azure-services-the-azure-cli-can-manage
• Azure Functions: This is a powerful way to develop ad hoc functions with
serverless compute technology that accelerates and simplifies application
development. Azure Functions can be invoked from Power Automate to perform
operations, and it supports flexible scaling based on your workload volume.
Functions can be written in C# and PowerShell, among other languages, and you
can use Visual Studio and Visual Studio Code on your local machine, both of which
are fully integrated into the Azure platform.
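To get a feel for the programming model, the core of an HTTP-triggered function is just a request-in, response-out handler. The sketch below keeps the logic as a plain function so it can be unit-tested locally; in a real Azure Function, equivalent logic would sit inside a handler built on the `azure.functions` package, and the names here are purely illustrative:

```python
def handle_request(params: dict) -> tuple:
    """Core logic of a hypothetical HTTP-triggered function.

    Returns an (http_status, body) pair, mirroring the shape a real
    Azure Function would hand back as its HTTP response.
    """
    name = params.get("name")
    if not name:
        return 400, "Please pass a 'name' on the query string"
    return 200, f"Hello, {name}"

print(handle_request({"name": "SharePoint"}))  # (200, 'Hello, SharePoint')
print(handle_request({}))
```

Keeping the business logic in a plain function like this makes it easy to test before wiring it into the Functions runtime or calling it from Power Automate.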
• Full stack engineer: An engineer who can handle both frontend and backend work
where they can create a fully functional web application.
• Frontend engineer: User interface developer, which includes visual elements such
as layouts and aesthetics.
• Backend engineer: Uses APIs to integrate data systems, caches, and email systems
and takes on underlying performance and logic in applications.
• Software engineer (TEST): Validates the quality of the application using automated
tests, tools, and other methods to validate the product.
• DevOps engineer: Builds, deploys, integrates, and administers application
infrastructure, database systems, and servers.
• Security engineer: Specializes in creating systems, methods, and procedures that
test the security of a software system, performing penetration testing to discover
vulnerabilities and then fixing the flaws found.
Finding an eagerness to learn really starts with your interests. Find a job you believe you
will enjoy. If you love to design things and have an artsy-type background, you may want
to stick with trying to aspire to be a frontend engineer. If you love data and working with
reports, you may want to look at being a backend engineer. It's all in what you want but it's
best to get into something that piques your interest; otherwise, you will end up dropping
out of classes and be bored with the work to achieve these goals.
Once you achieve these goals, there are players in the development teams you need
to know about. We will explain some other members that become important to your
everyday work.
Team members
As we saw earlier when we talked about skill sets and teams, there is a role that helps
in situations where you need guidance on the direction of a solution. This is where the
business analyst comes in; they can help you determine requirements, and since they
should have a SharePoint and coding background, they can help you identify the features
you may need to research and/or customize to make the solution work. In Agile project
management, as explained in the next section, this is the Product Owner. This is a
necessary team member in heavily developed SharePoint environments. They interface
with the users and meet with them to gather requirements. If you use Jira, you will see
the stories these team members create, which are then worked on as a team, but the
initial stories will be entered by the business analyst.
You could also encounter a Scrum Master, as Agile project management is really surging
as the go-to project management methodology. This style is, I have to say, very useful,
especially in development. I was on a team that used this methodology and it really
helps to keep track of what is going on within the teams associated with the product
they support. Daily standup meetings come into play that are centered around stories,
which are requirements broken down into stages that consider the customers' needs and
how to build out those solutions within the SharePoint platform. Please read more on
Agile at the following link; I encountered the company Scaled Agile recently and like
their approach to learning the craft:
https://www.scaledagile.com/
Training users
This activity goes hand in hand with requirement gathering, because the requirements
need to be met in a way that lets users connect what they told you with what you bring
to them in a demo. Your demo will determine whether you have hit the mark with the
development of the solution; if not, the users can point out any areas that seem rough
and that they want done a little differently.
Once those areas are updated and tested and you have determined that the user has no
other concerns or bugs, you can have the users try the solution themselves in a test
environment. Before that delivery, the changes should be vetted by a tester, so the
test users have access to a product that was checked by a separate member of your
team or another team. Also, if you want to hold their hand before releasing the
solution, you can run a training session with them to make sure they understand it.
Your business analyst (the Product Owner in Agile project management) controls the
interactions between you and the customer, along with the Scrum Master. They also need
to be well versed in the product you are supporting, as they may have been a developer
in the past or worked intensely with a particular product for years. This team member is
key because they make recommendations to the customer on the requirements and also
create stories about those requirements for you, the developer. Again, this is a very
important interaction between customer and developer.
As mentioned, Agile will play heavily in a development team if this project management
style is used within your company. The stories you receive have time limitations,
and you can help estimate the effort for those areas with the team using story points
and a process called Planning Poker. This helps the team look through the backlog
to determine which stories should be next in line, how much time each will take to
complete, and who should be responsible for the work. Read more on Agile project
management so that you will not be surprised in this environment.
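As a toy illustration of the estimation mechanic (not code from a real tool), one round of Planning Poker can be modeled as each team member choosing a point value and the team checking for consensus; the point scale and the simple consensus rule below are assumptions:

```typescript
// A common (assumed) Planning Poker scale of Fibonacci-like point values.
const STORY_POINT_SCALE = [1, 2, 3, 5, 8, 13, 21];

// One simple consensus rule: the round converges when every vote matches.
// Real teams often discuss the outliers and re-vote until this holds.
function hasConsensus(votes: number[]): boolean {
  return votes.length > 0 && votes.every((v) => v === votes[0]);
}
```

A team might evaluate hasConsensus([5, 5, 8]), get false, let the member who voted 8 explain their concern, and then re-vote.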
For me, using Agile was a great learning experience, and I will tell you that those
standup meetings can be brutal if you do not have your work completed in a timely
fashion. This method promotes healthy dialog, learning from peers, and an organized
development process, and it also uncovers those who are not performing. Team dynamics
are very important and, in my experience, you really need to be sharp because, on
this side of IT, you can get eaten up quickly. If you want to be a developer, you
have to know the craft well, especially in this environment.
Documentation
Make sure to document bug fixes; we will get into more detail about developer best
practices later in this chapter. We want to make sure you understand the processes
involved and what you are expected to do as a developer. Document anything and
everything that is needed so that you can capture where code fails; plus, it helps
you to grow as a developer.
Not every shop is the same, and some shops use Agile to manage projects, where code is
pushed quickly by separating requirements into chunks. This helps the business analyst
to support the users and the developer to understand the requirements, so everyone
works together as a team. It also requires you to be in daily meetings where you update
your team on your assigned tasks. The time-boxed iterations in which these chunks of
work are completed are called Sprints.
We will not get into this much in this book, but we wanted to point Agile out, especially
for developers, because you will be held under a microscope. I was on a contract with this
style of project management and it really helped me grow. Being put on display every day
to show what you have accomplished, face to face with your manager and teammates, will
keep you on task, but if you are not doing the work or do not understand it, everyone
will see your limited skill set openly.
Also, if you want to start to learn from authorized training, please check out the
following link:
https://www.linkedin.com/learning/subscription/topics/sharepoint
Where do you start? To get started with working on your skills, you need training and an
environment to play in. Let's look at how to set up a development environment and what
things we can use to help us get things in place to start our journey.
• 16 GB of RAM or more
• 500 GB of disk space or more
• Four cores of virtual processors or more
• A laptop or desktop compatible with virtualization
• Licenses for SharePoint 2019 and any other integrated applications
The choices for virtual server host and other options are as follows:
• VirtualBox: This is a software platform used for personal usage environments, such
as laptops, desktops, or even servers, but we do not recommend this platform for
production environments: https://www.virtualbox.org/.
• VMware: The free version of VMware Player can be sufficient for testing, but
for development, this solution can quickly become insufficient:
https://www.vmware.com/products/workstation-player/workstation-player-evaluation.html.
• Hyper-V: This is the option used in this book to set up servers for installation.
Microsoft provides this capability within the Windows operating system by
configuring the server options:
https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v.
• Parallels: This gives you the option to run Windows on macOS but is not free if you
want to virtualize Windows environments. The pricing for the product is not bad
and you pay by year. We have never used it, so we cannot give any reviews, but it's
worth a try if you strictly use macOS: https://www.parallels.com/.
• Azure: This is a cloud infrastructure environment provided by Microsoft
and provides 12 free months of service: https://azure.microsoft.com/en-us/free/.
• Microsoft Partner Program: Microsoft offers an Action Pack subscription for those
who are looking to work with Microsoft products. This could be a small company
or a newbie looking to start developing, but you gain access to licenses for their
software products: https://partner.microsoft.com/en-US/.
Let's take a look at the modern and classic SharePoint sites. This book does not thoroughly
explore the SharePoint user interface; however, these two types of SharePoint sites are
important to discuss because the differences could inform your development choices.
Modern or classic?
This is really the only topic in this book that deals with the SharePoint user
interface. It is very important because you must know the differences between a
classic and a modern site. There are two experiences you can choose from within
SharePoint 2019: one is modern and the other is classic. These experiences bring
different functionality and aspects to the SharePoint Server 2019 site experience,
which differs from all the other versions of SharePoint. SharePoint Server 2016
received some updates to the classic experience but was never designed like
SharePoint 2019.
These are important because there are two distinct ways you can create content in
SharePoint 2019 sites. When we refer to classic, it is just like referring to what
Windows authentication was called in SharePoint 2007 and 2010: it is the native way
of doing things from when a product was created, sort of like a legacy version of
a component.
When we refer to modern, we are referring to the most up-to-date way to display content
or interact with features within sites. This way of displaying content also brings a new
way of using sites, lists, and libraries. Remember, this will take some thought and training
before it is exposed to your user community.
Your users need to know about changes like this because everything changes at this
point. The classic look and feel of lists and libraries goes away, and the links we
used in site actions, along with how to navigate to certain actions, will change.
This could be a learning curve for your users, so you really need to make them aware
of what you are doing before you do it.
If you like your classic experience, then stick with classic, but if you want some cool
updates to your sites, lists, libraries, and other areas where SharePoint 2019 can
support the cloud using modern features, then you should upgrade. This upgrade brings
a new set of Microsoft 365 cloud power tools, which are only made available in this
experience. So, if you are using an on-premises gateway, this is the only way you will
be able to take advantage of the tools in the cloud and use them on-premises.
As a developer, this is important because if you do not have access to power tools in the
cloud and are asked to develop something on a classic site using those tools, you need to
make the users who want this change aware of the changes you have to implement before
you can even get started.
Let's see what the differences between the classic and modern experiences are and what
they look like in SharePoint 2019.
In the following screenshot, you can see how the classic feel gives a legacy display of
a library:
There are many areas Microsoft has tackled in using modern templates, but there is still
more to be done. As you can see, not everything is customizable using this type of site.
We suspect that, with SharePoint Designer and InfoPath going away soon, there will be no
more classic sites and everything will be modern within the next couple of years.
Search
Unfortunately, we have not gone too much into search in this book, but we wanted to
mention it when it comes to modern sites. SharePoint Server 2019 comes with a modern
search experience, and you will be able to see results before you even start typing
in the search box. The results come in real time, so as you continue to type, the
results update and change right on the page.
The modern search boxes can be found on the home page, communication sites, and
modern team sites. There are options to change where the results come from, for
example, if the results are scoped to a site you did not mean to search. You then
have other options to refine your results. There is also a Show more results option
that expands from the bottom of the search box, where you can open the search results
page and look at more details.
Supported browsers
SharePoint supports the most commonly used web browsers, such as Internet Explorer,
Mozilla Firefox, and Safari. However, some browsers may cause some functionality
to be limited or available only through alternative steps. At times, some
functionality may be unavailable for noncritical administrative tasks.
The following chart quickly summarizes some key points in user interface design:
SPFx overview
A brief history of SharePoint development follows. In the past, SharePoint development
has consisted of the following methods: farm solutions, sandbox solutions, script injection
using content editors and script editor web parts, and the SharePoint add-in/app model.
The modern way to develop in SharePoint is through SPFx. Before we get into our
examination of SPFx, let's go over the previous forms of SharePoint development:
• Farm solutions: Farm solutions were the original development method for
SharePoint. They involved creating full-trust code and deploying it in WSP files
(WSP is the file extension for a Windows SharePoint solution package). This form
of development allowed great creativity within SharePoint because developers could
do pretty much whatever they wanted to do. Microsoft introduced full-trust farm
solutions with SharePoint 2007. Developers were able to write fully customized
server-side solutions using ASP.NET. Using SharePoint's API, developers could create
solutions that integrated timer jobs and web parts. However, having the ability to
write full-trust farm solutions using ASP.NET came with some obvious risks. Farm
solutions are hosted in the IIS worker process (W3WP.exe), so the scope of possible
impact is wide. Fully customized code can, and sometimes does, contain errors.
A poorly created web part could bring down an entire SharePoint farm. For this
reason, farm solutions are not supported in SharePoint Online and are discouraged
in SharePoint 2019.
Please refer to the links in the Further reading section for more information on SPFx.
What is SPFx?
Now that we've explored the previously popular methods of SharePoint development, let's
look at Microsoft's current development method, SPFx. SPFx is a Node.js- and
TypeScript-based development platform. One of the great benefits of SPFx is that it
provides a faster browser experience. Another benefit is that SPFx is automatically
responsive, so no matter the device, solutions will render appropriately.
SPFx is used in the development of pages and web parts for SharePoint 2019 and
SharePoint Online. In this chapter, we will give an overview of the latest SharePoint
Framework as it relates to SharePoint 2019 and SharePoint Online. We will briefly
discuss how this framework is used in Microsoft Teams development as well. The Microsoft
SharePoint Framework is a client-side tool that gives developers what they need to
extend SharePoint and create client-side solutions to best serve their customers'
needs. See the following comparison chart, which gives a quick snapshot of the key
differences between traditional SharePoint development and SPFx:
• SPFx client-side solutions can be used on all SharePoint sites, unless limited by the
administrator.
• SPFx solutions can be created to extend the functionality of Microsoft Teams.
• Client-side web parts can be built using HTML or JavaScript.
• SPFx applies to both SharePoint Online and on-premises.
• SPFx is framework-agnostic, therefore you can use any JavaScript framework.
• The SharePoint client-side framework is supported both on-premises and online.
Therefore, solutions created for the on-premises environment will migrate to
SharePoint Online. This is important when road-mapping for the future state of
your environment.
• Due to the use of common open source client-side development tools, developers
that were not SharePoint developers before can now develop SharePoint solutions
using SPFx.
• SPFx web parts can be added to classic and modern pages.
• SPFx can be used to create both client-side web parts and extensions that
provide application customizers, command sets, and field customizers in lists
or libraries.
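To give a feel for what "client-side web part" means in code, here is a heavily simplified, framework-free TypeScript sketch. A real SPFx web part, as scaffolded by the Yeoman generator, extends BaseClientSideWebPart from @microsoft/sp-webpart-base and assigns its markup to this.domElement.innerHTML inside render(); the class name and property below are illustrative assumptions, not scaffolded code:

```typescript
// The shape of a property a web part might expose in its property pane.
interface IHelloWorldProps {
  description: string;
}

// Framework-free stand-in for an SPFx web part class: it turns the
// property bag into markup, which is what render() does in a real part.
class HelloWorldSketch {
  constructor(private properties: IHelloWorldProps) {}

  public renderHtml(): string {
    return `<div class="helloWorld"><span>${this.properties.description}</span></div>`;
  }
}
```

In a scaffolded project, the equivalent logic lives in render(), and gulp serve hosts the part in the local workbench so you can see the markup on a page.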
Now that we have something of an understanding of what a developer can do, let's look
at how we can create a server environment to support our new learning. We need to
use the environment we created to try developing on the platform. This will help us to
understand how the developer tools are set up but, most importantly, give us a place to
play with those tools once they are installed. If you are planning to take classes, this
is the best way to get started: while learning, you can also do each lesson on your
development servers. This captures what you learned and gives you a way to build on it
by making changes to those lessons after the class is over.
So, let's dive into setting up our development environment!
Note
Node.js v9.x, v11.x, and v12.x are not supported in SPFx development.
You can check to see whether you already have Node.js by using the following steps:
If you have no version of Node.js installed or a version other than the currently supported
v10.x.x for SharePoint 2019 and SharePoint Online, you must use the following steps to
download the correct Node.js version.
Install Node.js
Let's look at the steps to install Node.js:
3. You will see a list of Node.js versions. At the time of this book's creation, the
supported version of Node.js is 10.22.1. If you are using Windows, choose either the
64-bit or 32-bit (x86) MSI version depending on your machine:
5. Read and accept the licensing agreement, and then click Next:
7. Select Next:
11. Verify that Node.js is installed on your computer by using the following command:
node -v
5. Select Install:
Note
The site collection creation may take a few moments. Once it is created, you
will be able to install SharePoint framework solutions on the app catalog site.
In addition to installing Yeoman and Gulp globally, you need to install the Yeoman
SharePoint generator. The Yeoman SharePoint generator allows you to create client-side
solutions using the correct toolchain and project structure. The generator gives you
the common build tools that you will need, as well as the basic code, and supplies the
"playground" websites for testing.
To install the Yeoman SharePoint generator, use the following command:
npm i -g @microsoft/generator-sharepoint
yo
Additionally, the command line can be used to scaffold projects instead of going through
the interface prompts. You can use the following command to return a list of
command-line options:
yo @microsoft/sharepoint --help
To learn more about scaffolding SharePoint development projects using the Yeoman
generator, please visit the following link:
https://docs.microsoft.com/en-us/sharepoint/dev/spfx/toolchain/scaffolding-projects-using-yeoman-sharepoint-generator
In the next section, let's go over the steps to upgrade an SPFx project to the latest version.
1. Navigate to the package.json file to locate all of the packages using SPFx version
1.10. You will need to use the package name in the console.
2. To uninstall version 1.10, use the following:
npm uninstall @microsoft/{spfx-package-name}@1.10
3. Then, install the newer version of each package, for example (using npm's
standard @latest tag):
npm install @microsoft/{spfx-package-name}@latest
Make sure that you update the package solution with the developer information. This is
a new addition with SPFx and if you do not update it, you will get an error when
running the gulp package-solution task.
Visit this link to learn more about the process to update the developer info:
https://docs.microsoft.com/en-us/sharepoint/dev/spfx/toolchain/sharepoint-framework-toolchain
New features of SPFx version 1.11 are as follows:
In the next section, we will discuss the SharePoint Framework toolchain. The vast array
of open source development packages available will provide you with a head-start in
developing client-side solutions. Be sure to visit the links included in the section to
learn more.
1. From the PowerShell admin console, create a project directory using the following
command:
md myfirstspfx-webpart
7. Run the following command to build and preview the new web part:
gulp serve
Now that you have the information you need to set up your development environment
and create your first SPFx web part, let us turn our attention to developer best practices so
that you can create successful solutions and steer clear of any trouble.
Requirement gathering
Requirement gathering can seem like the easy part of the development process – after
all, you're only speaking with your customers and figuring out what they want and what
they need in the development solution. However, this is the portion of development that
can lead to a poor product or an upset customer who thought they were requesting one
thing and you deliver another. The best way to avoid this is to make sure that you have the
proper stakeholders in the room when you are gathering requirements. Make sure that the
decision-makers are in the room and that those decision-makers, once you have gathered
the full list of requirements, are willing to sign off on the finalized document.
In the statement of work, or whatever agreement document covers this rollout,
make sure that clauses are included to ensure that if requirements are changed by the
customer, then the timeline and cost will have to be adjusted to accommodate these
changes. The stakeholders need to know this from the beginning, before you document the
set of requirements. Let them know that changes can occur; however, changes will come
at a cost, and that cost may be scrapping the entire development project and starting
from the beginning.
Take a look at the following chart for a quick-glance view of some important requirement
gathering techniques.
The following is not an exhaustive list of requirement gathering techniques. However,
these are the points that you must remember to cover when gathering requirements from
clients. As you begin to gather more requirements for solutions, you can add additional
techniques that have worked well for you:
That is why we highly recommend the Agile approach to development. Agile does not
mean development done on the fly; you still need a solid plan that outlines all of the
stages. However, the iterative nature of Agile development will allow you to roll out the
first stage, have your stakeholders test it and see whether you're going down the right path,
and make corrections if necessary before you move on to the next stage. This method of
development can lead to a successful deployment without the need for scrapping an entire
project that did not meet the customers' needs.
Take a look at the following chart to see the stages of a development cycle.
Whether you are following a waterfall development approach or an Agile approach, the
standard stages of development still apply. With an Agile approach, you will cycle through
the development stages multiple times:
Preproduction environments
The standard preproduction environments that you will be required to stand up and
maintain for SharePoint development are the development environment and the User
Acceptance Testing (UAT) environment. However, development and UAT are the bare
minimum, and these two environments often fail to offer enough testing of a new
solution before deployment. It is recommended that, in addition to the aforementioned
environments, you stand up a System Integration Testing (SIT) or performance testing
environment and a quality assurance environment:
If your organization is small, you may not have a quality assurance team; however, you
can include other members of the technical team, including the capacity monitoring
team. In most small shops, you would most likely do this testing yourself, but SIT
should not be performed by the same person who developed the solution, as this can
lead to missed issues or reworking of code directly in SIT, which should never happen.
The SIT environment should match your production environment. This is very important
because if these environments do not match, issues may not be caught before they
reach production.
Most enterprise organizations have a quality assurance team; this team sits apart from
your internal IT team and is tasked with rigorously testing solutions before they are
deployed into production. Having a quality assurance environment is a bonus as this
quality assurance team will maintain access to the environment and can easily test the
developed solutions before deployment into production. However, some small companies
will not have a separate quality assurance environment but that's OK; the quality
assurance team can utilize the UAT environment for their testing.
The success or failure of your deployment depends on having good-quality test scripts that
cover all of the functionality expected from the solution. Make sure that you understand
how the business will utilize this solution, how they will access this solution, and how they
perform their work day to day; these aspects should be covered in the test scripts. When
creating test scripts, place yourself in the shoes of the business user and remember to write
test scripts to cover the different roles of the users. Testing is an easy place to slack off. Do
not get lazy here as it will cost you. Make sure all the functionality has been tested.
Testers
Choosing testers for the developed solution is extremely important. We've already
discussed quality assurance performing testing on the solutions that you build; however,
quality assurance will be testing the functionality of the solutions and possibly the
logistics and ease of use from their perspective. Members of the quality assurance team
are typically very technologically savvy, and they may find the product easier to use
than a typical business user will.
You must have the user community involved in your testing. We caution you not to just
hand over the test scripts, go away, and then come back for sign off. Testing your solution
is not your users' primary job; they may be busy with other activities and not prioritize
fully testing the solution. Users have been known to sign off on solutions that they have
not thoroughly tested. This is why it is important to have the stakeholders engaged, as they
are actually accountable for the success of this rollout. If the stakeholders are accountable,
they are more likely to have a vested interest in the success of the solution. Choose your
testers wisely and always get sign off from multiple stakeholders and their management.
Capacity planning
When you are developing and deploying solutions into an on-premises environment,
you must consider the performance configuration of the platform before you start
development.
Consider jobs running on the servers, search configurations that could drain capacity,
and available storage on the platform for logs and data. Many times, developers create
solutions in a silo, which can lead to overzealous development that does not take into
account the actual limitations of the environment.
For instance, if a large number of users are going to be accessing a developed solution
at the same time, or the solution will lead to a heavy influx of documents to be stored
in SharePoint, this must be planned for with the capacity teams. If you are a one-person
IT shop, then you must make sure that your environment is configured and revved up to
handle the upswing in usage and storage.
List limits
When creating solutions, always keep in mind the SharePoint list limits and be sure
that you are not designing a solution that will quickly breach those limitations. Use the
following link to learn more about SharePoint limits:
https://docs.microsoft.com/en-us/sharepoint/install/software-boundaries-and-limits-0.
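The limit developers trip over most often is the list view threshold (5,000 items by default). One way to design within limits is to page reads and writes into fixed-size batches rather than issuing one huge operation; the helper below is a hypothetical illustration of that idea, not a SharePoint API call:

```typescript
// Hypothetical helper: split a large set of list items into small batches
// so a solution never processes an entire oversized list in one operation.
function chunkItems<T>(items: T[], batchSize: number): T[][] {
  if (batchSize <= 0) {
    throw new Error("batchSize must be positive");
  }
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```

Each batch can then be written, or queried with an indexed filter, separately, keeping every individual operation comfortably below the threshold.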
Timeout issue
If the development solution is meant to ingest large files, you must configure the file
upload size limits. Be aware that users can face timeout issues with large file uploads.
SharePoint on-premises is not the best place for extremely large files that eat up tons of
storage space. This is where good governance will come into play.
If everyone developing, managing, and using solutions understands the limits, then it
will be easier to prevent these types of issues. SharePoint storage is expensive; the
use of connectors in your development can help to overcome timeout issues and storage
limits. If the organization has access to other storage systems, then connectors can
be developed to store documents externally to SharePoint while allowing users to view
them within SharePoint. This is a great method to consider if you run into this sort
of issue because it alleviates the large storage expense while still taking advantage
of the SharePoint user interface and features that users love.
Enterprise security
Begin with enterprise security in mind. You must follow the security standards of your
organization. Do not wait until you are halfway through development, or even worse,
finished with development and in production, to realize that your application goes against
a policy. Make sure that you are familiar with the enterprise security policies before
you begin development. It is often a good idea to have a meeting with the enterprise
security group in the design phase to get initial approval and check that nothing you are
developing is breaking any policies.
Summary
Microsoft has made it clear that the SharePoint Framework is the way of the future.
Over the years, Microsoft has made a steady move from farm solutions to sandbox
solutions to app solutions, and this in turn has made developing on the platform
less risky and more uniform. Additionally, since SPFx uses common open source client
development tools, traditional SharePoint developers who embrace this new way are no
longer limited to developing only for SharePoint but can begin developing
cross-platform. In the same vein, the agnostic framework has opened up the world of
SharePoint development to developers who did not traditionally work in SharePoint.
SPFx can be used to develop solutions both on-premises and on SharePoint Online. This
standardized development method is great because, as many organizations look to move
their technological footprint from on-premises to the cloud, we must begin to create
solutions, even while on-premises, that will migrate easily when the day comes.
Best practices should always be considered and adhered to when creating solutions.
Taking the time to gather clear requirements is extremely important. You must make sure
that you are building the solution that the client is expecting. Understanding the client's
expectations occurs by listening to your client's needs and understanding the business
problem that they need to solve. You need to work with the client throughout the entire
development process to be sure that you are on track to providing that solution.
In this book, we have tried to cover as much as we could to give you insight into what is new,
what has changed, and what tools and knowledge you need to move forward with this new
cloud-integrated SharePoint Server 2019 platform. Many different people work on
this type of deployment, and we hope we have touched on enough topics to give you
insight into those teams and their responsibilities, and enough technical information to get you
moving on your project with confidence. Some SharePoint 2019 topics require
deeper knowledge and research, which we pointed out during the course of the book.
There was no way to cover those areas exhaustively here; some of those
technologies would require a book of their own. We recommend researching those areas and
learning more about how you can implement SharePoint 2019 effectively from all
standpoints. We hope this book has given you good insight into implementing
SharePoint 2019.
Questions
You can find the answers on GitHub under Assessments at https://github.com/PacktPublishing/Implementing-Microsoft-SharePoint-2019/blob/master/Assessments.docx.
Further reading
For more information on SPFx, check out the following links:
• https://docs.microsoft.com/en-us/sharepoint/dev/spfx/sharepoint-2019-support
• https://docs.microsoft.com/en-us/sharepoint/dev/spfx/sharepoint-framework-overview
• https://docs.microsoft.com/en-us/sharepoint/dev/spfx/release-1.11.0
For more information on the SharePoint client library, check out https://docs.microsoft.com/en-us/sharepoint/dev/sp-add-ins/complete-basic-operations-using-sharepoint-client-library-code.
SharePoint code samples: https://docs.microsoft.com/en-us/sharepoint/dev/general-development/code-samples-for-sharepoint
SharePoint development center: https://developer.microsoft.com/en-us/sharepoint
SharePoint development documentation: https://docs.microsoft.com/en-us/sharepoint/dev/
Other Books You May Enjoy
If you enjoyed this book, you may be interested in these other books by Packt:
• Get to grips with basic Office 365 setup and routine administration tasks
• Manage Office 365 identities and groups efficiently and securely
• Harness the capabilities of PowerShell to automate common administrative tasks
• Configure and manage core Office 365 services such as Exchange Online,
SharePoint, and OneDrive
• Configure and administer fast-evolving services such as Microsoft Search, Power
Platform, Microsoft Teams, and Azure AD
Index

Microsoft Teams
  document, opening in SharePoint 534, 535
  governance features 511, 512
  limits and restrictions, considerations 541, 542
  live event, creating 548-556
  network, considerations 521, 522
  organizing, best practices 540, 541
  overview 506-508
  roles, using within 518-520
  rolling out 511
  SharePoint-related tab, adding to Teams channel 538-540
  team, creating 526-529
  teams within 520
  using 523-525
Microsoft Teams, governance features
  life cycle management 513-515
  Microsoft 365 licensing 512, 513
  third-party applications 513
Microsoft Teams, identity models
  cloud identity 516
  federated identity 516
  synchronized identity 516
modern experience, versus classical experience
  about 576, 577
  search boxes 579, 580
  SharePoint start page 578, 579
  site usage analytics 579
  supported browsers 580, 581
  team sites and communication sites 579
modern SharePoint site
  versus classic SharePoint site 575, 576
MySite web application
  creating 266-280

N
Netmon 385
network access
  configuring 120-123
Network Interface Cards (NICs) 113, 339
network name
  configuring 118-120
new features
  about 23
  additional documentation links, for central administration 24
  # and %, using in file names 29
  # and %, using in folder names 29
  communication sites 25
  Fast Site Creation 25
  files, syncing with OneDrive sync client (NGSC) 29
  increased storage file size, in SharePoint document libraries 26
  modern lists and libraries 26
  modern search experiences 27
  modern sharing experiences 26
  modern team sites 27
  service application enhancements 24
  SharePoint home page 28
  sharing, with modern internet information APIs 27
  site creation support, for AAM zones 28
  sites, creating from home page 28
  SMTP authentication, while sending emails 29
new features, new health analyzer rules
  about 31
  People Picker health rule 31
  SMTP Authentication health rule 31

  installing 114-118
  internet access, configuring 120-123
  network access, configuring 120-123
  network names, configuring 118-120
  server, adding to domain 123-125
  server names, configuring 118-120
Windows Server (Hyper-V) feature
  versus SharePoint feature 108
Windows Server (IIS) 487
Windows Server Update Services (WSUS) 390
Word Automation Services 226-229
Workflow Manager
  about 296, 297
  installing 297-306
workflows 296

Z
zero downtime patching
  reference link 389