Hindsight 20/20: ECM Implementation – Formula for Success
Overview:
Once the Enterprise Content Management System (ECM) has been procured, the implementation can begin. For many organizations, this is the step that can be most overwhelming and most inconsistent, especially when the implementation team is rushed to meet tight deadlines. Without an effective strategy, appropriate resources, and prioritized onboarding, your investment will be underutilized.
The Formula for Success breaks down the most critical components of implementation to ensure a consistent, well-designed strategy.
Join us for Part 3 of a 4-Part Series, where Ashley Schilling, Managing Consultant, shares her valuable lessons learned from system implementations.
This webinar will cover:
- Setting up effective, fit-for-purpose governance from the beginning vs as an afterthought
- Appropriate prioritization of business groups to maximize investment
- Functionality planning based upon the needs of your organization
Hindsight 20/20: ECM Implementation – Formula for Success
Presented By Ashley Schilling
Ashley:
My name is Ashley Schilling, and I’m a managing consultant with Access Sciences. I’ve been with the organization for a little over 10 years now, and I’ve been involved with projects ranging from data analysis to system implementations. I was personally involved with a lot of successful implementations early in my career, and I wondered whether that was just luck, since it’s often said that 70% of all IT projects fail. But as I’ve progressed through the last 10 years, I’ve gathered a lot of lessons learned from my own projects and from projects where Access Sciences was brought in to right the ship. I hope to relay some of those lessons learned to you today, show you how important appropriate planning is to success, and show how it can help you avoid the paths to failure.
At Access Sciences, we wanted to share our lessons learned and our best practices from our experiences helping companies improve their information management programs. The Formula for Success breaks down the most critical components of an ECM implementation to ensure a consistent, well-designed strategy. Today’s webinar is part three of a four-part series on the detailed steps to evolve an information management program to its desired future state. If you saw parts one and two, great, but if you happened to miss them, that’s okay too, as they are up on the website. Let’s do a short recap to see what got us here.
We started with an assessment of what’s currently in place to get a clear understanding of how the technology impacts business needs and processes. We analyzed all of that information, evaluated the needs, and identified the pain points that an organization-wide system could help us fill. We then went out to procure that system by engaging our stakeholders to ensure that we had active participation from the right people throughout the organization during our requirements gathering. We used those requirements to define our approach, and we constructed our demo scripts to level the playing field during the vendor demos. After we completed our objective evaluations, we selected our system. Once you’ve procured the Enterprise Content Management system, or ECM, the implementation can begin. For many organizations, this is the step that can be the most overwhelming and the most inconsistent, especially when the implementation team is rushed to try to meet tight deadlines. Without an effective strategy, appropriate resources, and prioritized onboarding, your investment can be underutilized. But what’s the right way to accomplish these things? How do we ensure a successful implementation? That’s what I hope to relay to you today.
Finally, step four is managing that change. After a system is implemented, users need to understand how to use that system successfully, but change management is more than just training them on an existing system. It’s the communications and the engagement throughout that entire process. Each one of these steps is necessary for a successful information management program. I’ll just be covering the implementation part today, but if you can, I do encourage you to go watch the Assess and Procure webinars on the website, and be sure to mark your calendars for the upcoming change management webinar.
What we’re going to look at today is a two-part formula that, based on our lessons learned, helps to ensure a successful rollout of our ECM system. As we go through each piece of this formula, we’re going to look at why it’s needed, how we do it, and what the outputs are. But remember, this is not a sprint, it’s a journey. Just like a baby has to learn to crawl before they can walk and walk before they can run, a good implementation can take months to many years, and like the tortoise and the hare, nothing good ever comes from rushing it. So, this is the two-part implementation formula that we’re going to go through for each phase of our implementation.
A good implementation is so important because it can overcome almost any software issue. We’ll explore not only the aspects required for configuring our new system, but what’s required to set up an effective program around Enterprise Content Management. Many of you may be familiar with the Plan-Do-Check-Act model. It’s an improvement process based upon the scientific method of problem solving. We start with Plan, where we understand the problems and define the solutions; then Do, where we start putting those plans in place and making them real; followed by Check, where we monitor and analyze how well the plans worked, run our metrics, review them, and look at our audit results. Finally, Act is where we make adjustments based on what the data is showing us and how well we’re achieving the goals we set out to reach. We review this with our steering committee, adjust our implementation plan for the next phase, and reinforce the things that are working well. Then we start back at Plan for the next phase, and we do this for each of our phases: pilot, phase one, phase two, and so on.
And in the middle here, we see the Access Sciences seven-point governance model. We’re going to use this model to go through each of our seven variables. The first variables fall within planning, the next few within doing, and the last one within check; act encompasses all of our variables, depending on where our adjustments need to be made. Today we’re going to discuss what to think about or act on within each one of these variables, and we’re going to use this model to ensure that we don’t miss anything. But before we jump into the planning, we’re going to do a quick poll. So, let’s take a moment to think about what major issues you’ve experienced in your ECM deployments. Is no one using it? Maybe you don’t know what departments you should start with, or have you just encountered a lot of problems along the way, or maybe you’re just getting started with your implementations? I’ll give you a few seconds to respond to this poll.
So, we’ll get started with our first variable, which is strategy and scope. This is where we identify the business objectives from an enterprise level and the scope of the initiative with regards to the people, the processes, the content, and the systems that will be impacted as the platform is rolled out across our organization. But what happens if we don’t have a strategy or scope? Well, this forces us to be reactionary, and being reactionary is always slower, so you miss things. Ultimately, we end up spending time and resources to address issues that are not related to our goal. If we haven’t identified which systems or storage locations are in scope for the first phase, we’ll end up down a number of rabbit holes with no real direction or end in sight to get our first business units onto the system. It may seem like we’re going to spend a lot of time in the planning cycle, and this is true. We can’t just jump headfirst into configuring our system.
We need to remember that the system is only a piece of our implementation; the program that we’re going to set up matters. The foundational components are critical pieces of our implementation plan: our IT strategy and information landscape; our enterprise taxonomy, which is our common language and our shared classifications; our metadata model, which enables robust search and findability; our lifecycle management, meaning retention and disposition, where we have a policy and a big-bucket records retention schedule and have to understand how they map to the content types within our system; and our security model, with a security policy, where we identify the roles and security levels and then map those roles to those levels.
So, as we’re starting to plan for our strategy and scope, we need to ask ourselves, what is it that we’re trying to do? What business problems are we looking to solve? What are the drivers? Maybe we’re looking for better integration across our systems, looking to standardize forms across our enterprise, pursuing process improvements, or even reducing paper and going more electronic. Do we have a common understanding across our leadership of why this is being done and what the vision is for our future? After we answer these questions, it’s time to set the scope and create a high-level roadmap.
It’s important to have a roadmap to understand where we’re going. For instance, here’s the roadmap for my trip to Hawaii. I had food, hiking adventures, beaches, and just everything that we didn’t want to miss laid out around the Big Island. This allowed us to plan our routes for each day so that we didn’t have to backtrack and lose precious vacation time just driving the same roads for some areas we may have missed. It’s the same with your implementation, a high-level roadmap ensures that capability improvements are prioritized and sequenced to maximize efficiency in delivering our business value while still minimizing the disruptions to the ongoing operations for our end users while we bring them along with us on our journey to our desired future state.
Projects on the roadmap will address delivery of the right tools, the right processes, and the right behaviors to effect sustained and transformative change. When creating the roadmap, we want a phased approach. Depending on the size of your organization and the complexity of your solution, this could be over months or even years, but how do we know when things should happen along this journey? Well, when we think about how to prioritize our business groups: who will be the champions? Have we identified early adopters? What was the driver for the funding to begin this initiative? Maybe we have audit findings that we need to address, or maybe there are groups using third-party cloud systems that IT is looking to get a handle on. Or maybe we have records management needs.
When we look at functionality planning, we want to base this upon our organizational needs; just because functionality is available in the system doesn’t mean we need to use it right away. Standards and processes must be in place for each new item before it’s released, or we’re going to experience all of the negatives that come from a lack of governance. What functionality do you have in-house expertise to support right now? What’s a realistic rollout strategy and timeline for the new functionality? Does a group you’re planning to roll on in the next, say, six months have a strong case for one of the new modules? We want to take all of this into consideration while we’re planning. The roadmap we make tells us where to start, which groups to start with, and what functionality we’re going to be implementing during this phase.
Now we construct our implementation plan, as shown here. To do this, we dive into the roadmap for the departments, set our timeline, and build out our detailed implementation plan. This is essentially the start of the project plan. We have ordered tasks with responsible parties, participants, the purpose of each task, and the estimated durations. All of the activities are included in this implementation plan, from kickoff, design, build, testing, go-live, and training through to sustainability. This is a reusable plan across our departments, and it’s going to feed directly into each of our project plans during our phased rollout.
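To make that reusable plan concrete, here is a minimal sketch of how the ordered tasks, responsible parties, participants, purposes, and durations might be captured as a template and stamped out per department. The task names, roles, and durations are illustrative assumptions, not taken from an actual project plan.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One ordered task in the implementation plan."""
    name: str           # what the task is
    responsible: str    # role accountable for completing it
    participants: list  # roles that take part
    purpose: str        # why the task exists
    duration_days: int  # estimated duration

# Illustrative template spanning kickoff through sustainability.
PLAN_TEMPLATE = [
    Task("Kickoff with department owner", "Project Manager",
         ["Department Owner", "Services Team"], "Align on scope and timeline", 1),
    Task("Design content structure", "Business Analyst",
         ["Department Admin"], "Map content to the enterprise taxonomy", 10),
    Task("Configure and build", "Services Team",
         ["IT Support"], "Apply the agreed design in the system", 15),
    Task("User acceptance testing", "Business Analyst",
         ["End Users"], "Verify the configuration meets business needs", 5),
    Task("Training and go-live", "Project Manager",
         ["End Users", "Department Admin"], "Onboard the department", 3),
]

def plan_for_department(department):
    """Reuse the same template for each department in the phased rollout."""
    return [Task(f"{department}: {t.name}", t.responsible, t.participants,
                 t.purpose, t.duration_days) for t in PLAN_TEMPLATE]

for task in plan_for_department("Finance"):
    print(task.name, "-", task.responsible, f"({task.duration_days} days)")
```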
Our second variable is governance groups. This identifies and establishes our authorities to govern and approve processes, capabilities, and infrastructure to enable our information to be a useful asset and reduce liability based upon our organization’s business requirements. Why is governance important? Well, without governance we see site sprawl. I’m sure a lot of your organizations have seen this today with SharePoint or Teams, or even on your network drives. You get a lack of consistency or no organization because your users don’t have any guidelines and boundaries, so they just do what they want. There’s no enterprise search because there’s no information architecture in place with consistent metadata tagging. It becomes a security nightmare because there’s no real way to know who has access to what from an enterprise view. So, it’s just ultimately not sustainable, and you end up with a lot of frustrated end users.
Different teams throughout your organization represent unique views and areas of expertise. For this reason, we recommend establishing a cross-functional strategic team with representation from core groups, including the business, IT, and records and information management (RIM). With the strategic team, the higher the level of your representation in your organization, the better. From the business side, we often see a high risk of failure if they’re not engaged, because business requirements need to be identified, defined, and then supported. It’s critical to understand how the business operations will be impacted for successful user adoption. The true business value of your new system cannot be achieved without including the business, and oftentimes the resulting program is difficult for the business to use. RIM is equally essential to governance: when RIM is not part of our strategic team, considerations for risk and compliance often go unaddressed. And without IT, we don’t have visibility into the IT or security strategies, what existing systems we have, and what integrations could be possible. This often leads us to point solutions and even redundant systems. So again, we must include everyone in order to be successful.
A three-tiered governance model is recommended for the ECM platform as shown here. We need to identify the groups and individual roles that will play a critical part in determining how the platform will be defined, deployed, used, and then modified over time. Governance is frequently not thought about ahead of time and this often causes issues down the road as rules or processes need to be updated or changed in order to account for those new responsibilities. What assigned roles and responsibilities do you need in order to maintain what it is that you’re developing? A steering committee is typically made up of the executives across the organization, ensuring representation from each of our three core groups. They’re responsible for the strategy and updates to the governance model. The services team consists of the implementation team and IT support and handles the configuration standards and ensures compliance with our policies and our regulations.
Finally, you have a department owner community. This is made up of the owner for each departmental area, who is responsible for their department’s use of the system. Managing the ECM environment requires a mixture of responsibilities: in the short term, the day-to-day, and in the long term, the strategy and the vision. The tiered model enables clear separation of roles and responsibilities between our strategic actions, being our direction and our purpose, and our tactical actions, being our execution or know-how, as well as delegation of decision authority from the enterprise down to the business as appropriate. Governing an enterprise application so that decisions are made at the right level in the organization, with sufficient analysis of the impact of changes but without undue bottlenecks in the decision-making process, requires a federated governance model. This enables decisions to be distributed to various levels in the organization based upon the impact of the decision. Decisions are made by the business to the extent possible, and only decisions that drive standards, reliability, and security are escalated up to the enterprise level.
Now we need to dive into each of these groups and define the specific roles and responsibilities for each. Once we understand what each role is responsible for throughout the system, we’ll be able to define the training, security, and permissions required to fulfill those roles. If we think about a particular department, we don’t want all of the end users contacting support on a day-to-day basis just to understand where their information goes or to add a new user to a particular area within their space. By giving these responsibilities to a department admin who has had sufficient training to become a power user in their department, we’re able to alleviate a lot of this additional strain on the IT support desk.
We can’t have our policies without the procedures to implement them. Policies and rules are the why. They must be established to direct appropriate use so as to deliver on our business objectives while remaining compliant with our regulatory requirements. And then our processes and procedures are the how. They’re the supporting processes and procedures necessary to govern the changes toward design or configuration, operation and use and enforcement of our policies.
The first step in outlining our policies and rules for the ECM system is to create our guiding principles. The principles come from our goals and objectives for the system. These principles will guide the development of appropriate policies, rules, and standards to govern our ECM platform and ensure that our goals are met. In defining these principles, we want to ensure that we’re reviewing existing policies across our organization and identifying the gaps for any additional policies that we may need to create.
So, some example principles would be establishing a system of record, improving findability and availability, or maintaining content according to governmental and business requirements. To support these guiding principles, some example policies would be things like IT policies around disaster recovery plans, security policies for document confidentiality, or information management policies around classification and hold management, plus our records retention schedule, which tells us how long our information needs to be retained and any approvals that are required for its disposition. When we look at enterprise taxonomy design, this is part of the information architecture that we’re laying as groundwork for implementation. Once we establish these policies and identify any gaps that we may have, we need to write them and then follow our approval process through our governance groups to put them into place. But how do we know what goes into these policies?
Well, let’s take a look at an important policy affecting our information architecture: the enterprise taxonomy and facet design. We can see an example of what this could look like and what kind of approvals would be required based upon its impact. But to get to this point, we first have to understand how many stakeholders will be impacted by a change. Is it just one area with a new term, or is everyone in the organization impacted by, say, a new facet or metadata column being introduced? We also need to identify the level of effort to make this change real in the system and what roles are able to make that update.
But don’t stop with the policy; effective guidance through processes and procedures is important. We see this a lot: organizations will develop a policy for records management or information management with statements on retention or disposition, but the level of governance stops there. There’s no documented process for how to dispose of records and information, or the process only covers paper. This is risky because people need specific guidance on how to comply with policy through documented processes and procedures. As an example, would they know what effective, timely, secure disposal of electronic information looks like on their own? Processes, procedures, and training are necessary to guide people’s actions and support their ability to comply. It’s the implementation aspect of the policy that’s frequently left on the table. Processes and procedures show the workflow of approvals and decisions required to enforce our defined policies. For instance, if we continue looking at the policy for the enterprise taxonomy and facet design, the process may look something like the swim-lane diagram shown here, where each horizontal lane is a role in the organization that’s involved in the process.
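As a rough illustration of that approval routing, here is a hypothetical sketch in Python. The impact thresholds and the governance tiers it returns are assumptions made for the example; your own swim lanes would define the real decision points.

```python
def route_taxonomy_change(affected_departments, total_departments, adds_new_facet):
    """Return the governance tier that must approve a proposed taxonomy change."""
    if adds_new_facet or affected_departments == total_departments:
        # Enterprise-wide impact: a new facet or metadata column everyone will see.
        return "Steering Committee"
    if affected_departments > 1:
        # Cross-departmental term changes go to the implementation/services tier.
        return "Services Team"
    # A single department adding its own term stays local.
    return "Department Owner"

print(route_taxonomy_change(1, 12, adds_new_facet=False))   # Department Owner
print(route_taxonomy_change(12, 12, adds_new_facet=True))   # Steering Committee
```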
Another process for end users is understanding what their content is and where it should be stored. A “what goes where” type of chart, shown here, will help them make these decisions about which system is appropriate for their content, since not all content repositories may be in scope for our new system. Not only do we need these types of defined processes and procedures from a system-wide perspective, but we need work-group-specific ones too. This is often a gap for organizations, as functional groups don’t have information management elements like classification and storage embedded in their procedures. By defining what each storage location is for, we help them make these determinations: what is this, and where do we keep it? How do we know how to make that type of decision? Using content maps like this one takes this a step further for each group and helps them understand what the classifications and retention are for each type of information that they work with.
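For illustration only, here is a minimal sketch of how a “what goes where” content map could be expressed as a simple lookup. The content types, repositories, classifications, and retention periods are invented examples; a real map would come from the business unit’s content mapping exercise.

```python
# Each entry pairs a content type with its repository, classification, and retention.
CONTENT_MAP = {
    "invoice":         {"repository": "ECM", "classification": "Finance - Accounts Payable",
                        "retention": "End of fiscal year + 7 years"},
    "meeting notes":   {"repository": "Team collaboration site", "classification": "General Administration",
                        "retention": "2 years"},
    "signed contract": {"repository": "ECM", "classification": "Legal - Agreements",
                        "retention": "Termination + 10 years"},
}

def what_goes_where(content_type):
    entry = CONTENT_MAP.get(content_type.lower())
    if entry is None:
        return "Not mapped yet - check with your department admin before storing it."
    return (f"Store in: {entry['repository']} | Classify as: {entry['classification']} | "
            f"Keep for: {entry['retention']}")

print(what_goes_where("Invoice"))
```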
We helped one client create this type of mapping for a business unit, and then the client attempted to follow the same process for the next one. However, they did not fully grasp the importance of guiding the business unit through this mapping exercise; instead, they gave them a template to fill out. The business did not understand the importance of classifying their content using our enterprise taxonomy elements, nor did they know how to correctly assign retention to their information. The mapping that they produced for this business unit did not fit into our implementation plan for the system, nor did it contain any of the information architecture pieces that we had put into place. Had we moved forward with it, the robust search, classifications, and even retention would not have been successful. It’s important to work with the business and educate them along the way.
There are a lot of processes and procedures that need to be fleshed out but to get your mind thinking in this direction, I’ve called out a couple here that I’ve seen a lot of organizations miss, like new site provisioning. How does the request come in? Who reviews it to see if this type of content already has a home that exists? Records destruction, depending on how we’ll be implementing our records retention policy, are there any manual steps that are the responsibility of the end user? Are there any company-wide workflows that we’re going to automate? Perhaps one that approves content for our website? Is this going to alter the current way that these items are submitted by our end users throughout the organization? And permission changes. If someone’s leaving the group or a new team member joins, what process do we follow?
So, at this point we’ve covered the first half of our formula, the planning phase. We’ve created our strategy and scope. We’ve established our governance groups and defined the why and the how with our policies and their supporting procedures. So now let’s take a look at the tools and the systems. This is where we identify the design and the configuration necessary for the platform and any supporting technology tools to support compliance with our established rules and then verify proper configuration through testing.
I can’t stress enough how important it is to define the requirements before you start configuring and deploying your system. I’m often responsible for developing testing scripts, and I was recently engaged with a company that had developed their testing scripts based off of what the system was supposed to do, not what the business wanted the system to do. What happened? Well, they got through their typical testing cycles, unit testing and system integration testing, and then when we got to user acceptance testing, the business couldn’t use the system. Everyone was confused because initially the testing had gone fine. But the real problem is that they were essentially doing software testing of the product and not of the way it was configured for the business. So how do we make sure that we avoid this?
If you’ve attended our other two webinars on assessing and procuring, you may have noticed that we talked about requirements gathering in all of them. Requirements gathering is extensive, and you want to ensure that you’re doing the right amount at each level of your project. So here we’re going to be pulling back even more layers of our onion and getting even more detail. Based on the requirements gathering you did for your procurement process, you have an organization-wide view and understanding of the high-level, system-wide functionality. These documented business requirements need to be translated into the form of use cases. These use cases are what should drive testing and training. The test cases should be written off of these use cases by role. For instance, in this example, we can see that the records management administrator needs to be able to export content that’s on hold.
So, during our testing cycles, we need to test not only the role-based security of the records management administrator, but also the configuration that has been done within the system to allow the content to be put on hold and then exported. These use cases also directly impact our training: by understanding what each role is doing in the system, we know what areas they need to be trained on. We not only need these enterprise-wide use cases, but we also now need to go deeper and conduct our in-depth requirements gathering with the business units for this phase. From this, we need to understand our work-group-specific views of information. We can use things like file plans or content catalogs that marry the policies with the content and the information architecture elements.
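To show the shape of a role-based test derived from that use case, here is a hypothetical sketch. The EcmClient class and its methods are stand-ins for whatever interface your system actually exposes; the point is that the test exercises both the role’s permission and the hold/export configuration.

```python
class EcmClient:
    """Stub standing in for the real ECM system under test."""
    def __init__(self, role):
        self.role = role

    def place_hold(self, document_id):
        # In the real system this exercises the hold configuration.
        return self.role == "Records Management Administrator"

    def export_held_content(self, document_id):
        # In the real system this exercises role-based security plus export settings.
        return self.role == "Records Management Administrator"

def test_rm_admin_can_export_held_content():
    client = EcmClient(role="Records Management Administrator")
    assert client.place_hold("DOC-001")
    assert client.export_held_content("DOC-001")

def test_end_user_cannot_export_held_content():
    client = EcmClient(role="End User")
    assert not client.export_held_content("DOC-001")

test_rm_admin_can_export_held_content()
test_end_user_cannot_export_held_content()
print("Role-based use case tests passed.")
```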
If we think back to our strategy and scope pieces, we’ve now turned these into actionable processes and procedures in detailed use cases by role that fit the business needs. These documents give us the information that we need in order to configure the system to support compliance with our policies. So now we must determine how best to configure each of these things to alleviate any undue burdens on our end users. When designing configurations to support our information landscape, if we look at our sample procedure for provisioning new areas, is this a centrally managed process? What pieces of this process does our ECM allow us to automate? Can it be managed manually at first and then scripted, maybe using PowerShell before too many new areas are rolled onto our system? Or do we see the number of these requests being too large to manually handle even upfront? These are some items that we should consider prior to rollout so that we don’t end up with frustrated end users that don’t even have a place in our new system for their content.
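As a thought experiment, the decision flow for a new-area request might look something like the Python sketch below; in practice the automation could just as easily be the PowerShell script mentioned above. The area names and request fields are illustrative assumptions.

```python
# Areas (sites) that already exist in the system; normally this would be queried live.
existing_areas = {"finance-ap", "hr-benefits", "legal-contracts"}

def handle_provisioning_request(requested_name, content_description, approved_by_owner):
    """Decide what to do with a request for a new site or area."""
    slug = requested_name.strip().lower().replace(" ", "-")
    if slug in existing_areas:
        # The reviewer checks whether this content already has an existing home.
        return f"Rejected: content like '{content_description}' already has a home in '{slug}'."
    if not approved_by_owner:
        return "Pending: route the request to the department owner for approval."
    existing_areas.add(slug)
    return f"Provisioned: '{slug}' created with the standard templates and permissions."

print(handle_provisioning_request("Finance AP", "vendor invoices", approved_by_owner=True))
print(handle_provisioning_request("Safety Audits", "site audit reports", approved_by_owner=True))
```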
Our enterprise taxonomy will be configured differently depending on our chosen system. For instance, with OpenText Content Server, we would use categories and attributes, and with SharePoint, we would use content types and columns. Some of the other systems we work with, like FileNet, Laserfiche, or Hyland, have their own ways of referring to it. But regardless of the system, our taxonomy values are centrally managed, and they provide a standardized list across the entire organization that enables a true enterprise search experience. As we begin to expand our metadata model for each of our groups, this opens up opportunities to standardize our naming conventions and hopefully allows us to automate document names, depending on what features or modules we may have available.
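Here is a minimal, system-agnostic sketch of a centrally managed taxonomy definition. The facets and values are made-up examples; the helper simply shows how one shared definition could be translated into system-specific constructs such as SharePoint-style choice columns (or, equivalently, Content Server categories and attributes).

```python
# Facet name -> allowed values: the standardized, organization-wide list.
ENTERPRISE_TAXONOMY = {
    "Department":    ["Finance", "Human Resources", "Legal", "Operations"],
    "Document Type": ["Contract", "Invoice", "Policy", "Report"],
    "Status":        ["Draft", "Final", "Superseded"],
}

def as_choice_columns(taxonomy):
    """Translate the shared definition into SharePoint-style choice columns.
    In OpenText Content Server the same facets would become categories and attributes."""
    return [{"InternalName": name.replace(" ", ""), "Type": "Choice", "Choices": values}
            for name, values in taxonomy.items()]

for column in as_choice_columns(ENTERPRISE_TAXONOMY):
    print(column)
```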
Lifecycle management encompasses the content from creation to archival or destruction. There are several paths that the configuration can take with regard to retention. At what point in the document’s lifecycle does it become an immutable record? Are there automated triggers that can be implemented in the system to ensure that documents are retained according to their record series? Will records be sent to a central record center, or will they be maintained in place until their destruction? I know this security model example may seem overwhelming, but it should be reflective of the roles that we’ve defined. What system privileges does each role require in order to fulfill its responsibilities? At what level does each role get that privilege?
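Circling back to the retention triggers for a moment: once a trigger event has occurred, the disposition date is simple arithmetic. Here is a hedged sketch with invented record series codes and retention periods; a real implementation would read these from the approved retention schedule.

```python
from datetime import date

# Illustrative record series: retention in years after the trigger event.
RETENTION_SCHEDULE = {
    "FIN-001 Invoices":  {"trigger": "end of fiscal year", "years": 7},
    "LEG-010 Contracts": {"trigger": "contract termination", "years": 10},
    "HR-020 Personnel":  {"trigger": "employee separation", "years": 6},
}

def disposition_date(series, trigger_date):
    """Date the record becomes eligible for disposition once its trigger has occurred."""
    years = RETENTION_SCHEDULE[series]["years"]
    try:
        return trigger_date.replace(year=trigger_date.year + years)
    except ValueError:  # the trigger fell on February 29
        return trigger_date.replace(year=trigger_date.year + years, day=28)

print(disposition_date("FIN-001 Invoices", date(2021, 12, 31)))   # 2028-12-31
```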
So, if we take department admins, for example, if we look at their assigned responsibilities from our governance model, we’ll see that they can create folders. These folders require the folder administrator permission level to be able to create, update or delete, but we don’t want a department admin to create folders anywhere in our enterprise, just within their own area. What about external access? Are we allowing users that are not part of our domain access to certain areas of our system? Are we segregating this information out or is it going to be commingled within our different departments?
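To make that scoping idea concrete, here is a small hypothetical sketch of a role-to-permission mapping with a scope check. The role names and permission levels are assumptions for illustration; your system’s actual permission levels would replace them.

```python
# Permission level granted to each role; names are invented for the example.
PERMISSION_LEVELS = {
    "End User":         "Contribute",            # view, edit, create, version documents
    "Department Admin": "Folder Administrator",  # create, update, delete folders
    "RM Administrator": "Records Manager",       # apply/release holds, approve disposition
}

def can_create_folder(role, user_department, target_department):
    """Department admins administer folders only within their own department's area."""
    if role == "RM Administrator":
        return True                                   # enterprise-wide responsibility
    if role == "Department Admin":
        return user_department == target_department   # scoped to their own area
    return False                                      # end users request folders instead

print(can_create_folder("Department Admin", "Finance", "Finance"))  # True
print(can_create_folder("Department Admin", "Finance", "Legal"))    # False
```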
Now we’re at variable six, which is training and engagement. This is where we identify the training needs for all identified roles that will be impacted by the ECM platform and the required degrees of competency for each of those critical roles. With insufficient training or engagement, we get user frustration. People don’t know how to use their system, and instead of asking, they find ways to stick with their old systems and processes. They don’t embrace the change because they don’t know how the new system can benefit them or even how to use it. There’s no real incentive for them to adapt to the new ways of working. In many implementations, the client has us manage all parts of the implementation. However, in some, like the one I’m about to share with you, we’re only engaged for pieces. That allows us to see how other organizations approach their projects, and it also allows us to identify some mistakes.
In a recent client engagement, we provided migration and information management support to an implementation team. Communications about the new system were very limited and only given to a small group of core team members. Additionally, the training was provided to users very late in the implementation process, and it was very high level and focused mostly on system functionality. There was no change management effort focused on how the new tool would fit into their existing business processes. As a result, in the days following go-live, only a handful of users logged in to use the new system, and we saw that users were continuing to use alternate storage locations. If you do not have adequate stakeholder engagement and meaningful training, users will not see the value in making an effort to work in a new place. The old saying, “if you build it, they will come,” does not apply to a new ECM system. People need to feel supported in their new ways of working. Remember that this is an ongoing change.
To construct a training plan like we see here, with the role on the left and the topic on the right, we need to look back at each of our roles and the responsibilities that we’ve assigned to them. From this, we can create an outline of the items to be covered by role. The end-user training will always be a prerequisite for all of our other roles. As we can see in this example, the end-user guide contains the basic document management functions like viewing, editing, creating new documents, and versioning. Then we have the department admin role, who can do a few additional things like running some reports for their department or updating some of their metadata options. And then we have the records management administrator, who’s responsible for the enterprise-wide records management tasks like applying and releasing holds.
So, once training content has been developed for each of the roles, it should be delivered to the pilot group as this gives you a chance to test it out. And ideally this training would happen before UAT or User Acceptance Testing occurs. If that’s not possible, then your UAT test scripts need to be verbose enough for any untrained users to follow step-by-step. You don’t want users testing your system that don’t understand how to work with it. The remaining users who are not participating in UAT should receive their training prior to day one, or go live. Ideally, this would be a day or two before, so that when users first log in after their content has been successfully migrated to this new system, they can continue working instead of not knowing how to edit or upload a document.
Training is an essential part of onboarding users to their new ECM system, but it’s just as important to communicate with them throughout the entire process. Users should have been engaged initially during the assessment and the procurement of the system, so by now they should be aware that something is coming. It’s important to engage them early and often with communications throughout the entire implementation. The communication plan and strategy should be embedded within our implementation plan.
So, now that we’ve reviewed our first six variables of the governance model through the plan and do cycles, we’re going to do another poll. Which of these governance model variables have presented the biggest challenges in your implementation? Did it have to do with the strategy and scope? Was it around the governance groups, or the policies and their associated procedures? Was it with the tools and systems, or in your training and engagement? So now that brings us to our last variable, audit and metrics. This is where we identify the essential metrics that should be monitored to evaluate how successful the ECM platform is in meeting its established business objectives and how effective governance is in supporting us to meet that goal. Without audit and metrics, we have no gauge of success. How will we know that we were successful? What pieces of our plan were correct, and what pieces do we need to fix?
Now we want to check and audit to make sure that the processes are being followed and determine what needs to be reinforced with training or even additional system controls. It’s important to collect feedback, because surveys help us understand user adoption and satisfaction. We’re looking to measure where we are and enforce accountability. If we think back to what success looks like, we can determine what it is that we need to monitor and measure for that success. These could be things like: how many people have logged in? What’s our daily active user count? How many new files were loaded last week? Are new folders being created? What percent of our information is being tagged correctly with our enterprise metadata?
These things will help us understand the effectiveness of our training and engagement initiatives and aid us in determining how well our configurations are enforcing our defined policies and procedures. This will highlight areas that we need to address during our act cycle, but we need to ensure that the metrics we’re going to collect are meaningful so that we don’t end up with so much information that we can’t make sense of any of it.
Once we establish what those metrics are that we’re going to collect, we need to build the tools or enable the appropriate logging and reporting for each of those statistics or metrics. This example here shows some reporting that’s available through Office 365, but other systems have similar reports available as well, or we can build our own using any raw data that we have.
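If you do end up rolling your own reporting from raw data, the arithmetic is straightforward. Here is a hedged sketch that computes a few of the metrics mentioned above from a hypothetical audit-log export; the column names (user, action, date, tagged) are assumptions for the example, not a real Office 365 report format.

```python
import csv
from collections import defaultdict

def adoption_metrics(audit_csv_path):
    """Compute simple adoption metrics from an exported audit log."""
    users_by_day = defaultdict(set)
    uploads = tagged_uploads = 0
    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            users_by_day[row["date"]].add(row["user"])
            if row["action"] == "upload":
                uploads += 1
                if row["tagged"].lower() == "yes":
                    tagged_uploads += 1
    return {
        "avg_daily_active_users": sum(len(u) for u in users_by_day.values()) / max(len(users_by_day), 1),
        "files_uploaded": uploads,
        "pct_tagged_correctly": 100 * tagged_uploads / max(uploads, 1),
    }

# Example usage: print(adoption_metrics("ecm_audit_export.csv"))
```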
And that brings us full circle through the seven variables of the governance model. Based upon what our metrics are showing, we now act and make the appropriate adjustments throughout or add additional system controls to prevent any unwanted behaviors and then we reinforce the behaviors that are working well. During implementation projects that we’ve been involved with, we found that utilizing this formula has helped to mitigate a lot of the issues we’ve seen in other projects. The most time is spent in the planning cycle with defining the strategy and scope, setting up governance groups, and defining the policies and the processes to realize our guiding principles. Having a plan forces you to be proactive instead of reactionary, which always leaves you one step behind, but configuring these in the system to prevent any unwanted outcomes is just as important.
Through prioritized onboarding to a new system and effective training and engagement we should begin to see our version of success while having the measures in place to identify where we’re falling short and identify what we can do to ensure that our next phase is even more successful than the last. I hope the pieces to this formula have given you some new ideas to consider as you’re planning for your implementations. And if you’re interested in finding out some more information about these types of projects, be sure to check out our website at accesssciences.com/case-studies. There’s a lot of good case studies out there to read.