Data integrity – order and security

Modern IT solutions and industrial automation are changing the world and driving the dynamic development of many sectors, including the pharmaceutical industry. Beyond the obvious benefits of digitizing and automating processes, the progressive computerization of the pharmaceutical industry also has a second, less beneficial side. Together with the growing volume of digital data and electronic records, we are seeing more and more data integrity breaches. The FDA (Food and Drug Administration), among others, highlighted this in its publication “Data Integrity and Compliance with CGMP – Guidance for Industry”, published in 2016. The US FDA is not alone in its assessment: the British MHRA (Medicines and Healthcare products Regulatory Agency) and other global regulatory bodies are paying increasing attention to the accuracy and reliability of stored data, in order to ensure an adequate level of drug safety and quality.

In response to the emerging problem of maintaining data integrity, a range of guidance documents has been created to define and unify the rules of conduct in the data management process. These include:

  • MHRA: “‘GXP’ Data Integrity Guidance and Definitions” (March 2018);
  • WHO: “Guidance on Good Data and Record Management Practices” (2016);
  • FDA: “Data Integrity and Compliance with CGMP – Questions and Answers, Guidance for Industry” (December 2018);
  • PIC/S PI 041-1: “Good Practices for Data Management and Integrity in Regulated GMP/GDP Environments” (July 2021);
  • EMA: “Questions and Answers: Good Manufacturing Practice” (April 2016).

So what is “data integrity”? According to the MHRA, it is the extent to which data are complete, consistent, accurate and reliable throughout their entire life cycle (Data Life Cycle, DLC). The DLC covers all phases in the life of data – from generation and recording, through processing and use, to retention, archiving and destruction. Data integrity also requires consistency: data must not be deliberately or inadvertently modified, falsified, distorted, deleted or amended in an unauthorized manner. This applies both to data recorded electronically and to data in paper form.

According to the MHRA guidelines, data that have integrity should be “ALCOA”, meaning they should have five basic attributes:

  • A – Attributable: attributed to the person who generated the data;
  • L – Legible: readable;
  • C – Contemporaneous: recorded in real time;
  • O – Original;
  • A – Accurate.
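
As a purely illustrative sketch (not part of any guideline), the five ALCOA attributes can be thought of as mandatory fields of a data record; the field and class names below are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the record cannot be amended once created
class AlcoaRecord:
    author: str    # Attributable: who generated the data
    content: str   # Legible: stored as readable text
    value: float   # Accurate: the recorded value itself
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)  # Contemporaneous
    )
    is_original: bool = True  # Original: distinguishes first capture from a copy

rec = AlcoaRecord(author="j.smith", content="pH measurement, batch 42", value=7.05)
print(rec.author, rec.recorded_at.isoformat(), rec.is_original)
```

The `frozen=True` flag makes the point that a record with integrity is not silently editable; any change would have to create a new record, leaving the original intact.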

Data integrity is also associated with a range of terms and tools used in the process of maintaining data management, such as:

  • Metadata;
  • Audit Trail;
  • Backup Data;
  • Static vs. Dynamic Records;
  • System Validation.

According to the definition in the MHRA document, metadata are data that describe the attributes of other data – their structure, inter-relationships and other characteristics, e.g. author’s details, date of issue/creation, version, disk access path, etc.

The audit trail is a type of metadata: a record of information that is important from the GMP (Good Manufacturing Practice) point of view. It enables the re-creation of the history of the creation, deletion, supplementation or amendment of data without impacting the original records. It is a chronological record of user operations and actions, capturing who changed or modified what, when, and why.
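
The who/what/when/why structure of an audit trail entry can be sketched as follows. This is a minimal, hypothetical illustration using an in-memory list; real systems use secured, tamper-evident storage:

```python
from datetime import datetime, timezone

audit_trail = []  # append-only: entries are never edited or deleted

def log_change(user, action, field_name, old, new, reason):
    """Record who changed what, when, and why - without touching the original record."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,           # who
        "action": action,       # what kind of operation
        "field": field_name,    # what was changed
        "old_value": old,       # original value is preserved, not overwritten
        "new_value": new,
        "reason": reason,       # why
    })

log_change("j.smith", "amend", "assay_result", 98.2, 98.4,
           "transcription error corrected")
print(len(audit_trail), audit_trail[0]["user"])
```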

Backup is simply a copy of original data, e.g. metadata, configuration settings, measurement data, etc. that is then secured and stored appropriately for a specific period. Data contained in the backup copy must be recorded in the original format or in a format that matches the original.
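
The requirement that a backup copy match the original can be enforced with a checksum comparison. The sketch below is illustrative only – the file name and the `backup` helper are hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum of a file's contents, used to prove the copy matches the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(source: Path, backup_dir: Path) -> Path:
    """Copy a data file byte-for-byte and verify the copy against the original."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / source.name
    shutil.copy2(source, target)          # copy2 also preserves file metadata
    if sha256(source) != sha256(target):  # integrity check: formats must match
        raise IOError(f"backup of {source} failed integrity verification")
    return target

src = Path("batch_record.csv")
src.write_text("batch,result\n42,pass\n")
copy = backup(src, Path("backups"))
print(sha256(src) == sha256(copy))
```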

A static record is a fixed data document created in paper or non-editable electronic format that cannot be amended. The dynamic recording format enables interaction between the user and the record content, e.g. tracking trends or reprocessing.

A validated system comprises computer equipment, software, procedures, training and, of course, the validation process.

The collection and creation of data that are precise, accurate and generated in a timely manner are important for researchers assessing the credibility and reliability of research. Errors in data collection, or damage to data, have a range of potential consequences: misleading other researchers, the need to repeat flawed studies, or increased use of the resources needed to perform them. Data integrity is therefore an essential element of research. In the healthcare sector, however, it matters even more. Incorrect, inadequate or falsified data may pose a threat to the health and life of patients if they form the basis for product launches, quality testing or the development of medicinal products for humans and animals. Breaches of the integrity of data related to the quality of medicinal products can have serious consequences, leading to health complications among patients taking the drug in question.

Why is this so? What causes the lack of data integrity? It can result from the absence of appropriate procedures and training and, increasingly, from inadequate supervision of the computerized systems used in the pharmaceutical industry. For many years, computerized systems have been replacing traditional processes, and paper documents are being replaced by electronic data. We need to remember, however, that introducing computer systems into various processes must not decrease the quality of those processes or increase risk. Some manufacturers and analytical laboratories believe that if they go back to paper documentation, data integrity requirements will no longer apply to them. They couldn’t be more wrong. As mentioned above, integrity applies not only to electronic data, but to paper data as well!

What happens in the absence of data integrity, and what are the potential consequences? Although data integrity has long been addressed in legal regulations, its importance has increased significantly in recent years, as audits and inspections have revealed many data integrity errors – prompting the FDA to issue numerous warning letters.

So what should you do to avoid data inconsistencies? Most importantly, the pharmaceutical company must ensure the originality, accuracy, correctness and consistency of data generated throughout the data creation process. For this purpose, it is good practice to introduce a coherent data governance policy that allows data risk to be assessed and analyzed, data to be controlled and managed, and data to be continuously monitored.

In order to avoid problems related to data integrity during audits, a three-tier system is recommended:

1. Monitoring and maintaining a culture of quality at the organization – the absence of data integrity is not just the result of deliberate fraud, but often of bad practice, organizational behavior or inadequate quality systems that create opportunities for data manipulation. That’s why companies should consider improving the organization of work by taking procedural, technical and behavioral actions.

2. Control tools – the following tools, among others, allow you to maintain data integrity throughout the entire system life cycle:

  • Computerized system validation
  • Regular Audit Trail record reviews
  • Introduction of a Data Risk Management approach
  • Staff training
  • Service supplier audits
  • Introduction of document management procedures
  • Defining rules for data migration and storage
  • Data security audits

3. Training – to create the right level of awareness among employees, training should focus in particular on internal auditors. Experienced consultants and internal auditors who bring a fresh approach to the organization also contribute to the improvement of data integrity programs.

Given the numerous recorded data integrity deficiencies, their verification is a priority for the FDA and EMA (European Medicines Agency) during pharmaceutical inspections. When the main stakeholders – patients – take a given drug, they believe that the documents and data containing the decisions related to the production, research and launch of drugs are credible and reliable, and that the quality of the medicinal product is not at risk. Emerging problems that manufacturers face in terms of ensuring data integrity may lead to the imposition of huge fines, the suspension of drug production, import and distribution, and first and foremost, threaten patient safety, which is of course the most important aspect.

How to mitigate risk in validating a cloud solution?

Companies are eager to implement cloud solutions to cut costs and improve business efficiency. Within the pharmaceutical and life science industries, however, implementing a new system requires special prudence and supervision. Implementing a cloud service in a highly regulated environment carries risks which can be mitigated by validation – provided it is performed by an experienced partner with long-term expertise.

Cloud computing is a daily reality for many companies, even outside the IT sector. From simple tools to advanced solutions, businesses use remote, on-demand resources to streamline their operations and maximize profits. Cloud systems are also gaining popularity in the life science and pharmaceutical industries. In these areas, however, the adoption of cloud services is somewhat constrained by high security and compliance standards, which not all cloud suppliers are able to meet.

Cloud applications have plenty of advantages, but they are not completely free from weak points. Their efficiency and reliability depend on many factors and may be verified by means of Computer System Validation (CSV) or Software Quality Assurance (SQA). These procedures, with special focus on the Supplier Audit and Risk Assessment, serve to ensure that a specific cloud-based solution:

– meets industry and regulatory requirements,

– provides high quality results,

– is well-aligned with business goals of the organization,

– in other words: allows maximum benefits at the lowest possible risk.

Before we delve deeper into this issue, let’s explain the core characteristics of cloud solutions and the advantages they provide for businesses, including in the pharmaceutical and life science fields.

Cloud computing: main models, key characteristics

In broad and simple terms, cloud computing is the online delivery of computing services, such as software, analytics, networking, intelligence, databases, storage and servers. This paradigm provides access to flexible resources and enables companies to innovate and scale faster, as well as improve business efficiency and reduce operating costs.

Cloud computing is commonly associated with three acronyms: SaaS, PaaS and IaaS, the first being the most popular. They refer to three service models, whereby software, a platform or infrastructure is provided via the cloud – eliminating the need for more expensive and less efficient in-house solutions.

The cloud computing model is characterized by five essential features:

1) on-demand self-service – a consumer can access a cloud service at any time without human interaction with the service provider;

2) broad network access – the service can be accessed from a wide array of devices (e.g. smartphones, tablets, notebooks, PCs or Macs) and from any location with internet access;

3) resource pooling – cloud providers pool large-scale IT resources to serve multiple users;

4) rapid elasticity – the ability to scale services up and down quickly;

5) measured service – the ability of the cloud system to automatically control and optimize resource use and support predictive planning.

It’s good to be aware of the above characteristics, as they are the foundation for any type of cloud services.

Lower costs, increased efficiency – hard-to-ignore benefits of cloud solutions

Cloud solutions provide companies with numerous business and operational advantages, notably:

– minimal hardware costs,

– reduced costs of data storage, data processing and IT maintenance,

– increased efficiency and flexibility,

– universal, location-independent access to the resources,

– data safety (protection against data loss) and data security (protection against unauthorized use),

and many more.

No wonder SaaS solutions are tempting for companies looking to maximize efficiency, cut overheads and lift profits, especially when they come with the promise of increased data security.

However, the transition to a cloud solution is not an easy step for organizations operating in a high-risk, highly regulated environment such as the pharmaceutical or life science industries, especially since cloud services do not come with a guarantee of full compliance and may pose a liability. The principal concern is dependence on the cloud provider; possible implications include temporary unavailability of services or loss of data integrity. Legal issues may also occur, for example with regard to data storage and processing under the GDPR.

Validation essentials: Supplier Audit and Risk Assessment

These uncertainties call for expert guidance from an experienced external partner. Professional support is all the more essential since companies point to security as their biggest fear when implementing cloud solutions. To mitigate possible risks, any type of SaaS service may be subject to validation, such as CSV and SQA. It is worth stressing that in the case of a cloud service, the very same validation procedure applies as in the case of local server systems or applications.

On the other hand, the results of cloud system validation may be affected by limited access to the mechanics of the system and to the provider’s operations and policies. To ensure maximum transparency and minimize the risk of non-compliance, it is essential to conduct the Supplier Audit and Risk Assessment with the utmost attention to detail. The specific procedures may differ depending on the type of SaaS service, but they cover the same core areas.

First and foremost, the Supplier Audit must verify the supplier’s compliance with quality standards, backup and restore procedures, and functional and technical specifications. When assessing risk, we examine the potential risks related to the level of trust and control, shared responsibility and the quality of deliverables. We also take into consideration the relevant internal standards and requirements of the company implementing the cloud solution.
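
Risk examination of this kind is often summarized as likelihood × severity. The scoring below is a hypothetical sketch: the risk names mirror the areas mentioned in the text, but the scales and thresholds are illustrative, not taken from any guideline:

```python
# Hypothetical risk-scoring sketch: likelihood x severity on a 1-3 scale.
# Thresholds (>= 6 high, >= 3 medium) are illustrative only.

def risk_score(likelihood: int, severity: int) -> str:
    """Classify a risk as low / medium / high from 1-3 likelihood and severity."""
    product = likelihood * severity
    if product >= 6:
        return "high"
    if product >= 3:
        return "medium"
    return "low"

# Example risks mirroring the assessed areas: trust and control,
# shared responsibility, quality of deliverables.
risks = {
    "loss of control over provider changes": (2, 3),
    "unclear shared-responsibility split": (2, 2),
    "poor quality of supplier deliverables": (1, 2),
}

for name, (likelihood, severity) in risks.items():
    print(f"{name}: {risk_score(likelihood, severity)}")
```

High-scoring items would then drive the focus of the Supplier Audit, as described in the case study below.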

After Supplier Audit and Risk Assessment confirm compliance of the service and its provider with quality standards and client’s internal regulations, we may move on to the next stages of the validation process which include:

– User Requirement Specification

– Validation Plan

– User Acceptance Tests

– Validation Report

– Service Level Agreement

– Operational Support Plan

We can skip the details here, as the above items are well-known industry standards. We expand on each of these stages from the validation angle in our webinar, available on YouTube: https://www.youtube.com/watch?v=44FrG-yI7CQ

Validation of cloud-based serialization system: a case study

Now, let’s take a brief practical look at mitigating risk when validating a cloud solution, using the example of a cloud-based serialization system. We were dealing with two cloud providers – one responsible for the infrastructure, the other for the software – and with the sensitive issue of GxP data exchange with external clients.

The main areas of focus in the validation process were as follows:

1) Risk Assessment

2) Supplier Audit

3) Project Documentation

4) Operational Support Plan

The first step laid the basis for the whole validation process. The Risk Assessment included:

– analysis of the information about intended use of the system,

– determining the type and criticality of the data which would be processed by the system,

– determining the business criticality of the cloud solution and explaining it to the parties involved.

We also had to make clear to our client that the system would be externally hosted and would use cloud computing.

Based on the results of the Risk Assessment, we were able to focus on the right issues when performing the Supplier Audit. Among other activities, we verified the standards applied by the supplier, their internal processes relevant to the operability and safety/security of the system, the quality of the documentation, and the supplier’s awareness of the GxP requirements for pharmaceutical companies. Fortunately, the system offered by the supplier had already been validated in accordance with GxP and the GAMP 5 methodology. This facilitated the whole process, as – after scrutinizing the supplier’s documentation – we were able to use many of the provided documents for the purposes of validation.

We were less lucky with the validation of the infrastructure. The IaaS supplier was well aware of the requirements and security standards, but the company’s documentation and operational standards were far from acceptable. We prepared a report with suggested improvements; unfortunately, the company chose not to comply, which forced us to look for another supplier. This caused a slight delay in the project, but soon enough we found a company that met the requirements, as confirmed by our audit.

It’s best to mitigate the risk with an experienced partner

The process of verifying a cloud system provider, and the system itself, is not an easy one. It requires expertise and a keen understanding of the validation process, stemming from long-term experience. However, for organizations operating in the pharmaceutical and life science industries, validation may be a necessary precondition for implementing SaaS services.

If you seek further advice on mitigating risk when implementing a cloud solution, don’t hesitate to contact us at validation@ecvalidation.com.

IT Project Management: A Validation Challenge

Computer systems validation is a procedure performed in every pharmaceutical or medical device company that uses IT systems in the production, logistics or distribution of its products. Creating a new system, modifying it or implementing existing software requires proper project management. Two approaches to this type of process prevail in IT projects: traditional and agile management. This article looks at the differences between the two methodologies and their advantages from the validation point of view.

The main issue is the approach to conducting projects involving the validation of computerized systems in the pharmaceutical sector. Due to serialization requirements, many pharmaceutical firms will need to implement completely new software. Projects are usually delivered using either a classic waterfall or an agile methodology. The question is: what are the differences between the two, and when does each prove more effective?

With traditional methodologies, such as PRINCE2 or PMI’s PMBOK, we divide a project into individual stages (analysis, design, execution and implementation). Since each phase must be completed before the next one begins, these are often called cascade (waterfall) projects. The project is executed according to a precise scheme: its functionality and requirements are well known, and all its areas – duration, measurable business product, resources, and the organizational structure, including the roles and responsibilities necessary for project management – are precisely defined. The project is delivered as a whole. This methodology is characterized by a strict sequence of stages: if the preceding phase has not been satisfactorily completed, the next one cannot begin, and it is necessary to go back to the previous stage and make modifications to achieve the expected result.

In contrast, in an agile methodology such as Scrum, the plan is also defined, but everything else is flexible. The team itself is responsible for organizing and assigning tasks, and the project is completed stage by stage. Throughout the project, new needs may arise that the team can follow. The scope must be defined in both agile and cascade projects, and it may be identical, but when the project or schedule needs to be modified, the difference between the methodologies becomes clear. Agile methodologies give us instruments that allow us to modify the scope quickly and easily. The classic approach also offers such procedures, but they are time-consuming and require much more commitment. It is also worth mentioning that delivering a project using the waterfall approach takes more time, which usually leads to higher costs. It also entails repeating individual stages when the results fail to materialize or are not the ones expected – which, in turn, leads to downtime.

Another methodology worth mentioning here is hybrid agile project management, which combines control mechanisms known from traditional methodologies – covering risks, budget, time and quality – with the agile paradigm of continuously collecting and discovering new requirements and following change.

How do these differences reflect on project validation processes?

Let us agree that validation is a rather rigid method: it requires certain procedures and instructions and is nowhere near as flexible as many projects would find helpful. We can say that it is a typical cascade process, and from our perspective it is the constant variability of agile methodologies that poses a challenge, especially for documentation and testing. In agile projects, it is both important and difficult to set up validation rules that avoid “distorting” the agile approach too much. That means we need to preserve the possibility of flexibly defining or changing the scope of the project while still observing the formalities required in the validation process: developing a risk analysis and supervising risk throughout the process, planning the scope of validation work and testing, and defining and developing the required technical documentation – without the risk that something has been omitted or that, for example, the scope of testing has been inadequate.

You should also note that there are so-called hard gates in the validation process that must be passed before moving on to the next stage. For example, the specification documentation (URS, FS, DS) must be developed before testing can begin. In agile methodologies, however, the dynamics of changing functionalities directly affect the documentation, which in turn translates into testing. From my perspective, the whole art of validating agile projects lies in building an approach, or validation model, in which completing the formalities does not block the flexibility of changes in the project.

If a project is too simple to be broken down into smaller phases – so-called sprints, in Scrum nomenclature – it is hard to speak of an agile project. Breaking a project into sprints means gradually adding functionality that comes closer to the client’s requirements from sprint to sprint. In a waterfall project, we plan everything from A to Z (first of all, what the final product should look like), go through the various stages of production, and deliver and check the final product all at once, while in agile methodologies we deliver the individual elements or functionalities that make up the whole. It is wrong to assume that in agile methodologies the plan is improvised: generally, each action is defined and, contrary to appearances, some agile methodologies also make use of deadlines, in the form of so-called time boxes. In Scrum, each phase of work, however small – an iteration or a sprint – is rigidly defined. Tasks are performed precisely, and all results are verified on an ongoing basis. It is certainly easier to verify performance and results after each sprint: if you identify bugs in the functionality delivered in the last sprint, you can implement an improvement immediately or schedule a fix for the next sprint. From a software development point of view, this is a very big advantage. From a validation point of view, it complicates the whole situation.

Does this mean that within an iteration or a sprint, validation tasks can be carried out alongside the agile process of the whole project?

It depends on what phase the project is in. It often happens that the initial phases – when, for example, the environment is being prepared and the backend and configuration code are being created – do not contribute much to the whole and can be lost from the validator’s point of view. I have come across two approaches in this situation. In the first, the validator is involved in the project from the beginning and starts their activities around two-thirds or three-quarters of the way through each sprint, for example by checking the documentation or validating the application code. They stay on top of things and give feedback and comments on what has been done wrong from a validation point of view. The validator has ongoing control over the entire development process and can verify individual stages or functionalities. This minimizes the risk that the final product will be incompatible with business objectives and requirements, or that the process will not have been conducted in line with the rules, regulations or good manufacturing practice. Of course, this solution also has disadvantages: for instance, certain functionality created during the sprints is often absent from the final product because it has been abandoned, which runs contrary to the validator’s approach, since their role is to ensure that the solution is consistent with the business goals and specifications. In the second approach, the validator gets involved in the project in its final stages – let’s say at 80% of the delivered functionality. They start validating, work with the people responsible for the documentation and the source code, and obtain the documents that are needed at the end of the project.

The latter approach obviously has a very serious downside. If everything has been done correctly – nothing is missing from the documentation, and there are no mistakes or defects – the validation process goes smoothly. But if there are complications, deficiencies or defects, validation takes longer and directly affects the delivery of the final validated product. A far better model is to involve the validator in the project from the very first stages, albeit to a lesser extent.

So, when should a validator or a team of validators start their work in the case of software development or implementation of the project in a pharmaceutical company?

The later we get involved in the project, the worse. The worst option is to include validation right before the tests themselves, because in practice this means the most work for everyone. For a validation project to start, documents such as a risk analysis or a validation plan have to be created. Even in an agile methodology – Scrum, to be precise – we should get involved at the very beginning, with the risk analysis. The validator should have a say in developing the test strategy: when the tests are performed, what tests are performed, what documentation is created during the project, whether it is kept meticulously, what the working environments are, what the implementation should look like and in which environments, and much more. Even if most of the validator’s job falls in the testing and pre-implementation phase, involvement in the early phases of the project will be helpful. Joining the project later may lead to delays, or entail a lot of additional work and costs to make the project compliant with regulations – assuming, of course, that validation is needed. I am sure it is no secret to anyone that amending a project at the beginning costs the least, in terms of time and commitment as well as financially.

Including validation at the very beginning is crucial, because the project team should know what is going on and what is expected. The mere absence of a project scope or plan is not a matter of agile methodology – it is a result of bad project organization. In IT projects it is sometimes wrongly assumed that if the project is carried out using an agile methodology, certain issues can be approached more loosely, e.g. the documentation or the plans can be written later. This is not the case. The scope of the project and its implementation plan must always be known, documented and written down, and the documentation must be provided. As I said at the beginning, the only difference is whether we can break the project down into phases and whether we have procedures or functions that allow us to make these modifications.

Let’s assume that a project is being implemented in the agile methodology, but during one of the stages, a defect or non-compliance with the documentation appears. What happens then? Is the sprint or phase terminated when the functionality works? Is there any procedure for that?

We should refer here to the Scrum methodology. Sprints are usually two-week stages for which specific tasks are planned. After each sprint, you have to present the results: successfully completed tasks and identified errors (which need to be corrected). After the progress of the project is presented, another meeting is held with the business department to plan the scope of the next sprint. At this meeting, detailed requirements can be discussed and any doubts about how the requirements will be implemented can be resolved. If there are bugs, fixing them is included in the plan. In Scrum projects, sprints are not all planned from the beginning to the end of the project. To put it simply, a Scrum project is a set of individual tasks that must be completed over the course of the project for the final product to take its intended form. However, the number of sprints often increases during the project because of bugs that need to be resolved or new features that need to be added. Certain tests should also be scheduled during the sprints, so that it is possible later on to prepare the documentation, report the progress and the detected bugs, and plan their correction.

This is also where waterfall and agile methodologies differ. In traditional methodologies, the occurrence of a critical problem stops the project: the tests must pass before the whole project can be implemented, so work cannot proceed until the problem is solved. In agile, this doesn’t necessarily mean that the project will come to a standstill. If a given sprint or iteration doesn’t affect other phases or functionalities (and it often doesn’t, because the next stages are only being planned), then one team can work on solving the problem while another carries out other tasks. And, in principle, we learn about problems and bugs on an ongoing basis.

Also, from a validation perspective, agile projects should be approached more flexibly, but still within good manufacturing practice. The general guidelines for validation are identical whether the methodology is traditional or agile; the key is to choose the right tools to enable the validation process in a “changing environment”. For example, instead of using traditional documents, you can build documents in modules or introduce intermediate revisions (not just major versions) to allow more frequent updating. In fact, it all comes down to the methods of documenting and later approving documents before individual phases. Poor documentation can ground any project, but especially an agile one. Apart from documents, it is of course important to look at the testing phase and, again, to decide whether only parts of the system can be verified during individual sprints. To make sure the system works properly as an integrated whole, you can plan a regression test phase at the end of the project, or end-to-end cross-sectional tests across the whole system. With good planning, such tests can be automated beforehand, which also speeds up the whole testing process.
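
The regression phase mentioned above lends itself to automation. Below is a minimal sketch using Python’s built-in unittest module; `calculate_batch_yield` is a hypothetical stand-in for a validated system function, not a real API:

```python
import unittest

def calculate_batch_yield(output_units: int, input_units: int) -> float:
    """Hypothetical stand-in for a validated system function: yield as a percentage."""
    if input_units <= 0:
        raise ValueError("input_units must be positive")
    return round(100.0 * output_units / input_units, 2)

class RegressionSuite(unittest.TestCase):
    """Re-run in full at the end-of-project regression phase (or after each sprint)."""

    def test_nominal_yield(self):
        self.assertEqual(calculate_batch_yield(95, 100), 95.0)

    def test_rejects_invalid_input(self):
        with self.assertRaises(ValueError):
            calculate_batch_yield(10, 0)

# Run the whole suite programmatically: after every sprint, this single call
# re-verifies all previously delivered functionality.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all regression tests passed:", result.wasSuccessful())
```

Once such a suite exists, re-running it after each sprint is a one-command operation, which is exactly the property that makes end-to-end verification feasible in a “changing environment”.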

So which methodology is best to choose for validation projects?

The best solution is often to carry out the project in a hybrid methodology such as AgilePM: the main phases are conducted traditionally, with all formal requirements preserved, and are then broken down into lower-level groups of tasks that can be handled in iterations specific to the Agile methodology.

With any project, it is very important to have good work organization, a work plan, a schedule and a flow of information, so that everyone involved knows what they are responsible for and is aware of the project's progress. This minimizes the number of mistakes. It is also important to cooperate with the ordering party and encourage its close involvement in the project. The exchange of information between the ordering party and the project team is then much better, and it becomes easier to deliver, in a shorter time, a project that is fully compliant with expectations.

Agile methodologies are not a cure for all ills, but they can speed up a project. Before starting, it is important to analyze the project based on, for example, a requirements analysis, which will indicate how the project can best be run. From the point of view of validation processes, whichever methodology is chosen, an important aspect is to involve the validator from the beginning of the process, so that their comments and recommendations, as well as the work to be done throughout the project, can be taken into account. This will save everyone time and eliminate downtime, which is what not only the client but also the contractor cares about most.

From CSV to CSA – what should you know about the new validation paradigm?

Validation is going through a paradigm shift. The new approach is called Computer Software Assurance (CSA). It is founded on critical thinking and intended as an improvement on the long-standing Computer System Validation (CSV) standard. Feeling confused? Don't be. We're here to help you through the transition from CSV to CSA in the smoothest way possible.

This change has been long awaited. Ever since the FDA issued its guidance on General Principles of Software Validation in 2002, the volume of documentation has been growing out of proportion. "Better safe than sorry" has become the common approach in the industry, while the focus on improving medical device quality for patient safety has waned. Now, with CSA, the priorities in the validation process have been reordered. While the CSV standard – at least in practice – has been predominantly based on documentation, its successor ascribes key importance to critical thinking.

Common issues with CSV

It's not as though things were poorly thought out from the start. The risk-based approach, now encapsulated in CSA, was promoted by the FDA as early as 2003 and introduced to CSV by GAMP 5 five years later. With sufficient focus on the system functionalities carrying the highest risk, CSV serves well enough to achieve, in an efficient and effective way, compliant computerized systems fit for their intended use within a regulated environment.

In reality, though, the process is all too often overburdened with documentation, which can consume roughly 60% of the time required to complete a validation project. This doesn't imply CSV is flawed at its core. The problems usually stem from improper execution and misunderstanding of the validation process. Common issues with CSV application include the following:

– Insufficient cooperation among business experts

– Insufficient understanding of the intended use of the system subject to testing

– Tendency to maximize testing effort, “just to be sure”

– Demands to provide detailed evidence of testing step by step

– Excessive reviewing of documents

Much of the above boils down to insufficient application of critical thinking in the validation process. With CSA, things are about to change. By no means does it make CSV obsolete. The new paradigm restructures rather than recreates the validation methodology, so that it can help achieve the desired objectives in a more rational and efficient way.

CSA vs CSV – new hierarchy of priorities

The critical thinking underlying the CSA model shifts the focus from a zealous effort to document every action to:

– impartial fact analysis

– pattern identification

– trend assessment

– evaluating outcomes

Combined with a risk-based approach, critical thinking serves to aim the validation effort at the features of the system that are critical to patient safety, product quality, and data integrity. Thus, the central questions the validation team should be able to answer with perfect clarity are:

– Does this software impact patient safety? 

– Does this software impact product quality? 

– Does this software impact data integrity?

CSA – critical thinking for better risk assessment

Combining critical thinking with proper risk assessment makes it possible to address assurance needs in a proportionate manner. Consequently, the issues requiring more attention won't be neglected in favor of those of lesser importance, as is often the case with poorly performed CSV. As opposed to its predecessor, CSA results in higher confidence in system performance, rather than excessive documentary records covering even low-risk features where little action is required.

CSA provides not only a more reasonable but, in a way, a more holistic approach. While CSV focuses on individual steps in a system, the new model aims at understanding the overall business process in order to better assess the risks involved and apply adequate testing. Therefore, successful CSA completion is highly dependent on seamless collaboration between experts – skilled professionals knowledgeable about the business processes subject to analysis and testing.

While the “human factor” is crucial, CSA also relies on more extensive use of new technologies, especially automated test tools and systems for digital management of test documentation.

Smooth transition from CSV to CSA paradigm with ecvalidation experts

Essentially, CSA is not about retiring an "outdated" CSV standard. It's about unlocking the potential of a critical mind and collaborative teamwork, as well as overcoming the anxiety that comes with the responsibility of risk management. At ecvalidation, we've completed hundreds of validation projects and are well aware of the potential shortcomings of the CSV model.

At the same time, in our individual approach we have always adhered to FDA recommendations regarding the role of risk analysis in validation. Hence, we gladly welcome the transition to the CSA paradigm and perceive it as a valuable upgrade to the previous model. We'll be happy to assist any company interested in a smooth implementation of the new standards in the validation process.

Computer System Validation – SaaS solutions

One of the increasingly popular forms of cloud computing is the so-called SaaS model, or Software as a Service. This solution is based on the assumption that applications and databases are installed on the provider's servers, and access to the applications is usually provided through a website. The most popular applications of this type include OneDrive, SalesForce.com, Google Apps, Concur, and Dropbox. The advantage of such an approach is that the customer gets access to solutions with specific functionalities without having to invest in IT infrastructure. An additional advantage is that such programs can be accessed from both desktops and mobile devices.

Cloud solutions are also becoming an increasingly popular tool in pharmaceutical companies, including areas deemed critical to good manufacturing practice (GMP). When we talk about software supporting a critical GMP area, we must of course remember that such a solution will require validation. How can such validation be carried out correctly? What are the potential risks? What kind of cooperation can be expected between the pharmaceutical company and the provider? These are the principal questions that will be addressed in this article.

Definition of requirements

As in any other case, we should start by preparing the User Requirement Specification (URS). However, when choosing a SaaS solution, it should be kept in mind that apart from functional requirements describing system operation or quality requirements covering GMP aspects, we should particularly consider the issues of data management and data integrity.

We must, first of all, acknowledge that the same solution is already in use, or may be used in the future, by other companies, including our competitors. This is important because the data will be stored on external servers managed by an external company. If the system is additionally to process personal data, we enter the area of personal data protection regulations, so it will be important to specify the requirements in this area in accordance with the internal security and data management policy. When creating the URS, we should pay attention to at least the following issues:

– server location

– dedicated server

– archiving and backup

– access management

– updates, system corrections

– testing

The above issues do not, of course, exhaust the whole range of questions and requirements that can be posed to the SaaS system, but they should help to point the way when writing the URS.
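As a purely illustrative sketch (all requirement IDs, wording and categories below are hypothetical), such SaaS-specific URS items can be kept in a machine-readable form from the start, which later makes it easier to build a traceability matrix and to prioritise the provider audit:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One URS entry; fields are illustrative, not a prescribed format."""
    req_id: str
    text: str
    category: str        # e.g. "data management", "security", "maintenance"
    gmp_critical: bool

# Example URS entries covering the SaaS-specific issues listed above.
URS = [
    Requirement("URS-001", "Servers located within the EU", "data management", True),
    Requirement("URS-002", "Dedicated (non-shared) server instance", "security", True),
    Requirement("URS-003", "Daily backup with documented restore test", "data management", True),
    Requirement("URS-004", "Role-based access management", "security", True),
    Requirement("URS-005", "Advance notice of updates and patches", "maintenance", False),
]

# GMP-critical items are the ones the provider audit must confirm first.
critical = [r.req_id for r in URS if r.gmp_critical]
print(critical)  # -> ['URS-001', 'URS-002', 'URS-003', 'URS-004']
```

Keeping requirements structured this way is one possible convention; the same list can equally well live in a spreadsheet or a requirements management tool.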

Provider Audit

According to the requirements of Annex 11 of the GMP, the IT systems provider must be audited. In the case of cloud solutions, this is particularly important because we entrust the provider with the implementation of the business process, the system and its care, as well as data and its security. It may also be one of the few opportunities to analyse the provider’s quality system in detail.

Satisfactory responses of the provider to the requirements defined in the URS must be verified and confirmed at this stage. To this end, the provider must be verified with regard to whether they have appropriate procedures in place, whether these procedures are implemented and applied in practice, and whether their staff is adequately trained. From the perspective of the client – a pharmaceutical company – one of the key points of the audit should be the verification of validation documentation and, if the provider does not have such documentation, then the verification of technical and test documentation. Another key audit point should be the aspects related to system security, starting from the verification of system architecture, through encrypted communication protocols, to the verification of penetration tests.

It should be kept in mind that according to GMP regulations, the final responsibility for the business process always falls to the pharmaceutical company, even when the process has been delegated to the provider as a part of the Software as a Service solution. Taking this into account, the provider audit is a critical element of the solution validation process.

Software as a Service solution validation process

As a rule, the SaaS solution validation process should not differ significantly from the validation of other computer systems owned by a pharmaceutical company. Therefore, if the company has defined validation procedures in this area, they should be applied. An example of one possible approach to the validation process is presented below:

Risk Analysis – the risk analysis process should commence at the start of the validation process, and all subsequent activities (qualification and its scope, the scope of tests, required documentation, etc.) should be planned and performed based on a risk-based approach. Various tools can be used for risk analysis; two of the most popular, FMEA and expert assessment, will also work well in this case.
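As a minimal, hypothetical FMEA-style sketch (the 1-5 rating scales, the threshold and the example failure modes are assumptions made for illustration, not values prescribed by any guideline), the risk priority number (RPN) is the product of severity, occurrence and detectability ratings:

```python
def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number = severity x occurrence x detectability (each 1-5)."""
    for rating in (severity, occurrence, detectability):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be in the range 1..5")
    return severity * occurrence * detectability

# Example failure modes for a SaaS system, with illustrative ratings.
failure_modes = {
    "unauthorised data modification": rpn(5, 2, 4),  # severe, hard to detect
    "backup restore failure":         rpn(4, 2, 3),
    "UI misalignment":                rpn(1, 3, 1),  # cosmetic, low risk
}

# Failure modes above an agreed threshold receive dedicated validation tests.
THRESHOLD = 20
to_test = [name for name, score in failure_modes.items() if score >= THRESHOLD]
print(to_test)  # -> ['unauthorised data modification', 'backup restore failure']
```

The same prioritisation logic drives the risk-based approach: high-RPN items get the most testing effort, while low-risk items (like the cosmetic UI issue) may need little or none.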

Validation Plan – based on the risk analysis and the results of the provider audit, a validation plan should be developed that takes into account all the key stages of the process as well as the specifics of the SaaS solution. The mention of the provider audit is no coincidence: its conclusions will strongly influence which actions the pharmaceutical company should plan to carry out itself, which should be delegated to the provider, and which can be omitted because the provider already performs them.

Documentation Preparation – the pharmaceutical company is responsible for the development of the User Requirements Specification, while other technical documentation (Functional Specifications, Technical Specifications, description of system architecture, description of configurations, interfaces and others) should be developed and provided by the software provider. In order to verify that the provider’s documentation properly covers the URS, the preparation of the Traceability Matrix can be started at this stage. The Traceability Matrix is part of the pharmaceutical company’s validation documentation and is designed to demonstrate that user requirements have been implemented in the system (the connection between the URS and the corresponding Functional Specification) and that they have been properly tested (the connection between the URS and the corresponding tests).
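A Traceability Matrix of the kind described above can be sketched as a simple mapping from each URS item to the Functional Specification section covering it and the tests exercising it. The IDs below are hypothetical; in a real project this matrix would be part of the approved validation documentation:

```python
# Traceability matrix: URS item -> functional spec reference and test IDs.
# All identifiers are illustrative examples.
traceability = {
    "URS-001": {"fs": "FS-2.1", "tests": ["UAT-01"]},
    "URS-002": {"fs": "FS-3.4", "tests": ["UAT-02", "E2E-01"]},
    "URS-003": {"fs": None,     "tests": []},  # gap: not yet covered
}

def gaps(matrix: dict) -> list:
    """Return requirement IDs lacking a specification link or any test."""
    return [req for req, links in matrix.items()
            if links["fs"] is None or not links["tests"]]

print(gaps(traceability))  # -> ['URS-003']
```

Running such a gap check before finalising the Test Report gives an early warning that a requirement has neither been implemented nor tested.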

Infrastructure Qualification – validation tests must be carried out on qualified infrastructure. The software provider should at this stage provide qualification documentation for servers (at least a test and production server). Without confirmation of infrastructure qualification status, validation tests should not be started.

Performing validation tests – the scope of tests, approach to testing, reporting rules and error assessment criteria as well as the method of documentation should be described in the test plan. The tests can be divided into two groups: IT Tests (Unit Tests, Source Code Review, System Tests, Integration Tests, Security Tests) and Business Tests (User Acceptance Tests, End-to-End Tests). The provider is responsible for the technical part of the system, so if the audit results in the testing area were positive, then in the case of IT Tests, it is possible to refer to tests performed by the provider. It will be important to verify the available test documentation and the scope of testing. Such an approach significantly reduces the amount of testing on the part of the pharmaceutical company, because only Business Tests will be performed. After the tests are completed, the Traceability Matrix should be completed in order to show that each of the requirements has been properly tested. Finally, a Test Report summarising the results should be prepared. All errors reported during testing and their current status should be verified. Errors deemed as critical and important should be resolved before the Test Report is finalised.

Determination of System Maintenance Rules – after completion of all tests and prior to finalization of the Validation Report, system maintenance rules should be established and described. If necessary, appropriate procedures and instructions should be developed and training provided. These actions should also be summarised in the Validation Report, with detailed maintenance policies described in a separate Service Level Agreement.

Validation Report Preparation – at the end of the validation process a Validation Report should be prepared with a summary of the entire process. If any discrepancies or deviations occurred during the validation process, they should also be described and evaluated. If any discrepancies have not been resolved at this stage, the risk reduction measures implemented and the deadline for resolving open discrepancies should be presented. The Validation Report should include a statement that the object of validation has been approved and released for use in production. All documentation produced as part of the validation process must be prepared and approved before the Validation Report is signed. This also applies to training materials, procedures, instructions, etc.

System Maintenance – Service Level Agreement

A validated system working within the SaaS solution, like any other, requires maintenance from both the technical and the validation sides. By system maintenance we mean a number of activities carried out by the provider in order to ensure the correctness and continuity of system operation. These activities include: monitoring system operation, making changes, corrections and updates, testing, managing permissions and access, archiving and backup, incident management, providing support lines (helpdesk, 2nd and 3rd lines of support) and others.

From the point of view of the pharmaceutical company, it will be crucial to continuously maintain the validated status through the change control process, which should cover aspects related to:

– incident and defect management – how defects are handled by the 1st, 2nd and 3rd support line, who opens the incident and when, what the communication channels between the customer and the provider are

– risk assessment – what the assessment criteria are, what needs to be done in case of an emergency notification, how the level of risk translates into further implementation steps, e.g. testing

– document management – which documents should be updated, and how

– testing – how the range of tests is selected, what the testing methods are, how the tests are documented

– training – if a correction or system update changes the way the system works and operates, user training is required

This process, in simple terms, should comply with GMP requirements, i.e.:

– formal change requests should be raised – this applies both to bug fixes, where an incident may be reported first, and to updates

– each change request should be assessed at least in terms of its impact on the system, documentation and testing; on this basis, an implementation plan should be drawn up together with a list of necessary actions to be taken

– the implementation should be monitored on an ongoing basis in terms of both substantive and qualitative correctness

– finally, the change should be assessed as to whether it has been implemented correctly
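The four steps above can be sketched as a simple state machine over a change request, where no step may be skipped. The state names and fields below are illustrative assumptions for this example, not a structure prescribed by GMP:

```python
# Allowed change-control transitions: request -> impact assessment ->
# monitored implementation -> final evaluation (closed).
ALLOWED = {
    "requested":    {"assessed"},
    "assessed":     {"implementing"},
    "implementing": {"evaluated"},
    "evaluated":    set(),  # terminal state: change is closed
}

class ChangeRequest:
    """Minimal GMP-style change-control record (illustrative)."""

    def __init__(self, cr_id: str, description: str):
        self.cr_id = cr_id
        self.description = description
        self.state = "requested"
        self.history = ["requested"]  # audit trail of states

    def advance(self, new_state: str) -> None:
        # Enforce the order of steps: e.g. implementation cannot start
        # before the impact assessment has been recorded.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

cr = ChangeRequest("CR-001", "Apply security patch to application server")
cr.advance("assessed")
cr.advance("implementing")
cr.advance("evaluated")
print(cr.history)  # -> ['requested', 'assessed', 'implementing', 'evaluated']
```

The `history` list plays the role of the audit trail: each state change is recorded, so the final evaluation can confirm the change followed the agreed process.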

Since the system is maintained by the provider and it is the provider who is responsible for the implementation of processes such as, for example, change control, it is crucial that there is a formal service level agreement between the pharmaceutical company and the provider, which will specify the above issues, both in terms of the content of these processes and the principles of cooperation (communication channels, response times, access to documentation, etc.).

Summary

Due to their numerous advantages, SaaS (Software as a Service) cloud solutions are increasingly popular among pharmaceutical companies, including in GMP-critical areas. And although the validation required in these cases can be challenging, the key aspect of the whole process is the choice of provider, preceded by a detailed audit. Such a conclusion should not surprise anyone: the quality and security of the system depend primarily on the provider, their awareness of GMP requirements, and whether their quality processes are implemented properly. It should be kept in mind, however, that the pharmaceutical company remains responsible for the whole validation process and should know how it intends to carry out that validation before committing to a given solution.