What is a fair test?
Understand what is meant by ‘fair test’ in primary-school science and organise your own child-friendly investigations at home to help your child practise this concept.
A fair test is a controlled investigation carried out to answer a scientific question.
What do children learn about fair tests in primary-school science?
In a fair test, two or more things are compared. For a test to be fair, or scientifically sound, children must ensure that only one thing (called a variable) is changed.
For example, if testing which material is the most waterproof by pouring liquid onto a selection of different materials, in a fair test only the type of material (the component you are testing) should be changed.
Therefore all other variables (the type of liquid used, the amount of liquid, the height the liquid is poured from, the speed at which it is poured and the size of each piece of material) must remain the same each time the test is carried out. The test would be unfair, or unscientific, if, for example, 1000ml of water were poured onto a piece of foil and 2ml of milk onto a piece of foam: you could not fairly compare the materials.
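The one-change rule can even be checked mechanically. Here is a minimal Python sketch (the trial dictionaries and variable names are illustrative, not from the text) that confirms a planned set of trials varies exactly one thing:

```python
def changed_variables(trials):
    """Return the names of the variables whose values differ across trials."""
    return {key for key in trials[0] if len({t[key] for t in trials}) > 1}

# Planned trials for the waterproof-material test described above.
trials = [
    {"material": "foil",   "liquid": "water", "volume_ml": 100, "pour_height_cm": 10},
    {"material": "foam",   "liquid": "water", "volume_ml": 100, "pour_height_cm": 10},
    {"material": "cotton", "liquid": "water", "volume_ml": 100, "pour_height_cm": 10},
]

changed = changed_variables(trials)
print(changed)              # {'material'}
print(len(changed) == 1)    # True: only one variable changes, so the test is fair
```

If the set contains more than one variable name, the plan is not a fair test.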
When do children learn about scientific methods of testing?
Fair testing is taught throughout primary school within other science topics. Children gradually develop their skills and independence in planning and carrying out fair tests.
In Years 1 and 2 (Key Stage 1) children will be taught to perform simple tests.
In Years 3 and 4 (Lower Key Stage 2) children will be taught to set up simple comparative and fair tests.
In Years 5 and 6 (Upper Key Stage 2) children will be taught to plan scientific enquiries to answer questions and to control variables where necessary.
How is fair testing taught?
How to carry out a ‘fair test’ is taught throughout the science curriculum and not as a separate topic. Children will explore fair testing as a method of investigating questions within the science topics they are studying (for example plants, soils, light and sound, materials, electricity or forces).
A fair test begins with a question. This might be given to the class by the teacher or children may be asked to think of their own question to investigate (usually in Years 5 and 6).
Children will usually work in small groups to plan, carry out, take measurements, record results and write a conclusion. The children may be supported in different ways depending on age and ability. The planning stage may involve discussing the method and completing an investigation sheet to identify the variable (thing) they will change and what must stay the same to make it a fair test. Children may be given or create their own diagrams or tables to record results on.
During the investigation children will use equipment to take measurements or record their observations. Often bar charts or line graphs will be drawn up before a conclusion is made.
Put fair testing into practice with activities to do at home
At home you could plan and carry out your own fair tests with your child. Here are some suggestions to get you started:
- Which type of chocolate melts fastest? (Always remember to supervise children if they are working with hot liquids.)
- Which magnet is the strongest?
- Which material is most suitable for an umbrella?
- Does a plant need light to grow?
- How can you go faster down a slide?
- Which kitchen towel absorbs the most liquid?
- Which shaped ice cube melts the fastest?
For each investigation consider what variable will change and which variables need to remain the same to make it a fair test. Think about what you are measuring. What equipment do you need? Can you make a prediction (a good guess, with a reason) before you begin?
Fair tests: A do-it-yourself guide
- Designing a fair test of an idea — in formal science or in everyday life — means deciding what results you’ll be comparing, controlling variables, avoiding bias, and figuring out a way to distinguish chance differences from meaningful ones.
- Controlled variables are those factors that are kept constant across a test, so that the effect of another variable can be better observed.
- The larger the sample size a test employs, the smaller the difference that the test will be able to detect.
Which brand of chocolate chip makes the best tasting cookies? Is the tree outside your window causing your runny nose? Why won’t your car start? If you want to answer questions like these, you’ll probably need to do some testing. But all tests are not created equal. In order to figure out the real answers to such questions, you’ll need to test your ideas in a fair way.
The considerations that go into making “everyday” tests fair are the same ones that scientists consider when they test their ideas using experiments and other methods. Whether one wants to optimize a chocolate chip cookie recipe, develop effective treatments for Alzheimer disease, learn more about how mass extinctions work, or investigate the workings of gravity, the components of a fair test are the same:
- Comparing outcomes. To be confident in test results, it’s generally important to have something to compare them to. So, for example, in your cookie test, you’d want to actually compare batches of cookies made with different brands of chocolate chips. You might also want to make a batch without any chocolate chips at all — just to make sure that the chocolate chips are really making a difference in the cookies’ taste. Making just one batch of cookies with one brand of chocolate and seeing how they taste wouldn’t help answer your question. In experiments, whatever you are comparing your test results to is sometimes called the control group or control treatment. But don’t confuse the control group with …
- Controlling variables. In most tests, we want to be confident in the relationship between cause and effect. Is it really the chocolate chip brand, and not the baking temperature, that makes one cookie taste better than another? To be able to make a strong statement about cause and effect, you’ll need to control variables — that is, try to keep everything about the test comparisons the same, except for the variables you’re interested in. So in the cookie case, this would mean, for each batch standardizing the dough recipe, the method for mixing and baking the dough, and the procedure for tasting and rating the cookies. The only element that should vary across batches is the one variable you’re interested in: brand of chocolate.
- Avoiding bias. No matter how hard we humans try to be objective, bias can sneak into our observations and judgments. In a sense, bias occurs because it’s very difficult to “control” variables associated with human judgments. For example, your cookie tasters might be hungry and so the first cookie they eat could seem tastier to them than the rest. To avoid this potential source of bias, you’d want to set up the test so that different testers taste the cookies in different orders. And if testers knew which cookies were made with which brands of chocolate they might be subconsciously biased towards more expensive chocolate brands. To avoid this, you could label your cookie batches with letters instead of brand names. It’s even possible that you, the cookie baker, would give subtle clues to your tasters if you knew that Cookie B was made with your personal favorite brand of chocolate. So, you might want to arrange to stay out of the room while the tasting is going on.
- Distinguishing chance from real differences. All sorts of subtle things that you either don’t or cannot control can affect the outcome of a test. Some cookies in a batch might have wound up with a few less chocolate chips than others. The oven might have heated unevenly and burnt a few cookies. One taster might have been distracted during the test and not given careful ratings. All of these random factors will affect the outcome of the test — but in small ways. So how do you know if the difference between a cookie with an average rating of 4.1 and one with an average rating of 4.25 is due to random factors or a real difference in chocolate brand? First, sample size is important. Cookies from each batch should be rated by many different people. The larger your sample size, the more likely it is that these random factors will cancel each other out and that real differences (if they exist) can be detected statistically — which leads to our second point: Statistics can be used to analyze your raw data. The purpose of conducting such statistical tests is to tell you how likely it is that a difference in rating like the one that you observed is actually due to random factors.
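The blinding and order-randomization described under "Avoiding bias" can be sketched in a few lines of Python (the brand and taster names are made up for illustration):

```python
import random
import string

brands = ["BrandA", "BrandB", "BrandC"]        # hypothetical chocolate brands
tasters = ["taster1", "taster2", "taster3"]    # hypothetical tasters

# Blind the batches: each brand gets a random letter code, and the
# key is kept secret until all tasting is finished.
codes = random.sample(string.ascii_uppercase, len(brands))
blinding_key = dict(zip(codes, brands))

# Give every taster an independent random tasting order, so
# "the first cookie tastes best" effects average out across tasters.
orders = {taster: random.sample(codes, len(codes)) for taster in tasters}
```

Tasters only ever see the letter codes; the `blinding_key` is consulted after the ratings are in.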
DETECTING THE DIFFERENCES: STATISTICS AND SAMPLE SIZE
You might be wondering, what counts as a “large” sample size? Twenty, 200, or 2000 chocolate chip cookies? Well, it depends on how small a difference between groups you want to be able to detect. If you are interested in very tiny differences (e.g., subtle differences between chocolate chip brands), you need a very large sample size, and if you only care about pretty big differences (e.g., the difference between yummy and disgusting), you can get away with a smaller sample size. The appropriate sample size depends on the statistical tests you want to run and the sorts of differences you want to detect.
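One way to make the chance-versus-real-difference question concrete is a simple permutation test: shuffle the ratings between the two batches many times and count how often a difference at least as large as the observed one appears by luck alone. This sketch uses only the standard library; the ratings are invented for illustration:

```python
import random
from statistics import mean

def permutation_p_value(a, b, n_iter=5000, seed=0):
    """Estimate how often a difference in means at least as large as the
    observed one arises by chance when the group labels are shuffled."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

batch_a = [4.1, 3.9, 4.3, 4.0, 4.2, 4.1, 3.8, 4.2]   # made-up cookie ratings
batch_b = [4.3, 4.2, 4.5, 4.1, 4.4, 4.2, 4.0, 4.3]
p = permutation_p_value(batch_a, batch_b)
# A small p (conventionally below 0.05) suggests the observed difference
# is unlikely to be due to random factors alone.
```

With more tasters per batch, smaller real differences become distinguishable from noise, which is exactly the sample-size point made above.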
It is often impossible to make a test perfectly fair, and each issue listed above may be more or less important for a particular test — but by considering each of these factors in how your test is designed, you can maximize the amount of useful information you get from the test.
Now it’s your turn. Test your knowledge by applying what you’ve learned about fair tests to these situations:
- Imagine that you want to figure out whether the blooming tree in your yard is causing the runny nose you’ve gotten this spring. Can you think of an observational test you might perform to help figure this out? How would you make that test fair? Can you think of an experimental test you could perform to help figure this out? How would you make that test fair?
- Read about an experiment in which scientists tried to test the idea that getting physically cold (e.g., cold feet) can contribute to catching a viral cold. How does this test measure up in terms of fairness? What might you do to improve the test?
- Students will learn more about designing fair tests if given the opportunity to do this themselves and to make mistakes, instead of following a lab with procedures dictated by a manual. During labs, you can provide students with a set of tools appropriate for exploring and testing the relevant ideas, but ask them to develop and refine their own procedures and measuring techniques.
Pentest — what it is and how to become a pentester
A pentest (penetration test) is a set of activities that simulates a real attack on a network or application. The purpose of a penetration test is to find out whether a hypothetical attacker could break into the system. To do this, the testers themselves try to hack it or gain control over its data.
The name pentest is a contraction of penetration testing. Penetration in this sense means gaining access to a system.
During a pentest, experts look for and analyze vulnerabilities that could disrupt the system or give attackers access to confidential information. They act from the position of a real attacker, imitating the various ways the system could be hacked.
Who is in charge of pentesting?
Pentests are performed by pentesters, also known as penetration testers or white-hat hackers. To imitate attacks on information systems you need to be able to conduct them, so pentesters must be able to do the same things as malicious hackers. The two areas overlap, and both belong to the information security industry.
A pentest can be performed by the company’s own security staff, but this is not always appropriate: sometimes the tester must know nothing about the network infrastructure in advance.
Why do we need a pentest
- Identifying weaknesses and vulnerabilities in systems and networks.
- Understanding how and where an attack could come from, and whether attackers could disrupt the system; if so, how.
- Determining how the defences would hold up against different kinds of hacker attack.
- Making recommendations on how to remedy what is found.
- Preventing real hacker attacks on the systems.
- Maintaining the security and confidentiality of data and the health of the network.
What is included in penetration testing
Pentest is carried out at the physical and software levels. The main task is to penetrate a system or network, gain control over a device or software, and collect information. The exact steps depend on what is being tested.
Networks. Testers look for weak nodes, misconfigured protocols, and other vulnerabilities in how data is transmitted or received. This can also include searching for weak passwords and other ways to gain unauthorized access to the network.
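As a toy illustration of the network side, the core of a TCP connect scan, the simplest technique that dedicated scanners automate, fits in a few lines of Python. This is only a sketch; never scan hosts you are not authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Tiny TCP connect scan: a port counts as open if the TCP
    handshake completes (connect_ex returns 0)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example usage, against your own machine only:
# print(scan_ports("127.0.0.1", range(1, 1025)))
```

Real scanners add stealthier probe types, service fingerprinting and massive parallelism on top of this basic idea.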
Applications and software. These are local or networked applications and large sites. Pentesters forge requests, try to access the database, inject malicious scripts into the code, and interfere with sessions; these are just some of the possible actions. All of this is done purely for testing and does not affect the actual operation of the applications. Such a pentest is usually carried out before the system’s public launch.
Devices. Testers, or white-hat hackers, look for software and hardware vulnerabilities and for weaknesses in the network the device is connected to, and try to recover passwords by brute force.
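The brute-force idea mentioned here can be illustrated with a minimal dictionary attack against a password hash. The password and wordlist below are invented; real tools such as hashcat do the same thing far faster, with mutation rules and GPU support:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate password and compare it with the target digest."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None  # no candidate matched

# Demo: the digest of a deliberately weak, made-up password.
leaked = hashlib.sha256(b"qwerty123").hexdigest()
print(dictionary_attack(leaked, ["password", "letmein", "qwerty123"]))  # qwerty123
```

This is why pentesters hunt for weak passwords: any password present in a common wordlist falls to exactly this loop.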
Physical systems. This could be a data center or any other protected area. In addition to the IT infrastructure, the ability to break the lock, bypass or disable cameras and sensors is being tested.
The human factor is also important. Finding out whether employees could accidentally or deliberately break the system, or fall for an attacker’s provocations, is also part of a pentester’s job.
External testing. Carried out from the position of an attacker “from outside”: a person with no relationship to the company. It determines whether the system can be accessed remotely, and if so, how and how deeply. Such a pentest is needed for servers and other equipment that communicates with the outside world.
Internal testing. Testing as a user with standard rights, such as a company employee. An attacker may be not only an outsider but also someone on the staff; an internal pentest checks how much harm such a person could do to the company.
White box. The pentester has prior knowledge of the system, obtained from the company being tested, and acts with that knowledge in mind. This helps simulate attacks from people who have managed to obtain some information about the product.
Black box. The pentester has no prior information and behaves like an intruder encountering the system for the first time, working only from publicly available data. This is the situation most real attackers are in.
Double-blind testing. A pentest that almost no one in the company knows about, including the security team; only one or two people are informed, and they may not disclose it. This is also called covert testing. It helps identify vulnerabilities that the previous methods cannot detect, for example how easily an attacker can get past the security team.
When conducting double-blind testing, the pentester must carry documents confirming that they are acting legally; otherwise there may be problems with both the security team and the law.
Sample Pentest Software
Kali Linux. A specialized, lightweight Linux distribution for pentesting and white-hat hacking, based on Debian. By default it ships with more than 600 tools for attacks and vulnerability discovery. For security reasons, Kali Linux enables very few package repositories out of the box; users can add more themselves if necessary.
Metasploit. An information security project and software suite essential for pentesting. For example, the Metasploit Framework helps create exploits: code that takes advantage of a system vulnerability to carry out an attack.
Using Metasploit, you can analyze vulnerabilities and create virus signatures, distinctive patterns extracted from real malware. The latter is sometimes needed, for example, when building anti-virus systems.
Nmap. A network scanner that works on networks of any size. It shows the state of hosts on the network, provides information about them and supports reconnaissance for further attacks. For example, it can report open ports, running services and the OS on a device.
You may also come across the name Zenmap: this is the graphical interface for Nmap.
Nessus. A program that automatically searches for vulnerabilities in systems and networks, used to find common weaknesses. Its vulnerability database is updated every week, so the program stays almost always up to date. Nessus helps automate the search for vulnerabilities and avoid a number of manual steps.
Wireshark. A program that analyzes network traffic. It understands how packets transmitted over different network protocols are structured, can “disassemble” them into their components and read the information they carry. If a data stream is not encrypted, Wireshark can be used to read the information transmitted over the network.
Aircrack-ng. A program for detecting and intercepting traffic on wireless networks. It helps gain access to the wireless adapter, test the strength of Wi-Fi protection and intercept information from the wireless network.
What a pentester should know and be able to do
In the past, pentesters were typically programmers with broad knowledge across many areas; today, pentesting can be learned from scratch. A partial list of what a penetration tester should know looks like this.
Computer networks. A pentester must understand the OSI network model, how computer networks function, and where vulnerabilities can be found in them. You need to know the protocols, how they work and the typical mistakes made in their configuration.
Operating systems. Penetration testing involves working with both server and desktop operating systems, so the tester must understand how they function at a deep level: their architecture, infrastructure and process model.
Cryptography. The science of encrypting information; it provides the theoretical grounding for how information is protected. It covers how modern encryption and decryption algorithms work, whether they have weaknesses, and how those weaknesses can be found.
Attacks on information systems. These are the methods used by real attackers. A pentester should know them in theory and be able to carry them out in practice, bypassing security systems without being detected.
Malware analysis. Pentesters and information security specialists should understand viruses, trojans, worms and exploits. It is important for a specialist to be able to write such software for the task at hand and apply it.
Programming. To write scripts and exploits and to issue commands, you need to know at least one programming language. Often these are systems languages, used to drive the operating system directly. Knowing several languages is a plus.
Linux command line. Penetration testers mostly work on specialized Linux distributions, so they must have a good command of the OS command line; simulating a hack sometimes requires it.
In some jurisdictions, penetration testing is an activity licensed by law, so companies engaged in it must hold the appropriate license. The same applies to freelance pentesters acting on their own.
Testers who work inside an organization may not need a license, but they are encouraged to hold CEH and OSCP certifications, which confirm that a person can be considered a pentester; employers may require them.
What is quality? Understanding the hierarchy of the terms «QA», «QC» and «testing» (Sudo Null IT News)
There are so many holy wars around “quality” that if you ask colleagues in the trade what it is, you will hear very different answers: from satisfied customers or the absence of bugs to pure formality. And the real leapfrog begins when you ask how that quality is to be ensured.
If you ask about quality assurance in an enterprise company, there is usually no problem: you are quickly pointed to the quality assurance team and go about your business. But outside the enterprise (in retail, for example), things get interesting. Depending on whom you ask, you will be sent to different people, but in most cases it comes down to “Don’t interfere with our work, go to the testers, they are the ones about quality.” No problem, let’s go.
Below are the results of my little research into what quality is and how to ensure it, so as not to hear in response: “Listen, what have you dug up? Go to www.protesting.ru (hereinafter ProTesting), everything is written there especially for people like you.” Since I hear about ProTesting all the time, I will rely on it.
Basic concepts and definitions
To quote the definitions of SQ from ProTesting:
Software Quality is the degree to which the software possesses the required combination of properties. [IEEE Std 1061-1998, IEEE Standard for Software Quality Metrics Methodology]

Software Quality is the set of characteristics of software that relate to its ability to satisfy stated and implied needs. [ISO 8402:1994, Quality management and quality assurance]
I would like to draw attention to the years in those citations. The standards the definitions are taken from were released more than 20 (!) years ago. So what do we have now?
The answer: GOST R ISO 9000-2015. At this point many people react with “Hm, but the standard numbers don’t match!”. That’s right; I recommend googling for yourself how the standard numbers changed over time and how one standard absorbed others.
Let’s return to «Quality». GOST tells us the following:
Quality (Quality): the degree to which the set of inherent characteristics of an object conforms to requirements.
Compared to the previous versions there are fewer words and a more distilled meaning. An important conclusion follows from this definition: if you have not put forward requirements, a conversation about quality has no basis. Quality does not appear by itself; it comes at a cost. This is discussed in detail in section 2.2.1, “Quality”.
To quote a paragraph from this section:
A quality-focused organization promotes a culture that is reflected in behaviors, attitudes, actions and processes that create value by meeting the needs and expectations of customers and other relevant interested parties.
Culture eats strategy for breakfast? Yes, but not only strategy. It “eats” everything, including quality. If you don’t invest in a culture that encourages quality, you can forget about quality. Quality cannot live apart from the organization as a system.
The quality of an organization’s products and services is determined by the ability to satisfy customers and the intended or unintended impact on relevant interested parties.
Quality cannot live apart from those who use the product. If you do not meet the requirements that are based on the wishes of users, then the product will be perceived as poor quality. On the other hand, if you do not talk about your product and its purpose, then your product will be misunderstood and will be considered poor quality.
And one more paragraph. Very important paragraph:
The quality of products and services includes not only the performance of functions in accordance with the purpose and their characteristics, but also the perceived value and benefit to the consumer.
If we don’t communicate our quality vision, quality policy, then the perception of our products by consumers may not match what we expect to see.
In my opinion, in these three paragraphs, the drafters of the standard have tried to reflect that quality is quite a complex and multifaceted thing, including the culture of production, our understanding of customers and our vision of what our product is. This approach is very close to me.
Let’s move on and discuss what has changed in the definition of Quality Assurance. According to ProTesting:

Quality Assurance is a set of activities covering all technological stages of the development, release and operation of information-system software, undertaken at different stages of the software life cycle, to ensure the required level of quality of the released product.
According to GOST R ISO 9000-2015:
Quality assurance (Quality Assurance): Part of quality management aimed at providing confidence that quality requirements will be met.
What I like about the GOST definition: it drops all the verbosity about activities, stages and so on, and focuses on the essence, providing confidence. If you think about it, that is the only thing you can actually do; nothing can be guaranteed 100%. And how you provide that confidence depends directly on the corporate culture and the quality policy.
In one company that means clearly defined criteria for accepting a feature into work; in another, special contractual relations or a pile of policies and instructions. Each company chooses how to implement quality assurance. One thing is clear: it is very expensive.
Quality Control – and here the fun begins! Let’s look and compare; the first definition is taken from ProTesting, the second from GOST:

Quality Control is a set of actions carried out on the product during development to obtain information about its current state along the dimensions “readiness of the product for release”, “compliance with the fixed requirements” and “compliance with the declared quality level of the product”.

Quality control is the part of quality management aimed at fulfilling quality requirements.
The attentive reader has already noticed that the translation of the term has changed: now it is quality management!
The new definition also sheds the verbosity, leaving the focus on fulfilling requirements, which in my subjective opinion makes it much better than the old one, mainly because the old one stopped at “…to obtain information…”: what to do with all that information was never specified. The new definition states it plainly: you must fulfil the requirements. Again, how you do that depends on the corporate culture.
For example, a quality policy that dictates that test cases should be as complete as possible is quality assurance. Quality management is then:

- verification that these test cases are complete and sufficient;
- verification that testing is carried out in full.
If we map these definitions onto the Deming-Shewhart cycle, then quality assurance is planning, part of doing and part of acting, while quality management is part of doing, plus checking and part of acting.
Now let’s talk about what testing is. Below are two definitions; the first is taken from ProTesting, the second from ISO/IEC TR 19759:2015, a.k.a. SWEBOK.
Software Testing is the verification of the correspondence between the actual and expected behavior of a program, carried out on a finite set of tests selected in a certain way. [IEEE Guide to the Software Engineering Body of Knowledge, SWEBOK, 2004]

In a broader sense, testing is one of the quality control techniques, including the activities of test management (Test Management), test design (Test Design), test execution (Test Execution) and analysis of the results (Test Analysis).
Software testing consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain.
The definitions, where they have changed at all, have not changed much. And I fully agree with my colleagues: testing is part of quality management.
From all of the above, this is the picture that emerges.
One can talk about definitions at length and eloquently. But how does it work in real life? Real life is complex and multifaceted. Let’s look at a simple example of how testing differs from quality management and quality assurance.
Disclaimer: The example below is entirely fictitious. All matches are random.
Meet company X
The company sells products, it is doing well. The end-to-end value delivery process is shown below:
The company created the value, then performed verification and validation to make sure everything went well, and delivered it; the output artifact was the distribution package. Remember, this is a simplified example!
The company cared about the quality of its products and told customers that testing was carried out at the highest level. And indeed, the company’s products were excellent and solved customer problems well. Customers reasoned that if the products were that good, the tests used to check them must be good too, and that data would help them check their own business processes. So the company received a tempting offer: “Sell us your tests, here’s an open-ended contract.” The company decided: why not!
The value delivery process has taken the following form:
Now the output consists of two artifacts: the distribution package and the tests. But a question came from an information security colleague: what about our personal data? It is clearly not in the distribution, but what about the tests? How do we comply with 152-FZ, the Russian personal data law? What if someone used their own full name, or a colleague’s, for testing?
Under that law, surname, first name and patronymic count as personal data. Lawyers would have a lot to say here, but remember: we are working with a simplified example.
Of course you could ask employees to sign consent forms for the transfer of their PD, but that is not the point here. We are considering how to guarantee that the tests contain no personal data at all. As PD we count only surname, first name and patronymic, greatly simplifying to fit the format of the article. What does all this mean? The requirement “The supplied tests contain no personal data” has been added to the company’s quality policy. Let’s implement it.
We go from the particular to the general. At the testing level the approach is very simple: check that our tests do not contain employees’ names. A good tester will add that it would be wise to run each full name through a morphological engine so that its different grammatical case forms are covered too.
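A minimal Python sketch of such a check might look like this. The employee list is invented, and, as just noted, a real implementation would also match the inflected case forms of each name:

```python
EMPLOYEES = ["Ivanov Ivan Ivanovich", "Petrova Anna Sergeevna"]  # hypothetical list

def find_pd(text, full_names):
    """Return the employee records any of whose name parts appear in a
    test artifact. Note: plain substring matching will also flag e.g.
    'Ivanhoe'; a real check would respect word boundaries and inflected
    forms of each name."""
    lowered = text.lower()
    hits = set()
    for full_name in full_names:
        for part in full_name.split():
            if part.lower() in lowered:
                hits.add(full_name)
    return hits

print(find_pd("Login as Anna and open the order page", EMPLOYEES))
# {'Petrova Anna Sergeevna'}
```

Run over every test artifact, this gives exactly the per-test check described above.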
We build a separate step into our process where we check our tests for the absence of PD. As a result, the business value delivery process looks like this.
Cheap and fast. If PD is found, it is removed both from the output artifact and from the tests in the test management system (the latter, by the way, is already more about QC). No complaints about this approach: we check, well done. But are the quality requirements met? No. For example, when a new employee joins the company, how do they get onto the list we check against?
Here we go to the next level. We consider the solution of our case from the point of view of quality management.
At this level we build additional steps into the company’s processes, not only into the main value creation process. To ensure that employees’ full names do not end up in the tests, checking each test is not enough. We need to be sure that the list we check against is always up to date.
In general, two kinds of events require adding information to the list. An obvious question arises: why not also update the list, or remove information from it? The answer to both is: never. Information is only ever added, never updated or deleted; that is how we guarantee the absence of PD. Better, as they say, to overdo it than to underdo it.
The value creation process becomes like this:
That’s good, but two problems will plague us constantly: false positives and false negatives. And nothing can be done about them, because we sit at the end of the value chain and have no influence on the earlier steps. Moreover, we only react to events that have already taken place.
The objective of quality assurance is to eliminate the very possibility of such events. Let’s move up to that level.
To keep our employees’ full names out of the test data, we switch to procedural generation of names. Testers no longer need to invent a name to fill in a field; a procedure does it for them. Moreover, testers are forbidden to use data obtained from anything other than the procedure. The value creation process becomes:
What this approach gives us:

- We ensure that data created for testing will not contain employees’ names: the procedure itself checks that generated names are not on the list of employee names;
- We guarantee that generated names will not look “human”, which is good practice and one more step of confidence; for example, the names can be created in an elvish style;
- We guarantee that the names carry signatures for fast scripted identification, for example the letter “y” inserted at the beginning and at the end;
- We provide a single point of delivery for test data;
- Everything is code! That gives transparent control over the algorithm that creates the test data;
- A nice bonus: we can apply our usual code quality processes. For example, any change is possible only after information security approves the change request.
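A toy Python version of such a generation procedure, with the blocklist check and the “y” signature from the list above (the syllables and the blocklist are invented), might look like this:

```python
import random

EMPLOYEE_SURNAMES = {"ivanov", "petrova"}  # hypothetical blocklist of real surnames

def generate_test_surname(rng=None):
    """Procedurally build an elvish-sounding surname, check it against the
    employee blocklist, and wrap it in the 'y' signature for fast detection."""
    rng = rng or random.Random()
    syllables = ["ael", "thil", "rond", "gala", "fin", "mir"]
    while True:
        name = "".join(rng.choice(syllables) for _ in range(3)).title()
        if name.lower() not in EMPLOYEE_SURNAMES:  # guarantee: never a real employee
            return f"y{name}y"

def is_generated(name):
    """Scripted check for the signature described above."""
    return name.startswith("y") and name.endswith("y")

surname = generate_test_surname(random.Random(42))
print(is_generated(surname))  # True
```

The signature makes generated data trivial to find and strip later, while the blocklist check inside the procedure is what turns a per-test inspection into an up-front guarantee.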
All of these measures give us confidence that the requirement “The supplied tests contain no personal data” will be met. But it is important to understand that this in no way cancels testing the output for the presence of PD.
Definitions are not cast in stone; they change. What was true in 1999 may be completely obsolete in 2022. Some standards are replaced by others, and the meanings of terms shift over time. We just have to accept it.
While preparing this article, I realized one important point. It is wrong to talk about quality in isolation from requirements.