CS 6367 – SOFTWARE TESTING AND VALIDATION
Question 1: What are the assumptions of traditional random testing?
The assumptions of traditional random testing are as follows:
1. Uniform probability distribution: inputs are drawn with uniform probability from the input domain (numbers, text, and so on), so test cases come from across the whole domain, including its edge cases.
2. Independent test cases: each test case is independent of the others; the presence of one test case must not affect the execution or outcome of another.
3. No bias: test cases are selected without bias, so the selection is evenly distributed over the domain and the testing accurately reflects the program's behavior.
4. Test oracle: an oracle is available that correctly evaluates each test case and provides the information needed to judge the program's behavior after testing.
5. Randomness: inputs are generated randomly rather than from pre-fixed values, which supports an unbiased assessment of the program's correctness.
6. Realistic usage: inputs are generated so that they resemble real-world usage rather than artificial dummy data.

Question 2: What is the rationale behind adaptive random testing?
Adaptive random testing was introduced to address the shortcomings of the traditional approach.
It adds logic on top of plain random generation to improve the testing process; the main reasons are as follows:
1. Increased efficiency: rather than drawing test cases blindly from the input domain, adaptive random testing adapts to the tests already executed, spreading new test cases away from previous ones so that the application's behavior is exercised more thoroughly and issues are found more quickly.
2. Defect detection: after analyzing the results and identifying critical areas, new test cases are generated that prioritize those areas, so critical bugs are found more easily.
3. Edge cases: rather than relying on regular test cases from a standard input domain, adaptive random testing generates edge cases that exercise the application with fewer, more complex test cases, reducing the total number of test cases needed.
4. Execution time: complex areas are tested early, using such edge cases and high priorities, so testing completes more easily and quickly.
5. Data driven: adaptive random testing generates test cases based on the data available about the application and on existing knowledge.
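The strategy described above can be illustrated with a minimal sketch of fixed-size-candidate-set adaptive random testing (FSCS-ART) over a one-dimensional numeric domain. The system under test `sut`, the domain bounds, and the candidate-set size `k` here are hypothetical placeholders, not part of the original answer:

```python
import random

def fscs_art_next(executed, lo=0.0, hi=100.0, k=10):
    """Pick, from k random candidates, the one farthest from all
    previously executed inputs (max of min distances)."""
    candidates = [random.uniform(lo, hi) for _ in range(k)]
    return max(candidates,
               key=lambda c: min(abs(c - e) for e in executed))

def art_test(sut, n_tests=20, lo=0.0, hi=100.0):
    """Run n_tests adaptively spread random tests against sut
    (a predicate returning True on pass); return failing inputs."""
    executed = [random.uniform(lo, hi)]   # first test is purely random
    failures = [x for x in executed if not sut(x)]
    while len(executed) < n_tests:
        x = fscs_art_next(executed, lo, hi)
        executed.append(x)
        if not sut(x):
            failures.append(x)
    return failures
```

Each new test is the candidate farthest from every test already run, so tests spread across the domain; when failing inputs cluster in a contiguous region, this spreading tends to reach the region with fewer tests than pure random sampling.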
Question 3: What are the challenges when using adaptive random testing to handle high-dimensional complex input data?
Any testing strategy becomes less efficient on high-dimensional data, because such data is difficult to analyze and cover. Here are a few challenges faced when using adaptive random testing with high-dimensional data.
1. Dimensionality: as the number of input dimensions increases, it becomes difficult to cover all of them, which can lead to missing critical areas.
2. Distance: the algorithms rely on distance metrics to assess the spread of test cases across the data, and computing these distances requires more time and computational power as dimensionality grows.
3. Boundary: some techniques tend to generate test cases near the boundary of the input domain, which can distort the reported behavior of the application; specific boundaries can also be overlooked.
4. Correlation of inputs: in complex high-dimensional data it is difficult to identify correlations within the input domain, which can increase the time complexity.
5. Sparse data: highly complex data has fewer data points per region, increasing the distance between points; navigating the space becomes difficult, and the overall complexity increases.
6. Scaling: the algorithms may not scale efficiently with high-dimensional data, leading to longer execution times and higher resource usage.
7. Visualization: high-dimensional data is difficult to visualize and interpret, which makes it hard to form a picture of how the test cases cover the space.
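The distance and sparsity challenges above can be made concrete with a small experiment (an illustrative sketch, not part of the original answer): for random points in the d-dimensional unit cube, each distance computation costs O(d), and as d grows the pairwise distances concentrate around their mean, so "farthest from all executed tests" discriminates less between candidates:

```python
import math
import random

def euclid(p, q):
    """Euclidean distance; cost grows linearly with dimension d."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def distance_spread(d, n_points=200, seed=1):
    """Ratio of (max - min) pairwise distance to the mean pairwise
    distance among random points in the d-dimensional unit cube."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(d)] for _ in range(n_points)]
    dists = [euclid(pts[i], pts[j])
             for i in range(n_points) for j in range(i + 1, n_points)]
    mean = sum(dists) / len(dists)
    return (max(dists) - min(dists)) / mean

# In low dimensions pairwise distances vary widely; in high dimensions
# they cluster near the mean, so distance-based spreading of test
# cases carries less information per (increasingly expensive) comparison.
```

Comparing, say, `distance_spread(2)` with `distance_spread(100)` shows the ratio shrinking as the dimension grows, which is one way to see why adaptive random testing degrades on high-dimensional inputs.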