CS 6367 – SOFTWARE TESTING AND VALIDATION

Question 1: What are the assumptions for traditional random testing?

Traditional random testing rests on the following assumptions:

1. Uniform probability distribution: Inputs are drawn from the input domain (numbers, text, and so on) under a uniform distribution, so every value in the domain, including values near its edges, is equally likely to be selected.
2. Independent test cases: Each test case is generated independently; the selection or outcome of one test case must not influence the selection or outcome of any other.
3. No bias: No region of the input domain is favored during selection; the sample is spread evenly so the measured results reflect the program as a whole.
4. Test oracle: A reliable oracle is available to judge whether each output is correct, which is what makes the results of random testing measurable at all.
5. Randomness: Inputs are generated without any prefixed values or patterns, which keeps the sample representative and the assessment of the program's correctness unbiased.
6. Realistic usage: When random testing is used to estimate how the program will behave in the field, inputs are drawn from a distribution that resembles real-world usage rather than artificial dummy data.

Question 2: What is the rationale behind adaptive random testing?

Adaptive random testing (ART) was introduced to address a weakness of the traditional approach: failure-causing inputs tend to cluster in contiguous regions of the input domain, so test cases that are spread evenly across the domain tend to hit a failure region sooner than purely random ones. Its motivations are as follows (a small sketch contrasting the two approaches appears after this list):

1. Increased efficiency: Rather than drawing every test case from the input domain independently, ART adapts each new test case to the tests already executed, spreading them evenly so the program's true behavior is exercised with fewer executions and issues are found more quickly.
2. Defect detection: By analyzing which regions have already been covered and which critical areas remain, ART steers new test cases toward those areas, raising the chance of finding critical bugs early.
3. Edge cases: Instead of repeatedly sampling a well-covered standard region of the input domain, ART tends to reach under-explored and boundary regions, so the same faults can be exposed with fewer, more demanding test cases.
4. Execution time: Because complex areas are reached sooner and fewer test cases are needed to find the first failure, the overall testing effort completes more quickly.
5. Data driven: ART generates each new test case from the knowledge already accumulated about the application, namely the test cases executed so far and their outcomes, rather than ignoring that information.
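As a concrete illustration of the contrast drawn in Questions 1 and 2, here is a minimal Python sketch. It is not part of the original answer: the program under test buggy_abs, its seeded fault band, and the chosen domain are all hypothetical. It places plain uniform random testing next to one common ART variant, fixed-size-candidate-set ART (FSCS-ART), which executes, out of k random candidates, the one farthest from all previously executed tests.

```python
import random

def buggy_abs(x):
    # Hypothetical program under test: absolute value with a seeded
    # fault that only triggers inside a narrow band of inputs -- the
    # contiguous "failure region" scenario that motivates ART.
    if -5.0 <= x <= -4.0:
        return x            # wrong: should be -x in this band
    return abs(x)

def oracle(x, result):
    # Test oracle (assumption 4): decides whether an output is correct.
    return result == abs(x)

def random_testing(lo, hi, budget):
    # Traditional random testing: independent, uniform draws from the
    # input domain (assumptions 1, 2, and 5).
    for i in range(1, budget + 1):
        x = random.uniform(lo, hi)
        if not oracle(x, buggy_abs(x)):
            return i        # number of executions needed to hit the fault
    return None

def art_fscs(lo, hi, budget, k=10):
    # Fixed-size-candidate-set ART: of k random candidates, execute the
    # one farthest from every previously executed test, so that test
    # cases spread evenly across the domain.
    executed = [random.uniform(lo, hi)]
    for i in range(1, budget + 1):
        x = executed[-1]
        if not oracle(x, buggy_abs(x)):
            return i
        candidates = [random.uniform(lo, hi) for _ in range(k)]
        # Pick the candidate whose distance to its nearest executed
        # test is largest.
        executed.append(max(
            candidates,
            key=lambda c: min(abs(c - e) for e in executed)))
    return None

if __name__ == "__main__":
    random.seed(1)
    print("random testing hit the fault at test",
          random_testing(-100, 100, 10000))
    print("FSCS-ART hit the fault at test",
          art_fscs(-100, 100, 10000))
```

On a domain where the failure region is a small contiguous band like this one, the even spreading typically lets FSCS-ART hit the fault in noticeably fewer executions than pure random testing, at the cost of the extra distance computations in the selection step.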
Question 3: What are the challenges when using adaptive random testing to handle high-dimensional complex input data?

Every testing strategy loses efficiency on high-dimensional input data, because the space becomes hard to analyze and to cover. The main challenges for adaptive random testing are:

1. Dimensionality: As the number of input dimensions grows, the volume of the domain grows exponentially, so covering it adequately becomes infeasible and critical areas are easily missed.
2. Distance computation: ART relies on distance metrics to assess how well test cases are spread across the data. Each selection step compares every candidate against every executed test in every dimension, which costs considerable time and computational power (see the sketch after this list).
3. Boundary effects: Some ART techniques over-generate test cases near the boundary of the input domain, which can distort the reported picture of the application's performance, while particular boundaries may still be overlooked.
4. Correlation of inputs: In complex high-dimensional data it is difficult to identify correlations among the input dimensions, and accounting for them drives up the time complexity of test generation.
5. Sparse data: High-dimensional data has few points per region, so the distances between points grow and become less informative; navigating such a space is difficult and adds complexity.
6. Scaling: ART algorithms might not scale efficiently with high-dimensional data, which leads to longer execution times and increased resource usage.
7. Visualization: High-dimensional data is difficult to visualize and interpret, which makes it hard to get a picture of how well the test cases cover the domain.
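As an illustration of challenges 2 and 5, here is a small Python sketch; it is not part of the original answer, and sampling from the unit hypercube is an assumption made for the demonstration. It measures the ratio between the nearest and the farthest Euclidean distance from one random point to a sample of others: as the dimension grows, the ratio approaches 1, so all candidates look almost equally far away and ART's "pick the farthest candidate" criterion loses its discriminating power, even as each individual distance computation gets more expensive.

```python
import math
import random

def concentration(dim, n=1000):
    # Ratio of the nearest to the farthest Euclidean distance from one
    # random point to n others in the unit hypercube. As dim grows the
    # ratio approaches 1: distances "concentrate", the farthest
    # candidate is barely farther than the nearest (challenge 5), and
    # each distance computation touches dim coordinates (challenge 2).
    origin = [random.random() for _ in range(dim)]
    dists = [math.dist(origin, [random.random() for _ in range(dim)])
             for _ in range(n)]
    return min(dists) / max(dists)

if __name__ == "__main__":
    random.seed(1)
    for dim in (1, 2, 10, 100, 1000):
        print(f"dim={dim:4d}  nearest/farthest distance ratio = "
              f"{concentration(dim):.3f}")
```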