A Systematic Literature Review on Automated Log Abstraction Techniques

Diana El-Masri a,*, Fabio Petrillo b, Yann-Gaël Guéhéneuc c, Abdelwahab Hamou-Lhadj c, Anas Bouziane a

a Département de génie informatique et génie logiciel, Polytechnique Montréal, Montréal, QC, Canada
E-mail: {diana.el-masri,anas.bouziane}@polymtl.ca
b Département d’Informatique et Mathématique, Université du Québec à Chicoutimi, Chicoutimi, QC, Canada
E-mail: fabio@petrillo.com
c Department of Computer Science & Software Engineering, Concordia University, Montréal, QC, Canada
E-mail: {yann-gael.gueheneuc, wahab.hamou-lhadj}@concordia.ca
Abstract
Context: Logs are often the first and only information available to software engineers to understand and debug their systems. Automated log-analysis techniques help software engineers gain insights into large log data. These techniques have several steps, among which log abstraction is the most important because it transforms raw log data into high-level information. Thus, log abstraction allows software engineers to perform further analyses. Existing log-abstraction techniques vary significantly in their designs and performances. To the best of our knowledge, there is no study that examines the performances of these techniques with respect to the following seven quality aspects concurrently: mode, coverage, delimiter independence, efficiency, scalability, system knowledge independence, and parameter tuning effort.

Objectives: We want (1) to build a quality model for evaluating automated log-abstraction techniques and (2) to evaluate and recommend existing automated log-abstraction techniques using this quality model.

Method: We perform a systematic literature review (SLR) of automated log-abstraction techniques. We review 89 research papers out of 2,864 initial papers.

Results: Through this SLR, we (1) identify 17 automated log-abstraction techniques, (2) build a quality model composed of seven desirable aspects: coverage, delimiter independence, efficiency, system knowledge independence, mode, parameter tuning effort required, and scalability, and (3) make recommendations for researchers on future research directions.

Conclusion: Our quality model and recommendations help researchers learn about state-of-the-art automated log-abstraction techniques, identify research gaps to enhance existing techniques, and develop new ones. We also support software engineers in understanding the advantages and limitations of existing techniques and in choosing the technique most suitable for their unique use cases.

Keywords: Log Abstraction Techniques, Log Analysis, Log Mining, Log Parsing, Software Analysis, Software Log, Systematic Literature Review, Systematic Survey.
1. Introduction
Logs contain a wealth of data that can help software engineers understand a software system’s run-time properties [1, 2]. However, modern systems have become so large and complex, especially with the emergence of the Internet of Things (IoT) and Cloud computing, that they produce amounts of log data too huge for software engineers to handle manually. Google systems, for example, generate hundreds of millions of new log entries every month, which account for tens of terabytes of log data daily [3, 4]. Also, logs come in different formats, hindering the analyses of their content and making their uses yet more complex [4, 3].
To tackle these problems, software engineers have at their disposal a wide range of Automated Log Abstraction Techniques (ALATs) that they can use to reduce the amount of data to process. These techniques implement different log-abstraction algorithms, designed for various purposes, e.g., performance optimization, information security, anomaly detection, business reporting, resource utilization, or users’ profiling [1].

* Corresponding authors. E-mail addresses: diana.el-masri@polymtl.ca (Diana El-Masri), fabio@petrillo.com (Fabio Petrillo)
However, there is a gap between industry and academia. First, software engineers are not aware of all existing ALATs developed in academia and the characteristics of their algorithms. To the best of our knowledge, there is no work that presents a comprehensive view on state-of-the-art ALATs, and software engineers cannot afford to undertake the cumbersome and time-consuming task of searching through the large body of literature to identify the best-suited ALAT. Second, software engineers do not have the time and resources to study and understand the characteristics of each ALAT. The gap is further widened because researchers focus on enhancing accuracy (defined in Section 6) when proposing new ALATs, whereas software engineers are also interested in comparing the ALATs in terms of other useful aspects.

Preprint submitted to Information and Software Technology, February 16, 2020
To reduce this gap, this paper helps researchers and
software engineers as follows:
• It provides a SLR to inform software engineers of existing state-of-the-art ALATs, in Section 5.
• It collates and combines ALATs’ characteristics identified through the SLR into seven desirable quality aspects, on which it builds a quality model to evaluate ALATs, explained in Section 6.
• It presents a comparison of 17 ALATs according to our quality model, identifies research gaps, and makes recommendations for researchers on future research directions, in Section 7.
• It helps software engineers understand the advantages and limitations of existing ALATs and select the most suitable for their use cases, in Section 7.
We review 89 research papers out of 2,864 initial papers, identified using a SLR, following the guidelines proposed by Kitchenham et al. [5, 6]. We selected these papers after searching all the papers related to log analysis in the digital resource Engineering Village. Two authors independently read and evaluated the papers. We performed backward and forward snowballing through SCOPUS. Based on our inclusion/exclusion criteria and quality assessment, we obtained 89 papers, in which we identified 17 unique ALATs.
We evaluated these ALATs and showed that: (1) researchers worked on improving the efficiency of ALATs by adopting diverse algorithms, while distributed architectures seem most promising; (2) parameter tuning for large-scale log data is challenging and requires major effort and time from software engineers, so researchers should consider techniques for automatic and dynamic parameter tuning; (3) due to confidentiality issues, log datasets are rare in the community while all existing unsupervised ALATs depend on these datasets for training, so we recommend that researchers investigate new ALATs that do not rely on training data; (4) practitioners must make compromises when selecting an ALAT because there is no one ALAT that can satisfy all quality aspects, even if online ALATs (e.g., Spell, Drain) or ALATs based on heuristic clustering approaches and implementing a parallelization mechanism (e.g., POP, LogMine) satisfy most combinations of quality aspects; (5) supervised ALATs based on Natural Language Processing (NLP) techniques are accurate if the models are trained on large amounts of data, and researchers should build and share their logs to benefit the research community.
He et al. [7] provided an ad-hoc comparison of four ALATs using accuracy and efficiency as quality aspects. Also, in parallel to this work, Zhu et al. [8] measured the performance of 13 ALATs on 16 log datasets and reported interesting results in terms of accuracy, robustness, and efficiency. Differently, we conduct a systematic literature review (SLR) from which we identify, study, summarize, and compare 17 ALATs based on seven desirable quality aspects identified from the literature: mode, coverage, delimiter independence, efficiency, scalability, system knowledge independence, and parameter tuning effort (defined in Section 6). Furthermore, we provide practitioners with direct references and summarize/group the researchers’ findings, so practitioners benefit from their experience with ALATs. Our results are based on a thorough review of ALAT development contexts and algorithmic characteristics, detailed in Section 5 and Table 1, and on the results of empirical experiments and experiences shared in the literature. Our results are not based on a review of any released source code.

[Figure 1: Log Mining Pipeline. Log collection produces unstructured raw log data; an ALAT transforms it into structured log data; log analysis (off-the-shelf tools, anomaly-detection models, etc.) then supports visualisation, debugging, anomaly detection/failure diagnosis, and decision making.]
The remainder of the paper is as follows. Section 2 provides a background on log abstraction. Section 3 motivates the use of ALATs by practitioners and researchers. Section 4 describes our study design. Section 5 groups and summarizes the 17 state-of-the-art ALATs identified through a SLR. Section 6 presents the ALATs quality model based on seven quality aspects identified in the literature. Section 7 provides the results of our study and promising directions for researchers and software engineers. Section 8 discusses threats to the validity of our results. Section 9 concludes the paper with future work.
2. Log Mining Process
To perform log-mining tasks, such as failure diagnosis,
performance diagnosis, security, prediction, and profiling
[1], a typical log-mining process is composed of three steps:
log-collection, log-abstraction, and log-analysis (Figure 1).
The raw log data collected during the log-collection step contains log entries describing system states and run-time information. Each log entry includes a message containing free-form natural-language text describing some event. Based on the log-mining task at hand, the log-analysis step implements the most suitable automated log-analysis technique (i.e., anomaly detection, model inference, etc.), which usually requires structured input data that can be encoded into numerical feature vectors. As shown in Figure 2, during the log-abstraction step, ALATs transform the raw log data into the structured event lists required by the automated log-analysis techniques.
Thus, ALATs are essential in the pre-processing step for efficient log-mining (e.g., searching, grouping, etc.), a foremost step for most automatic log-analysis techniques and a useful step for managing logs in a log management system [9].

[Figure 2: Log Mining Process. Training stage: training raw log entries pass through the ALAT (discovery phase, then matching phase) to produce event types and a structured log-events list, from which a log-analysis model (e.g., an anomaly-detection model) is inferred. Detection stage: new raw log entries pass through the ALAT matching phase before the log-analysis techniques are applied.]
2.1. Log Format
Logs are generated by logging statements inserted by
software engineers in source code to record particular events
and track run-time information. For example, in the logging statement:

logger.info("Time taken to scan block pool {} on {} {}", map.get("pool"), path, executionTime)

logger is the logging object for the system, info is the chosen verbosity level, "Time taken to scan block pool" and "on" are static messages fixed in the code, which remain the same at runtime, and map.get("pool"), path, and executionTime are dynamic parameters varying each time this statement is executed, which can thus generate different log entries, such as the example in Figure 3.
Each log entry in a raw log file represents a specific event. As shown in Figure 3, a log entry is generally composed of a log header and a log message containing run-time information associated with the logged event. The logging-framework configuration determines the fields of the log header. Usually, they include data such as a timestamp, a severity level, and a software component [10, 11]. Therefore, these fields are structured and can easily be parsed and abstracted.
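Since the header fields are structured, they can be extracted with a simple pattern. The sketch below illustrates this; the regular expression and field names are illustrative assumptions based on the layout in Figure 3, not part of any reviewed ALAT.

```python
import re

# Assumed header layout following Figure 3:
# "<timestamp>, <verbosity> <component> - <message>"
HEADER_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),?\s+"
    r"(?P<verbosity>[A-Z]+)\s+"
    r"(?P<component>\S+)\s+-\s+"
    r"(?P<message>.*)$"
)

def split_entry(entry: str) -> dict:
    """Separate the structured log header from the free-form log message."""
    match = HEADER_RE.match(entry)
    return match.groupdict() if match else {"message": entry}

fields = split_entry(
    "2017-09-26 12:40:15, INFO impl.FsDatasetImpl - "
    "Time taken to scan block pool BP-805143380 on /home/data3/current 30ms"
)
print(fields["component"])  # impl.FsDatasetImpl
```

The log message field is left untouched here: abstracting it is exactly the harder problem that ALATs address.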
As illustrated in Figure 4, the log message of a log entry is written in free-form text in the source code, typically as a concatenation of different strings and–or a format string, which is difficult to abstract because it does not have a “standard”, structured format. Log messages are composed of static fields and dynamic fields. Dynamic
[Figure 3: Log Entry Sample. In "2017-09-26 12:40:15, INFO impl.FsDatasetImpl - Time taken to scan block pool BP-805143380 on /home/data3/current 30ms", the timestamp, verbosity level, and component form the log header; the rest is the log message.]

[Figure 4: Log Message Fields. In the log message "Time taken to scan block pool BP-805143380 on /home/data3/current 30ms", "Time taken to scan block pool" and "on" are static fields, while "BP-805143380", "/home/data3/current", and "30ms" are dynamic fields.]
fields are the variables assigned at run-time. Static fields
are text messages that do not change from one event oc-
currence to another and denote the event type of the log
message. Log fields can be separated by any delimiter, e.g., white-space, brackets, comma, semicolon, etc.
2.2. Log Abstraction
Log abstraction structures and reduces the amount of log entries in the raw log file while keeping the information they provide. The goal of ALATs is to separate the static fields from the dynamically-changing fields, to mask the dynamic fields (usually with an asterisk *), and to abstract each raw log message into a unique event type that is the same for all occurrences of the same event. For example, the log message in Figure 4 could be abstracted into the following event type:

Time taken to scan block pool * on * *
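As a minimal sketch of this masking step, the rules below decide which tokens look dynamic; these rules (block-pool IDs, paths, numbers and durations) are illustrative assumptions for this one example, not the heuristics of any reviewed ALAT.

```python
import re

# Illustrative masking rules (assumed, not from a specific ALAT): tokens that
# look like block-pool IDs, file-system paths, or numbers are treated as dynamic.
DYNAMIC_PATTERNS = [
    re.compile(r"^BP-\d+$"),      # block-pool IDs
    re.compile(r"^/\S*$"),        # file-system paths
    re.compile(r"^\d+(ms|s)?$"),  # numbers and durations
]

def abstract(message: str) -> str:
    """Mask every dynamic-looking token with '*' to obtain the event type."""
    tokens = []
    for token in message.split():
        if any(p.match(token) for p in DYNAMIC_PATTERNS):
            tokens.append("*")
        else:
            tokens.append(token)
    return " ".join(tokens)

msg = "Time taken to scan block pool BP-805143380 on /home/data3/current 30ms"
print(abstract(msg))  # Time taken to scan block pool * on * *
```

Real ALATs discover such rules from the data rather than hard-coding them, which is what the discovery phase below is about.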
[Figure 5: Log-Abstraction Phases. Discovery phase: the ALAT takes input raw logs (e.g., HDFS entries such as "Creating password for identifier: owner=auser, maxDate=1506513445163, sequenceNumber=1355, masterKeyId=2, currentKey: 2", "Opened streaming server at /127.0.0.1:36574", and "Time taken to scan block pool BP-1846194586 on /home/hadoop/hadoop-hdfs/target/data1/current: 12ms") and discovers the event types "Creating password for identifier: owner * maxDate * sequenceNumber * masterKeyId * currentKey *", "Opened streaming server at *", and "Time taken to scan block * on * *". Matching phase: each timestamped log entry is abstracted to its event template, yielding the structured event list.]
ALATs include two phases: discovery and matching. As shown in Figure 5, during the discovery phase, ALATs take as input a batch of training raw log entries and output the abstracted event types for all log entries of the same event. Once the event types are generated, they serve as a basis for matching new log entries in batch or stream processing.
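The two phases can be sketched as follows. This toy implementation is an assumption for illustration, not a published algorithm: its discovery groups messages by length and first token (a deliberately naive heuristic) and marks a token position static only if it is identical across the whole group.

```python
from collections import defaultdict

def discover(messages):
    """Discovery phase (toy): group messages, then derive one event type per group.
    A token position is static if it has the same value in every message."""
    groups = defaultdict(list)
    for msg in messages:
        tokens = msg.split()
        groups[(len(tokens), tokens[0])].append(tokens)
    event_types = []
    for group in groups.values():
        template = [cols[0] if len(set(cols)) == 1 else "*"
                    for cols in zip(*group)]
        event_types.append(" ".join(template))
    return event_types

def match(message, event_types):
    """Matching phase: return the first event type whose static tokens agree."""
    tokens = message.split()
    for et in event_types:
        et_tokens = et.split()
        if len(et_tokens) == len(tokens) and all(
            e == "*" or e == t for e, t in zip(et_tokens, tokens)
        ):
            return et
    return None  # unseen event: would trigger a new discovery run

types = discover([
    "Opened streaming server at /127.0.0.1:36574",
    "Opened streaming server at /127.0.0.1:38510",
])
print(match("Opened streaming server at /127.0.0.1:41576", types))
# Opened streaming server at *
```

Note that this sketch already embodies the same-length assumption discussed under the challenges below.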
2.2.1. Challenges
Abstracting logs for complex and evolving systems requires ALATs to tackle several challenging issues. We now summarise these challenges.
Heterogeneity of Log Data. Log messages have various formats. They are produced by different software layers/components and can be written by hundreds of developers all over the world [3, 9]. Therefore, practitioners may have limited domain knowledge and may not be aware of the original purpose and characteristics of the log data [3].
Updating of Event Types. Log messages change frequently (e.g., hundreds of logging statements are added to Google systems each month [3]). Practitioners must update event types periodically via the discovery phase to ensure abstraction accuracy for the matching phase [12, 13].
Manual Parameter Tuning. During the discovery phase, practitioners must manually tune ALAT parameters, which is challenging: (1) some are not intuitive and impact the ALATs’ internal algorithms; (2) others must change from system to system because each system has different log-data characteristics; and (3) tuning ALAT parameters on large data is time-consuming. Usually, practitioners tune parameters on a small sample [14], hoping to obtain the same accuracy on large log files [3].
Log Entries Lengths. Some ALATs, such as Drain, IPLoM, or POP, assume that log messages of the same event type have the same lengths (i.e., number of tokens in their messages). However, log messages of a same type may have different lengths, e.g., "User John connected" (length: 3) vs. "User John David connected" (length: 4) for the type "User * connected".
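A small sketch makes the consequence of this assumption concrete (using the hypothetical messages above): grouping strictly by token count splits occurrences of one event type into two groups.

```python
from collections import defaultdict

def group_by_length(messages):
    """Group log messages by their token count, as length-assuming ALATs do."""
    groups = defaultdict(list)
    for msg in messages:
        groups[len(msg.split())].append(msg)
    return dict(groups)

groups = group_by_length([
    "User John connected",        # length 3
    "User John David connected",  # length 4: same event, different length
])
print(len(groups))  # 2 -- one real event type wrongly split into two groups
```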
2.3. Log-Analysis
Log analysis is a rich research field. We give a brief overview of some of its purposes and their influences on ALATs.
2.3.1. Anomaly Detection
Anomaly detection analyzes log data (e.g., system logs, security logs) to identify in a timely manner abnormal behaviors that deviate from typical, good behaviors to diagnose failures [15] or security [16] and performance issues [17] and, thus, mitigate their effects [1, 18, 19, 20].

Anomaly detection typically uses machine-learning techniques (supervised methods, such as SVM and decision trees, or unsupervised methods, such as PCA, clustering, and invariant mining), which take as input a numerical feature vector for each event sequence generated from the structured event list provided by an ALAT. Therefore, ALATs are a prerequisite for anomaly detection: they provide the structured event lists needed to train the anomaly-detection model and to abstract log entries during the detection [21].
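This coupling can be illustrated by a small sketch (the event types and session content below are made up): each session's abstracted event sequence is encoded as an event-count vector, the kind of numerical input the learning techniques above consume.

```python
from collections import Counter

# Hypothetical vocabulary of event types produced by an ALAT's discovery phase.
EVENT_TYPES = [
    "Opened streaming server at *",
    "Time taken to scan block * on * *",
]

def to_feature_vector(event_sequence, event_types=EVENT_TYPES):
    """Encode a sequence of abstracted events as a count vector, one dimension
    per known event type, suitable as input to SVM, PCA, clustering, etc."""
    counts = Counter(event_sequence)
    return [counts[et] for et in event_types]

session = [
    "Opened streaming server at *",
    "Time taken to scan block * on * *",
    "Opened streaming server at *",
]
print(to_feature_vector(session))  # [2, 1]
```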
2.3.2. Model Inference
Model inference mines system logs (e.g., execution logs, transaction logs, event logs) to infer a model of the system behavior (e.g., finite state machines). The model is then used to detect deviations from the expected behavior and infer the faults that produced the abnormal behaviour. Model inference is useful for understanding complex and concurrent behaviour and predicting failures. For example, Beschastnikh et al. [22] generated finite state machines to provide insights into concurrent systems. Salfner et al. [23] generated Markov models for failure prediction. Therefore, ALATs are a prerequisite for model inference (1) to abstract log messages into structured event lists from which to generate numerical feature vectors and (2) to remove log messages that are irrelevant and–or too frequent, keeping only messages useful to build a model [24, 25].
3. Motivation
Organisations, regardless of their sizes, find log data invaluable and use this data in various ways. However, the log-abstraction components offered in off-the-shelf automated log-analysis tools (e.g., Loggly, Prelert, or Splunk) and open-source automated log-analysis tools (e.g., GrayLog, Logstash) do not meet the challenges of modern systems, because they abstract log messages using regular expressions predefined by domain experts and, thus, depend on human knowledge and manual encoding, which are error-prone, non-scalable, and non-evolutive.
In organisations adopting Cloud technology, practitioners have logs coming from logic-tiered servers, multiple Web servers, and database servers. They also have logs generated by Docker containers and other virtual machines. They must treat all these logs as a whole and aggregate them via a log shipper (e.g., Logstash or Apache Flume) to a centralized server where an ALAT and a log-analysis tool are installed. Practitioners managing centralized logs need an ALAT with a strong focus on efficiency, heterogeneity, scalability, and independence from the servers.
Furthermore, in organisations adopting continuous software delivery (e.g., Facebook pushes tens to hundreds of commits every few hours), practitioners face streams of log statements being continuously added and updated (e.g., Google systems introduce tens of thousands of new logging statements every month, independent of the development stage [26]). Therefore, they require an ALAT that updates its parameters automatically, without the need to retrain/retest.
There is a wide range of ALATs in the literature among which to choose. Practitioners should select the ALAT with quality aspects that best suit their unique use cases and–or address the prerequisites of their log-analysis techniques. For example, for anomaly detection, an ALAT must have a high coverage and abstract rare events to avoid false positives [16]. The ALAT should handle the evolving nature of logs and discover/refine event types dynamically without interrupting the anomaly-detection process by an offline discovery phase [16, 9]. In contrast, for model inference, an ALAT must allow practitioners to perform a pre-processing step to remove irregular/irrelevant log messages to make their analysis more effective [25, 1, 27]. Furthermore, predictions depend on whether the log granularity used to create the model matches the decision-making granularity, and the ALAT must allow practitioners to change it as they see fit [1, 27].
4. Study Design
We follow the guidelines by Kitchenham et al. [5, 6] for an SLR. We divide our research method into five main steps: (1) research questions (RQs), (2) search strategy, (3) selection procedure and quality assessment, (4) reporting of the results and answers to the RQs in Section 5 and Section 6, and (5) comparing ALATs to guide software engineers in Section 7.
4.1. Research Questions
We want to answer the following RQs to understand
the current state of automated log-abstraction techniques
along with the existing challenges. We use the answers to
these questions to propose a quality model for evaluating
existing techniques and tools.
• RQ1. What are the state-of-the-art automated techniques for log abstraction analysis?
• RQ2. What are these techniques’ quality aspects in addition to accuracy?
4.2. Search Strategy
We used papers from conferences and journals, written in English, and published between 2000¹ and 2018. We conducted the literature search through the digital meta-library Engineering Village², which offers access to 12 engineering literature and patent databases and provides coverage of a wide range of engineering sources, including the ACM library, EI Compendex, the IEEE library, Inspec-IET, and Springer.

¹ We chose to start at the year 2000 because the ALAT SLCT, proposed by Vaarandi et al. in 2003, represents one of the first log-data clustering algorithms [28]. We decided upon a tolerance of three years before 2003.
We conducted the snowballing using Scopus, the largest database of abstracts and citations of peer-reviewed literature³. We used Scopus to cover a larger range of papers, combining seed papers from Engineering Village and snowballing papers from Scopus. We searched in the titles, abstracts, and keywords of the papers with the following queries⁴:
(("log analysis") WN ALL)
and:
(("log parsing" OR (log AND "message type")
OR (log AND "message formats") OR "log message"
OR ("signature extraction" AND logs)
OR ("log format") OR "log template"
OR "log event type") WN ALL).
4.3. Literature Selection Procedure
We passed the papers through three stages of screening.
The filtering steps are (1) general criteria (language, pa-
per type, time frame, domain category), (2) inclusion and
exclusion criteria, and (3) overall quality of the papers.
Inclusion criteria are:
• Paper must be in a conference proceeding or journal.
• Paper must be published between 2000 and 2018.
• Paper must be written in English.
• Paper must be on log analysis, log abstraction, log mining, or log parsing.
• Paper must pertain to software engineering.
• Paper must propose, explain, or implement an automated log-analysis technique.

Exclusion criteria are:
• Papers with identical or similar contributions (duplicates).
• Papers not publicly available.
• Papers focusing on end-user experience.
• Papers focusing on logging practices (i.e., how to write logs).
• Papers using off-the-shelf tools (e.g., the ElasticSearch, Logstash, Kibana (ELK) stack).
• Papers focusing on log-analysis component architecture (i.e., logging pipeline-architecture).
• Papers requiring access to the source code of the system.
Quality assessment answers the following questions:
• Is the paper based on research?
• Is the research method clear enough?
• Is there a description of the context in which the research was carried out?
• Does the proposed method address the objectives set by the researchers?
• Is there an evaluation of the proposed method?

² https://blog.engineeringvillage.com/about
³ https://www.elsevier.com/solutions/scopus
⁴ The full queries are available in the replication package at http://www.ptidej.net/downloads/replications/ist19a/.
Figure 6 shows our search and selection process, which
we detail in the following.
Seed papers. We first performed an automatic search by running our search queries through Engineering Village. The initial search returned 2,864 papers. After filtering these papers based on the inclusion and exclusion criteria, we obtained 121 papers. Then, two of the authors reviewed the titles and abstracts of these papers independently and classified each paper as “include” or “exclude”. We collated the results: any paper in disagreement was discussed with all the authors until an agreement was reached. We obtained 31 seed papers.
Candidate papers. We then obtained a set of 738 papers by merging the sets of papers obtained (1) by running the second search string in Engineering Village and (2) by searching in SCOPUS for all papers referencing the 31 seed papers (forward snowballing) and all references in the seed papers (backward snowballing). Two of the authors independently reviewed the titles and abstracts of each of the 738 papers and kept 106 papers. Finally, we grouped these 106 papers and the 31 seed papers into the set of 137 candidate papers.
Selected papers. Independently, two authors read in detail the 137 candidate papers. They evaluated each paper based on our inclusion/exclusion criteria and our quality assessment. Again, we collated both authors’ decisions and obtained the set of 89 selected papers.
4.4. Data Extraction and Synthesis
Data extraction. Independently, two authors reviewed in detail the 89 selected papers and extracted data regarding:
• State-of-the-art ALAT approaches, algorithms, and techniques.
• Desired ALAT characteristics/quality aspects, their definitions, and classification criteria.

First, the authors compared the data and resolved disagreements by consensus. Then, they collated the data extracted on ALAT characteristics/quality aspects, which they consolidated into seven industry-desired quality aspects (i.e., unified the naming, typical question, definition, and classification criteria) to compose our quality model. They also extracted the main results and evaluations of the ALATs in terms of the identified quality aspects.
6
Your preview ends here
Eager to read complete document? Join bartleby learn and gain access to the full version
- Access to all documents
- Unlimited textbook solutions
- 24/7 expert homework help
Related Questions
Consequences incorporated in bare assertion
arrow_forward
Discuss the use of Laplace transforms in system modeling and their advantages.
arrow_forward
Four frameworks a, b, c and d of aspects pxq, qxr, rxs and sxt individually can be increased is multiple ways with
various number of complete scalar duplications. For instance, when increased as ((a X b) X (c X d)), the all-out
number of duplications is Imn + mno + nop. When increased as (((a X b) X c) X d), the all-out number of scalar
duplications is pqr + prs + pst. In the event that I = 10, m = 100, n = 20, o = 5 and p = 80, then, at that point, the
quantity of scalar increases required is (with explanation):
arrow_forward
Define homoscedasticity?
arrow_forward
What is the definition of homoscedasticity?
arrow_forward
Explain the properties of OOP with an example
arrow_forward
A good system design aims ..... and ..... .
A low coupling - medium cohesion
B high coupling - low cohesion
C low coupling - high cohesion
D low coupling - low cohesion
E high coupling - high cohesion
arrow_forward
1.Construct Phong Reflection Model and its related key OpenGL primitives.
2. Describe OpenGL code for Viewing rectangle parameters are xwmin, xwmax, ywmin, ywmax, xvmin, yvmin, vpWidth, and vpHeight.
arrow_forward
What is a comparator? Explain in detail.
arrow_forward
What is homoscedasticity in particular?
arrow_forward
Is it feasible to provide a more in-depth explanation of the various UML diagrams?
arrow_forward
Explain the concept of state-space representation in system modeling and its advantages over transfer function-based models.
arrow_forward
Python codification about diagram
arrow_forward
There is a difference between physical data flow diagrams and logic
data flow diagrams, yet they are both equal.
arrow_forward
in OpenGl
arrow_forward
Don't copy please
Compareand analyze Z and VDM SL(Specification Language) in formal methods.
arrow_forward
What are the two most frequent varieties of UML diagrams? What is their function?
arrow_forward
Your software house has picked up a project that requires background in differential equations andtheir solution methodologies. The client requires final solution in form of a MATLAB algorithm,however, complete documentation including the analytical solution is also required to show the basisof the algorithm. Complete details of the requirements are provided by the client in the next sections.You have been tasked by the manager of your software house to pick at least one and maximum twomembers for your team to work on this project.The client is a major cosmetics manufacturer and wishes to devise a new marketing strategy for itslatest product. Through experience, it is known that price, sales, and product inventory depend oneach other in some way and thus can change over time. Therefore, the client has decided todynamically set the price of its latest product that reflects the demand for the product. It is desired toincrease the sales and keep the product inventory at a minimum level by…
arrow_forward
Define supertype in ER model
arrow_forward
Define positive autocorrelation?
arrow_forward
SEE MORE QUESTIONS
Recommended textbooks for you
Database System Concepts
Computer Science
ISBN:9780078022159
Author:Abraham Silberschatz Professor, Henry F. Korth, S. Sudarshan
Publisher:McGraw-Hill Education
Starting Out with Python (4th Edition)
Computer Science
ISBN:9780134444321
Author:Tony Gaddis
Publisher:PEARSON
Digital Fundamentals (11th Edition)
Computer Science
ISBN:9780132737968
Author:Thomas L. Floyd
Publisher:PEARSON
C How to Program (8th Edition)
Computer Science
ISBN:9780133976892
Author:Paul J. Deitel, Harvey Deitel
Publisher:PEARSON
Database Systems: Design, Implementation, & Manag...
Computer Science
ISBN:9781337627900
Author:Carlos Coronel, Steven Morris
Publisher:Cengage Learning
Programmable Logic Controllers
Computer Science
ISBN:9780073373843
Author:Frank D. Petruzella
Publisher:McGraw-Hill Education