Showing posts with label modeling. Show all posts
Showing posts with label modeling. Show all posts

10 May 2026

🤖〽️Prompt Engineering: Copilot Unabridged (Part 184: How to Cope With AI Invisible Prompt Injection)

Prompt Engineering Series
Prompt Engineering Series

Prompt: "write a post of 600 words on how to cope with AI invisible prompt injection"

Introduction

Invisible prompt injection is quickly becoming one of the most challenging vulnerabilities in the era of generative AI. Unlike traditional attacks that target code, networks, or infrastructure, invisible prompt injection targets the interpretive layer of AI systems - the part that decides what the model pays attention to and how it responds. Because these manipulations are hidden inside text, images, or metadata, they are difficult to detect and even harder to prevent. Coping with this emerging threat requires a combination of technical safeguards, architectural changes, and new habits of interaction. The goal is not to eliminate the risk entirely - no system can - but to reduce its impact and build resilience into the way we use AI.

The first strategy is input sanitization, a lesson borrowed from decades of software security. Just as developers learned to sanitize user input to prevent SQL injection, AI systems must filter and clean the text they receive before interpreting it. This includes stripping out zero‑width characters, removing hidden HTML elements, and normalizing metadata. While sanitization cannot catch every attack, it dramatically reduces the surface area for invisible instructions. It creates a buffer between raw input and the model’s reasoning process, ensuring that only legitimate content reaches the interpretive layer.

A second approach is context isolation. Many prompt injection attacks succeed because AI systems treat all input as a single, unified context. If hidden instructions are embedded anywhere - inside a document, an image caption, or a webpage - the model may treat them as part of the user’s request. Context isolation breaks this assumption. By separating user instructions from external content, the system can ensure that only the user’s explicit prompt influences the model’s behavior. This can be achieved through architectural changes, such as using separate channels for instructions and data, or through interface design that clearly distinguishes between the two.

Another essential technique is retrieval‑anchored grounding. When AI systems rely solely on internal patterns, they are more vulnerable to manipulation. Retrieval‑augmented generation (RAG) forces the model to ground its answers in external sources - documents, databases, or verified knowledge. If a hidden instruction tries to steer the model toward a false claim, the retrieval layer can counterbalance it by providing factual evidence. This does not eliminate the risk, but it reduces the model’s susceptibility to manipulation by anchoring its reasoning in something more stable than raw text.

A fourth strategy involves uncertainty modeling and self‑critique. Invisible prompt injection often works because the model does not question its own reasoning. It simply follows the most salient instructions, even if they are malicious. By incorporating mechanisms that encourage the model to evaluate its own output—such as self‑critique loops, consistency checks, or multi‑agent debate frameworks—the system becomes more resistant to manipulation. When the model detects contradictions or unusual patterns in its own reasoning, it can flag the output as uncertain or request clarification from the user.

Equally important is user awareness and workflow design. Invisible prompt injection thrives in environments where users assume that AI output is always trustworthy. Coping with the threat requires a shift in mindset. Users must treat AI output as provisional, especially when working with untrusted content. Workflows should include verification steps, source inspection, and human review for high‑stakes tasks. Organizations can also implement guardrails that prevent AI systems from acting autonomously on unverified output.

Finally, coping with invisible prompt injection requires ongoing monitoring and adaptation. Attackers evolve their techniques, and defenses must evolve with them. Logging, anomaly detection, and behavioral monitoring can help identify when a system is being manipulated. Over time, these signals can inform better defenses and more robust architectures.

Invisible prompt injection is not a passing curiosity. It is a structural vulnerability that demands structural solutions. By combining technical safeguards, architectural changes, and human‑centered practices, we can build AI systems that are resilient, trustworthy, and aligned with user intent - even in the presence of invisible manipulation.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate feature's ability to answer standard general questions, independently on whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

03 May 2026

🤖〽️Prompt Engineering: Copilot Unabridged (Part 177: How AI Can Detect Multi‑Step Relationships That Span Many Variables)

Prompt Engineering Series
Prompt Engineering Series


Prompt: "write a post of 600 words on how AI can detect multi‑step relationships that span many variables"

Introduction

One of the most remarkable capabilities of modern Artificial Intelligence (AI) is its ability to uncover multi‑step relationships that span across many variables - patterns so complex, layered, and indirect that they remain invisible to human reasoning. Humans excel at intuitive leaps and conceptual understanding, but we struggle when relationships unfold across multiple dimensions, involve subtle dependencies, or require tracking interactions over long chains of influence. AI, by contrast, thrives in precisely these environments. Its computational architecture allows it to map relationships that are not linear, not obvious, and not easily captured by traditional analytical methods.

The first reason AI can detect multi‑step relationships is its ability to operate in high‑dimensional spaces. Humans can mentally juggle only a handful of variables at once before cognitive overload sets in. But many real‑world systems - biological networks, financial markets, climate dynamics - operate across hundreds or thousands of interacting factors. AI models, especially deep neural networks, are designed to navigate these high‑dimensional landscapes. They can track how small changes in one variable propagate through a chain of dependencies, eventually influencing outcomes far removed from the original cause. This ability to follow long causal threads is something human intuition simply cannot replicate at scale.

A second advantage lies in AI’s capacity to model nonlinear interactions. Multi‑step relationships often involve nonlinearities: effects that amplify, dampen, or transform as they move through a system. Humans tend to assume linearity because it is cognitively simple. AI does not make this assumption. Neural networks, decision trees, and attention‑based architectures can capture nonlinear transformations at every layer. This allows AI to detect relationships where the influence of one variable depends on the state of several others - patterns that only emerge when multiple conditions align in specific ways.

Another key factor is AI’s ability to learn hierarchical representations. Deep learning models build understanding layer by layer. Early layers detect simple features; deeper layers combine these features into more abstract concepts. This hierarchical structure mirrors the multi‑step nature of complex relationships. For example, in medical diagnostics, an AI system might first detect subtle biomarkers, then combine them into intermediate patterns, and finally infer a higher‑level diagnosis. Each step builds on the previous one, allowing the model to trace relationships that unfold across multiple conceptual levels.

AI also excels at temporal reasoning, which is essential for detecting multi‑step relationships that evolve over time. Recurrent neural networks, transformers, and sequence models can track dependencies across long time horizons. They can identify how an event today influences outcomes weeks or months later, even when the connection is indirect. This is particularly valuable in fields like supply chain forecasting, epidemiology, and macroeconomic modeling, where delayed effects are the norm rather than the exception.

A further strength comes from AI’s ability to integrate heterogeneous data sources. Multi‑step relationships often span different types of information - numerical data, text, images, signals, or categorical variables. Humans struggle to synthesize such diverse inputs. AI systems, however, can fuse them into a unified representation. This multimodal integration allows AI to detect relationships that cross boundaries between data types, revealing patterns that would remain hidden if each source were analyzed in isolation.

Finally, AI’s ability to detect multi‑step relationships is amplified by continuous learning and iterative refinement. As new data arrives, AI systems can update their internal models, strengthening or revising the relationships they have inferred. This dynamic adaptation allows them to track evolving systems where relationships shift over time. Humans, by contrast, tend to cling to outdated mental models, even when the underlying reality has changed.

AI’s ability to detect multi‑step relationships across many variables is not a replacement for human judgment. Instead, it expands our analytical reach, revealing structures that lie beyond the limits of intuition. When humans and AI collaborate - combining human understanding with machine‑level pattern detection - we gain a deeper, more accurate view of the complex systems that shape our world.

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate feature's ability to answer standard general questions, independently on whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

01 September 2025

🤖〽️Prompt Engineering: Copilot Unabridged (Part 83: Mapping the Future - A 25-Year Evolution of AI)

Prompt Engineering Series
Prompt Engineering Series

Prompt: "write a post of 600 words that depicts a map that reflects the evolution of AI for the next 25 years"

Introduction

Artificial Intelligence (AI) is no longer a distant dream - it’s a living, learning force reshaping our world. But what lies ahead? If we were to chart a map of AI’s evolution over the next 25 years, it would resemble a dynamic landscape of breakthroughs, ethical crossroads, and societal transformation.

Let’s take a journey through this imagined terrain.

Phase 1: 2025–2030 - The Age of Specialization

In the next five years, AI will become deeply embedded in vertical industries:

  • Healthcare: AI will assist in diagnostics, drug discovery, and personalized treatment plans.
  • Finance: Predictive models will dominate risk assessment, fraud detection, and algorithmic trading.
  • Education: Adaptive learning platforms will tailor content to individual student needs.

This phase is marked by narrow intelligence - systems that excel in specific domains but lack general reasoning. The focus will be on trust, transparency, and explainability, as regulators begin to demand accountability for AI-driven decisions.

Phase 2: 2030–2035 - The Rise of Generalization

By the early 2030s, we’ll witness the emergence of Artificial General Intelligence (AGI) prototypes - systems capable of transferring knowledge across domains.

Key developments will include:

  • Unified models that can write code, compose music, and conduct scientific research.
  • Self-improving architectures that optimize their own learning processes.
  • Human-AI collaboration frameworks where machines act as creative partners, not just tools.

This era will challenge our definitions of intelligence, creativity, and even consciousness. Ethical debates will intensify around autonomy, rights, and the boundaries of machine agency.

Phase 3: 2035–2040 - The Cognitive Convergence

As AGI matures, AI will begin to mirror human cognitive functions more closely:

  • Emotional modeling: AI will simulate empathy, persuasion, and social nuance.
  • Meta-reasoning: Systems will reflect on their own limitations and biases.
  • Synthetic consciousness debates: Philosophers and technologists will grapple with whether machines can possess subjective experience.

This phase will see AI integrated into governance, law, and diplomacy. Machines may advise on policy, mediate conflicts, or even represent interests in global forums.

 Phase 4: 2040–2045 - The Age of Integration

AI will no longer be a separate entity - it will be woven into the fabric of everyday life:

  • Neural interfaces will allow direct brain-AI communication.
  • Smart environments will anticipate human needs and adapt in real time.
  • Digital twins will simulate entire cities, economies, and ecosystems for planning and resilience.

The line between human and machine cognition will blur. Society will face existential questions: What does it mean to be human in a world of synthetic minds?

Phase 5: 2045–2050 - The Post-Intelligence Era

By mid-century, AI may surpass human intelligence in most domains. This doesn’t mean obsolescence - it means redefinition.

  • Collective intelligence: Humans and machines will form hybrid networks of decision-making.
  • Ethical superstructures: Global frameworks will govern AI rights, responsibilities, and coexistence.
  • Cultural transformation: Art, philosophy, and identity will evolve in response to machine creativity and cognition.

This era is not about domination - it’s about symbiosis. The map ends not with a singularity, but with a new equilibrium.

Navigating the Map

This 25-year map is speculative, but it’s grounded in current trajectories. It’s a tool for reflection, planning, and imagination. As we move forward, the challenge isn’t just technological - it’s philosophical, ethical, and deeply human.

AI’s evolution will be shaped not just by algorithms, but by our choices. The map is ours to draw - and to walk.

Just try the prompt on Copilot or your favorite AI-powered assistant! Have you got a different/similar result? How big or important is the difference? Any other thoughts?
Just share the link to the post with me and I'll add it to this post as a resource!

Disclaimer: The whole text was generated by Copilot (under Windows 11) at the first attempt. This is just an experiment to evaluate feature's ability to answer standard general questions, independently on whether they are correctly or incorrectly posed. Moreover, the answers may reflect hallucinations and other types of inconsistent or incorrect reasoning.

Previous Post <<||>> Next Post

29 March 2021

Notes: Team Data Science Process (TDSP)

Team Data Science Process (TDSP)
Acronyms:
Artificial Intelligence (AI)
Cross-Industry Standard Process for Data Mining (CRISP-DM)
Data Mining (DM)
Knowledge Discovery in Databases (KDD)
Team Data Science Process (TDSP) 
Version Control System (VCS)
Visual Studio Team Services (VSTS)

Resources:
[1] Microsoft Azure (2020) What is the Team Data Science Process? [source]
[2] Microsoft Azure (2020) The business understanding stage of the Team Data Science Process lifecycle [source]
[3] Microsoft Azure (2020) Data acquisition and understanding stage of the Team Data Science Process [source]
[4] Microsoft Azure (2020) Modeling stage of the Team Data Science Process lifecycle [source
[5] Microsoft Azure (2020) Deployment stage of the Team Data Science Process lifecycle [source]
[6] Microsoft Azure (2020) Customer acceptance stage of the Team Data Science Process lifecycle [source]

24 May 2014

🕸Systems Engineering: Agent-Based Model/Modeling (Definition)

"Modeling refers to the process of designing a software representation of a real-world system or a small part of it with the purpose of replicating or simulating specific features of the modeled system. In an agent-based model, the model behavior results from behavior of many small software entities called agents. This technique is used to model real-world systems comprised of many decision-making entities with inhomogeneous preferences, knowledge, and decision-making processes. An advantage of this method is that no assumptions need to be made about an aggregate or mean behavior. Instead, this aggregation is made by the model." (E Ebenhoh, "Agent-Based Modelnig with Boundedly Rational Agents", 2007)

"A modeling and simulation approach applied to a complex system or complex adaptive system, in which the model is comprised of a large number of interacting elements (agents)." (Charles M Macal, "Agent Based Modeling and Artificial Life", 2009)

"A modeling technique with a collection of autonomous decision-making agents, each of which assesses its situation individually and makes decisions on the basis of a pre-set of rules. ABM is used to simulate land use land cover change, crowd behavior, transportation analysis and many other fine-scale geographic applications. (May Yuan, "Challenges and Critical Issues for Temporal GIS Research and Technologies", 2009)

"Agent-based models (ABM) are models where (i) there is a multitude of objects that interact with each other and with the environment; (ii) the objects are autonomous, i. e. there is no central, or top-down control over their behavior; and (iii) the outcome of their interaction is numerically computed." (Mauro Gallegati & Mateo G Richiardi, "Agent Based Models in Economics and Complexity", 2009)

"An agent-based model is a simulation made up of a set of agents and an agent interaction environment." (Michael J North & Charles M Macal, "Agent Based Modeling and Computer Languages", 2009)

"Systems composed of individuals who act purposely in making locational/spatial decisions." (Michael Batty, "Cities as Complex Systems: Scaling, Interaction, Networks, Dynamics and Urban Morphologies", 2009)

"A computational model for simulating the actions and interactions of autonomous individuals in a network, with a view to assessing their effects on the system as a whole. (Brian L. Heath & Raymond R Hill, "Agent-Based Modeling: A Historical Perspective and a Review of Validation and Verification Efforts", 2010)

"A model that involves numerous interacting autonomous agents, homogeneous or heterogeneous. The objective of agent-based modeling is to help us to understand effects and impacts of interactions of individuals." (Peter Mikulecký et al, "Possibilities of Ambient Intelligence and Smart Environments in Educational Institutions", 2011)

"a class of computational models for simulating interacting agents." (Enrico Franchi & Agostino Poggi, "Multi-Agent Systems and Social Networks", 2012)

07 July 2013

🎓Knowledge Management: Conceptual Model (Definitions)

"A conceptual model is a qualitative description of the system and includes the processes taking place in the system, the parameters chosen to describe the processes, and the spatial and temporal scales of the processes." (A Avogadro & R C Ragaini, "Technologies for Environmental Cleanup", 1993)

"A conceptual model is a model of the projected system that is independent of implementation details." (Michael Worboys, "GIS: A Computing Perspective", 1995)

"A conceptual model is what in the model theory is called a set of formulas making statements about the world." (Dickson Lukose [Eds.], "Conceptual Structures: Fulfilling Peirce's Dream" Vol 5, 1997)

"A conceptual model is a representation of the system expertise using this formalism. An internal model is derived from the conceptual model and from a specification of the system transactions and the performance constraints." (Zbigniew W. Ras & Andrzej Skowron [Eds.], Foundations of Intelligent Systems: 10th International Symposium Vol 10, 1997)

"A conceptual model is one which reflects reality by placing words which are concepts into the model in the same way that the model aeroplane builder puts wings, a fuselage, and a cockpit together." (Lynn Basford & ‎Oliver Slevin, "Theory and Practice of Nursing: An Integrated Approach to Caring Practice", 2003) 

"A conceptual model is simply a framework or schematic to understand the interaction of workforce education and development systems with other variables in a society." (Jay W Rojewski, "International Perspectives on Workforce Education and Development", 2004) 

"A conceptual model is a mental image of a system, its components, its interactions. It lays the foundation for more elaborate models, such as physical or numerical models. A conceptual model provides a framework in which to think about the workings of a system or about problem solving in general. An ensuing operational model can be no better than its underlying conceptualization." (Henry N Pollack, "Uncertain Science … Uncertain World", 2005)

"A particular kind of learning object design to be supplied to learners to support their mental modeling." (Daniel Churchill, "Mental Models" [in "Encyclopedia of Information Technology Curriculum Integration"], 2008)

"The concepts and constructs about real work things we have in our heads are called mental model." (Hassan Qudrat-Ullah, "System Dynamics Based Learning Environments" [in "Encyclopedia of Information Technology Curriculum Integration"], 2008)

"Representations of real or imaginary structure in the human mind enabling orientation as well as goal orientated actions and movements" (Ralf Wagner, "Customizing Multimedia with Multi-Trees" [in "Encyclopedia of Multimedia Technology and Networking" 2nd Ed.], 2009)

"A conceptual model is a qualitative description of 'some aspect of the behaviour of a natural system'. This description is usually verbal, but may also be accompanied by figures and graphs." (Howard S. Wheater et al., "Groundwater Modelling in Arid and Semi-Arid Areas, 2010) 

"[…] a conceptual model is a diagram connecting variables and constructs based on theory and logic that displays the hypotheses to be tested." (Mary Wolfinbarger Celsi et al, "Essentials of Business Research Methods", 2011) 

"A conceptual model of an interactive application is, in summary: the structure of the application - the objects and their operations, attributes, and relation-ships; an idealized view of the how the application works – the model designers hope users will internalize; the mechanism by which users accomplish the tasks the application is intended to support." (Jeff Johnson & Austin Henderson, "Conceptual Models", 2011)

"Simply put, a conceptual model is a simplified representation of reality, devised for a certain purpose and seen from a certain point of view."(David W Emble & Bernhard Thalheim, "Handbook of Conceptual Modeling", 2012) 

"Briefly, a conceptual model is the configuration of conceptual elements and the navigation between them. As such, a conceptual model is the foundation of the user interface of any interactive system." (Avi Parush, "Conceptual Design for Interactive Systems", 2015)

"A conceptual model is a framework that is initially used in research to outline the possible courses of action or to present an idea or thought. When a conceptual model is developed in a logical manner, it will provide a rigor to the research process." (N Elangovan & R Rajendran, "Conceptual Model: A Framework for Institutionalizing the Vigor in Business Research", 2015) 

"A model or conceptual model is a schematic or representation that describes how something works. We create and adapt models all the time without realizing it. Over time, as you gain more information about a problem domain, your model will improve to better match reality." (James Padolsey, "Clean Code in JavaScript", 2020)

Resources:
Quotes on "Conceptual Models" at the-web-of-knowledge.blogspot.com.

09 March 2010

🕋Data Warehousing: Dimensional Modeling (Definitions)

"A methodology for logically modeling data for query performance and ease of use that starts from a set of base measurement events. In the relational DBMS environment, a fact table is constructed generally with one record for each discrete measurement. This fact table is then surrounded by a set of dimension tables describing precisely what is known in the context of each measurement record. Because of the characteristic structure of a dimensional model, it is often called a star schema." (Ralph Kimball & Margy Ross, "The Data Warehouse Toolkit" 2nd Ed., 2002)

"A formal data modeling technique that is used to organize and represent data for analytical and reporting use. The focus is on the business perspective and the representation of data." (Laura Reeves, "A Manager's Guide to Data Warehousing", 2009)

"A generally accepted practice in the data warehouse industry to structure data intended for user access, analysis, and reporting in dimensional data models" (Daniel Linstedt & W H Inmon, "Data Architecture: A Primer for the Data Scientist", 2014)

"The most frequently used data model for data warehouses." (Jan L Harrington, "Relational Database Design and Implementation" 3rd Ed., 2009)

"With dimensional data modeling or denormalization, data is collapsed, combined, or grouped together. Within dimensional data modeling, the concepts of facts (measures) and dimensions (context) are used. If dimensions are collapsed into single structures, the data model is also often called a star schema. If the dimensions are not collapsed, the data model is called snowflake. The dimensional models are typically seen within data warehouse systems." (Piethein Strengholt, "Data Management at Scale", 2020)

05 December 2006

✏️Dennis K Lieu - Collected Quotes

"Being a good team member takes work. Most people are used to working on their own - making decisions, prioritizing tasks, and being accountable for their own work. Working with others requires a different approach than working alone. To be a successful part of a team, you need to consider several issues. You should be prepared not to be in charge of everything. For some people, this requires a great deal of effort; for other people, it is less taxing. At times, you will be the supervisor; other times you will be supervised. You need to be flexible and understand that a team consisting only of leaders (or only of followers) is not likely to perform well." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Charts are used to represent quantitative data in a graphic format. A chart visually illustrates relationships between numbers. When creating a chart, keep in mind that the goal is to represent the data in a simplified and appealing way so as not to muddle the message the chart is meant to convey." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Design is a goal-oriented, problem-solving activity that typically takes many iterations - teams rarely come up with the 'optimal' design the first time around. [...] With each model, improvements were made to the original design such that the minivans of today are much improved compared to the initial product. The key activity in the design process is the development and testing of a descriptive model of the finished product before the product is finally manufactured or constructed." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Designers are responsible for the project’s fit and finish, that is, specifying the geometry and sizes of components so they properly mate with each other and are ergonomically and aesthetically acceptable within the operating environment." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Information graphics are an essential component of technical communication. Very few technical documents or presentations can be considered complete without graphical elements to present some essential data. Because engineers are visually oriented, graphic aids allow their thoughts and ideas to be better understood by other engineers. Information graphics are essential in presenting data because they simplify the content, offer a visually pleasing alternative to gray text in a proposal or an article, and thereby invite interest." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Most importantly, prepare to learn how to be a team member. Share your strengths with the team and be willing to contribute. Remember, the combined efforts of all team members should yield a better outcome than the efforts of one individual. Learn new team skills and be adaptable." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Reverse engineering is a systematic methodology for analyzing the design of an existing device or system, either as an approach to study the design or as a prerequisite for redesign. Reverse engineering essentially is a process used to gain information about the functionality and sizes of existing design components. [...] Reverse engineering is a technique within the practice of engineering design that can be useful in several ways. Reverse engineering can save time because there is no need to 'reinvent the wheel' when you can start from existing geometric data. The reverse engineering technique also can help an engineer develop a systematic approach to thinking about and improving the design of devices and systems." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Tables work in a variety of situations because they convey large amounts of data in a condensed fashion. Use tables in the following situations: (1) to structure data so the reader can easily pick out the information desired, (2) to display in a chart when the data contains too many variables or values, and (3) to display exact values that are more important than a visual moment in time." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"The data [in tables] should not be so spaced out that it is difficult to follow or so cramped that it looks trapped. Keep columns close together; do not spread them out more than is necessary. If the columns must be spread out to fit a particular area, such as the width of a page, use a graphic device such as a line or screen to guide the reader’s eye across the row." (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

"Whereas charts generally focus on a trend or comparison, tables organize data for the reader to scan. Tables present data in an easy-read-format, or matrix. Tables arrange data in columns or rows so readers can make side-by-side comparisons. Tables work for many situations because they convey large amounts of data and have several variables for each item. Tables allow the reader to focus quickly on a specific item by scanning the matrix or to compare multiple items by scanning the rows or columns."  (Dennis K Lieu & Sheryl Sorby, "Visualization, Modeling, and Graphics for Engineering Design", 2009)

29 April 2006

🖍️Randall E Schumacker - Collected Quotes

"Given the important role that correlation plays in structural equation modeling, we need to understand the factors that affect establishing relationships among multivariable data points. The key factors are the level of measurement, restriction of range in data values (variability, skewness, kurtosis), missing data, nonlinearity, outliers, correction for attenuation, and issues related to sampling variation, confidence intervals, effect size, significance, sample size, and power." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

"Need to consider outliers as they can affect statistics such as means, standard deviations, and correlations. They can either be explained, deleted, or accommodated (using either robust statistics or obtaining additional data to fill-in). Can be detected by methods such as box plots, scatterplots, histograms or frequency distributions." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

"Outliers or influential data points can be defined as data values that are extreme or atypical on either the independent (X variables) or dependent (Y variables) variables or both. Outliers can occur as a result of observation errors, data entry errors, instrument errors based on layout or instructions, or actual extreme values from self-report data. Because outliers affect the mean, the standard deviation, and correlation coefficient values, they must be explained, deleted, or accommodated by using robust statistics." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

"Structural equation modeling is a correlation research method; therefore, the measurement scale, restriction of range in the data values, missing data, outliers, nonlinearity, and nonnormality of data affect the variance–covariance among variables and thus can impact the SEM analysis." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

"Structural equation modeling (SEM) uses various types of models to depict relationships among observed variables, with the same basic goal of providing a quantitative test of a theoretical model hypothesized by the researcher. More specifically, various theoretical models can be tested in SEM that hypothesize how sets of variables define constructs and how these constructs are related to each other." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

"There are several key issues in the field of statistics that impact our analyses once data have been imported into a software program. These data issues are commonly referred to as the measurement scale of variables, restriction in the range of data, missing data values, outliers, linearity, and nonnormality." (Randall E Schumacker & Richard G Lomax, "A Beginner’s Guide to Structural Equation Modeling" 3rd Ed., 2010)

Related Posts Plugin for WordPress, Blogger...

About Me

My photo
Koeln, NRW, Germany
IT Professional with more than 25 years experience in IT in the area of full life-cycle of Web/Desktop/Database Applications Development, Software Engineering, Consultancy, Data Management, Data Quality, Data Migrations, Reporting, ERP implementations & support, Team/Project/IT Management, etc.