
26 April 2025

🏭🗒️Microsoft Fabric: Deployment Pipelines [Notes]

Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

Last updated: 26-Apr-2025

[Microsoft Fabric] Deployment Pipelines

  • {def} a structured process that enables content creators to manage the lifecycle of their organizational assets [5]
    • enable creators to develop and test content in the service before it reaches the users [5]
      • can simplify the deployment process to development, test, and production workspaces [5]
      • one Premium workspace is assigned to each stage [5]
      • each stage can have 
        • different configurations [5]
        • different databases or different query parameters [5]
  • {action} create pipeline (see the first code sketch after these notes)
    • from the deployment pipelines entry point in Fabric [5]
      • creating a pipeline from a workspace automatically assigns it to the pipeline [5]
    • {action} define how many stages it should have and what they should be called [5]
      • {default} has three stages
        • e.g. Development, Test, and Production
        • the number of stages can be set anywhere between 2 and 10
        • {action} add another stage
        • {action} delete stage
        • {action} rename stage 
          • by typing a new name in the box
        • {action} share a pipeline with others
          • users receive access to the pipeline and become pipeline admins [5]
        • ⇐ the number of stages is permanent [5]
          • can't be changed after the pipeline is created [5]
    • {action} add content to the pipeline [5]
      • done by assigning a workspace to the pipeline stage [5]
        • the workspace can be assigned to any stage [5]
    • {action|optional} make a stage public
      • {default} the final stage of the pipeline is made public
      • a consumer of a public stage without access to the pipeline sees it as a regular workspace [5]
        • without the stage name and deployment pipeline icon on the workspace page next to the workspace name [5]
    • {action} deploy to an empty stage
      • when finishing the work in one pipeline stage, the content can be deployed to the next stage [5] 
        • deployment can happen in any direction [5]
      • {option} full deployment (see the second code sketch after these notes)
        • deploy all content to the target stage [5]
      • {option} selective deployment 
        • allows selecting the content to deploy to the target stage [5]
      • {option} backward deployment 
        • deploy content from a later stage to an earlier stage in the pipeline [5] 
        • {restriction} only possible when the target stage is empty [5]
    • {action} deploy content between stages [5]
      • content can be deployed even if the next stage has content
        • paired items are overwritten [5]
    • {action|optional} create deployment rules
      • when deploying content between pipeline stages, deployment rules allow changes to content while keeping some settings intact [5]
      • once a rule is defined or changed, the content must be redeployed
        • the deployed content inherits the value defined in the deployment rule [5]
        • the value always applies as long as the rule is unchanged and valid [5]
    • {feature} deployment history 
      • shows the last time content was deployed to each stage [5]
      • allows tracking the time between deployments [5]
  • {concept} pairing
    • {def} the process by which an item in one stage of the deployment pipeline is associated with the same item in the adjacent stage
      • applies to reports, dashboards, semantic models
      • paired items appear on the same line in the pipeline content list [5]
        • ⇐ items that aren't paired, appear on a line by themselves [5]
      • the items remain paired even if their name changes
      • items added after the workspace is assigned to a pipeline aren't automatically paired [5]
        • ⇐ one can have identical items in adjacent workspaces that aren't paired [5]
  • [lakehouse]
    • can be removed as a dependent object upon deployment [3]
    • supports mapping different Lakehouses within the deployment pipeline context [3]
    • {default} a new empty Lakehouse object with the same name is created in the target workspace [3]
      • ⇐ if nothing is specified during deployment pipeline configuration
      • notebook and Spark job definitions are remapped to reference the new lakehouse object in the new workspace [3]
      • {warning} a new empty Lakehouse object with the same name is still created in the target workspace [3]
      • SQL Analytics endpoints and semantic models are provisioned
      • no object inside the Lakehouse is overwritten [3]
      • updates to Lakehouse name can be synchronized across workspaces in a deployment pipeline context [3] 
  • [notebook] deployment rules can be used to customize the behavior of notebooks when deployed [4]
    • e.g. change notebook's default lakehouse [4]
    • {feature} auto-binding
      • binds the default lakehouse and attached environment within the same workspace when deploying to next stage [4]
  • [environment] custom pools are not supported in deployment pipelines
    • the configurations of the Compute section in the destination environment are set to default values [6]
    • ⇐ subject to change in upcoming releases [6]
  • [warehouse]
    • [database project] ALTER TABLE to add a constraint or column
      • {limitation} the table will be dropped and recreated when deploying, resulting in data loss
    • {recommendation} do not create a Dataflow Gen2 with an output destination to the warehouse
      • ⇐ deployment would be blocked by a new item named DataflowsStagingWarehouse that appears in the deployment pipeline [10]
    • SQL analytics endpoint is not supported
  • [Eventhouse]
    • {limitation} connections that use Direct Ingestion mode must be reconfigured in the destination [8]
  • [EventStream]
    • {limitation} limited support for cross-workspace scenarios
      • {recommendation} make sure all EventStream destinations are within the same workspace [8]
  • [KQL database]
    • applies to tables, functions, materialized views [7]
  • [KQL queryset]
    • applies to tabs, data sources [7]
  • [real-time dashboard]
    • applies to data sources, parameters, base queries, tiles [7]
  • [SQL database]
    • includes the specific differences between the individual database objects in the development and test workspaces [9]
  • can be also used with
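
The pipeline setup steps above (create the pipeline, assign a workspace to each stage) can also be scripted. Below is a minimal sketch in Python against the deployment pipeline endpoints of the Power BI REST API; the workspace IDs, the bearer token, and the three-stage layout are placeholders/assumptions, and the endpoint details should be double-checked against the current API reference.

```python
# Minimal sketch (not the documented procedure): create a deployment pipeline and
# assign one existing workspace per stage via the Power BI REST API.
# The bearer token and workspace IDs are placeholders; verify endpoint details
# against the current API reference before use.
import os

import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {os.environ['PBI_TOKEN']}"}  # placeholder token

# 1) Create the pipeline (a default set of stages is created with it)
resp = requests.post(f"{BASE}/pipelines",
                     json={"displayName": "Sales BI"},  # placeholder name
                     headers=headers)
resp.raise_for_status()
pipeline_id = resp.json()["id"]

# 2) Assign an existing workspace to each stage
#    (stage order 0 = Development, 1 = Test, 2 = Production in a 3-stage pipeline)
workspaces = {0: "<dev-workspace-id>", 1: "<test-workspace-id>", 2: "<prod-workspace-id>"}
for stage_order, workspace_id in workspaces.items():
    r = requests.post(f"{BASE}/pipelines/{pipeline_id}/stages/{stage_order}/assignWorkspace",
                      json={"workspaceId": workspace_id},
                      headers=headers)
    r.raise_for_status()

print(f"Pipeline {pipeline_id} created and all stages assigned")
```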

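Likewise, a full deployment between stages and the deployment history can be driven programmatically. The second sketch below uses what I understand to be the Deploy All and Get Pipeline Operations endpoints of the same API; the pipeline ID, token, and option values are placeholders.

```python
# Minimal sketch: trigger a full deployment from Development (stage 0) to the next
# stage and then list past deployment operations (deployment history).
# Pipeline ID, token, and option values are placeholders; endpoint and payload
# shapes follow my reading of the Power BI REST API and should be verified.
import os

import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {os.environ['PBI_TOKEN']}"}  # placeholder token
pipeline_id = "<pipeline-id>"                                     # placeholder

# Full deployment: everything in the source stage is deployed to the next stage.
payload = {
    "sourceStageOrder": 0,                  # 0 = Development in a 3-stage pipeline
    "options": {
        "allowCreateArtifact": True,        # create items that are not yet paired
        "allowOverwriteArtifact": True,     # overwrite paired items in the target stage
    },
}
resp = requests.post(f"{BASE}/pipelines/{pipeline_id}/deployAll",
                     json=payload, headers=headers)
resp.raise_for_status()   # long-running; the service returns an operation to poll

# Deployment history: list past deployment operations for the pipeline.
ops = requests.get(f"{BASE}/pipelines/{pipeline_id}/operations", headers=headers)
ops.raise_for_status()
for op in ops.json().get("value", []):
    print(op.get("status"), op.get("lastUpdatedTime"))
```
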
    References:
    [1] Microsoft Learn (2024) Get started with deployment pipelines [link]
    [2] Microsoft Learn (2024) Implement continuous integration and continuous delivery (CI/CD) in Microsoft Fabric [link]
    [3] Microsoft Learn (2024) Lakehouse deployment pipelines and git integration (Preview) [link]
    [4] Microsoft Learn (2024) Notebook source control and deployment [link]
    [5] Microsoft Learn (2024) Introduction to deployment pipelines [link]
    [6] Environment Git integration and deployment pipeline [link]
    [7] Microsoft Learn (2024) Real-Time Intelligence: Git integration and deployment pipelines (Preview) [link]
    [8] Microsoft Learn (2024) Eventstream CI/CD - Git Integration and Deployment Pipeline [link]
    [9] Microsoft Learn (2024) Get started with deployment pipelines integration with SQL database in Microsoft Fabric [link]
    [10] Microsoft Learn (2025) Source control with Warehouse (preview) [link]

    Resources:

    Acronyms:
    CLM - Content Lifecycle Management
    UAT - User Acceptance Testing

    🏭🗒️Microsoft Fabric: Power BI Environments [Notes]

    Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

    Last updated: 26-Apr-2025

    Enterprise Content Publishing [2]

    [Microsoft Fabric] Power BI Environments

    • {def} structured spaces within Microsoft Fabric that help organizations manage Power BI assets through their entire lifecycle
    • {environment} development 
      • allows developing the solution
      • accessible only to the development team 
        • via Contributor access
      • {recommendation} use Power BI Desktop as local development environment
        • {benefit} allows trying, exploring, and reviewing updates to reports and datasets
          • once the work is done, upload the new version to the development stage
        • {benefit} enables collaborating and changing dashboards
        • {benefit} avoids duplication 
          • making online changes, downloading the .pbix file, and then uploading it again creates duplicate reports and datasets
      • {recommendation} use version control to keep the .pbix files up to date
        • [OneDrive] use Power BI's autosync
          • {alternative} SharePoint Online with folder synchronization
          • {alternative} GitHub and/or VSTS with local repository & folder synchronization
      • [enterprise scale deployments] 
        • {recommendation} separate dataset from reports and dashboards’ development
          • use the deployment pipelines selective deploy option [22]
          • create separate .pbix files for datasets and reports [22]
            • create a dataset .pbix file and upload it to the development stage (see shared datasets) [22]
            • create .pbix only for the report, and connect it to the published dataset using a live connection [22]
          • {benefit} allows different creators to separately work on modeling and visualizations, and deploy them to production independently
        • {recommendation} separate data model from report and dashboard development
          • allows using advanced capabilities 
            • e.g. source control, merging diff changes, automated processes
          • separate the development from test data sources [1]
            • the development database should be relatively small [1]
      • {recommendation} use only a subset of the data [1]
        • ⇐ otherwise the data volume can slow down the development [1]
    • {environment} user acceptance testing (UAT)
      • a test environment that sits between development and production in the deployment lifecycle
        • it's not necessary for all Power BI solutions [3]
        • allows testing the solution before deploying it into production
          • all testers must have 
            • View access for testing
            • Contributor access for report authoring
        • involves business users who are SMEs
          • provide approval that the content 
            • is accurate
            • meets requirements
            • can be deployed for wider consumption
      • {recommendation} check the report’s load and interactions to find out if changes impact performance [1]
      • {recommendation} monitor the load on the capacity to catch extreme loads before they reach production [1]
      • {recommendation} test data refresh in the Power BI service regularly during development [20]
    • {environment} production
      • {concept} staged deployment
        • {goal} help minimize risk, user disruption, or address other concerns [3]
          • the deployment involves a smaller group of pilot users who provide feedback [3]
      • {recommendation} set production deployment rules for data sources and parameters defined in the dataset [1]
        • allows ensuring the data in production is always connected and available to users [1]
      • {recommendation} don’t upload a new .pbix version directly to the production stage
        •  ⇐ without going through testing
    • {feature|preview} deployment pipelines 
      • enable creators to develop and test content in the service before it reaches the users [5]
    • {recommendation} build separate databases for development and testing 
      • helps protect production data [1]
    • {recommendation} make sure that the test and production environment have similar characteristics [1]
      • e.g. data volume, usage volume, similar capacity
      • {warning} testing in production can make production unstable [1]
      • {recommendation} use Azure A capacities [22]
    • {recommendation} for formal projects, consider creating an environment for each phase
    • {recommendation} enable users to connect to published datasets to create their own reports
    • {recommendation} use parameters to store connection details 
      • e.g. instance names, database names
      • ⇐ deployment pipelines allow configuring parameter rules to set specific values for the development, test, and production stages (see the code sketch after these notes)
        • alternatively data source rules can be used to specify a connection string for a given dataset
          • {restriction} in deployment pipelines, this isn't supported for all data sources
    • {recommendation} keep the data in blob storage under the 50k blobs and 5GB data in total to prevent timeouts [29]
    • {recommendation} provide data to self-service authors from a centralized data warehouse [20]
      • allows minimizing the amount of work that self-service authors need to take on [20]
    • {recommendation} minimize the use of Excel, csv, and text files as sources when practical [20]
    • {recommendation} store source files in a central location accessible by all coauthors of the Power BI solution [20]
    • {recommendation} be aware of API connectivity issues and limits [20]
    • {recommendation} know how to support SaaS solutions from AppSource and expect further data integration requests [20]
    • {recommendation} minimize the query load on source systems [20]
      • use incremental refresh in Power BI for the dataset(s)
      • use a Power BI dataflow that extracts the data from the source on a schedule
      • reduce the dataset size by only extracting the needed amount of data 
    • {recommendation} expect data refresh operations to take some time [20]
    • {recommendation} use relational database sources when practical [20]
    • {recommendation} make the data easily accessible [20]
    • [knowledge area] knowledge transfer
      • {recommendation} maintain a list of best practices and review it regularly [24]
      • {recommendation} develop a training plan for the various types of users [24]
        • usability training for read-only report/app users [24]
        • self-service reporting for report authors & data analysts [24]
        • more elaborated training for advanced analysts & developers [24]
    • [knowledge area] lifecycle management
      • consists of the processes and practices used to handle content from its creation to its eventual retirement [6]
      • {recommendation} suffix files with a 3-part version number in the Development stage [24]
        • remove the version number when publishing files in UAT and production 
      • {recommendation} backup files for archive 
      • {recommendation} track version history 
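
To complement the recommendation on storing connection details in parameters, the sketch below switches a published dataset to stage-specific values through what I understand to be the Datasets - Update Parameters endpoint of the Power BI REST API. It is only an illustration: the dataset ID, parameter names, token, and values are placeholders, and it assumes the parameters already exist in the model.

```python
# Minimal sketch: after deploying a semantic model to a stage, point it at that
# stage's data source by updating its parameters (assumed to exist in the model,
# e.g. ServerName/DatabaseName). Token, dataset ID, and values are placeholders;
# for a dataset outside My workspace, prefix the path with /groups/{workspaceId}.
import os

import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {os.environ['PBI_TOKEN']}"}  # placeholder token

dataset_id = "<dataset-id>"              # placeholder: the deployed semantic model
stage_values = {                         # placeholder stage-specific values
    "ServerName": "prod-sql.contoso.com",
    "DatabaseName": "SalesDW",
}

payload = {"updateDetails": [{"name": k, "newValue": v} for k, v in stage_values.items()]}
resp = requests.post(f"{BASE}/datasets/{dataset_id}/Default.UpdateParameters",
                     json=payload, headers=headers)
resp.raise_for_status()
print("Parameters updated; refresh the dataset for the change to take effect")
```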

      References:
      [1] Microsoft Learn (2021) Fabric: Deployment pipelines best practices [link]
      [2] Microsoft Learn (2024) Power BI: Power BI usage scenarios: Enterprise content publishing [link]
      [3] Microsoft Learn (2024) Deploy to Power BI [link]
      [4] Microsoft Learn (2024) Power BI implementation planning: Content lifecycle management [link]
      [5] Microsoft Learn (2024) Introduction to deployment pipelines [link]
      [6] Microsoft Learn (2024) Power BI implementation planning: Content lifecycle management [link]
      [20] Microsoft (2020) Planning a Power BI Enterprise Deployment [White paper] [link]
      [22] Power BI Docs (2021) Create Power BI Embedded capacity in the Azure portal [link]
      [24] Paul Turley (2019)  A Best Practice Guide and Checklist for Power BI Projects

      Resources:

      Acronyms:
      API - Application Programming Interface
      CLM - Content Lifecycle Management
      COE - Center of Excellence
      SaaS - Software-as-a-Service
      SME - Subject Matter Expert
      UAT - User Acceptance Testing
      VSTS - Visual Studio Team System

      13 April 2025

      🏭🗒️Microsoft Fabric: Continuous Integration & Continuous Deployment [CI/CD] [Notes]

      Disclaimer: This is work in progress intended to consolidate information from various sources for learning purposes. For the latest information please consult the documentation (see the links below)! 

      Last updated: 13-Apr-2025

      [Microsoft Fabric] Continuous Integration & Continuous Deployment [CI/CD] 
      • {def} development processes, tools, and best practices used to automate the integration, testing, and deployment of code changes to ensure efficient and reliable development
        • can be used in combination with a client tool
          • e.g. VS Code, Power BI Desktop
          • developers don’t necessarily need a workspace
            • developers can create branches and commit changes to that branch locally, push those to the remote repo and create a pull request to the main branch, all without a workspace
            • workspace is needed only as a testing environment [1]
              • to check that everything works in a real-life scenario [1]
        • addresses a few pain points [2]
          • manual integration issues
            • manual changes can lead to conflicts and errors
              • slow down development [2]
          • development delays
            • manual deployments are time-consuming and prone to errors
              • lead to delays in delivering new features and updates [2]
          • inconsistent environments
            • inconsistencies between environments cause issues that are hard to debug [2]
          • lack of visibility
            • can be challenging to
              • track changes through their lifetime [2]
              • understand the state of the codebase [2]
        • {process} continuous integration (CI)
        • {process} continuous deployment (CD)
        • architecture
          • {layer} development database 
            • {recommendation} should be relatively small [1]
          • {layer} test database 
            • {recommendation} should be as similar as possible to the production database [1]
          • {layer} production database

          • data items
            • items that store data
            • items' definition in Git defines how the data is stored [1]
        • {stage} development 
          • {best practice} back up work to a Git repository
            • back up the work by committing it into Git [1] (see the first code sketch after these notes)
            • {prerequisite} the work environment must be isolated [1]
              • so others don’t override the work before it gets committed [1]
              • commit to a branch no other developer is using [1]
              • commit together changes that must be deployed together [1]
                • helps later when 
                  • deploying to other stages
                  • creating pull requests
                  • reverting changes
          • {warning} big commits might hit the max commit size limit [1]
            • {bad practice} store large-size items in source control systems, even if it works [1]
            • {recommendation} consider ways to reduce items’ size if they have lots of static resources, like images [1]
          • {action} revert to a previous version
            • {operation} undo
              • revert the immediate changes made, as long as they aren't committed yet [1]
              • each item can be reverted separately [1]
            • {operation} revert
              • reverting to older commits
                • {recommendation} promote an older commit to be the HEAD 
                  • via git revert or git reset [1]
                  • shows that there’s an update in the source control pane [1]
                  • the workspace can be updated with that new commit [1]
              • {warning} reverting a data item to an older version might break the existing data and could possibly require dropping the data or the operation might fail [1]
              • {recommendation} check dependencies in advance before reverting changes back [1]
          • {concept} private workspace
            • a workspace that provides an isolated environment [1]
            • allows working in isolation in a separate workspace [1]
            • {prerequisite} the workspace is assigned to a Fabric capacity [1]
            • {prerequisite} access to data to work in the workspace [1]
            • {step} create a new branch from the main branch [1]
              • allows having the most up-to-date version of the content [1]
              • can be used for any future branch created by the user [1]
                • when a sprint is over, the changes are merged and one can start a fresh new task [1]
                  • switch the connection to a new branch on the same workspace
                • the approach can also be used when a bug needs to be fixed in the middle of a sprint [1]
              • {validation} connect to the correct folder in the branch to pull the right content into the workspace [1]
          • {best practice} make small incremental changes that are easy to merge and less likely to get into conflicts [1]
            • update the branch to resolve the conflicts first [1]
          • {best practice} change the workspace’s configurations to enable productivity [1]
            • e.g. connections between items, connections to different data sources, or changes to parameters on a given item [1]
          • {recommendation} make sure you're working with the supported structure of the item you're authoring [1]
            • if you’re not sure, first clone a repo with content already synced to a workspace, then start authoring from there, where the structure is already in place [1]
          • {constraint} a workspace can only be connected to a single branch at a time [1]
            • {recommendation} treat this as a 1:1 mapping [1]
        • {stage} test
          • {best practice} simulate a real production environment for testing purposes [1]
            • {alternative} simulate this by connecting Git to another workspace [1]
          • factors to consider for the test environment
            • data volume
            • usage volume
            • production environment’s capacity
              • the test stage and production should have the same (minimal) capacity [1]
                • using the same capacity can make production unstable during load testing [1]
                  • {recommendation} test using a different capacity similar in resources to the production capacity [1]
                  • {recommendation} use a capacity that allows paying only for the testing time [1]
                    • allows avoiding unnecessary costs [1]
          • {best practice} use deployment rules with a real-life data source
            • {recommendation} use data source rules to switch data sources in the test stage or parameterize the connection if not working through deployment pipelines [1]
            • {recommendation} separate the development and test data sources [1]
            • {recommendation} check related items
              • the changes made can also affect the dependent items [1]
            • {recommendation} verify that the changes don’t affect or break the performance of dependent items [1]
              • via impact analysis.
          • {operation} update data items in the workspace
            • imports items’ definition into the workspace and applies it on the existing data [1]
            • the operation is the same for Git and deployment pipelines [1] (see the second code sketch after these notes)
            • {recommendation} know in advance what the changes are and what impact they have on the existing data [1]
            • {recommendation} use commit messages to describe the changes made [1]
            • {recommendation} upload the changes first to a dev or test environment [1]
              • {benefit} allows seeing how that item handles the change with test data [1]
            • {recommendation} check the changes on a staging environment, with real-life data (or as close to it as possible) [1]
              • {benefit} allows minimizing unexpected behavior in production [1]
            • {recommendation} consider the best timing when updating the Prod environment [1]
              • {benefit} minimize the impact errors might cause on the business [1]
            • {recommendation} perform post-deployment tests in Prod to verify that everything works as expected [1]
            • {recommendation} have a deployment plan and a recovery plan [1]
              • {benefit} allows minimizing the effort and the downtime, respectively [1]
        • {stage} production
          • {best practice} let only specific people manage sensitive operations [1]
          • {best practice} use workspace permissions to manage access [1]
            • applies to all BI creators for a specific workspace who need access to the pipeline
          • {best practice} limit access to the repo or pipeline by only enabling permissions to users who are part of the content creation process [1]
          • {best practice} set deployment rules to ensure production stage availability [1]
            • {goal} ensure the data in production is always connected and available to users [1]
            • {benefit} allows deployments to run while minimizing downtime
            • applies to data sources and parameters defined in the semantic model [1]
          • deployment into production using Git branches
            • {recommendation} use release branches [1]
              • requires changing the connection of workspace to the new release branches before every deployment [1]
              • if the build or release pipeline requires to change the source code, or run scripts in a build environment before deployment, then connecting the workspace to Git won't help [1]
          • {recommendation} after deploying to each stage, make sure to change all the configuration specific to that stage [1]
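
Two of the practices above can be illustrated with short scripts. The first sketch backs up the current workspace state by committing all changes to the connected branch, based on my reading of the Fabric REST API Git endpoints (Get Status, Commit To Git); the workspace ID, token, and commit message are placeholders, and the payload shape should be verified against the current API reference.

```python
# Hedged sketch: back up the current workspace state by committing all changed
# items to the Git branch the workspace is connected to. Endpoint paths and
# payload fields follow my reading of the Fabric REST API (Git - Get Status,
# Git - Commit To Git); workspace ID, token, and the message are placeholders.
import os

import requests

BASE = "https://api.fabric.microsoft.com/v1"
headers = {"Authorization": f"Bearer {os.environ['FABRIC_TOKEN']}"}  # placeholder token
workspace_id = "<workspace-id>"                                      # placeholder

# Read the Git status to obtain the commit the workspace is currently synced to.
status = requests.get(f"{BASE}/workspaces/{workspace_id}/git/status", headers=headers)
status.raise_for_status()

payload = {
    "mode": "All",                                        # commit every changed item
    "workspaceHead": status.json().get("workspaceHead"),  # current synced commit
    "comment": "Back up sprint work before deployment",   # placeholder message
}
resp = requests.post(f"{BASE}/workspaces/{workspace_id}/git/commitToGit",
                     json=payload, headers=headers)
resp.raise_for_status()
print("Commit accepted:", resp.status_code)
```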

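The second sketch pulls the latest committed changes back into the workspace, i.e. the "update data items in the workspace" operation and the release-branch flow described above, again based on my reading of the Fabric Git endpoints (Get Status, Update From Git); the conflict-resolution policy and all identifiers are assumptions/placeholders.

```python
# Hedged sketch: apply the latest remote commit of the connected branch to the
# workspace items (the "update from Git" operation). The payload shape, including
# the conflict-resolution policy name, is an assumption based on my reading of
# the Fabric REST API and should be checked against the current reference.
import os

import requests

BASE = "https://api.fabric.microsoft.com/v1"
headers = {"Authorization": f"Bearer {os.environ['FABRIC_TOKEN']}"}  # placeholder token
workspace_id = "<workspace-id>"                                      # placeholder

# 1) Read the Git status to obtain the remote commit the workspace should move to.
status = requests.get(f"{BASE}/workspaces/{workspace_id}/git/status", headers=headers)
status.raise_for_status()
remote_commit = status.json()["remoteCommitHash"]

# 2) Update the workspace from Git, preferring the incoming (remote) changes
#    when items conflict. This is a long-running operation in the service.
payload = {
    "remoteCommitHash": remote_commit,
    "conflictResolution": {
        "conflictResolutionType": "Workspace",
        "conflictResolutionPolicy": "PreferRemote",       # assumed policy name
    },
    "options": {"allowOverrideItems": True},
}
resp = requests.post(f"{BASE}/workspaces/{workspace_id}/git/updateFromGit",
                     json=payload, headers=headers)
resp.raise_for_status()
print("Update from Git accepted:", resp.status_code)
```
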
        References:
        [1] Microsoft Learn (2025) Fabric: Best practices for lifecycle management in Fabric [link]
        [2] Microsoft Learn (2025) Fabric: CI/CD for pipelines in Data Factory in Microsoft Fabric [link]
        [3] Microsoft Learn (2025) Fabric: Choose the best Fabric CI/CD workflow option for you [link]

        Acronyms:
        API - Application Programming Interface
        BI - Business Intelligence
        CI/CD - Continuous Integration and Continuous Deployment
        VS - Visual Studio

        29 March 2021

        Notes: Team Data Science Process (TDSP)

        Team Data Science Process (TDSP)
        Acronyms:
        Artificial Intelligence (AI)
        Cross-Industry Standard Process for Data Mining (CRISP-DM)
        Data Mining (DM)
        Knowledge Discovery in Databases (KDD)
        Team Data Science Process (TDSP) 
        Version Control System (VCS)
        Visual Studio Team Services (VSTS)

        Resources:
        [1] Microsoft Azure (2020) What is the Team Data Science Process? [source]
        [2] Microsoft Azure (2020) The business understanding stage of the Team Data Science Process lifecycle [source]
        [3] Microsoft Azure (2020) Data acquisition and understanding stage of the Team Data Science Process [source]
        [4] Microsoft Azure (2020) Modeling stage of the Team Data Science Process lifecycle [source]
        [5] Microsoft Azure (2020) Deployment stage of the Team Data Science Process lifecycle [source]
        [6] Microsoft Azure (2020) Customer acceptance stage of the Team Data Science Process lifecycle [source]

        12 December 2007

        🏗️Software Engineering: Releases (Just the Quotes)

        "Releasing software is too often an art; it should be an engineering discipline." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

        "Releasing software should be easy. It should be easy because you have tested every single part of the release process hundreds of times before. It should be as simple as pressing a button. The repeatability and reliability derive from two principles: automate almost everything, and keep everything you need to build, deploy, test, and release your application in version control." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

        "The deployment pipeline has its foundations in the process of continuous integration and is in essence the principle of continuous integration taken to its logical conclusion. The aim of the deployment pipeline is threefold. First, it makes every part of the process of building, deploying, testing, and releasing software visible to everybody involved, aiding collaboration. Second, it improves feedback so that problems are identified, and so resolved, as early in the process as possible. Finally, it enables teams to deploy and release any version of their software to any environment at will through a fully automated process." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

        "Conflicts between development and operations teams often originate from time pressures. Typically, a new software release must be deployed quickly. Another scenario that requires operations team to react quickly is when the system is down, and restoring it quickly becomes the highest priority. Th is situation often leads to a blame game where each side accuses the other of causing the problem." (Michael Hüttermann et al, "DevOps for Developers", 2013)

        "DevOps is essentially about gaining fast feedback and decreasing the risk of releases through a holistic approach that is meaningful for both development and operations. One major step for achieving this approach is to improve the fl ow of features from their inception to availability. This process can be refined to the point that it becomes important to reduce batch size (the size of one package of changes or the amount of work that is done before the new version is shipped) without changing capacity or demand." (Michael Hüttermann et al, "DevOps for Developers", 2013)

        "Why is continuous deployment such a powerful tool? Fundamentally, it allows engineers to make and deploy small, incremental changes rather than the larger, batched changes typical at other companies. That shift in approach eliminates a significant amount of overhead associated with traditional release processes, making it easier to reason about changes and enabling engineers to iterate much more quickly." (Edmond Lau, "The Effective Engineer: How to Leverage Your Efforts In Software Engineering to Make a Disproportionate and Meaningful Impact", 2015)

        28 November 2007

        🏗️Software Engineering: Software Deployment (Just the Quotes)

        "A system that is comprehensively tested and passes all of its tests all of the time is a testable system. That’s an obvious statement, but an important one. Systems that aren’t testable aren’t verifiable. Arguably, a system that cannot be verified should never be deployed." (Robert C Martin, "Clean Code: A Handbook of Agile Software Craftsmanship", 2008)

        "Releasing software should be easy. It should be easy because you have tested every single part of the release process hundreds of times before. It should be as simple as pressing a button. The repeatability and reliability derive from two principles: automate almost everything, and keep everything you need to build, deploy, test, and release your application in version control." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

        "So, when should you think about automating a process? The simplest answer is, 'When you have to do it a second time.' The third time you do something, it should be done using an automated process. This fine-grained incremental approach rapidly creates a system for automating the repeated parts of your development, build, test, and deployment process." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

        "The deployment pipeline has its foundations in the process of continuous integration and is in essence the principle of continuous integration taken to its logical conclusion. The aim of the deployment pipeline is threefold. First, it makes every part of the process of building, deploying, testing, and releasing software visible to everybody involved, aiding collaboration. Second, it improves feedback so that problems are identified, and so resolved, as early in the process as possible. Finally, it enables teams to deploy and release any version of their software to any environment at will through a fully automated process." (David Farley & Jez Humble, "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation", 2010)

        "In essence, Continuous Integration is about reducing risk by providing faster feedback. First and foremost, it is designed to help identify and fix integration and regression issues faster, resulting in smoother, quicker delivery, and fewer bugs. By providing better visibility for both technical and non-technical team members on the state of the project, Continuous Integration can open and facilitate communication channels between team members and encourage collaborative problem solving and process improvement. And, by automating the deployment process, Continuous Integration helps you get your software into the hands of the testers and the end users faster, more reliably, and with less effort." (John F Smart, "Jenkins: The Definitive Guide", 2011)

        "System engineering is concerned with all aspects of the development and evolution of complex systems where software plays a major role. System engineering is therefore concerned with hardware development, policy and process design and system deployment, as well as software engineering. System engineers are involved in specifying the system, defining its overall architecture, and then integrating the different parts to create the finished system. They are less concerned with the engineering of the system components (hardware, software, etc.)." (Ian Sommerville, "Software Engineering" 9th Ed., 2011)

        "A value stream is a series of activities required to deliver an outcome. The software development value stream may be described as: validate business case, analyze, design, build, test, deploy, learn from usage analytics and other feedback - rinse and repeat." (Sriram Narayan, "Agile IT Organization Design: For Digital Transformation and Continuous Delivery", 2015)

        "Continuous deployment is but one of many powerful tools at your disposal for increasing iteration speed. Other options include investing in time-saving tools, improving your debugging loops, mastering your programming workflows, and, more generally, removing any bottlenecks that you identify." (Edmond Lau, "The Effective Engineer: How to Leverage Your Efforts In Software Engineering to Make a Disproportionate and Meaningful Impact", 2015)

        "Why is continuous deployment such a powerful tool? Fundamentally, it allows engineers to make and deploy small, incremental changes rather than the larger, batched changes typical at other companies. That shift in approach eliminates a significant amount of overhead associated with traditional release processes, making it easier to reason about changes and enabling engineers to iterate much more quickly." (Edmond Lau, "The Effective Engineer: How to Leverage Your Efforts In Software Engineering to Make a Disproportionate and Meaningful Impact", 2015)

        12 April 2007

        🌁Software Engineering: Deployment (Definitions)

        "The process whereby software is installed into an operational environment." (Kim Haase et al, "The J2EE™ Tutorial", 2002)

        "The process of 'putting the product into service'. Delivering a new or updated product to users. This can be as simple as shipping magnetic media or posting files for downloading. It can involve installing the product at each user site, training the users at the sites, and activating a unique or very complex system, usually because the user lacks the skills and knowledge to do these activities. " (Richard D Stutzke, "Estimating Software-Intensive Systems: Projects, Products, and Processes", 2005)

        "The process whereby the results of the data analysis or data mining are provided to the user of the information." (Glenn J Myatt, "Making Sense of Data: A Practical Guide to Exploratory Data Analysis and Data Mining", 2006)

        [service deployment:] "A governed process that manages the registration and configuration of services and release into production. Service changes and versioning are also managed by this process." (Tilak Mitra et al, "SOA Governance", 2008)

        "The act of putting information technology into productive use. Installation puts the system into the production environment. Deployment includes installation, but also includes efforts to train and encourage effective use." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

        [staged deployment:] "Deployment that begins with building the application in a fully functional staging environment, so you can practice deployment until you’ve worked out all the kinks." (Rod Stephens, "Beginning Software Engineering", 2015)

        "The process of delivering a finished application to the users. Also called implementation or installation." (Rod Stephens, "Beginning Software Engineering", 2015)

        "Continuous Deployment is the process that takes validated Features from Continuous Integration and deploys them into the production environment, where they are tested and readied for release. It is the third element in the four-part Continuous Delivery Pipeline of Continuous Exploration (CE), Continuous Integration (CI), Continuous Deployment, and Release on Demand." (Dean Leffingwell, "SAFe 4.5 Reference Guide: Scaled Agile Framework for Lean Enterprises 2nd Ed", 2018)

         "activity responsible for movement of new or changed software, documentation, processes or any other deliverable to the live environment" (ITIL)

        "The process whereby software is installed into an operational environment." (Microfocus)

        [continuous deployment:] "The technical capabilities to continuously deploy infrastructure, software, and process changes in support of digital business applications or services to customers." (Forrester)

        06 March 2007

        🌁Software Engineering: Deployment Diagram (Definitions)

        "Shows the physical nodes on which a system executes. This is more closely associated with physical database design." (Toby J Teorey, ", Database Modeling and Design" 4th Ed., 2010)

        "A visual representation of the configuration of a system deployed in a production environment, including hardware, software, data objects, and all processes that use them, including processes that only exist while executing." (DAMA International, "The DAMA Dictionary of Data Management", 2011)

        "In UML a diagram that shows the execution architecture of systems." (IQBBA, "Standard glossary of terms used in Software Engineering", 2011)

        "In UML, a diagram that describes the deployment of artifacts (files, scripts, executables, and the like) on nodes (hardware devices or execution environments that can execute artifacts)." (Rod Stephens, "Beginning Software Engineering", 2015)


