Puppet Enterprise Changelog

What's new in Puppet Enterprise 2021.2.0 (Jun 24, 2021)

  • Enhancements:
  • Regenerate primary server certificates with updated command:
  • As part of the ongoing effort to remove harmful terminology, the command to regenerate primary server certificates has been renamed puppet infrastructure run regenerate_primary_certificate.
  • Submit updated CRLs:
  • You can now update CRLs using the new API endpoint: certificate_revocation_list. This new endpoint accepts a list of CRL PEMs as the request body and inserts updated copies of the applicable CRLs into the trust chain. If a submitted CRL has a higher CRL number than its existing counterpart, the endpoint overwrites the stored copy. Use this endpoint if your CRLs require frequent updates. Do not use the endpoint to update the CRL associated with the Puppet CA signing certificate (only earlier ones in the certificate chain).
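As an illustration, a batch of updated CRL PEMs might be submitted with curl. Only the endpoint name comes from this note; the port, certificate path, HTTP verb, and content type below are assumptions, so verify them against the CA API documentation for your version. The command is shown as a dry run:

```shell
# Hypothetical sketch of submitting updated CRLs to the CA API.
# CA_HOST, the cacert path, the PUT verb, and the content type are
# assumptions for illustration; the endpoint name is from the note above.
CA_HOST="primary.example.com"
CRL_BUNDLE="updated_crls.pem"   # concatenated CRL PEMs (highest-numbered copies)

CMD="curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
-X PUT -H 'Content-Type: text/plain' --data-binary @${CRL_BUNDLE} \
https://${CA_HOST}:8140/puppet-ca/v1/certificate_revocation_list"

# Printed as a dry run here; on a live CA you would run the curl command directly.
echo "$CMD"
```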
  • Enable the argon2id algorithm for new password storage:
  • You can switch the algorithm PE uses to store passwords from the default SHA-256 to argon2id by configuring new password algorithm parameters. To configure the algorithm, see Configure the password algorithm.
  • Note: Argon2id is not compatible with FIPS-enabled PE installations.
  • Generate cryptographic tokens for password resets
  • RBAC now generates and accepts only cryptographic tokens, rather than JSON web tokens (JWTs), which were lengthy and directly tied to the certificates used by the RBAC instance for validation.
  • Filter by node state in jobs endpoint
  • You can filter the nodes by their current state in the /jobs/:job-id/nodes endpoint when retrieving a list of nodes associated with a given job. The following node states are available to query:
  • new
  • ready
  • running
  • stopping
  • stopped
  • finished
  • failed
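For example, fetching only a job's failed nodes might look like the following sketch. The endpoint path is from this note; the state query parameter name, the port, and the token header are assumptions, so check the orchestrator API documentation:

```shell
# Hypothetical sketch: list only the failed nodes of an orchestrator job.
# The 'state' parameter name, port 8143, and the auth header are assumptions.
PE_HOST="primary.example.com"
JOB_ID="1234"   # placeholder job ID

CMD="curl -H 'X-Authentication: <token>' \
'https://${PE_HOST}:8143/orchestrator/v1/jobs/${JOB_ID}/nodes?state=failed'"

echo "$CMD"   # dry run; on a live deployment, run the curl command directly
```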
  • Export node data from task runs to CSV:
  • In the console, on the Task details page, you can now export the node data results from task runs to a CSV file by clicking Export data.
  • Sort activities by oldest to newest in events endpoint:
  • In the activity service API, the /v1/events and /v2/events endpoints now allow you to sort activity from either oldest to newest (asc) or newest to oldest (desc).
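A sketch of an oldest-first query follows. Only the endpoint paths and the asc/desc values come from this note; the 'order' parameter name, port, and auth header are assumptions:

```shell
# Hypothetical sketch: fetch activity entries oldest-first.
# The 'order' parameter name, port 4433, and the auth header are assumptions;
# the endpoint path and the asc/desc values are from the note above.
PE_HOST="primary.example.com"

CMD="curl -H 'X-Authentication: <token>' \
'https://${PE_HOST}:4433/activity-api/v2/events?order=asc'"

echo "$CMD"   # dry run; on a live deployment, run the curl command directly
```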
  • Disable force-sync mode:
  • File sync now always overrides the contents of the live directory when syncing. This default override corrects any local changes made in the live directory outside of Code Manager's workflow. To implement this enhancement, the ability to disable file sync's force-sync mode has been removed.
  • Differentiate backup and restore logs:
  • Backup and restore log files are now appended with timestamps and aren't overwritten with each backup or restore action. Previously, backup and restore logs were created as singular, statically named files, backup.log and restore.log, which were overwritten on each execution of the scripts.
  • Encrypt backups:
  • You can now encrypt backups created with the puppet-backup create command by specifying an optional --gpgkey.
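A minimal sketch of an encrypted backup invocation; the --gpgkey flag is from this note, while the --dir flag, the key path, and the decryption step are assumptions for illustration:

```shell
# Sketch: create a GPG-encrypted PE backup. --gpgkey is from the note
# above; the --dir flag and key path are assumptions for illustration.
CMD="puppet-backup create --dir=/var/puppet-backups --gpgkey=/root/pe-backup-pub.gpg"
echo "$CMD"   # dry run; on the primary server, run the command directly as root

# Restoring such a backup would presumably require decrypting first, e.g.:
#   gpg --decrypt <backup-file>.gpg > <backup-file>
```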
  • Clean up old PE versions with smarter defaults:
  • When cleaning up old PE versions with puppet infrastructure run remove_old_pe_packages, you no longer need to specify pe_version=current to clean up versions prior to the current one. current is now the default.
  • Platform support:
  • This version adds support for these platforms.
  • Agent:
  • macOS 11
  • Red Hat Enterprise Linux 8 ppc64le
  • Ubuntu 20.04 aarch64
  • Deprecations and removals:
  • Purge-whitelist replaced with purge-allowlist:
  • For Code Manager and file sync, the term purge-whitelist is deprecated and replaced with the new setting name purge-allowlist. The functionality and purpose of both setting names are identical.
  • Pe_java_ks module removed:
  • The pe_java_ks module has been removed from PE packages. If you have any references to the packaged module in your code base, you must remove them to avoid errors in catalog runs.
  • Resolved issues:
  • Windows agent installation failed with a manually transferred certificate:
  • Performing a secure installation on Windows nodes by manually transferring the primary server CA certificate failed with the connection error: Could not establish trust relationship for the SSL/TLS secure channel.
  • Upgrading a replica failed after regenerating the master certificate:
  • If you previously regenerated the certificate for your master, upgrading a replica from 2019.6 or earlier could fail due to permission issues with backed up directories.
  • The apply shim in pxp-agent didn't pick up changes:
  • When upgrading agents, the ruby_apply_shim didn't update properly, which caused plans containing apply or apply_prep actions to fail when run through the orchestrator, and resulted in this error message:
  • Exited 1:\n/opt/puppetlabs/pxp-agent/tasks-cache/apply_ruby_shim/apply_ruby_shim.rb:39:in `<main>': undefined method `map' for nil:NilClass (NoMethodError)\n
  • Running client tool commands against a replica could produce errors:
  • Running puppet-code, puppet-access, or puppet query against a replica produced an error if the replica certificate used the legacy common name field instead of the subject alt name. The error has been downgraded to a warning, which you can bypass with some minimal security risk using the flag --use-cn-verification or -k, for example puppet-access login -k. To permanently fix the issue, you must regenerate the replica certificate: puppet infrastructure run regenerate_replica_certificate target=<REPLICA_HOSTNAME>.
  • Generating a token using puppet-access on Windows resulted in zero-byte token file error:
  • Running puppet-access login to generate a token on Windows resulted in a zero-byte token file error. This is now fixed: the method used to set token file permissions was changed from os.chmod to file.chmod.
  • Invoking puppet-access when it wasn't configured resulted in unhelpful error:
  • If you invoked puppet-access while it was missing a configuration file, it failed and returned unhelpful errors. Now, a useful message displays when puppet-access needs to be configured or if there is an unexpected answer from the server.
  • Enabling manage_delta_rpm caused agent run failures on CentOS and RHEL 8:
  • Enabling the manage_delta_rpm parameter in the pe_patch class caused agent run failures on CentOS and RHEL 8 due to a package name change. The manage_delta_rpm parameter now appropriately installs the drpm package, resolving the agent run issue.
  • Editing a hash in configuration data caused parts of the hash to disappear:
  • If you edited configuration data with hash values in the console, the parts of the hash that were not edited disappeared after committing changes, then reappeared when the hash was edited again.
  • Null characters in task output caused errors:
  • Tasks that print null bytes caused an orchestrator database error that prevented the result from being stored. This issue occurred most frequently for tasks on Windows that print output in UTF-16 rather than UTF-8.
  • Plans still ran after failure:
  • When pe-orchestration-services exited unexpectedly, plan jobs sometimes continued running even though they failed. Now, jobs are correctly transitioned to failed status when pe-orchestration-services starts up again.
  • SAML rejected entity-id URIs
  • SAML only accepted URLs for the entity-id and would fail if a valid URI was specified. SAML now accepts both URLs and URIs for the entity-id.
  • Login length requirements applied to existing remote users:
  • The login length requirement prevented reinstating existing remote users when they were revoked, resulting in a permissions error in the console. The requirement now applies to local users only.
  • Plan apply activity logging contained malformed descriptions:
  • In activity entries for plan apply actions, the description was incorrectly prepended with desc.
  • Errors when enabling and disabling versioned deploys:
  • Previously, if you switched back and forth between enabling and disabling versioned deploys mode, file sync failed to correctly manage deleted control repository branches. This bug is now fixed.
  • Lockless code deployment led to failed removal of old code directories:
  • Previously, turning on lockless code deployment led to full disk utilization because old code directories were not removed. To work around this issue, you must manually delete the existing old directories; going forward, removal is automatic.

New in Puppet Enterprise 2021.1.0 (May 5, 2021)

  • Enhancements:
  • Customize value report estimates:
  • You can now customize the low, med, and high time-freed estimates provided by the PE value report by specifying any of the value_report_* parameters in the PE Console node group, in the puppet_enterprise::profile::console class.
  • Add a custom disclaimer banner to the console
  • You can optionally add a custom disclaimer banner to the console login page. To add a banner, see Create a custom login disclaimer.
  • Configure and view password complexity requirements in the console
  • There are configurable password complexity requirements that local users see when creating a new password. For example, "Passwords must be at least {0} characters long." To configure the password complexity options, see Password complexity parameters.
  • Re-download CRL on a regular interval:
  • You can now configure the new parameter crl_refresh_interval to enable puppet agent to re-download its CRL on a regular interval. Use the console to configure the interval in the PE Agent group, in the puppet_enterprise::profile::agent class, and enter a duration (e.g. 60m) for Value.
  • Remove staging directory status for memory, disk usage, and timeout error improvements
  • The status output of the file sync storage service (specifically at the debug level) no longer reports the staging directory's status. Removing this staging information reduces timeout errors in the logs, eliminates heavy disk usage created by the endpoint, and preserves memory when there are many long-running status checks in Puppet Server.
  • Exclude events from usage endpoint response
  • In the /usage endpoint, the new events parameter allows you to specify whether to include or exclude event activity information from the response. If set to exclude, the endpoint only returns information about node counts.
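As a sketch, excluding event data might look like the following; the host and port are placeholders, since only the /usage endpoint name and the events parameter come from this note:

```shell
# Hypothetical sketch: request usage data without event activity.
# Host and port are placeholders; only '/usage' and the 'events'
# parameter come from the note above.
CMD="curl -H 'X-Authentication: <token>' 'https://<pe-host>:<port>/usage?events=exclude'"
echo "$CMD"   # dry run
```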
  • Avoid spam during patching:
  • The patching task and plan now log fact generation, rather than echoing Uploading facts. This change reduces log spam from servers with a large number of facts.
  • Changes to defaults
  • The environment timeout settings introduced in 2019.8.3 have been updated to simplify defaults. When you enable Code Manager, environment_timeout is now set to 5m, clearing short-lived environments 5 minutes from when they were last used. The environment_timeout_mode parameter has been removed, and the timeout countdown on environments now always begins from their last use.
  • Platform support:
  • This version adds support for these platforms.
  • Agent:
  • Fedora 32
  • Deprecations and removals:
  • Configuration settings deprecated
  • The following configuration setting names are deprecated in favor of new terminology.
  • The previous setting names remain available for backward compatibility, but you should move to the new setting names at your earliest convenience.
  • Resolved issues:
  • Upgrade failed with cryptic errors if agent_version was configured for your infrastructure pe_repo class
  • If you configured the agent_version parameter for the pe_repo class that matches your infrastructure nodes, upgrade could fail with a timeout error when the installer attempted to download a non-default agent version. The installer now warns you to remove the agent_version parameter if applicable.
  • Upgrade with versioned deploys caused Puppet Server crash
  • If versioned_deploys was enabled when upgrading to version 2019.8.6 or 2021.1, then the Puppet Server crashed.
  • Compiler upgrade failed with client certnames defined:
  • Existing settings for client certnames could cause upgrade to fail on compilers, typically with the error Value does not match schema: {:client-certnames disallowed-key}.
  • Compiler upgrade failed with no-op configured:
  • Upgrade failed on compilers running in no-op mode. Upgrade now proceeds on infrastructure nodes regardless of their no-op configuration.
  • Installing Windows agents with the .msi package failed with a non-default INSTALLDIR
  • When installing Windows agents with the .msi package, if you specified a non-default installation directory, agent files were nonetheless installed at the default location, and the installation command failed when attempting to locate files in the specified INSTALLDIR.
  • Patching failed on Windows nodes with non-default agent location
  • On Windows nodes, if the Puppet agent was installed to a location other than the default C: drive, the patching task or plan failed with the error No such file or directory.
  • Patching failed on Windows nodes when run during a fact generation
  • The patching task and plan failed on Windows nodes if run during fact generation. Patching and fact generation processes, which share a lock file, now wait for each other to finish before proceeding.
  • File sync failed to copy symlinks if versioned deploys was enabled
  • If you enabled versioned deploys, then the file sync client failed to copy symlinks and incorrectly copied the symlinks' targets instead. This copy failure crashed the Puppet Server.
  • Backup failed with an error about the stockpile directory:
  • The puppet-backup create command failed under certain conditions with an error that the /opt/puppetlabs/server/data/puppetdb/stockpile directory was inaccessible. That directory is now excluded from backup.
  • Console reboot task failed:
  • Rebooting a node using the reboot task in the console failed due to the removal of win32 gems in Puppet 7. The reboot module packaged with PE has been updated to version 4.0.2, which resolves this issue.
  • Removed Pantomime dependency in the orchestrator:
  • The version of pantomime in the orchestrator had a third party vulnerability (tika-core). Because of the vulnerability, pantomime usage was removed from the orchestrator, but pantomime still existed in the orchestration-services build. The dependency has now been completely removed.
  • Injection attack vulnerability in csv exports
  • There was a vulnerability in the console where .csv files could contain malicious user input when exported. The values =, +, -, and @ are now prohibited at the beginning of cells to prevent an injection attack.
  • License page in the console timed out
  • Some large queries run by the License page caused the page to load slowly or time out.

New in Puppet Enterprise 2019.8.5 (Mar 24, 2021)

  • Enhancements:
  • Clean up old PE packages:
  • A new command, puppet infrastructure run remove_old_pe_packages pe_version=current, cleans up old PE packages remaining at /opt/puppet/packages/public. For pe_version, you can specify a SHA, a version number, or current. All packages older than the specified version are removed.
  • Get better insight into replica sync status after upgrade
  • Improved error handling for replica upgrades now results in a warning instead of an error if re-syncing PuppetDB between the primary and replica nodes takes longer than 15 minutes.
  • Fix replica enablement issues
  • When provisioning and enabling a replica (puppet infra provision replica --enable), the command now times out if there are issues syncing PuppetDB, and provides instructions for fixing any issues and separately provisioning the replica.
  • Patch nodes with built-in health checks
  • The new group_patching plan patches nodes with pre- and post-patching health checks. The plan verifies that Puppet is configured and running correctly on target nodes, patches the nodes, waits for any reboots, and then runs Puppet on the nodes to verify that they're still operational.
  • Run a command after patching nodes
  • A new parameter in the pe_patch class, post_patching_scriptpath, enables you to run an executable script or binary on a target node after patching is complete. Additionally, the pre_patching_command parameter has been renamed pre_patching_scriptpath to more clearly indicate that you must provide the file path to a script, rather than an actual command.
  • Patch nodes despite certain read-only directory permissions
  • Name versioned directories after control repository SHAs
  • The file sync client uses SHAs corresponding to the branches of the control repository to name versioned directories. You must deploy an environment to update the directory names.
  • Configure failed deployments to display r10k stacktrace in error output
  • Configure the new r10k_trace parameter to include the r10k stack trace in the error output of failed deployments. The parameter defaults to false. Use the console to configure the parameter in the PE Master group, in the puppet_enterprise::master::code_manager class, and enter true for Value.
  • Reduce query time when querying nodes with a fact filter
  • When you run a query in the console that populates information on the Status page to PuppetDB, the query uses the optimize_drop_unused_joins feature in PuppetDB to increase performance when filtering on facts. You can disable drop-joins by setting the environment variable PE_CONSOLE_DISABLE_DROP_JOINS=yes in /etc/sysconfig/pe-console-services and restarting the console service.
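Applying the workaround described above might look like the following sketch (run on the console node). The sysconfig path, variable name, and service name come from this note; restarting with systemctl is one common option among several:

```shell
# Sketch of the drop-joins workaround, shown as a dry run. The sysconfig
# path, variable, and service name are from the note above; restarting
# with systemctl is an assumption (any service restart method works).
CMD1="echo PE_CONSOLE_DISABLE_DROP_JOINS=yes >> /etc/sysconfig/pe-console-services"
CMD2="systemctl restart pe-console-services"
printf '%s\n%s\n' "$CMD1" "$CMD2"   # run these directly on the console node
```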
  • Resolved issues
  • PuppetDB restarted continually after upgrade with deprecated parameters
  • After upgrade, if the deprecated parameters facts_blacklist or cert_whitelist_path remained, PuppetDB restarted after each Puppet run.
  • Tasks failed when specifying both as the input method
  • In task metadata, using both for the input method caused the task run to fail.
  • Patch task misreported success when it timed out on Windows nodes
  • If the pe_patch::patch_server task took longer than the timeout setting to apply patches on a Windows node, the debug output noted the timeout, but the task erroneously reported that it completed successfully. Now, the task fails with an error noting that the task timed out. Any updates in progress continue until they finish, but remaining patches aren't installed.
  • Orchestrator created an extra JRuby pool
  • During startup, the orchestrator created two JRuby pools: one for scheduled jobs and one for everything else. This is because the JRuby pool was not yet available in the configuration passed to the post-migration-fa function, which created its own JRuby pool in response. These JRuby pools accumulated over time because the stop function didn't know about them.
  • Console install script installed non-FIPS agents on FIPS Windows nodes
  • The command provided in the console to install Windows nodes installed a non-FIPS agent regardless of the node's FIPS status.
  • Unfinished sync reported as finished when clients shared the same identifier
  • Because the orchestrator and puppetserver file-sync clients shared the same identifier, Code Manager reported an unfinished sync as "all-synced": true. Whichever client finished polling first, notified the storage service that the sync was complete, regardless of the other client's sync status. This reported sync might have caused attempts to access tasks and plans before the newly-deployed code was available.
  • Refused connection in orchestrator startup caused PuppetDB migration failure
  • A condition on startup failed to delete stale scheduled jobs and prevented the orchestrator service from starting.
  • Upgrade failed with Hiera data based on certificate extensions
  • If your Hiera hierarchy contained levels based on certificate extensions, like trusted.extensions.pp_role, upgrade could fail if that Hiera entry was vital to running services, such as setting java_args. The failure was due to the puppet infrastructure recover_configuration command, which runs during upgrade, failing to recognize the hierarchy level.
  • File sync issued an alert when a repository had no commits
  • When a repository had no commits, the file-sync status recognized this repository’s state as invalid and issued an alert. A repository without any commits is still a valid state, and the service is fully functional even when there are no commits.
  • Upgrade failed with infrastructure nodes classified based on trusted facts
  • If your infrastructure nodes were classified into an environment based on a trusted fact, the recover configuration command used during upgrade could choose an incorrect environment when gathering data about infrastructure nodes, causing upgrade to fail.
  • Patch task failed on Windows nodes with old logs
  • When patching Windows nodes, if an existing patching log file was 30 or more days old, the task failed trying to both write to and clean up the log file.
  • Backups failed if a Puppet run was in progress
  • The puppet-backup command failed if a Puppet run was in progress.
  • Default branch override did not deploy from the module's default branch
  • A default branch override did not deploy from the module’s default branch if the branch override specified by Impact Analysis did not exist.
  • Module-only environment updates did not deploy in Versioned Deploys
  • Module-only environment updates did not deploy if you tracked a module's branch and redeployed the same control repository SHA, which pulled in new versions of the modules.

New in Puppet Enterprise 2019.8.3 (Nov 18, 2020)

  • NEW FEATURES:
  • Value report:
  • A new Value report page in the Admin section of the console estimates the amount of time reclaimed by using PE automation. The report is configurable based on your environment. See Access the value report for more information.
  • ENHANCEMENTS:
  • Spend less time waiting on puppet infrastructure commands:
  • The puppet infrastructure commands that use plans, for example for upgrading, provisioning compilers, and regenerating certificates, are now noticeably faster due to improvements in how target nodes are verified.
  • Provision a replica without manually pinning the target node:
  • You're no longer required to manually pin the target replica node to the PE Infrastructure Agent group before running puppet infrastructure provision replica. This action — which ensures that the correct catalog and PXP settings are applied to the replica node in load balanced installations — is now handled automatically by the command.
  • Configure environment caching:
  • Using new environment timeout settings, you can improve Puppet Server performance by caching long-lived environments and purging short-lived environments. For example, in the PE Master node group, in the puppet_enterprise::master class, set environment_timeout_mode = from_last_used and environment_timeout = 30m to clear short-lived environments 30 minutes from when they were last used. By default, when you enable Code Manager, environment_timeout is set to unlimited, which caches all environments.
  • Configure the number of threads used to download modules:
  • A new configuration parameter, download_pool_size, lets you specify the number of threads r10k uses to download modules. The default is 4, which improves deploy performance in most environments.
  • Configure PE-PostgreSQL autovacuum cost limit:
  • The cost limit value used in PE-PostgreSQL autovacuum operations is now set at a more reasonable default that scales with the number of CPUs and autovacuum workers available. The setting is also now configurable using the puppet_enterprise::profile::database::autovacuum_vacuum_cost_limit parameter.
  • Previously, the setting was not configurable, and it used the PostgreSQL default, which could result in database tables and indexes growing continuously.
  • Rerun Puppet or tasks on failed nodes only:
  • You can choose to rerun Puppet or a task only on the nodes that failed during the initial run by selecting Failed nodes from the Run again drop-down.
  • Run plans only when required parameters are supplied:
  • In the console, the option to commit a plan appears only after you supply all required parameters. This prevents plan failures caused by accidentally running plans without required parameters.
  • Schedule plans in the console and API:
  • You can use the console to schedule one-time or recurring plan runs and view currently scheduled plans. Additionally, you can schedule one-time plan runs with the schedule_plan command, using the POST /command/schedule_plan endpoint.
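A scheduling request might look like the following sketch. The endpoint path comes from this note; the JSON field names, plan name, port, and auth header are assumptions for illustration:

```shell
# Hypothetical sketch: schedule a one-time plan run via the orchestrator.
# The endpoint is from the note above; the JSON field names, port 8143,
# and the auth header are assumptions for illustration.
PE_HOST="primary.example.com"
BODY='{"plan_name":"mymodule::myplan","params":{},"scheduled_time":"2021-07-01T02:00:00Z"}'

CMD="curl -X POST -H 'X-Authentication: <token>' -H 'Content-Type: application/json' \
-d '${BODY}' 'https://${PE_HOST}:8143/orchestrator/v1/command/schedule_plan'"

echo "$CMD"   # dry run; on a live deployment, run the curl command directly
```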
  • Use sensitive parameters in plans:
  • You can use the Sensitive type for parameters in plans. Parameters marked as Sensitive aren't stored in the orchestrator's database, aren't returned via API calls, and don't appear in the orchestrator's log. Sensitive parameters are also not visible from the console.
  • View more details about plans:
  • The environment a plan was run from is now displayed in the console on the Job details page. The environment is returned from the /plan_jobs endpoint using the new environment key. See GET /plan_jobs for more information.
  • Additionally, the parameters supplied to tasks that are run as part of a plan are displayed in the console on the Plan details page. Sensitive parameters are masked and are never stored for a task run.
  • Differentiate between software and driver patch types for Windows:
  • PE now ignores Driver update types in Windows Updates by default and only includes the Software type, cutting down on unnecessary patch updates. To change this default, configure the new windows_update_criteria parameter in the pe_patch class by removing or changing the Type argument. See Patch management parameters for more information about the parameter.
  • Serve patching module files more efficiently:
  • Certain pe_patch module files are now delivered to target nodes using methods that improve scalability and result in fewer file metadata checks during Puppet runs.
  • Receive a notification when CA certs are about to expire:
  • The console now notifies you if a CA certificate is expiring soon. The Certificates page in the sidebar displays a yellow ! badge if a certificate expires in less than 60 days, and a red ! badge if a certificate expires in less than 30 days. If there are certificates that need signing in addition to the certificates expiring, the number of certificates that need to be signed is displayed in the badge but the color stays the same.
  • View details about a particular code deployment:
  • The Code Manager /deploys/status endpoint now includes the deployment ID in the "deploys-status" section for incomplete deploys so you can correlate status output to a particular deployment request.
  • Additionally, you can query the Code Manager /deploys/status endpoint with a deployment ID to see details about a particular deployment. The response contains information about both the Code Manager deploy and the sync to compilers for the resulting commit.
  • Troubleshoot code deployments
  • File sync now uses the public git SHA recorded in the signature field of the .r10k-deploy.json file instead of an internal SHA that file sync created. Additionally, versioned directories used for lockless deploys now use an underscore instead of a hyphen so that paths are valid environment names. With these changes, you can now map versioned directories directly to SHAs in your control repository.
  • When upgrading to 2019.3 or later with versioned deploys enabled, versioned directories are recreated with underscores. You can safely remove orphaned directories with hyphens located at /opt/puppetlabs/server/data/puppetserver/filesync/client/versioned-dirs.
  • Report on user activities:
  • A new GET /v2/events API tracks more user activities, including date, time, node ID, user ID, and action. You can use the console to generate a report of activities on the User details page.
  • Deprecations and removals
  • Master removed from docs:
  • Documentation for this release replaces the term master with primary server. This change is part of a company-wide effort to remove harmful terminology from our products.
  • For the immediate future, you’ll continue to encounter master within the product, for example in parameters, commands, and preconfigured node groups. Where documentation references these codified product elements, we’ve left the term as-is.
  • As a result of this update, if you’ve bookmarked or linked to specific sections of a docs page that include master in the URL, you’ll need to update your link.
  • Whitelist and blacklist deprecated
  • In the interest of removing racially insensitive terminology, the terms whitelist and blacklist are deprecated in favor of allowlist and blocklist. Classes, parameters, and file names that use these terms continue to work, but we recommend updating your classification, Hiera data, and pe.conf files as soon as possible in preparation for their removal in a future release.
  • These are the classes, parameters, task parameters, and file names that are affected.
  • Classes:
  • puppet_enterprise::pg::cert_whitelist_entry
  • puppet_enterprise::certs::puppetdb_whitelist
  • puppet_enterprise::certs::whitelist_entry
  • Parameters:
  • puppet_enterprise::master::code_manager::purge_whitelist
  • puppet_enterprise::master::file_sync::whitelisted_certnames
  • puppet_enterprise::orchestrator::ruby_service::whitelist
  • puppet_enterprise::profile::ace_server::whitelist
  • puppet_enterprise::profile::bolt_server::whitelist
  • puppet_enterprise::profile::certificate_authority::client_whitelist
  • puppet_enterprise::profile::console::cache::cache_whitelist
  • puppet_enterprise::profile::console::whitelisted_certnames
  • puppet_enterprise::profile::puppetdb::sync_whitelist
  • puppet_enterprise::profile::puppetdb::whitelisted_certnames
  • puppet_enterprise::puppetdb::cert_whitelist_path
  • puppet_enterprise::puppetdb::database_ini::facts_blacklist
  • puppet_enterprise::puppetdb::database_ini::facts_blacklist_type
  • puppet_enterprise::puppetdb::jetty_ini::cert_whitelist_path
  • Files:
  • /etc/puppetlabs/console-services/rbac-certificate-whitelist
  • Split-to-mono migration removed:
  • The puppet infrastructure run migrate_split_to_mono command has been removed. The command migrated a split installation to a standard installation with the console and PuppetDB on the primary server. Upgrades to PE 2019.2 and later required migrating as a prerequisite to upgrade, so this command is no longer used. If you're upgrading from an earlier version of PE with a split installation, see Migrate from a split to a standard installation in the documentation for your current version.
  • Resolved issues:
  • Upgrade and puppet infrastructure commands failed if your primary server was not in the production environment
  • Upgrades and puppet infrastructure commands — including replica upgrade and compiler provisioning, conversion, and upgrade — failed with a Bolt::RunFailure if your primary server was not in the production environment.
  • This release fixes both issues, and upgrades to this version are unaffected.
  • For upgrades to previous versions, we recommended following these steps to ensure upgrades and puppet infrastructure commands worked as expected:
  • Verify that you've specified your non-production infrastructure environment for these parameters:
  • pe_install::install::classification::pe_node_group_environment
  • puppet_enterprise::master::recover_configuration::pe_environment
  • Run puppet infra recover_configuration --pe-environment <PRIMARY_ENVIRONMENT>
  • When upgrading, run the installer with the --pe_environment flag: sudo ./puppet-enterprise-installer -- --pe_environment <PRIMARY_ENVIRONMENT>
  • Upgrade failed if a PostgreSQL repack was in progress:
  • If a PostgreSQL repack operation was in progress when you attempted to upgrade PE, the upgrade could fail with the error cannot drop extension pg_repack because other objects depend on it.
  • Upgrade failed with an unenabled replica:
  • PE upgrade failed if you had a provisioned, but not enabled, replica.
  • Compiler provisioning failed if a single compiler was unresponsive
  • The puppet infrastructure provision compiler command failed if any compiler in your pool failed a pre-provisioning health check.
  • puppet infrastructure commands failed with an external node classifier
  • With an external node classifier, puppet infrastructure commands, such as puppet infrastructure compiler upgrade and puppet infrastructure provision compiler, failed.
  • Automated Puppet runs could fail after running compiler or certificate regeneration commands
  • After provisioning compilers, converting compilers, or regenerating certificates with puppet infrastructure commands, automated Puppet runs could fail because the Puppet service hadn't restarted.
  • puppet infrastructure recover_configuration misreported success if specified environment didn't exist
  • If you specified an invalid environment when running puppet infrastructure recover_configuration, the system erroneously reported that the environment's configuration was saved.
  • Runs, plans, and tasks failed after promoting a replica
  • After promoting a replica, infrastructure nodes couldn't connect to the newly promoted primary server because the master_uris value still pointed to the old primary server.
  • This release fixes the issue for newly provisioned replicas. However, if you have an enabled replica, verify in both the PE Agent and PE Infrastructure Agent node groups that the puppet_enterprise::profile::agent class setting for master_uris matches the setting for server_list. Both values must include your primary server and replica, for example ["PRIMARY.EXAMPLE.COM", "REPLICA.EXAMPLE.COM"]. Setting these values ensures that agents can continue to communicate with the promoted replica in the event of a failover.
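By way of illustration (hostnames are placeholders), the two parameters should mirror each other. Expressed as Hiera data for the class named above, the shape is roughly:

```yaml
# Illustrative only: set these in the console on both the PE Agent and
# PE Infrastructure Agent node groups. Both lists must name the primary
# server AND the replica so agents can fail over.
puppet_enterprise::profile::agent::server_list:
  - PRIMARY.EXAMPLE.COM
  - REPLICA.EXAMPLE.COM
puppet_enterprise::profile::agent::master_uris:
  - PRIMARY.EXAMPLE.COM
  - REPLICA.EXAMPLE.COM
```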
  • Replica commands could leave the Puppet service disabled
  • The reinitialize replica command as well as the provision replica command, which includes reinitializing, left the Puppet service disabled on the replica.
  • Skipping agent configuration when enabling a replica deleted settings for the PE Agent group
  • If you used the --skip-agent-config flag with puppet infra enable replica or puppet infra provision replica --enable, any custom settings that you specified for server_list and pcp_broker_list in the PE Agent node group were deleted.
  • Provisioning a replica failed after regenerating the primary server certificate
  • If you previously regenerated the certificate for your primary server, provisioning a replica could fail due to permission issues with backed-up directories.
  • Console was inaccessible with PE set to IPv6
  • If you specified IPv6, PE Java services still listened to the IPv4 localhost. This mismatch could prevent access to the console as Nginx proxied traffic to the wrong localhost.
  • Apply blocks failed to compile
  • Puppet Server might have failed to compile apply blocks for plans when there were more than eight variables, or when variables had names that conflicted with key names for hashes or target settings in plans.
  • YAML plans displayed all parameters as optional in the console
  • YAML plans listed all parameters as having default values, regardless of whether a default value was set in the code. This caused all parameters to display defaults in orchestrator APIs and show as optional in the console. YAML plans no longer display all parameters as optional.
  • Running puppet query produced a cryptic error
  • Running puppet query with insufficient permissions produced an error similar to this:
  • ERROR - &{<nil> } (*models.Error) is not supported by the TextConsumer, can be resolved by supporting TextUnmarshaler interface
  • Primary server reported HTTP error after Qualys scan
  • When running a Qualys scan, the primary server no longer reports the error "HTTP Security Header Not Detected. Issue at Port 443".
  • Nodes CSV export failed with PQL query
  • The CSV export functionality no longer produces an error when you specify nodes using a PQL query.
  • The wait_until_available function didn’t work with multiple transports
  • When a target included in the TargetSpec argument to the wait_until_available plan function used the ACE (remote) transport, the function failed immediately and wouldn't wait for any of the targets in the TargetSpec argument.
  • Unnecessary logs and failed connections in bolt-server and ace-server
  • When requests were made with unsupported ciphers, bolt-server and ace-server would log stack traces. Stack traces might lead to unusual growth in the logs for those services when, for example, they are scanned by security scanning products. The Puma Server library in those services has been updated to prevent emitting the stack traces into the bolt-server.log and ace-server.log.
  • Patch task could misreport success for Windows nodes
  • When patching Windows nodes, running the pe_patch::patch_server task always reported success, even if there were problems installing one or more updates. With this fix, the task now fails with an error message about which updates couldn't be installed successfully.
  • The pe_patch fact didn't consider classifier environment group
  • When pe_patch scheduled scripts that uploaded facts, the facts didn't consider the current environment the node was compiling catalogs in. If the local agent environment didn’t match the environment specified by the server, the facts endpoint included tags for an unexpected environment.
  • Reenabling lockless code deploys could fail
  • Reenabling lockless code deploys could fail due to the persistence of the versioned code directory. With this release, any existing versioned code directory is deleted and recreated when you reenable lockless code deploys.
  • File-sync client repo grew after frequent commits
  • The file-sync client repo no longer grows rapidly when there are frequent commits to it. For example, when syncing the CA dir for DR and many new certificates are signed or revoked in quick succession.

New in Puppet Enterprise 2019.3.0 (Jan 30, 2020)

  • Enhancements:
  • Puppet ensures platform repositories aren't installed in order to prevent accidental agent upgrade
  • Previously, Bolt users who installed the Puppet 5 or 6 platform repositories could experience unsupported agent upgrades on managed nodes. With this release, Puppet ensures that the release packages for those platforms are not installed on managed nodes by enforcing ensure => 'absent' for the packages.
  • Windows install script optionally downloads a tarball of plug-ins
  • For Windows agents, the agent install script optionally downloads a tarball of plug-ins from the master before the agent runs for the first time. Depending on how many modules you have installed, bulk plug-in sync can speed agent installation significantly.
  • Note: If your master runs in a different environment from your agent nodes, you might see some reduced benefit from bulk plug-in sync. The plug-in tarball is created based on the plug-ins running on the master agent, which might not match the plug-ins required for agents in a different environment.
  • This feature is controlled by the setting pe_repo::enable_windows_bulk_pluginsync which you can configure in Hiera or in the console. The default setting for bulk plug-in sync is false (disabled).
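For example, enabling it via Hiera would look roughly like this (a sketch; you can set the same parameter in the console instead):

```yaml
# Enable bulk plug-in sync for Windows agent installs (default: false)
pe_repo::enable_windows_bulk_pluginsync: true
```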
  • puppet infrastructure run commands no longer require an authentication token
  • puppet infrastructure run commands that affect PuppetDB, including migrate_split_to_mono, convert_legacy_compiler, and enable_ha_failover, no longer require setting up token-based authentication as a prerequisite for running the command. By default, these commands use the master's PuppetDB certificate for authentication.
  • puppet infrastructure run commands provide more useful output
  • puppet infrastructure run commands, such as those for regenerating certificates or enabling high availability failover, provide more readable output, making them easier to troubleshoot.
  • Calculations for PostgreSQL settings are fine-tuned
  • The shared_buffers setting uses less RAM by default due to improvements in calculating PostgreSQL settings. Previously, PostgreSQL settings were based on the total RAM allocated to the node it was installed on. Settings are now calculated based on total RAM less the default RAM used by PE services. As a result, on an 8GB installation for example, the default shared_buffers setting is reduced from ~2GB to ~1GB.
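The change in calculation can be sketched as follows. The 25% fraction and the ~4 GB reserved for PE services are illustrative assumptions chosen to reproduce the ~2 GB to ~1 GB example, not the exact figures PE uses:

```python
def shared_buffers_mb(total_ram_mb: int, pe_services_ram_mb: int,
                      fraction: float = 0.25) -> int:
    """Illustrative sketch: base shared_buffers on the RAM left over
    after PE services' default allocation. The fraction and the
    reserved amount are assumptions, not PE's exact tuning values."""
    return int((total_ram_mb - pe_services_ram_mb) * fraction)

# Previous behavior: a fraction of total RAM on an 8 GB node
old_setting = int(8192 * 0.25)               # ~2 GB
# New behavior: the same fraction of what remains after PE services
new_setting = shared_buffers_mb(8192, 4096)  # ~1 GB
```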
  • PostgreSQL can optionally be cleaned up after upgrading
  • After upgrading, you can optionally remove packages and directories associated with older PostgreSQL versions with the command puppet infrastructure run remove_old_postgresql_versions. If applicable, the installer prompts you to complete this cleanup.
  • nix command for regenerating agent certificates includes a parameter for CRL cleanup
  • The puppet infra run regenerate_agent_certificate command includes a clean_crl parameter. Setting clean_crl to true cleans up the local CRL bundle. When you regenerate certificates for *nix agents after recreating your certificate authority, you must include this parameter with the value set to true. If you're regenerating agent certificates without recreating the CA, you don't need to clean up the CRL.
  • Puppet Server multithreaded setting
  • There is a new Puppet Server setting for enabling a multithreaded mode, which uses a single JRuby instance to process requests, like catalog compiles, concurrently. Activate the setting by changing the puppet_enterprise::master::puppetserver::jruby_puppet_multithreaded parameter to true.
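As Hiera data, activating the mode would look roughly like this (a sketch; the parameter can also be set in the console):

```yaml
# Run Puppet Server with a single JRuby processing requests concurrently
puppet_enterprise::master::puppetserver::jruby_puppet_multithreaded: true
```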
  • puppetlabs-pe_bootstrap task supports Puppet agent on CentOS 8
  • The puppetlabs-pe_bootstrap task that ships in PE has been updated to support Puppet agent installation on CentOS 8.
  • New task targets API
  • Use the new task targets API to fine-tune task permissions automatically. See POST /command/task_target and Puppet orchestrator API: scopes endpoint.
  • Console enhancements:
  • These are enhancements to the console in this release:
  • Plan metadata
  • View plan metadata and parameters. To view them in the console, type the name of a plan in the Plan field and click View plan metadata. To view metadata on the command line, run puppet plan show <PLAN NAME>.
  • Test connections option
  • Test connections for nodes and devices before adding them to your inventory. This option is enabled by default on the Inventory page. If a connection fails, you can edit the node or device information and try again.
  • Custom PQL queries
  • Add your own custom PQL queries to the console and use them for running Puppet and tasks. See Add custom PQL queries to the console for more information.
  • Breadcrumbs:
  • Pages in the console now have breadcrumbs, showing you where you are in the interface. The breadcrumbs are links you can use to move to parent pages.
  • Transport details:
  • View the transport mechanism, SSH or WinRM for example, for task runs in the Connections and Activity tabs on the Nodes page.
  • Run drop-down menu
  • The Run Puppet on these nodes button has been replaced with a Run drop-down menu so you can run Puppet or run a task on the nodes listed on the current page. The new option is available on the Overview, Events, and Packages pages.
  • Task and plan environment default
  • When you run a task or a plan in the console, you no longer need to specify an environment if you're running in production; the console now defaults to production.
  • Additional run options
  • In addition to no-op, you can now specify debug, trace, and eval-trace run options when running Puppet.
  • Platform support:
  • This version adds support for these platforms.
  • Agent:
  • Fedora 31
  • Deprecations and removals:
  • Deprecated platform support
  • Support for these platforms is deprecated in this release and will be removed in a future version of PE:
  • Master
  • Enterprise Linux 6
  • Ubuntu 16.04
  • Razor deprecated:
  • Razor, the provisioning application that deploys bare-metal systems, is deprecated in this release and will be removed in a future version. If you want to continue using Razor, you can use the open source version of the tool, available from GitHub.
  • Node graph removed:
  • The node graph in the console has been removed due to infrequent use. The graph was used to view relationships between resources and classes within a node catalog. To generate a node graph now, use the Puppet VS Code extension.
  • Resolved issues:
  • puppet infrastructure run commands could fail if the agent was run with cron
  • puppet infrastructure run commands, such as those used for certain installation, upgrade, and certificate management tasks, could fail if the Puppet agent was run with cron. The failure occurred if the command conflicted with a Puppet run.
  • Mismatch between classifier classification and matching nodes for regexp rules
  • PuppetDB’s regular expression matching had surprising behaviors for structured fact value comparisons. For example, a rule such as ["~", "os", ":"] unintentionally matched every node that has the os structured fact, because the regular expression was applied to the JSON-encoded version of the fact value.
  • The classifier does not use PuppetDB for determining classification, and regular expressions in the classifier rules syntax support only direct value comparisons for string types.
  • This caused issues in the console where the node list and counts for the "matching nodes" display sometimes indicated that nodes were matching even though the classifier would not consider them matching.
  • Now, the same criteria that the classifier uses are applied to the matching-node displays and counts, and the classifier’s rule translation endpoints produce queries that match the classifier’s behavior.
  • Note: This fix doesn't change the way nodes are classified, only how the console displays matching nodes.
  • Code manager could not deploy Forge modules with a proxy
  • The commands puppet code deploy and r10k failed when behind a proxy. The commands didn't use the configured proxy settings and using them would result in problems downloading modules from the Puppet Forge. This was due to an issue in a dependency gem.
  • Now, the commands work behind a proxy.
  • Orchestrator error message included Bolt command suggestions
  • When a plan or task was not found, the resulting error message gave a suggestion to run bolt {plan,task} show, which is unhelpful in PE. The error message no longer shows the Bolt command suggestion.
  • bolt.yaml plans did not work in PE
  • Plans with bolt.yaml in the root directory of the environment will no longer fail. Don't use the modulepath setting in bolt.yaml, because it may lead to unintended consequences when loading tasks and plans.
  • Ed25519 SSH keys couldn't be used to run task on agentless node
  • Running a task on an agentless node using an Ed25519 SSH key resulted in an error.

New in Puppet Enterprise 2019.2.1 (Dec 12, 2019)

  • Resolved issues:
  • Upgrade failed if PuppetDB was offline
  • When upgrading to this version, installation failed during validation with an error about connecting to PuppetDB.
  • Upgrade could fail in environments with external PE-PostgreSQL
  • When you ran the installer in an environment with external PE-PostgreSQL, if the orchestrator database username was not pe-orchestrator, upgrade on the master failed.
  • Upgrade failed if IPv6 was disabled
  • If IPv6 was disabled on your system, upgrade failed with a Protocol family unavailable error. With this release, upgrade uses IPv4 by default. If you prefer to use IPv6, you can modify "puppet_enterprise::ip_version": 6 in your pe.conf file before you upgrade.
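For example, opting back in to IPv6 before upgrading would mean carrying a line like this in pe.conf (a sketch of the setting named above):

```
# pe.conf (HOCON): use IPv6 instead of the new IPv4 default
"puppet_enterprise::ip_version": 6
```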

New in Puppet Enterprise 2019.1.0 (May 7, 2019)

  • New features:
  • Continuous Delivery for PE console installation
  • You can now install Continuous Delivery for PE directly from the console using a new Integrations page. Installation leverages a Bolt task requiring a limited set of parameters, so you no longer have to install a separate module or dependencies. For details about installing Continuous Delivery for PE, see the Continuous Delivery for PE documentation.
  • puppet infrastructure run command
  • A new puppet infrastructure run command leverages built-in Bolt plans to perform certain PE management tasks, such as regenerating certificates and migrating from a split to a monolithic installation. To use the command, you must be able to connect using SSH from your master to any nodes that the command modifies. You can establish an SSH connection using key forwarding, a local key file, or by specifying keys in .ssh/config on your master. For information about available plans, run puppet infrastructure run --help.
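One way to satisfy the SSH requirement is an entry in .ssh/config on the master. The hostname, user, and key path below are placeholders, not values the command mandates:

```
# ~/.ssh/config on the master (illustrative)
Host compiler01.example.com
    User root
    IdentityFile ~/.ssh/id_rsa_pe
```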
  • Enable a new HA replica using a failed master
  • After promoting a replica, you can use your old master as a new replica, effectively swapping the roles of your failed master and promoted replica.
  • Add nodes without agents to Puppet Enterprise
  • Using the new Inventory option on the console, you can add nodes to your Puppet Enterprise deployment without installing the Puppet agent. When you add nodes and their credentials to the inventory, the information is securely stored and made available in the console through remote connections (SSH or WinRM). Authorized users can then run tasks on these nodes without re-entering credentials. For more information, see Adding and removing nodes that do not have agents.
  • Enhancements:
  • Remote tasks
  • You can now run tasks on a proxy target that remotely interacts with the real target, as defined by the run-on option. Remote tasks are useful for targets like network devices that have limited shell environments, or cloud services driven only by HTTP APIs. Connection information for non-server targets, like HTTP endpoints, can be stored in inventory.
  • Simplified Code Manager control repo configuration
  • Setting up control repositories for Code Manager no longer requires manually creating an SSH directory and configuring permissions on the key pair and directory. These steps have been automated with the puppet infrastructure configure command.
  • Improved handling of server settings
  • Server settings specified in puppet.conf are now used in this order:
  • server_list
  • server
  • In high availability installations which rely on the server_list setting, this order prevents unexpected behavior if you promote a replica.
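In puppet.conf terms, when both settings are present, server_list is consulted first. A sketch with placeholder hostnames:

```ini
; puppet.conf [agent] section: server_list takes precedence over server
[agent]
server_list = primary.example.com:8140,replica.example.com:8140
server = primary.example.com
```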
  • Improved RBAC API log messages
  • The RBAC service log entries for revoked users attempting to log in now include the username and UUID.
  • Infrastructure terminology changes
  • With this version, we've unified infrastructure terminology across all installation types. We now call compile masters compilers to reflect their role: compiling catalogs. Similarly, we call the master a master, whether or not your installation includes compilers. In high availability installations, the node that replicates data from the master is simply a replica or master replica.
  • R.I.P. MoM, master of masters, and primary master replica.
  • Platform support:
  • This version adds support for these platforms.
  • Enterprise Linux 8
  • Note: Enterprise Linux 8 support for agents was added in previous Z releases.
  • Deprecations and removals
  • Split and large environment installations
  • The split and large environment installations, where the master, console, and PuppetDB were installed on separate nodes, are no longer recommended. Because compilers do most of the intensive computing, installing the console and PuppetDB on separate nodes doesn't substantially improve load capability, and adds unnecessary complexity.
  • For new installations, we now recommend only monolithic configurations, where the infrastructure components are installed on the master. You can add one or more compilers and a load balancer to this configuration to expand capacity up to 20,000 nodes, and for even larger installations, you can install standalone PE-PostgreSQL on a separate node. For details about current installation configurations, see Choosing an architecture. For instructions on migrating from a split installation to a monolithic installation, see Migrate from a split to a monolithic installation.
  • pe_accounts module
  • The pe_accounts module has been removed. The module was used internally to manage PE users but was superseded several versions ago by the puppetlabs/accounts module. If you're using the pe_accounts module for account management, migrate to the puppetlabs/accounts module as soon as possible. As a short-term workaround, you can copy the pe_accounts module from an existing PE installation or from the pe-modules package in the PE installer tarball and place the module in your own modulepath.
  • TLSv1 and v1.1
  • TLSv1 and TLSv1.1 are now disabled by default to comply with security regulations. You must enable TLSv1 to install agents on these platforms:
  • AIX
  • CentOS 5
  • RHEL 5
  • SLES 11
  • Solaris 10
  • Windows Server 2008r2
  • puppet_enterprise parameters
  • These deprecated parameters have been removed in this version:
  • mcollective_middleware_hosts
  • mcollective
  • mcollective_middleware_port
  • mcollective_middleware_user
  • mcollective_middleware_password
  • activity_database_user
  • classifier_database_user
  • orchestrator_database_user
  • rbac_database_user
  • dashboard_port
  • dashboard_database_name
  • dashboard_database_user
  • dashboard_database_password
  • Resolved issues
  • Puppet file permissions on Windows were modified with every run
  • Changes to how Puppet handled system permissions caused permissions for Windows file resources to be rewritten with each run.
  • Puppet now treats owner and group on file resources as in-sync in either of these scenarios:
  • owner and group are not set in the resource
  • owner and/or group are set to the system user on the running node and the system user is set to full control
  • You can specifically configure the system user to less than full control by setting the owner and/or group parameters to SYSTEM in the file resource. In this case, Puppet emits a warning, because setting SYSTEM to less than full control might have unintended consequences, but it does not modify the permissions.
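A sketch of that second scenario in Puppet code (the path is a placeholder; the mode shown is simply any grant below full control):

```puppet
# Declaring SYSTEM with less than full control triggers a warning,
# but Puppet applies the declared permissions as written (illustrative).
file { 'C:/app/config.ini':
  ensure => file,
  owner  => 'SYSTEM',
  group  => 'SYSTEM',
  mode   => '0660',
}
```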

New in Puppet Enterprise 2018.1.0 (May 3, 2018)

  • Announcing enhancements to Puppet Tasks:
  • Last October we introduced Puppet Tasks, which includes Puppet Bolt, our standalone open source task runner and the Task Management capability in Puppet Enterprise. Puppet Tasks gives everyone, whether they have a Puppet agent installed or not, the ability to deploy point-in-time changes on demand. It lets you do things like run a script written in any language across your infrastructure, stop or start a service, upgrade a package, and automate changes that need to happen in a particular order as part of an application deployment. Our goal is to help teams manage more of their infrastructure, including nodes that don’t have an agent running, cloud resources, and network devices.
  • Per node RBAC for Tasks:
  • Now in Puppet Enterprise 2018.1, you can use role-based access control (RBAC) to specify who can run which tasks against specific groups of nodes. Because node groups are dynamic when new nodes are provisioned, they’re automatically added to the appropriate groups. Your RBAC policies ensure that specific user roles can execute tasks against the appropriate node groups. For example, as new machines are provisioned, an app developer will have the right permissions to run tasks on their development nodes without accessing nodes in production. This enables Puppet administrators to give more teams self-service access to the infrastructure they’re responsible for.
  • Extending Puppet Enterprise with Puppet Bolt:
  • When we introduced Puppet Tasks last fall, we also introduced Puppet Bolt, our open source task runner. Since then, we’ve made some major improvements and have been adding feature releases to Bolt on a weekly basis. Bolt gives you an agentless way to run simple commands or orchestrated workflows across your entire infrastructure. It uses SSH and WinRM to make connections to nodes, and tasks can be run as sudo or any other user.
  • Bolt now integrates with the Puppet Orchestrator, which is built on our ultra-scalable PCP (Puppet Communications Protocol) transport designed for customers that manage hundreds of thousands of nodes. This allows them to deploy changes instantly and see results in the Puppet Enterprise console faster than an SSH handshake. Let’s say you need to upgrade a package on nodes managed by Puppet Enterprise and nodes where you don’t have an agent installed. You can run the same task across your entire infrastructure using SSH, WinRM and PCP. The combination of agentless transports and our enterprise-grade PCP transport gives you the flexibility to scale automation across all types of infrastructure, from traditional VMs to cloud resources, network devices and more.
  • Check out Puppet Bolt Task Plans:
  • In Puppet Bolt, you can run task plans, which are simply a set of tasks run in a specific sequence as part of an orchestrated deployment. For example, you can automate changes like a database migration or a rolling deployment that requires logic in between steps. If you’re using another tool to do this type of procedural automation, you can save yourself an extra step and do it all with Puppet.
  • Unlike other tools, you can do complex error handling for more advanced use cases. For example, if a step in your plan fails, you can determine whether to retry the task if it was caused by a timeout error, or stop the plan if it was due to an authentication error. Task plans are ideal for when you need to run multiple tasks or commands procedurally, compute values as an input to a task, or make decisions based on the results of specific steps in the plan.
  • With the new Puppet Orchestrator integration, you can now run task plans across hundreds of thousands of nodes and see the results in the Puppet Enterprise console. Bolt task plans will show up as jobs in the console alongside the rest of your Puppet runs and tasks, and all actions will be tracked by the activity service, giving you the auditability you expect from Puppet Enterprise.
  • Inventory file for Puppet Bolt:
  • To help manage hosts in your environment with or without the Puppet agent, we’ve added an inventory file to Bolt that stores information about your nodes. For example, you can organize your nodes into groups or set up connection information for nodes or groups of nodes. It’s a great way to store information about your hosts that will be available at run time.
  • The inventory file is a yaml file stored by default at ~/.puppetlabs/bolt/inventory.yaml.
  • If you’re using PuppetDB to store information about a portion of your infrastructure, you can use the bolt-inventory-pdb script to generate inventory files based on PuppetDB queries.
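A minimal inventory.yaml sketch in the format of this era (group name, hostnames, and user are invented for illustration):

```yaml
# ~/.puppetlabs/bolt/inventory.yaml (illustrative)
groups:
  - name: webservers
    nodes:
      - web1.example.com
      - web2.example.com
    config:
      transport: ssh
      ssh:
        user: deploy
```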
  • Improved support for disaster recovery planning:
  • Puppet Enterprise has long been a core part of disaster recovery planning, making it straightforward to reproduce business critical infrastructure in the event of catastrophe.
  • With Puppet Enterprise 2018.1, it’s even easier to incorporate Puppet Enterprise into those plans with built-in tools to backup and restore your Puppet deployment. Combined with its high-availability features, Puppet Enterprise is always ready if there’s a problem anywhere in your data center or cloud infrastructure.
  • Take Puppet further with PDK support:
  • We believe that all Puppet users are and can be Puppet code developers. Even if they’re mostly adopting existing modules, they write and iterate on Puppet code when composing the high-level building blocks that define the state of their infrastructure.
  • Puppet Development Kit (PDK) was created to give users prescriptive tools and best practices for testing their Puppet code, and it’s now fully supported. It offers a collection of tools in a powerful all-in-one package that helps users develop, test, convert and update modules right from a Windows, Mac or Linux workstation with a simple unified interface; catch issues before Puppet code is applied to live infrastructure; and get going faster with a complete batteries-included Puppet development environment.
  • Accessibility, performance and usability improvements:
  • Last, but not least, we’ve added some major improvements to the Puppet Enterprise console to make it accessible to more people, including those who use a screen reader, work exclusively with a keyboard, or see color differently. We want everyone who uses Puppet Enterprise to have the same great experience, and we’re planning more accessibility improvements in the future.
  • Additionally, we’ve added inline documentation to the console for instant help when you need it. Console workflows have been optimized for faster load times, better performance across large numbers of resources, and more users logged in at once. Since many of our customers use Puppet Enterprise across hundreds of thousands of nodes, scalability is always our top priority.

New in Puppet Enterprise 2017.3.0 (Oct 11, 2017)

  • Puppet Enterprise Task Management:
  • Puppet Enterprise Task Management complements Puppet’s model-driven approach to automation by adding the ability to simply run ad hoc tasks across your infrastructure and applications. With Puppet Task Management, it’s easy to troubleshoot individual systems, deploy one-off changes, and execute sequenced actions as part of an application deployment workflow. Combined with the model-driven approach of Puppet runs, Puppet Enterprise provides a unified solution and single pane of glass for managing the entire lifecycle of your infrastructure, so you can improve collaboration, standardize processes and eliminate reliance on a patchwork set of runbook style automation tools.
  • Puppet 5 Platform:
  • The components of the Puppet platform which Puppet Enterprise is built upon — Puppet agent, Puppet Server and PuppetDB — have now moved to a more coordinated release model, with compatibility guarantees and consistent versioning among them. The initial release, Puppet Platform 5.0, brought these components' major versions together so your Puppet code that operates on Puppet 4 won’t need to be changed to work on the Puppet 5 Platform. The major benefit of the Puppet 5 Platform is that you will be able to download, implement and upgrade your Puppet platform more easily without requiring additional testing and troubleshooting.
  • Package Inspector:
  • Package Inspector, introduced in Puppet Enterprise 2017.2, gives you complete visibility into all the packages installed on nodes, including those that aren’t managed by Puppet. In Puppet Enterprise 2017.3, Package Inspector makes it simple to discover packages and browse them by version, environment, operating system, and other facts. You can quickly identify whether or not Puppet manages the package, and if it does, where in the codebase the package resource is declared. Best of all, Puppet Tasks™ are built right into the console workflow so you can quickly and confidently remediate packages with known vulnerabilities. This seamless workflow from discovery to management via the UI guarantees that you are not more than a click away from managing a newly discovered package.
  • Configuration data:
  • Configuration data is an extension of Puppet Enterprise Console classification that enables class parameter data to be specified as a Hiera backend, improving code reusability and the flexibility of Hiera 5 in the Puppet Platform.
  • Japanese language support:
  • Puppet Enterprise now includes additional Japanese language support. Enhancements include translations of Task Management, module READMEs for Apache, Azure, and PostgreSQL, error and informational messages displayed when using the MySQL module, the text-based installer used to install Puppet Enterprise, the Learning VM, and the Code Manager user guide.

New in Puppet Enterprise 2017.2 (May 12, 2017)

  • New features in PE 2017.2:
  • Orchestrator in the console:
  • You can now set up orchestrator jobs in the console. You can create static node lists, or use Puppet Query Language (PQL) queries to select the nodes on which to run Puppet.
  • With orchestrator integrated into the console, you can set up jobs with ease, and use the console’s reporting and infrastructure monitoring tools to review jobs and dig deeper into node run results.
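As a sketch of the PQL-based targeting, a query like the following selects every node whose last catalog came from the production environment. The query is illustrative, not taken from these notes; `catalog_environment` is a standard field on the PuppetDB `nodes` entity:

```
nodes[certname] { catalog_environment = "production" }
```

You would paste a query of this shape into the PQL field when creating the job's node list.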
  • Packages inventory in the console:
  • View an inventory of all packages installed on your nodes, and learn which nodes are using each package version. Use this data when determining which nodes are impacted by packages eligible for maintenance updates, security patches, and license renewals. Package inventory reporting is available for all nodes with a Puppet agent installed, including systems that are not actively managed by Puppet.
  • PE is now available in Japanese:
  • As part of our ongoing commitment to PE users in Japan, PE 2017.2 features Version 1 of Puppet Enterprise (Japanese).
  • Version 1 of Puppet Enterprise (Japanese) includes a Japanese GUI, and localization of the following services and resources into Japanese.
  • Messages and responses in the Classifier, Role-Based Access Control (RBAC), Activity, and Orchestrator services. These messages are displayed when you query the API endpoints for each service.
  • Instructions and messages in the PE web-based installer.
  • Messages displayed when using the Orchestrator command line tool.
  • The Puppet Forge home page.
  • The Beginner’s Guide to Modules.
  • The Puppet Language Style Guide.
  • Module READMEs and descriptions for seven Puppet-supported modules (NTP, SQL Server, Stdlib, AWS, Tomcat, MySQL, and Tagmail). View the Japanese READMEs on the Puppet Forge by setting your browser language preference to Japanese. You can also find them in your modules directory. The default location is ./readmes/README_ja_JP.md.
  • User documentation for Roles and profiles, PE overview pages, and PE release notes.
  • Puppet Enterprise (Japanese) is included in the same tarball as the English version of PE. To view the PE installer, console, and the Puppet Forge in Japanese, set your web browser language preference to Japanese. To view API messages and command line tool messages in Japanese, set your system locale preference to Japanese. If you already have your browser and system preferences set to Japanese, the Japanese strings are displayed automatically.
  • We have also improved UTF-8 character encoding support in PE and in the Puppet components and services that are used with PE.
  • Enhancements in PE 2017.2:
  • Console and console services enhancements:
  • The console now redirects to HTTPS when you attempt to connect over HTTP. The pe-nginx service now listens on port 80 by default.
  • Previously, the node classifier service stored a check-in for each node when its classification was requested. The check-in included an explanation of how the node matched the rule of every group it was classified into. This functionality created performance issues when managing a large deployment of nodes. The check-in storage is still available, but it’s now disabled (false) by default.
  • You can now configure how much time must pass after a node sends its last report before the node is considered unresponsive. Set an integer to specify the value in seconds. The default is 3600 seconds (one hour).
  • The ping_interval setting controls how frequently PXP agents ping PCP brokers. If an agent doesn’t receive a response, it attempts to reconnect. The default is 120 seconds (two minutes).
  • We’ve redesigned the console’s navigation pane, and reduced its width by half.
  • Quickly access the run report associated with a particular event by using the View run report link that now appears on the Events detail page.
  • The fact value filters on the Overview and Reports pages now display warning messages if you attempt to use an invalid regular expression, invalid string operator, or empty fact name.
  • Puppet orchestrator enhancements:
  • The Puppet orchestrator communicates with PCP brokers on compile masters, sending job-related messages to the brokers, which relay them to PXP agents. As you add compile masters, you can scale the number of PCP brokers that send orchestration messages to agents. See Configure compile masters for orchestration scale for instructions.
  • In High Availability installations, you can now configure PXP agents to communicate with compile masters, instead of just the master or replica, using the new pcp_broker_list parameter.
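These notes name only the pcp_broker_list parameter; the class prefix, hostnames, and port below are assumptions shown for illustration. A broker list might look like this in pe.conf or Hiera:

```
# Hypothetical pe.conf entry — the class path, hosts, and port are assumed
"puppet_enterprise::profile::agent::pcp_broker_list": [
  "wss://compile-master-1.example.com:8142/pcp",
  "wss://compile-master-2.example.com:8142/pcp"
]
```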
  • Use the PXP agent log file to debug issues with the Puppet orchestrator. You can change its location from the default as needed.
  • The pe-puppetserver service now defaults to an open file limit of 12000 to support orchestrator scale with PCP brokers.
  • Code Manager enhancements:
  • We’ve added a new flag, --dry-run, to the puppet-code command. When you run puppet-code with this flag, it tests connections to your control repos and returns a consolidated list of all environments in the control repos.
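A minimal usage sketch follows; the deploy subcommand shown is an assumption (the notes say only that the flag belongs to the puppet-code command), and the output is omitted because it depends on your control repos:

```
$ puppet-code deploy --dry-run
```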
  • Analytics enhancements:
  • Puppet Enterprise collects data about your PE installation and sends it to Puppet so we can improve our product. In addition to previously collected analytics, we now also collect basic information about:
  • Puppet Server performance
  • Cloud platform and hypervisor use
  • All-in-one puppet-agent package version
  • Use of MCollective and non-default user roles
  • JVM memory usage
  • Certificate autosign setting
  • Security enhancements:
  • For those with security compliance needs, PE now supports disabling TLSv1. Services in PE support TLS versions 1, 1.1, and 1.2.
  • The MCollective package agent plug-in helps you install packages from any source (including a URL) and does not require that packages be signed. This gives the peadmin user the ability to execute arbitrary code on any MCollective server.
  • A default action policy is now in place in PE that disallows the package install, uninstall, and purge actions. You can modify the policy, or add action policies, by using the puppet_enterprise::profile::agent::allowed_actions parameter to specify the agent plug-ins you want to apply an action policy to, along with the actions you want to explicitly allow for each.
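As a hedged sketch, the parameter might be set in pe.conf or Hiera as a hash mapping an agent plug-in to the actions to allow. Only the parameter name comes from these notes; the plug-in and action names below are illustrative:

```
# Hypothetical example — the "package" key and its action list are assumed
"puppet_enterprise::profile::agent::allowed_actions": {
  "package": ["status", "count"]
}
```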
  • MCollective client keys are now labeled sensitive and will not be stored in PuppetDB.
  • Razor enhancements:
  • This version adds support for the Razor client on Windows 2016 Servers and adds new Puppet-supported tasks for SLES 11 and 12 and Windows 2016.
  • Other enhancements:
  • Previously, compile masters downloaded agent packages from puppet.com to make them available for agent installs, meaning they had to reach the internet to retrieve those packages. Compile masters now retrieve agent packages directly from the master of masters.
  • Java garbage collection logs can help you diagnose performance issues with JVM-based PE services. Garbage collection logs are now enabled by default in PE, and the results are captured in the support script, but you can disable them if you need to.
  • To help with troubleshooting, you can customize the MCollective client logging level either in the console or in pe.conf by setting puppet_enterprise::profile::mcollective::peadmin::mco_loglevel to debug, warning, or error instead of the default info.
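For example, to raise the level to debug via pe.conf (the parameter name and accepted values are given above; the file placement follows the standard pe.conf convention):

```
# pe.conf — set the MCollective client log level to debug
"puppet_enterprise::profile::mcollective::peadmin::mco_loglevel": "debug"
```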
  • Deprecations and removals in PE 2017.2:
  • We’ve removed the previously unsupported option to disable file sync while Code Manager remains enabled.
  • This release deprecates the file_sync_repo_id and file_sync_auto_commit Code Manager parameters. PE ignores these parameters and raises a warning if you have set them.
  • Platforms reaching end of support:
  • RHEL 4, Fedora 23, and Ubuntu 12.04 have reached end-of-life (EOL).
  • Refer to the system requirements for a list of platforms that will soon be EOL.
  • Refer to the Puppet Enterprise support life cycle for a list of support dates for our latest versions.

New in Puppet Enterprise 3.8.0 (May 2, 2015)

  • Accelerate deployment and reduce downtime with new automated provisioning and code management capabilities:
  • IT is under increased pressure to deploy systems and applications faster while building highly available and reliable infrastructure. The latest release of Puppet Enterprise helps IT teams accomplish these objectives, with new capabilities to help:
  • Automate provisioning of containers, cloud and physical servers
  • Manage your infrastructure as code across dynamic environments
  • Get to the latest version of Puppet Enterprise faster
  • Automate provisioning across containers, cloud, and bare metal; gain new ways to manage infrastructure as code across environments; and much more.
  • Cut Provisioning Time to Minutes:
  • Automate provisioning of systems across your entire infrastructure. The next-generation Puppet Node Manager gets Docker containers, AWS Cloud instances and bare metal servers up and running faster while providing a seamless handoff to the configuration management capabilities of Puppet Enterprise.
  • Containers: Launch and manage Docker containers more quickly, and simplify application deployments by eliminating the complexity of troubleshooting configuration issues.
  • Cloud Environments: Launch new Amazon Web Services resources in a consistent and repeatable way, with support for EC2, Virtual Private Cloud, Elastic Load Balancing, Auto Scaling, Security Groups and Route53.
  • Bare Metal: Automatically discover bare-metal hardware, dynamically configure operating systems and hypervisors, and hand off resources to the configuration management capabilities of Puppet Enterprise. Razor, the Puppet Labs bare metal provisioning tool, moves from tech preview to a fully supported set of capabilities with this release.
  • Manage Puppet Code Across Dynamic Environments:
  • Quickly review, test and promote infrastructure as code within your dynamic environments, from development to testing and production. The new Puppet Code Manager leverages the popular r10k technology to accelerate stable deployment of infrastructure changes in a testable and programmatic way.
  • Take Advantage of Puppet Enterprise Innovation:
  • A new set of upgrade tools for node classification make it easier to leverage the latest and greatest capabilities available in Puppet Enterprise. Whether you are upgrading from previous versions or simply want to take advantage of additional innovation coming from Puppet Labs later this year, Puppet Enterprise 3.8 will get you there faster.

New in Puppet Enterprise 3.7.0 (Nov 12, 2014)

  • Puppet Apps:
  • Puppet Apps are purpose-built applications that focus on solving IT automation challenges in new, innovative ways. While Puppet Apps work closely with the Puppet platform and other Puppet Apps, they will be released independently, allowing for frequent updates, and affording you the ability to adopt new functionality as needed. Puppet Apps will be available to Puppet Enterprise customers based on their existing license.
  • Puppet Node Manager:
  • Puppet Node Manager is the first of these new Puppet Apps. Node Manager makes it much simpler to manage a large number of frequently-changing systems. This new approach allows you to manage infrastructure based on its job, rather than its name, providing you with the ability to adopt a modern, cattle-not-pets approach to managing dynamic infrastructure.
  • Instead of managing machines merely by host name, or through manual classification, a rules-based classifier groups nodes based on key characteristics, like operating system, geographic location, and IP address. It’s a lot like using a Smart Playlist to manage a large music library. Combined with Puppet’s aspect-oriented configuration management, nodes can be classified with little to no manual effort.
  • Puppet Node Manager will ship in Q4 as part of Puppet Enterprise. Additional Puppet Apps will be made available in early 2015.
  • Role-Based Access Control:
  • Granular RBAC capabilities make their debut in the newest release of Puppet Enterprise. RBAC makes it possible for Puppet Enterprise nodes to be segmented so that tasks can be safely delegated to the right people. Puppet Enterprise RBAC provides granular delegation of management capabilities across teams and individuals. For instance, RBAC allows segmenting of infrastructure across application teams so that they can manage their own servers without affecting other applications.
  • Plus, to ease the administration of users and authentication, it integrates directly with standard directory services including Microsoft Active Directory and OpenLDAP. The new RBAC service is leveraged first by Puppet Node Manager and will be available to other Puppet Apps and services in subsequent releases.
  • Puppet Server Reporting:
  • A new Profiler and Metrics Service tracks key metrics associated with the health and performance of a Puppet Server. The service collects a wide variety of metrics, including active requests, request duration, execution times, and compilation load. The metrics are made available for monitoring and alerting in any third-party app, such as those that support JMX, and the popular Graphite server. To help you get started, pre-packaged Graphite reports covering performance and system health are available.
  • Updates to the Puppet language:
  • Puppet’s simple programming language is the most widely used means of describing and managing infrastructure. The new release includes a preview of major enhancements to the Puppet language that make it easier to write and maintain Puppet code. Using Puppet’s Future Parser, you can preview the language enhancements now in your test environments, in advance of the enhancements becoming standard language in early 2015.
  • In addition to many changes to enhance the usability, completeness, and consistency of the language, enhancements include:
  • Iteration - makes it possible to do common data transformation and reduce manual repetition of code without resorting to writing logic in Ruby.
  • Data Types - A new data-type system makes it much easier to write high-quality Puppet code, as Puppet does the parameter checking.
  • Puppet Templates - Templates can be written using Puppet code rather than Ruby, which makes template writing easier and safer, as it runs in a container managed by Puppet.
  • Error Handling - Puppet provides detailed information about errors and can more accurately point to where the problem was detected.
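A short sketch in Future Parser syntax ties the iteration and data-type items above together; the resource and values are illustrative, not taken from these notes:

```puppet
# Hypothetical example using the Future Parser (Puppet 3.7 preview):
# iterate over an array, with a String type check enforced per element
$packages = ['httpd', 'ntp']
$packages.each |String $pkg| {
  package { $pkg:
    ensure => installed,
  }
}
```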
  • Puppet Modules:
  • Puppet modules provide the building blocks to manage infrastructure so you don’t have to write your own custom modules to manage common infrastructure. With over 2,800 modules available on the Puppet Forge, we make it easy for anyone to find the highest quality, supported, and recommended modules.
  • New Puppet Supported Modules:
  • The most recent Puppet Supported modules focus on heterogeneity, specifically on adding breadth around network device support and additional support for Microsoft technologies. Our newest round of modules covers devices from partners like F5, with both SOAP and REST modules, and our latest Microsoft-related modules include the Windows ACL, Windows PowerShell, and MS SQL modules.
  • New Puppet Approved Modules:
  • With this release we introduced Puppet Approved modules so you can find the modules we recommend. In order for a module to be Puppet Approved, it must meet our quality and usability requirements. This will make it easier for customers to choose modules for a specific automation task and quickly deploy them to production.