MySQL Changelog

What's new in MySQL 8.1.0 Innovation

Jul 19, 2023
  • Account Management Notes:
  • A new password-validation system variable now permits the configuration and enforcement of a minimum number of characters that users must change when attempting to replace their own MySQL account passwords. This new verification setting is a percentage of the total characters in the current password. For example, if validate_password.changed_characters_percentage has a value of 50, at least half of the characters in the replacement account password must not be present in the current password, or the password is rejected.
  • This new capability is one of several that give DBAs more complete control over password management. For more information, see Password Management. (WL #15751)
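  • A minimal sketch of how the new setting might be used (password values are hypothetical), assuming the validate_password component is installed:

```sql
-- Require that at least 50% of the characters in a new password
-- differ from those in the current one.
SET GLOBAL validate_password.changed_characters_percentage = 50;

-- A user changing their own password: with the setting above, at least
-- half of the characters in the new password must not appear in the
-- current password, or the statement is rejected.
ALTER USER USER() IDENTIFIED BY 'efgh5678' REPLACE 'abcd1234';
```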
  • Audit Log Notes:
  • In MySQL 8.0.33, the audit_log plugin added support for choosing which database to use to store JSON filter tables. It is now possible to specify an alternative to the default system database, mysql, when running the plugin installation script. Use the audit_log_database server system variable (or -D database_name on the command line) together with the alternative database name, for example:
  • $> mysql -u root -D database_name -p < audit_log_filter_linux_install.sql
  • For additional information about using audit_log plugin installation scripts, see Installing or Uninstalling MySQL Enterprise Audit. (Bug #35252268)
  • The new Audit_log_direct_writes status variable counts direct writes into the audit file.
  • MySQL Enterprise Audit allocates a temporary buffer to hold the data forming a single event written into the log file, which meant the audit plugin buffered every query before writing it to the audit log. While this works for short queries, the server cannot always allocate the extra memory needed to hold a long query. The audit_log plugin is now optimized to avoid the temporary buffer when JSON-format logging is used. (WL #15403)
  • MySQL Enterprise Audit now supports using the scheduler component to configure and execute a recurring task to flush the in-memory cache. For setup instructions, see Enabling the Audit Log Flush Task. (WL #15567)
  • Binary Logging:
  • Several functions have been added to the libmysqlclient.so shared library that enable developers to access a MySQL server binary log: mysql_binlog_open(), mysql_binlog_fetch(), and mysql_binlog_close().
  • Our thanks to Yura Sorokin for the contribution. (Bug #110658, Bug #35282154)
  • C API Notes:
  • Added the new mysql_reset_connection_nonblocking() C API function. It is the counterpart of the mysql_reset_connection() synchronous function, for use by applications that require asynchronous communication with the server. Our thanks to Meta for the contribution. (Bug #32202058, WL #15633)
  • The new mysql_get_connect_nonblocking_stage() C API function permits applications to monitor the progress of asynchronous connections for the purpose of taking appropriate actions based on the progress. Our thanks to Meta for the contribution. (Bug #32202053, WL #15651)
  • In the calling function, len was initialized to 0 and never changed if net->vio was null, which could lead to a null-pointer dereference. This fix adds a check of net before dereferencing its vio member.
  • Our thanks to Meta for the contribution. (Bug #30809590)
  • A variable in the async client was uninitialized in certain code paths. It is fixed by always initializing the variable.
  • Our thanks to Meta for the contribution. (Bug #30809590)
  • Compilation Notes:
  • Microsoft Windows: For Windows, improved MSVC_CPPCHECK support and added checks for MSVC warnings similar to those performed in "maintainer" mode; for example, checks are now run after all third-party configuration is complete. (Bug #35283401)
  • References: See also: Bug #34828882.
  • Microsoft Windows: For Windows builds, improved WIN_DEBUG_NO_INLINE=1 support; previously, using it caused builds to exceed the library limit of 65,535 objects. (Bug #35259704)
  • Upgraded the bundled robin-hood-hashing from v3.8.1 to v3.11.5. (Bug #35448980)
  • Removed the unused extra/libcbor/doc/ directory as extra/libcbor/doc/source/requirements.txt inspired bogus pull requests on GitHub. (Bug #35433370)
  • Updated the bundled ICU files from version 69.1 to version 73 for the icu-data-files package. (Bug #35353708)
  • ZSTD sources bundled in the source tree were upgraded to ZSTD 1.5.5 from 1.5.0. (Bug #35353698)
  • For SUSE-based systems, changed the default GCC version from version 9 to version 12, which is the default compiler on these platforms. (Bug #35341000)
  • MySQL did not compile correctly with GCC 12. (Bug #35327995)
  • Initialize the internal MEM_ROOT class memory with garbage using the TRASH macro to make it easier to reproduce bugs caused by reading uninitialized memory allocated from MEM_ROOT. (Bug #35277644)
  • Fixed ODR violations due to multiple different instances of YYSTYPE and other symbols generated by Bison. This includes Bison implementation changes, such as replacing the --name-prefix argument on the Bison command line with api.prefix definitions. (Bug #35232738)
  • We now determine stack direction at runtime rather than at configure time. (Bug #35181008)
  • Added the OPTIMIZE_SANITIZER_BUILDS CMake option that adds -O1 -fno-inline to sanitizer builds. It defaults to ON. (Bug #35158758)
  • Changed the minimum Bison version requirement from v2.1 to v3.0.4. For macOS, this may require installing Bison via a package manager such as Homebrew. (Bug #35154645, Bug #35191333)
  • On Windows, the default for the MSVC_CPPCHECK CMake option has changed from OFF to ON. (Bug #35067705)
  • MySQL now sets LANG=C in the environment when executing readelf to avoid problems with non-ASCII output.
  • Our thanks to Kento Takeuchi for the contribution. (Bug #111190, Bug #35442825)
  • On macOS, MySQL would not compile if rapidjson was installed via Homebrew. The workaround was to brew unlink rapidjson. (Bug #110736, Bug #35311140)
  • References: This issue is a regression of: Bug #35006191.
  • MySQL would not build with -DWITH_ZLIB=system; the build complained that the system zlib library could not be found even though it was present. (Bug #110727, Bug #110745, Bug #35307674, Bug #35312227)
  • Component Notes:
  • MySQL Enterprise Edition now supports collecting server trace data in the OpenTelemetry format using the component_telemetry component. This data is then forwarded to a configurable endpoint where it can be used by any OpenTelemetry-compatible system.
  • Deprecation and Removal Notes:
  • Important Change: Since MySQL provides other means of performing database dumps and backups with the same or additional functionality, including mysqldump and MySQL Shell Utilities, the mysqlpump client utility program has become redundant and is now deprecated. Invoking this program now produces a warning. Keep in mind that mysqlpump is subject to removal in a future version of MySQL, and move applications that depend on it to another solution, such as those mentioned previously. (WL #15652)
  • Replication: The sync_relay_log_info server system variable is deprecated in this release, and getting or setting this variable or its equivalent startup option --sync-relay-log-info now raises a warning.
  • Expect this variable to be removed in a future version of MySQL; applications which make use of it should be rewritten not to depend on it before this happens. (Bug #35367005, WL #13968)
  • Replication: The binlog_format server system variable is now deprecated, and subject to removal in a future version of MySQL. The functionality associated with this variable, that of changing the binary logging format, is also deprecated.
  • The implication of this change is that, when binlog_format is removed, only row-based binary logging, already the default in MySQL 8.0, will be supported by the MySQL server. For this reason, new installations should use only row-based binary logging, and existing ones using the statement-based or mixed logging format should be migrated to the row-based format. See Replication Formats, for more information.
  • The system variables log_bin_trust_function_creators and log_statements_unsafe_for_binlog, being useful only in the context of statement-based logging, are now also deprecated, and are thus also subject to removal in a future release of MySQL.
  • Setting or selecting the values of any of the variables just mentioned now raises a warning. (WL #13966, WL #15669)
  • Group Replication: The group_replication_recovery_complete_at server system variable is now deprecated, and setting it produces a warning. You should expect its removal in a future release of MySQL. (WL #15460)
  • The mysql_native_password authentication plugin now is deprecated and subject to removal in a future version of MySQL. CREATE USER, ALTER USER, and SET PASSWORD operations now insert a deprecation warning into the server error log if an account attempts to authenticate using mysql_native_password as an authentication method. (Bug #35336317)
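  • For example (hypothetical account), statements such as the following now insert a deprecation warning into the server error log:

```sql
-- mysql_native_password is deprecated; this logs a warning.
CREATE USER 'app'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
```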
  • Previously, if the audit_log plugin was installed without the accompanying audit tables and functions needed for rule-based filtering, the plugin operated in legacy filtering mode. Now, legacy filtering mode is deprecated. New deprecation warnings are emitted for legacy audit log filtering system variables. These deprecated variables are either read-only or dynamic.
  • (Read-only) audit_log_policy now writes a warning message to the MySQL server error log during server startup when the value is not ALL (default value).
  • (Dynamic) audit_log_include_accounts, audit_log_exclude_accounts, audit_log_statement_policy, and audit_log_connection_policy. Dynamic variables print a warning message based on usage:
  • Passing in a non-NULL value to audit_log_include_accounts or audit_log_exclude_accounts during MySQL server startup now writes a warning message to the server error log.
  • Passing in a non-default value to audit_log_statement_policy or audit_log_connection_policy during MySQL server startup now writes a warning message to the server error log. ALL is the default value for both variables.
  • Changing an existing value using SET syntax during a MySQL client session now writes a warning message to the client log.
  • Persisting a variable using SET PERSIST syntax during a MySQL client session now writes a warning message to the client log.
  • (WL #11248)
  • The use of the dollar sign ($) as the initial character of an unquoted identifier was deprecated in MySQL 8.0.32. In this release, the use of an unquoted identifier starting with the dollar sign and containing one or more dollar signs in addition to the first one generates a syntax error. Quoted identifiers, and unquoted identifiers that start with a dollar sign but contain no additional occurrences of this character, are not affected by this change. Use of an unquoted identifier with a leading dollar sign that is otherwise permitted continues to raise a warning.
  • For more information, see Schema Object Names. (WL #15254)
  • References: See also: Bug #34684193.
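  • A short sketch of how the rules apply (table names are hypothetical):

```sql
CREATE TABLE $mytable (c INT);     -- leading $ only: allowed, raises a warning
CREATE TABLE `$my$table` (c INT);  -- quoted: unaffected by this change
CREATE TABLE $my$table (c INT);    -- unquoted with an additional $: syntax error
```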
  • MySQL enables control of FIPS mode on the server side and the client side using a system variable and client option. Application programs can use the MYSQL_OPT_SSL_FIPS_MODE option to mysql_options() to enable FIPS mode on the client. Alternatively, it is possible to handle FIPS mode directly through OpenSSL configuration files rather than using the current server-side system variable and client-side options. When MySQL is compiled using OpenSSL 3.0, and an OpenSSL library and FIPS Object Module are available at runtime, the server reads the OpenSSL configuration file and respects the preference to use a FIPS provider, if one is set. OpenSSL 3.0 is certified for use with FIPS.
  • To favor the OpenSSL alternative, the ssl_fips_mode server system variable, --ssl-fips-mode client option, and the MYSQL_OPT_SSL_FIPS_MODE option now are deprecated and subject to removal in a future version of MySQL. A deprecation warning prints to standard error output when an application uses the MYSQL_OPT_SSL_FIPS_MODE option or when a client user specifies the --ssl-fips-mode option on the command line, through option files, or both.
  • Prior to being deprecated, the ssl_fips_mode server-side system variable was dynamically settable. It is now a read-only variable (accepts SET PERSIST_ONLY, but not SET PERSIST or SET GLOBAL). When specified on the command line or in the mysqld-auto.cnf option file (with SET PERSIST_ONLY) a deprecation warning prints to the server error log. (WL #15631)
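  • As a sketch of the OpenSSL-side alternative (file paths are placeholders; consult the OpenSSL 3.0 FIPS documentation for the exact procedure), an openssl.cnf along these lines enables the FIPS provider:

```ini
config_diagnostics = 1
openssl_conf = openssl_init

# Generated by 'openssl fipsinstall'; the path is system-dependent.
.include /usr/local/ssl/fipsmodule.cnf

[openssl_init]
providers = provider_sect

[provider_sect]
fips = fips_sect
default = default_sect

[default_sect]
activate = 1
```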
  • The mysql_ssl_rsa_setup program originally provided a simple way for community users to generate certificates manually, if OpenSSL was installed on the system. Now, mysql_ssl_rsa_setup is deprecated because MySQL Community Edition no longer supports using yaSSL as the SSL library, and source distributions no longer include yaSSL. Instead, use MySQL server to generate missing SSL and RSA files automatically at startup (see Automatic SSL and RSA File Generation). (WL #15668)
  • The keyring_file and keyring_encrypted_file plugins now are deprecated. These keyring plugins are superseded by the component_keyring_file and component_keyring_encrypted_file components. For a concise comparison of keyring components and plugins, see Keyring Components Versus Keyring Plugins. (WL #15659)
  • Previously, the MySQL server processed a version-specific comment without regard as to whether any whitespace followed the MySQL version number contained within it. For example, the comments /*!80034KEY_BLOCK_SIZE=1024*/ and /*!80034 KEY_BLOCK_SIZE=1024*/ were handled identically. Beginning with this release, when the next character following the version number in such a comment is neither a whitespace character nor the end of the comment, the server issues a warning: Immediately starting the version comment after the version number is deprecated and may change behavior in a future release. Please insert a whitespace character after the version number.
  • You should expect the whitespace requirement for version-specific comments to become strictly enforced in a future version of MySQL.
  • See Comments, for more information. (WL #15686)
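  • For example (hypothetical tables):

```sql
CREATE TABLE t1 (c INT) /*!80034 KEY_BLOCK_SIZE=1024 */;  -- unchanged behavior
CREATE TABLE t2 (c INT) /*!80034KEY_BLOCK_SIZE=1024 */;   -- now raises a deprecation warning
```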
  • The MySQL client library currently supports performing an automatic reconnection to the server if it finds that the connection is down and an application attempts to send a statement to the server to be executed. Now, this feature is deprecated and subject to removal in a future release of MySQL.
  • The related MYSQL_OPT_RECONNECT option is still available but it is also deprecated. C API functions mysql_get_option() and mysql_options() now write a deprecation warning to the standard error output when an application specifies MYSQL_OPT_RECONNECT. (WL #15766)
  • IPv6 Support:
  • NDB Cluster: NDB did not start if IPv6 support was not enabled on the host, even when no nodes in the cluster used any IPv6 addresses. (Bug #106485, Bug #33324817, Bug #33870642, WL #15561)
  • Logging Notes:
  • To aid in troubleshooting in the event of an excessively long server shutdown, this release introduces a number of new messages that are written to the MySQL error log during shutdown, including those listed here:
  • Startup and shutdown log messages for the MySQL server, including when it has been started with --initialize
  • Log messages showing start and end of shutdown phases for plugins
  • Log messages showing start and end of shutdown phases for components
  • Start-of-phase and end-of-phase log messages for connection closing phases
  • Log messages showing the number and IDs of threads still alive after being forcibly disconnected, and potentially causing a wait
  • See The Error Log, for more information. (WL #15369)
  • Performance Schema Notes:
  • The type used for the Performance Schema clone_status table's gtid_executed column has been changed from VARCHAR(4096) to LONGTEXT. (Bug #109171, Bug #34828542)
  • Spatial Data Support:
  • The EPSG data set containing spatial reference system data for spatial calculations has been upgraded from version 9.3 to version 9.7. (Bug #28615740)
  • SQL Syntax Notes:
  • JSON: It is now possible to capture EXPLAIN FORMAT=JSON output in a user variable using a syntax extension added in this release. EXPLAIN FORMAT=JSON INTO var_name stmt works with any explainable statement stmt to store the output in the user variable var_name, where it can be retrieved for later use in analysis. This value is a valid JSON document and can be inspected and manipulated with MySQL JSON functions such as JSON_EXTRACT(). (See JSON Functions.)
  • The INTO clause is supported only with FORMAT=JSON; the value of the explain_format system variable has no effect on this requirement. If the statement cannot be executed (due to, for instance, a syntax error), the user variable is not updated.
  • INTO is not supported for EXPLAIN ANALYZE or EXPLAIN FOR CONNECTION.
  • For additional information and examples, see Obtaining Execution Plan Information. (Bug #35362996, WL #15588, WL #15606)
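  • A minimal sketch of the new syntax (table and JSON path are hypothetical; the exact path depends on the plan produced):

```sql
EXPLAIN FORMAT=JSON INTO @plan
SELECT * FROM orders WHERE customer_id = 10;

-- The stored value is a valid JSON document and can be inspected later.
SELECT JSON_EXTRACT(@plan, '$.query_block.cost_info.query_cost');
```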
  • CURRENT_USER() can now be used as a default value for VARCHAR and TEXT columns in CREATE TABLE and ALTER TABLE ... ADD COLUMN statements.
  • The functions SESSION_USER(), USER(), and SYSTEM_USER() are also supported in all of the cases just mentioned.
  • When used in this way, these functions are also included in the output of SHOW CREATE TABLE and SHOW COLUMNS, and referenced in the COLUMN_DEFAULT column of the Information Schema COLUMNS table where applicable.
  • If you need to ensure that values having the maximum possible length can be stored in such a column, make sure that the column can accommodate at least 288 characters (255 for the user name and 32 for the host name, plus 1 for the separator @). For this reason, while it is possible to use one of these functions as the default for a CHAR column, doing so is not recommended due to the risk of errors or truncation of values. (Bug #17809, Bug #11745618)
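  • A minimal sketch, assuming the parenthesized expression-default syntax (the table name is hypothetical, and the stored value depends on your environment):

```sql
CREATE TABLE audit_t (
  created_by VARCHAR(288) DEFAULT (CURRENT_USER())
);
INSERT INTO audit_t VALUES ();
SELECT created_by FROM audit_t;  -- e.g. 'me@localhost'
```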
  • Functionality Added or Changed
  • Important Change; Replication: The default value for the SOURCE_RETRY_COUNT option of the CHANGE REPLICATION SOURCE TO statement has been changed to 10. This means that, using the default values for this option and for SOURCE_CONNECT_RETRY (60), the replica waits 60 seconds between reconnection attempts, and keeps attempting to reconnect at this rate for 10 minutes before timing out and failing over.
  • This change also applies to the default value of the --master-retry-count server option. You should note that this option is deprecated and therefore subject to removal in a future MySQL release. Use SOURCE_RETRY_COUNT with CHANGE REPLICATION SOURCE TO, instead.
  • See CHANGE REPLICATION SOURCE TO Statement, as well as Asynchronous Connection Failover for Sources, for further information. (WL #15702)
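  • For example (host and channel names are hypothetical):

```sql
CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'source.example.com',
    SOURCE_CONNECT_RETRY = 60,   -- seconds between reconnection attempts (default)
    SOURCE_RETRY_COUNT = 10      -- new default: about 10 minutes of retries in total
    FOR CHANNEL 'ch1';
```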
  • Important Change: For platforms on which OpenSSL libraries are bundled, the linked OpenSSL library for MySQL Server has been updated from OpenSSL 1.1.1 to OpenSSL 3.0. The exact version is now 3.0.9. More information on changes from 1.1.1 to 3.0 can be found at https://www.openssl.org/docs/man3.0/man7/migration_guide.html. (Bug #35475140, WL #15614)
  • Important Change: MySQL version numbers used in versioned comments now support a major version consisting of one or two digits (previously, only a single digit was supported for this value). See Comments, for more information about how this change affects handling of version-specific comments in MySQL. (WL #15687)
  • Important Change: Dropped support for Enterprise Linux 6 (and the associated glibc 2.12 generic build), SUSE 12, Debian 10, macOS 12, Ubuntu 18.04 and 20.04, and Windows 10 and Server 2012 R2; 32-bit versions are no longer built.
  • Replication: When running in debug mode, mysqlbinlog now prints all Rows_log_event flags (and not only STMT_END_F), and now asserts with UNKNOWN_FLAG(0xN) if it encounters an invalid flag.
  • Our thanks to Meta for this contribution. (Bug #33172581)
  • Group Replication: Any statement that fetches values of system status variables fetches them all, and acquires a read lock on them all as well. This includes statements such as SHOW STATUS LIKE 'Uptime' and SELECT * FROM performance_schema.global_status WHERE VARIABLE_NAME='Uptime'. In addition, the following operations all acquire a write lock on the status variables:
  • START GROUP_REPLICATION and STOP GROUP_REPLICATION statements
  • Setting group_replication_force_members or group_replication_message_cache_size
  • Invoking group_replication_get_write_concurrency() or group_replication_set_communication_protocol()
  • Automatic rejoin
  • Change of primary with group_replication_single_primary_mode enabled
  • This meant that a SHOW STATUS started after one of the operations just listed was required to wait until the operation was complete before returning.
  • Now in such cases, the statement fetching status variables immediately returns their cached values instead of waiting. (Bug #35373030)
  • References: See also: Bug #35312441.
  • Group Replication: Before it elects a new primary, group_replication_set_as_primary() waits for all transactions to finish, including all DML operations that are currently being processed. In this release, this function now also waits for all ongoing DDL statements, such as ALTER TABLE, to complete.
  • Listed here are all operations now considered to be DDL statements by group_replication_set_as_primary():
  • ALTER TABLE
  • ANALYZE TABLE
  • CACHE INDEX
  • CHECK TABLE
  • CREATE INDEX
  • CREATE TABLE
  • DROP INDEX
  • LOAD INDEX
  • OPTIMIZE TABLE
  • REPAIR TABLE
  • TRUNCATE TABLE
  • DROP TABLE
  • This also includes any open cursors (see Cursors).
  • For more information, see the description of the group_replication_set_as_primary() function, in the MySQL 8.1 Manual. (Bug #34664197, WL #15497)
  • Group Replication: For better diagnosis and troubleshooting of network instabilities, Group Replication adds a number of variables in this release providing network, control-message, and data-message statistics for each group member. This makes it possible to observe directly the time spent in each of several steps of Group Replication operations.
  • Group Replication also adds a new MEMBER_FAILURE_SUSPICIONS_COUNT column to the Performance Schema replication_group_communication_information table, which shows how many times each group member has been seen as suspect by the local node. For example, in a group with three members, the value of this column should look something like this:
  • "d57da302-e404-4395-83b5-ff7cf9b7e055": 0,
  • "6ace9d39-a093-4fe0-b24d-bacbaa34c339": 10,
  • "9689c7c5-c71c-402a-a3a1-2f57bfc2ca62": 0
  • These enhancements also help pinpoint how much time and network resources are consumed by user-initiated or background operations, which can then be correlated with overall performance.
  • See Group Replication Status Variables, for more information. (WL #13849)
  • References: See also: Bug #34279841.
  • Binary packages that include curl rather than linking to the system curl library have been upgraded to use curl 8.1.1. (Bug #35329529)
  • MySQL now implements client-side Server Name Indication (SNI), which is an extension to the TLS protocol. Client applications can pass a server name to the libmysqlclient C API library with the new MYSQL_OPT_TLS_SNI_SERVERNAME option for mysql_options(). Similarly, each MySQL client program now includes a --tls-sni-servername command option to pass in a name. The new Tls_sni_server_name server status variable indicates the name if one is set for the session. Our thanks to Meta for the contribution. (Bug #33176362, WL #14839)
  • Comments in the mysql client are now enabled by default. To disable them, start mysql with the --skip-comments option.
  • Our thanks to Daniël van Eeden for the contribution. (Bug #109972, Bug #35061087, WL #15597)
  • Implemented a SHOW PARSE_TREE statement in debug builds to display the JSON-formatted parse tree for a SELECT statement. This statement is not supported in release builds, and is available only in debug builds, or by compiling the server using -DWITH_SHOW_PARSE_TREE. (WL #15426)
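  • For example, in a debug build (or one compiled with -DWITH_SHOW_PARSE_TREE):

```sql
-- Displays the JSON-formatted parse tree for the SELECT statement.
SHOW PARSE_TREE SELECT 1 + 2;
```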
  • Previously, invalid SSL server and CA certificates were not identified as problematic until after the server started or after an invalid certificate was loaded at runtime. Now, the new tls-certificates-enforced-validation system variable permits a DBA to enforce certificate validation at server startup or when using the ALTER INSTANCE RELOAD TLS statement to reload certificates at runtime. With enforcement enabled, discovering an invalid certificate halts server invocation at startup, prevents loading invalid certificates at runtime, and emits warnings. For more information, see Configuring Certificate Validation Enforcement. (WL #13470)
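  • A sketch of enabling enforcement in an option file (certificate paths are hypothetical):

```ini
[mysqld]
ssl_cert = server-cert.pem
ssl_key  = server-key.pem
# With enforcement ON, an invalid certificate halts server startup.
tls-certificates-enforced-validation = ON
```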
  • New server system variables now control the amount of time MySQL accounts that connect to a MySQL server using LDAP pluggable authentication must wait when the LDAP server is down or unresponsive. The default timeout is 30 seconds for the following simple and SASL-based LDAP authentication variables:
  • authentication_ldap_simple_connect_timeout
  • authentication_ldap_simple_response_timeout
  • authentication_ldap_sasl_connect_timeout
  • authentication_ldap_sasl_response_timeout
  • Connection and response timeouts are configurable through the system variables on Linux platforms only. For more information, see Setting Timeouts for LDAP Pluggable Authentication. (WL #14757)
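  • For example, to fail faster than the 30-second default on Linux (values are illustrative):

```ini
[mysqld]
authentication_ldap_simple_connect_timeout = 10
authentication_ldap_simple_response_timeout = 10
authentication_ldap_sasl_connect_timeout = 10
authentication_ldap_sasl_response_timeout = 10
```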
  • Previously, MySQL Server generated and emitted activity-monitoring events through plugin APIs. Now, the server emits events using component APIs. At the same time, to provide backward compatibility with plugins that use audit plugin APIs (such as audit_log, MYSQL_FIREWALL, CONNECTION_CONTROL, Rewriter, and so on), the server also implements an intermediate layer that generates required events through plugin APIs. Some of the related error messages may have an EVENT_TRACKING_ prefix, rather than the current MYSQL_AUDIT_ prefix. (WL #12652)
  • Bugs Fixed
  • Incompatible Change; Replication: Setting server variables equal to SQL NULL as options on the command line should not be possible and is not supported. Beginning with this release, setting any such variable to NULL is disallowed, and attempting to do so is rejected with an error.
  • The following variables are excepted from this restriction: admin_ssl_ca, admin_ssl_capath, admin_ssl_cert, admin_ssl_cipher, admin_tls_ciphersuites, admin_ssl_key, admin_ssl_crl, admin_ssl_crlpath, basedir, character_sets_dir, ft_stopword_file, group_replication_recovery_tls_ciphersuites, init_file, lc_messages_dir, plugin_dir, relay_log, relay_log_info_file, replica_load_tmpdir, ssl_ca, ssl_capath, ssl_cert, ssl_cipher, ssl_crl, ssl_crlpath, ssl_key, socket, tls_ciphersuites, and tmpdir.
  • See Server System Variables, for more information. (Bug #109387, Bug #34897517)
  • Important Change: The default value of the connection_memory_chunk_size server system variable, when introduced in MySQL 8.0.28, was mistakenly set at 8912. This fix changes the default to 8192, which is the value originally intended. (Bug #35218020)
  • NDB Cluster: The fix for a previous issue introduced a slight possibility of unequal string values comparing as equal, if any Unicode 9.0 collations were in use, and the collation hash methods calculated identical hash keys for two unequal strings. (Bug #35168702)
  • References: See also: Bug #27522732. This issue is a regression of: Bug #30884622.
  • InnoDB: An error due to the bulk load of data that is larger than the InnoDB page size has been fixed. (Bug #35332046, Bug #110813)
  • InnoDB: Possible congestion due to purging a large number of system threads has been fixed. (Bug #35289390, Bug #110685)
  • InnoDB: Errors occurring when innodb_thread_concurrency was set to 999 have been fixed. (Bug #34925101)
  • InnoDB: A performance regression due to hash function changes in MySQL 8.0.30 has been fixed. (Bug #34870256)
  • InnoDB: Errors that can occur with the character sets ucs2, utf16, and utf32 have been fixed. (Bug #34790366)
  • InnoDB: The rules for aggregating entries in the redo log have been fixed. (Bug #34752625, Bug #108944)
  • InnoDB: Contradictory warning and error messages for recovery in read-only mode when the redo log is not empty have been fixed. (Bug #34506094, Bug #108177)
  • InnoDB: Several errors due to tablespace deletion and the buffer pool have been fixed. (Bug #34330475, Bug #107689)
  • InnoDB: An error due to multiplication in ibd2sdi has been fixed. (Bug #33172685, Bug #104474)
  • InnoDB: Errors that can cause buffer pool exhaustion have been fixed. (Bug #27238364)
  • Packaging; Group Replication: The group replication plugin from the Generic Linux packages did not load on some platforms that lacked a compatible version of tirpc. (Bug #35323208)
  • Replication: Changes in session_track_gtids were not always propagated correctly. (Bug #35401212)
  • Replication: By design, all DDL operations (including binary log operations such as purging the binary log) acquire a shared lock on the BACKUP_LOCK object, which helps to prevent simultaneous backup and DDL operations. For binary log operations, we checked whether any locks existed on BACKUP_LOCK but did not check the types of any such locks. This caused problems due to the fact that binary log operations should be prevented only when an exclusive lock is held on the BACKUP_LOCK object, that is, only when a backup is actually in progress, and backups should be prevented when purging the binary log.
  • Now in such cases, instead of checking for locks held on the BACKUP_LOCK object, we acquire a shared lock on BACKUP_LOCK while purging the binary log. (Bug #35342521)
  • Replication: In all cases except one, when mysqlbinlog encountered an error while reading an event, it wrote an error message and returned a nonzero exit code, the exception being for the active binary log file (or any binary log where the format_description_log_event had the LOG_EVENT_BINLOG_IN_USE_F flag set), in which case it did not write a message, and returned exit code 0, thus hiding the error.
  • Now mysqlbinlog suppresses only those errors which are related to truncated events, and when doing so, prints a comment rather than an error message. This fix also improves the help text for the --force-if-open option. (Bug #35083373)
  • Replication: Compressed binary log event handling was improved. (Bug #33666652)
  • Replication: A transaction consisting of events each smaller than 1 GiB, but whose total size was larger than 1 GiB, and where compression did not make it smaller than 1 GiB, was still written to the binary log as one event bigger than 1 GiB. This made the binary log unusable; in effect, it was corrupted since neither the server nor other tools such as mysqlbinlog could read it.
  • Now, when the compressed data grows larger than 1 GiB, we fall back to processing the transaction without any compression. (Bug #33588473)
  • Replication: The multithreaded applier wrote messages similar to Multi-threaded slave: Coordinator has waited 312251 times hitting slave_pending_jobs_size_max; current event size = 8176 into the error log, although they did not belong there. (Bug #32587480)
  • Replication: Executing either of the statements FLUSH BINARY LOGS or SET GLOBAL binlog_checksum = CRC32 after setting the session transaction access mode to READ ONLY led to an unplanned shutdown. Execution of either of these statements causes rotation of the binary log; before doing so, it is necessary to update the mysql.gtid_executed table, but this was rejected due to the session transaction access mode being READ ONLY.
  • We fix this by allowing the binary log rotation to proceed by ignoring READ ONLY access mode, as when the server is running in read-only or super-read-only mode. (Bug #109894, Bug #35041573)
  • Group Replication: In a group replication setup, when there was a source of transactions other than the applier channel, the following sequence of events was possible:
  • Several transactions being applied locally were already certified, and so were associated with a ticket, which we refer to as Ticket 2, but had not yet been committed. These could be local or nonlocal transactions.
  • A view is created with Ticket 3, and must wait on transactions from Ticket 2.
  • The view change (VC1) entered the group replication applier channel and waited for the ticket to change to 3.
  • Another group change, and another view change (VC2), occurred while the transactions from Ticket 2 were still completing.
  • This gave rise to the following issue: There was a window wherein the last transaction from Ticket 2 had already marked itself as being executed but had not yet popped the ticket; VC2 popped the ticket instead but never notified any of the participants. This meant that VC1 continued to wait indefinitely for the ticket to change, and with the additional effect that the worker could not be killed.
  • We fix this by checking for the need to break each second so that this loop is responsive to changes in the loop condition; we also register a new stage, so that the loop is more responsive to kill signals. (Bug #35392640)
  • References: See also: Bug #35206392, Bug #35374425.
  • Group Replication: Executing SET GLOBAL group_replication_force_members = host:port and SHOW STATUS LIKE 'group_replication_primary_member' on the host in parallel sometimes led to a timeout while waiting for a new view. (Bug #35312441)
  • Group Replication: Removed a memory leak discovered in Network_provider_manager::open_xcom_connection(). (Bug #34991101)
  • Group Replication: When a group action was sent to the group and the connection was killed on the coordinator, group members were left in different states: members which received the coordinated action waited for the member that executed it, while the member which started execution had nothing to process. This caused problems with coordination of the group.
  • Now in such cases, we prevent this issue from occurring by causing group actions to wait until all members have completed the action. (Bug #34815537)
  • Group Replication: Cleanup of resources used by OpenSSL connections created indirectly by group replication was not carried out as expected at all times. We fix this by adding cleanup functionality that can be called at any time such connections are created by group replication. (Bug #34727136)
  • Group Replication: In some cases, the MySQL server continued to accept connections intended for group replication even after the group replication plugin had commenced shutdown. (Bug #34398622)
  • Microsoft Windows: On Windows, a new MySQL Configurator application was added to help configure a MySQL server. It replaces the MySQL Installer application that installed and configured MySQL products in previous versions of MySQL. MySQL Configurator (mysql_configurator.exe) is included with both the MSI and Zip archive packages. (Bug #35461041)
  • JSON: When the result of JSON_VALUE() was an empty string and was assigned to a user variable, the user variable could in some cases be set to NULL instead, as shown here:
  • mysql> SELECT JSON_VALUE('{"fname": "Joe", "lname": ""}', '$.lname') INTO @myvar;
  • Query OK, 1 row affected (0.01 sec)
  • mysql> SELECT @myvar = '', @myvar IS NULL;
  • +-------------+----------------+
  • | @myvar = '' | @myvar IS NULL |
  • +-------------+----------------+
  • | NULL | 1 |
  • +-------------+----------------+
  • 1 row in set (0.00 sec)
  • With this fix, the query just shown now returns (1, 0), as expected. (Bug #35206138)
  • JSON: Some JSON schemas were not always processed correctly by JSON_SCHEMA_VALID(). (Bug #109296, Bug #34867398)
  • Some combinations of regular expression functions and arithmetic functions were not always evaluated correctly. (Bug #35462660)
  • In rare cases, MySQL server could exit rather than emit an error message as expected. (Bug #35442407)
  • The internal resource-group enhancement added in MySQL 8.0.31 and refactored in MySQL 8.0.32 is now reverted. (Bug #35434219)
  • References: Reverted patches: Bug #34702833.
  • An in-place upgrade from MySQL 5.7 to MySQL 8.0, without a server restart, could result in unexpected errors when executing queries on tables. This fix eliminates the need to restart the server between the upgrade and queries. (Bug #35410528)
  • A fix in MySQL 8.0.33 made a change for ORDER BY items already resolved so as not to resolve them again (as is usually the case when a derived table is merged), but this did not handle the case in which an ORDER BY item was itself a reference. (Bug #35410465)
  • References: This issue is a regression of: Bug #34890862.
  • Changes in session_track_gtids were not always handled correctly. (Bug #35401212)
  • Some pointers were not always released following statement execution. (Bug #35395965)
  • In Item_func_min_max::cmp_datetimes(), it was sometimes possible to set null_value when the current item was not actually nullable. (Bug #35380560, Bug #35492532)
  • Some instances of subqueries within stored routines were not always handled correctly. (Bug #35377192)
  • Fortified parsing of the network packet data sent by the server to the client. (Bug #35374491)
  • Some queries using INTERSECT were not always processed correctly. (Bug #35362424)
  • A SELECT statement within a prepared statement unexpectedly returned different results on successive executions. (Bug #35340987)
  • References: This issue is a regression of: Bug #35060385.
  • Encryption enhancements now strengthen compliance and remove the use of deprecated APIs. (Bug #35339886)
  • When a column reference given by table name and column name was looked up in the function find_item_in_list(), we ignored that the item searched for might not have a table name, as it was not yet resolved. We fix this by making an explicit check for a null table name in the sought-after item. (Bug #35338776)
  • Deprecated the lz4_decompress and zlib_decompress command-line utilities that exist to support the deprecated mysqlpump command-line utility. (Bug #35328235)
  • Certain queries using NULLIF() led to an assertion. The issue was found to originate in Item_func_nullif::resolve_type_inner(), where, if the original data type was a temporal type, the type was adjusted to a string type but the result type was not also adjusted accordingly, which could later lead to later inconsistencies. This is fixed by setting the result type in such cases to STRING_RESULT. (Bug #35323398)
  • On Linux, the mysql client's ssl_session_data_print command now saves files with a 0600 absolute mode (permissions) instead of the default 0644 when the optional filename parameter is passed in. (Bug #35304195)
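  • For example (the file name is illustrative), a session file saved as follows is now created with mode 0600:
  • mysql> ssl_session_data_print /tmp/session_data.pem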
  • Queries using LIKE '%...%' ran more poorly than in previous versions of MySQL. (Bug #35296563)
  • Some complex queries using multiple common table expressions were not always handled correctly. (Bug #35284734)
  • References: This issue is a regression of: Bug #34377854.
  • We calculate the cost of MATERIALIZE paths by adding the cost of materialization to the sum of the cost of the child paths. If the number of output rows is undefined for a child, we ignore that child, as we assume that the cost of that child is then also undefined. If the child was an AGGREGATE path with implicit grouping, the number of output rows could be set to 1, even when the cost was undefined. We fix this by checking in such cases whether the cost of the child is actually defined, and, if it is not, skipping it. (Bug #35240913)
  • References: See also: Bug #33834146, Bug #34302461.
  • In Bounded_queue::push(), when Key_generator::make_sortkey() returns UINT_MAX (an error), no key has been produced; now, when this occurs, we no longer update the internal queue.
  • As part of this fix, push() now returns true on error. (Bug #35237721)
  • The authentication_oci plugin is fixed to allow federated and provisioned users to connect to a DB System as a mapped Proxy User using an ephemeral key-pair generated through the OCI CLI. (Bug #35232697)
  • Some queries using common table expressions were not always processed correctly. (Bug #35231475)
  • The internal function compare_pair_for_nulls() did not always set an explicit return value. (Bug #35217471)
  • Removed the clang-tidy checks that clash with the MySQL coding style. (Bug #35208735)
  • Some subqueries using EXISTS in both the inner and outer parts of the query were not handled correctly. (Bug #35201901)
  • Rotated audit log files now always reset the ID value of the bookmark to zero, rather than continuing the value from the previous file. (Bug #35200070)
  • Errors were not always propagated correctly when evaluating items to be sorted by filesort. (Bug #35195181)
  • References: See also: Bug #35145246.
  • In certain cases, UNIX_TIMESTAMP() was evaluated prematurely. (Bug #35174730)
  • When attempting to transform a scalar subquery to a derived table, if the top-level query was implicitly grouped, we moved the grouping into a first derived table. If, after this, we did not perform the original transformation, the initial transform had still been carried out, which should have been valid, but we neglected to look at join conditions in subqueries when substituting reference fields. In such cases, we also did not descend into any subqueries other than derived table subqueries. (Bug #35170671)
  • The fix for a previous issue with ROLLUP led to a premature server exit in debug builds. (Bug #35168639)
  • References: This issue is a regression of: Bug #33830659.
  • Simplified the implementation of Item_func_make_set::val_str() to make sure that we never try to reuse any of the input arguments, always using the local string buffer instead. (Bug #35154335, Bug #35158340)
  • The transform of a scalar subquery into a join with a derived table where the subquery is in the SELECT list and the containing query is implicitly grouped should be allowed, but was rejected when the subquery_to_derived optimizer switch was enabled. (Bug #35150438)
  • When transforming subqueries to a join with derived tables, with the containing query being grouped, we created an extra derived table in which to do the grouping. This process moved the initial select list items from the containing query into the extra derived table, replacing all of the original select list items (other than subqueries, which get their own derived tables) with columns from the extra derived table.
  • This logic did not handle DEFAULT correctly due to the manner in which default values were modelled internally. This fix adds support for DEFAULT(expression) in queries undergoing the transform previously mentioned. This fix also solves an issue with item names in metadata whereby two occurrences of the same column in the select list were given the same item name as a result of this same transform. (Bug #35150085, Bug #35101169)
  • A query of the form SELECT * FROM t1 WHERE (SELECT a FROM t2 WHERE t2.a=t1.a + ABS(t2.b)) > 0 should be rejected with Subquery returns more than 1 row, but when the subquery_to_derived optimization was enabled, the transform was erroneously applied and the query returned an incorrect result. (Bug #35101630)
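  • With the fix, the query shown is once again rejected; the expected result looks like this (error code shown for illustration):
  • mysql> SELECT * FROM t1 WHERE (SELECT a FROM t2 WHERE t2.a=t1.a + ABS(t2.b)) > 0;
  • ERROR 1242 (21000): Subquery returns more than 1 row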
  • Handling of certain potentially conflicting GRANT statements has been improved. (Bug #35089304)
  • A query using both MEMBER OF() and ORDER BY DESC returned only a partial result set following the creation of a multi-valued index on a JSON column. This is similar to an issue fixed in MySQL 8.0.30, but with the addition of the ORDER BY DESC clause to the problematic query. (Bug #35012146)
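  • A sketch of the affected pattern (table, column, and index names are illustrative):
  • mysql> CREATE TABLE t (id INT, j JSON, INDEX mv_idx ((CAST(j->'$.ids' AS UNSIGNED ARRAY))));
  • mysql> SELECT id FROM t WHERE 5 MEMBER OF (j->'$.ids') ORDER BY id DESC;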
  • References: See also: Bug #106621, Bug #33917625.
  • For index skip scans, the first range read set an end-of-range value to indicate the end of the first range, but the next range read did not clear the stale end-of-range value and applied this stale value to the current range. Since the indicated end-of-range boundary had already been crossed in the previous range read, this caused the reads to stop, causing multiple rows to be missed in the result.
  • We fix this by making sure in such cases that the old end-of-range value is cleared. (Bug #34982949)
  • The debug server asserted on certain operations involving DECIMAL values. (Bug #34973932)
  • The nullability of ANY subqueries was sometimes incorrect because the nullability of the left operand was not taken into account. We fix this by marking an ANY subquery as nullable whenever the left operand is nullable. (Bug #34940790)
  • All instances of adding and replacing expressions in the select list when transforming subqueries to use derived tables and joins have been changed so that their reference counts are maintained properly. (Bug #34927110)
  • Aggregation of item type from multiple arguments required processing in multiple internal functions; this has been simplified such that it is now performed in one function only. This should improve the efficiency of this process, which is used for expressions that are the results of set operations, and those that are output from the CASE operator (and the associated functions COALESCE() and IF()), as well as LEAD() and LAG(). (Bug #34847836)
  • Index Merge (see Index Merge Optimization) should favor ROR-union plans (that is, using RowID Ordered Retrieval) over sort-union plans if they have similar costs, since sort-union additionally requires sorting the rows by row ID, whereas ROR-union does not.
  • For each part of a WHERE clause containing an OR condition, the range optimizer gets the best range scan possible and uses all these range scans to build an index merge scan (that is, a sort-union scan). If it finds that all the best range scans are also ROR-scans, the range optimizer always proposes a ROR-union scan because it is always cheaper than a sort-union scan. Problems arose when the best range scan for any one part of an OR condition was not a ROR-scan, in which case the range optimizer always chose sort-union. This was true even in cases where it might be advantageous to choose a ROR-scan (even though it might not be the best range scan to handle one part of the OR condition), since this would eliminate any need to sort the rows by row ID.
  • Now, in such cases, when determining the best range scan, the range optimizer also detects whether there is any possible ROR-scan, and uses this information to see whether each part of the OR condition has at least one possible ROR-scan. If so, we rerun the range optimizer to obtain the best ROR-scan for handling each part of the OR condition, and to make a ROR-union path. We then compare this cost with the cost of a sort-union when proposing the final plan. (Bug #34826692, Bug #35302794)
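  • The affected statement shape can be sketched as follows (illustrative schema); in EXPLAIN output, a ROR-union plan appears as Using union(...) and a sort-union plan as Using sort_union(...) in the Extra column:
  • mysql> CREATE TABLE t (a INT, b INT, c INT, PRIMARY KEY (c), KEY idx_a (a), KEY idx_b (b));
  • mysql> EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 2;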
  • Selecting from a view sometimes raised the error Illegal mix of collations ... for operation '=' when the collation used in the table or tables from which the view definition selected did not match the current session value of collation_connection. (Bug #34801210)
  • If a view (v1) accessed another view (v2), and if v2 was recreated, then SHOW COLUMNS FROM v1 reported an invalid view error. This issue occurred when the user was granted privileges to all resources (*.*), but not table-level or column-level privileges. It is fixed by removing the condition that caused an omission of the proper table-level check. (Bug #34467659)
  • Queries using DISTINCT treated 0 and -0 differently. (Bug #34361437)
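  • For example (a sketch using floating-point zero literals), both values are now treated as duplicates and a single row is returned:
  • mysql> SELECT DISTINCT d FROM (SELECT 0.0E0 AS d UNION ALL SELECT -0.0E0) AS dt;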
  • ANALYZE TABLE with UPDATE HISTOGRAM or DROP HISTOGRAM invalidated the TABLE_SHARE, which meant that subsequent queries were required to wait for all queries then running to terminate before the old TABLE_SHARE could be freed and a new one initialized with the updated collection of histograms for the table. This could introduce long waits, as queries issued after the TABLE_SHARE was invalidated had to wait for any existing long-running queries that referenced the old TABLE_SHARE to terminate.
  • This fix changes the behavior of the histogram commands to mark tables for reopening instead of invalidating the TABLE_SHARE. Instead of having a single set of table histograms cached on the TABLE_SHARE, we now maintain a collection of reference-counted sets of table histograms on the share. When the histograms on a given table are modified, we now insert a new snapshot of the set of histograms into the collection on the TABLE_SHARE and mark it current. When a table object is opened, it acquires a pointer to the current snapshot of the set of histograms for the table from the share, and when the table object is closed it releases its pointer back to the share.
  • By using multiple reference-counted versions of histogram statistics for a table we avoid the potential wait for synchronization of all queries on the table around the reinitialization of the TABLE_SHARE when histograms are updated or dropped. (Bug #34288890, Bug #35419418)
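  • The affected statements are of the following form (table, column, and bucket count are illustrative); they now mark the table for reopening instead of invalidating the TABLE_SHARE:
  • mysql> ANALYZE TABLE t UPDATE HISTOGRAM ON c WITH 32 BUCKETS;
  • mysql> ANALYZE TABLE t DROP HISTOGRAM ON c;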
  • Valid MySQL commands (use and status) and C API functions (mysql_refresh, mysql_stat, mysql_dump_debug_info, mysql_ping, mysql_set_server_option, mysql_list_processes, and mysql_reset_connection) could write an error message to the audit log, even though running the command or calling the function emitted no such error. (Bug #33966181)
  • Increased the maximum fixed array size from 512 to 8192. This fixes an issue with mysqladmin extended-status requests, which can exceed 512 entries.
  • Our thanks to Meta for the contribution. (Bug #30810617)
  • The mysqldump --column-statistics option attempted to select from information_schema.column_statistics against MySQL versions before 8.0.2, but this now generates the warning column statistics not supported by the server and sets the option to false.
  • Our thanks to Meta for the contribution. (Bug #28782417)
  • The function used by MySQL to get the length of a directory name was enhanced. (Bug #28047376)
  • An implicitly aggregated query should return exactly one row, unless the query has a HAVING clause that filters out the row; however, a query whose HAVING clause evaluated to FALSE sometimes ignored this, and returned a row regardless. (Bug #14272020)
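  • For example (illustrative table), an implicitly aggregated query with an always-false HAVING clause now correctly returns no rows:
  • mysql> SELECT MAX(a) FROM t HAVING 1 = 0;
  • Empty set (0.00 sec)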
  • The presence of an unused window function in a query, along with an ORDER BY that could have been eliminated, led to an unplanned server exit. (Bug #111585, Bug #35168639, Bug #35204224, Bug #35545377)
  • References: This issue is a regression of: Bug #35118579.
  • ORDER BY RANDOM_BYTES() had no effect on query output. (Bug #111252, Bug #35148945, Bug #35457136)
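  • With the fix, a query such as the following (illustrative) returns rows in a random order instead of ignoring the ORDER BY clause:
  • mysql> SELECT id FROM t ORDER BY RANDOM_BYTES(8);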
  • Improved the mysql client's status output; the Protocol row now includes the compression algorithm and zstd level.
  • Our thanks to Daniël van Eeden for the contribution. (Bug #110950, Bug #35369870)
  • The MySQL source code documentation was missing the following information about C API protocols: zstd_compression_level is only sent when CLIENT_ZSTD_COMPRESSION_ALGORITHM is set.
  • Our thanks to Daniël van Eeden for the contribution. (Bug #110939, Bug #35365351)
  • In certain cases, VALUES ROW() did not handle expressions which evaluated to NULL correctly. (Bug #110925, Bug #35363550)
  • The QUOTE() function returned unexpected results with columns selected from a table having the utf16 character set. (Bug #110672, Bug #35286970)
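  • A sketch of the affected pattern (names are illustrative):
  • mysql> CREATE TABLE t (s VARCHAR(10)) CHARACTER SET utf16;
  • mysql> INSERT INTO t VALUES ('abc');
  • mysql> SELECT QUOTE(s) FROM t;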
  • Fixed an issue which could occur when loading user-defined functions. (Bug #110576, Bug #35242734)
  • Concurrent execution of FLUSH STATUS, COM_CHANGE_USER, and SELECT FROM I_S.PROCESSLIST could result in a deadlock. A similar issue was observed for concurrent execution of COM_STATISTICS, COM_CHANGE_USER, and SHOW PROCESSLIST.
  • Our thanks to Dmitry Lenev for the contribution. (Bug #110494, Bug #35218030)
  • The mysqldump utility could generate invalid INSERT statements for generated columns. (Bug #110462, Bug #35208605)
  • mysqldump would unexpectedly halt when used against tables with functional indexes. (Bug #110452, Bug #35205310)
  • An impossible WHERE similar to WHERE int_col = 05687.3E-84 was not always handled correctly. (Bug #110434, Bug #35200367)
  • The loading and unloading of UCA character sets has been rewritten to improve memory handling when cycling through initialization and deinitialization. (Bug #109540, Bug #34969838)
  • During optimization, range-select tree creation uses logic which differs based on the left-hand side of the IN() predicate. For a field item, each value on the right-hand side is added to an OR tree to create the necessary expression. In the case of a row item comparison (for example, WHERE (a,b) IN ((n1,m1), (n2,m2), ...)), an expression in disjunctive normal form (DNF) is needed. A DNF expression is created by adding an AND tree with column values to an OR tree for each set of RHS values, but instead the OR tree was added to the AND tree, causing the tree merge to require excessive time due to O(n²) runtime complexity. (Bug #108963, Bug #34758905)
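  • The affected predicate shape, with illustrative table and values:
  • mysql> SELECT * FROM t WHERE (a, b) IN ((1, 10), (2, 20), (3, 30));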
  • When using SELECT to create a table and the statement has an expression of type GEOMETRY, MySQL could generate an empty string as the column value by default. To resolve this issue, MySQL no longer generates default values for columns of type GEOMETRY under these circumstances. Our thanks to Tencent for the contribution. (Bug #107996, Bug #34426943)
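  • A sketch of a statement that formerly produced the empty-string default (names are illustrative); the g column no longer receives a generated default value:
  • mysql> CREATE TABLE t2 AS SELECT ST_GeomFromText('POINT(1 1)') AS g FROM t1;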
  • Removed an assertion encountered when creating fields of type YEAR for temporary tables holding results of UNION operations. (Bug #107826, Bug #34370933, Bug #35282236)

New in MySQL Cluster 8.0.33 (Apr 18, 2023)

  • Parallel Event Execution (Multithreaded Replica):
  • NDB Replication: NDB Cluster replication now supports the MySQL multithreaded applier (MTA) on replica servers. This makes it possible for binary log transactions to be applied in parallel on the replica, increasing peak replication throughput. To enable this on the replica, it is necessary to perform the following steps:
  • Set the --ndb-log-transaction-dependency option, added in this release, to ON. This must be done on startup of the source mysqld.
  • Set the binlog_transaction_dependency_tracking server system variable to WRITESET, also on the source, which causes transaction dependencies to be determined at the source. This can be done at runtime.
  • Make sure the replica uses multiple worker threads; this is determined by the value of the replica_parallel_workers server system variable, which NDB now honors (previously, NDB effectively ignored any value set for this variable). The default is 4, and can be changed on the replica at runtime.
  • You can adjust the size of the buffer used to store the transaction dependency history on the source using the --binlog-transaction-dependency-history-size option. The source should also have replica_parallel_type set to LOGICAL_CLOCK (the default).
  • Additionally, on the replica, replica_preserve_commit_order must be ON (the default).
  • For more information about the MySQL replication applier, see Replication Threads. For more information about NDB Cluster replication and the multithreaded applier, see NDB Cluster Replication Using the Multithreaded Applier. (Bug #27960, Bug #11746675, Bug #35164090, Bug #34229520, WL #14885, WL #15145, WL #15455, WL #15457)
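  • The steps above can be sketched as follows (a minimal outline; option placement is as described above):
  • $> mysqld --ndb-log-transaction-dependency=ON ... (on the source, at startup)
  • mysql> SET GLOBAL binlog_transaction_dependency_tracking = WRITESET; (on the source, at runtime)
  • mysql> SET GLOBAL replica_parallel_workers = 4; (on the replica, at runtime)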
  • Functionality Added or Changed:
  • NDB Cluster APIs: The Node.js library used to build the MySQL NoSQL Connector for JavaScript has been upgraded to version 18.12.1. (Bug #35095122)
  • MySQL NDB ClusterJ: Performance has been improved for accessing tables using a single-column partition key when the column is of type CHAR or VARCHAR. (Bug #35027961)
  • Beginning with this release, ndb_restore implements the --timestamp-printouts option, which causes all error, info, and debug node log messages to be prefixed with timestamps. (Bug #34110068)
  • Bugs Fixed:
  • Microsoft Windows: Two memory leaks found by code inspection were removed from NDB process handles on Windows platforms. (Bug #34872901)
  • Microsoft Windows: On Windows platforms, the data node angel process did not detect whether a child data node process exited normally. We fix this by keeping an open process handle to the child and using this when probing for the child's exit. (Bug #34853213)
  • NDB Replication: When using a multithreaded applier, the start_pos and end_pos columns of the mysql.ndb_apply_status table (see ndb_apply_status Table) did not contain the correct position information. (Bug #34806344)
  • NDB Cluster APIs; MySQL NDB ClusterJ: MySQL ClusterJ uses a scratch buffer for primary key hash calculations which was limited to 10000 bytes, which proved too small in some cases. Now we malloc() the buffer if its size is not sufficient.
  • This also fixes an issue with the Ndb object methods startTransaction() and computeHash() in the NDB API: Previously, if either of these methods was passed a temporary buffer of insufficient size, the method failed. Now in such cases a temporary buffer is allocated.
  • Our thanks to Mikael Ronström for this contribution. (Bug #103814, Bug #32959894)
  • NDB Cluster APIs: When dropping an event operation (NdbEventOperation) in the NDB API, it was sometimes possible for the dropped event operation to remain visible to the application after instructing the data nodes to stop sending events related to this event operation, but before all pending buffered events were consumed and discarded. This could be observed in certain cases when performing an online alter operation, such as ADD COLUMN or RENAME COLUMN, along with concurrent writes to the affected table.
  • Further analysis showed that the dropped events were accessible when iterating through event operations with Ndb::getGCIEventOperations(). Now, this method skips dropped events when called iteratively. (Bug #34809944)
  • NDB Cluster APIs: Event::getReport() always returned ER_UPDATED for an event opened from NDB, instead of returning the flags actually used by the report object. (Bug #34667384)
  • Before a new NDB table definition can be stored in the data dictionary, any existing definition must be removed. Table definitions have two unique values, the table name and the NDB Cluster se_private_id. During installation of a new table definition, we check whether there is any existing definition with the same table name and, if so, remove it. Then we check whether the table removed and the one being installed have the same se_private_id; if they do not, any definition that is occupying this se_private_id is considered stale, and removed as well.
  • Problems arose when no existing definition was found by the search using the table's name, since no definition was dropped even if one occupied the se_private_id, leading to a duplicate key error when attempting to store the new table. The internal store_table() function attempted to clear the diagnostics area, remove the stale definition occupying the se_private_id, and try to store the table once again, but the diagnostics area was not actually cleared; the error was thus leaked and presented to the user.
  • To fix this, we remove any stale table definition, regardless of any action taken (or not) by store_table(). (Bug #35089015)
  • Fixed the following two issues in the output of ndb_restore:
  • The backup file format version was shown for both the backup file format version and the version of the cluster which produced the backup.
  • To reduce confusion between the version of the file format and the version of the cluster which produced the backup, the backup file format version is now shown using hexadecimal notation.
  • (Bug #35079426)
  • References: This issue is a regression of: Bug #34110068.
  • Removed a memory leak in the DBDICT kernel block caused when an internal foreign key definition record was not released when no longer needed. This could be triggered by either of the following events:
  • Drop of a foreign key constraint on an NDB table
  • Rejection of an attempt to create a foreign key constraint on an NDB table
  • Such records use the DISK_RECORDS memory resource; you can check this on a running cluster by executing SELECT node_id, used FROM ndbinfo.resources WHERE resource_name='DISK_RECORDS' in the mysql client. This resource uses SharedGlobalMemory, exhaustion of which could lead not only to the rejection of attempts to create foreign keys, but of queries making use of joins as well, since the DBSPJ block also uses shared global memory by way of QUERY_MEMORY. (Bug #35064142)
  • When attempting a copying alter operation with --ndb-allow-copying-alter-table = OFF, the reason for rejection of the statement was not always made clear to the user. (Bug #35059079)
  • When a transaction coordinator is starting fragment scans with many fragments to scan, it may take a realtime break (RTB) during the process to ensure fair CPU access for other requests. When the requesting API disconnected and API failure handling for the scan state occurred before the RTB continuation returned, continuation processing could not proceed because the scan state had been removed.
  • We fix this by adding appropriate checks on the scan state as part of the continuation process. (Bug #35037683)
  • Sender and receiver signal IDs were printed in trace logs as signed values even though they are actually unsigned 32-bit numbers. This could result in confusion when the top bit was set, as it caused such numbers to be shown as negatives, counting upwards from -MAX_32_BIT_SIGNED_INT. (Bug #35037396)
  • A fiber used by the DICT block monitors all indexes and triggers index statistics calculations if requested by DBTUX index fragment monitoring; these calculations are performed using a schema transaction. When the DICT fiber attempted but failed to seize a transaction handle for requesting that a schema transaction be started, the fiber exited, so that no more automated index statistics updates could be performed without a node failure. (Bug #34992370)
  • References: See also: Bug #34007422.
  • Schema objects in NDB use composite versioning, comprising major and minor subversions. When a schema object is first created, its major and minor versions are set; when an existing schema object is altered in place, its minor subversion is incremented.
  • At restart time each data node checks schema objects as part of recovery; for foreign key objects, the versions of referenced parent and child tables (and indexes, for foreign key references not to or from a table's primary key) are checked for consistency. The table version of this check compares only major subversions, allowing tables to evolve, but the index version also compares minor subversions; this resulted in a failure at restart time when an index had been altered.
  • We fix this by comparing only major subversions for indexes in such cases. (Bug #34976028)
  • References: See also: Bug #21363253.
  • ndb_import sometimes silently ignored hint failure for tables having large VARCHAR primary keys. For hinting which transaction coordinator to use, ndb_import can use the row's partitioning key, using a 4092 byte buffer to compute the hash for the key.
  • This was problematic when the key included a VARCHAR column using UTF8, since the hash buffer may require up to 24 bytes per character in the column, depending on the column's collation; the hash computation failed, but the calling code in ndb_import did not check for this, and continued using an undefined hash value, which yielded an undefined hint.
  • This did not lead to any functional problems, but was not optimal, and the user was not notified of it.
  • We fix this by ensuring that ndb_import always uses sufficient buffer for handling character columns (regardless of their collations) in the key, and adding a check in ndb_import for any failures in hash computation and reporting these to the user. (Bug #34917498)
  • When the ndbcluster plugin creates the ndb_schema table, the plugin inserts a row containing metadata, which is needed to keep track of this NDB Cluster instance, and which is stored as a set of key-value pairs in a row in this table.
  • The ndb_schema table is hidden from MySQL and so not possible to query using SQL, but contains a UUID generated by the same MySQL server that creates the ndb_schema table; the same UUID is also stored as metadata in the data dictionary of each MySQL Server when the ndb_schema table is installed on it.
  • When a mysqld connects (or reconnects) to NDB, it compares the UUID in its own data dictionary with the UUID stored in NDB in order to detect whether it is reconnecting to the same cluster; if not, the entire contents of the data dictionary are scrapped in order to make it faster and easier to install all tables fresh from NDB.
  • One such case occurs when all NDB data nodes have been restarted with --initial, thus removing all data and tables. Another happens when the ndb_schema table has been restored from a backup without restoring any of its data, since this means that the row for the ndb_schema table would be missing.
  • To deal with these types of situations, we now make sure that, when synchronization has completed, there is always a row in the NDB dictionary with a UUID matching the UUID stored in the MySQL server data dictionary. (Bug #34876468)
  • When running an NDB Cluster with multiple management servers, termination of the ndb_mgmd processes required an excessive amount of time when shutting down the cluster. (Bug #34872372)
  • Schema distribution timeout was detected by the schema distribution coordinator after dropping and re-creating the mysql.ndb_schema table when any nodes that were subscribed beforehand had not yet resubscribed when the next schema operation began. This was due to a stale list of subscribers being left behind in the schema distribution data; these subscribers were assumed by the coordinator to be participants in subsequent schema operations.
  • We fix this issue by clearing the list of known subscribers whenever the mysql.ndb_schema table is dropped. (Bug #34843412)
  • When requesting a new global checkpoint (GCP) from the data nodes, such as by the NDB Cluster handler in mysqld to speed up delivery of schema distribution events and responses, the request was sent 100 times. While the DBDIH block attempted to merge these duplicate requests into one, it was possible on occasion to trigger more than one immediate GCP. (Bug #34836471)
  • When the DBSPJ block receives a query for execution, it sets up its own internal plan for how to do so. This plan is based on the query plan provided by the optimizer, with adaptations made to provide the most efficient execution of the query, both in terms of elapsed time and of total resources used.
  • Query plans received by DBSPJ often contain star joins, in which several child tables depend on a common parent, as in the query shown here:
  • SELECT STRAIGHT_JOIN * FROM t AS t1
  • INNER JOIN t AS t2 ON t2.a = t1.k
  • INNER JOIN t AS t3 ON t3.k = t1.k;
  • In such cases DBSPJ could submit key-range lookups on t2 and t3 in parallel (but does not do so). An inner join also has the property that each inner-joined row requires a match from the other tables in the same join nest, else the row is eliminated from the result set. Thus, by using parallel key-range lookups, we might retrieve rows from one lookup which have no matches in the other, and the effort spent retrieving them would be wasted. Instead, DBSPJ sets up a sequential plan for such a query.
  • It was found that this worked as intended for queries having only inner joins, but if any of the tables were left-joined, we did not take complete advantage of the preceding inner-joined tables before issuing the outer-joined tables. Suppose the previous query is modified to include a left join, like this:
  • SELECT STRAIGHT_JOIN * FROM t AS t1
  • INNER JOIN t AS t2 ON t2.a = t1.k
  • LEFT JOIN t AS t3 ON t3.k = t1.k;
  • Using the following query against the ndbinfo.counters table, it is possible to observe how many rows are returned for each query before and after query execution:
  • SELECT counter_name, SUM(val)
  • FROM ndbinfo.counters
  • WHERE block_name="DBSPJ" AND counter_name = "SCAN_ROWS_RETURNED";
  • It was thus determined that requests on t2 and t3 were submitted in parallel. Now in such cases, we wait for the inner join to complete before issuing the left join, so that unmatched rows from t1 can be eliminated from the outer join on t1 and t3. This results in less work to be performed by the data nodes, and reduces the volume handled by the transporter as well. (Bug #34782276)
  • SPJ handling of a sorted result was found to suffer a significant performance impact compared to the same result set when not sorted. Further investigation showed that most of the additional performance overhead for sorted results lay in the implementation for sorted result retrieval, which required an excessive number of SCAN_NEXTREQ round trips between the client and DBSPJ on the data nodes. (Bug #34768353)
  • DBSPJ now implements the firstMatch optimization for semijoins and antijoins, such as those found in EXISTS and NOT EXISTS subqueries. (Bug #34768191)
  • When the DBSPJ block sends SCAN_FRAGREQ and SCAN_NEXTREQ signals to the data nodes, it tries to determine the optimum number of fragments to scan in parallel without starting more parallel scans than needed to fill the available batch buffers, thus avoiding any need to send additional SCAN_NEXTREQ signals to complete the scan of each fragment.
  • The DBSPJ block's statistics module calculates and samples the parallelism which was optimal for fragment scans just completed, for each completed SCAN_FRAGREQ, providing a mean and standard deviation of the sampled parallelism. This makes it possible to calculate a lower 95th percentile of the parallelism (and batch size) which makes it possible to complete a SCAN_FRAGREQ without needing additional SCAN_NEXTREQ signals.
  • It was found that the parallelism statistics seemed unable to provide a stable parallelism estimate and that the standard deviation was unexpectedly high. This often led to the parallelism estimate being a negative number (always rounded up to 1).
  • The flaw in the statistics calculation was found to be an underlying assumption that each sampled SCAN_FRAGREQ contained the same number of key ranges to be scanned, which is not necessarily the case. Typically, a full batch of rows is returned for the first SCAN_FRAGREQ, and relatively few rows for the final SCAN_NEXTREQ returning the remainder; this resulted in wide variation among the parallelism samples, which made the statistics obtained from them unreliable.
  • We fix this by basing the statistics on the number of keys actually sent in the SCAN_FRAGREQ, and counting the rows returned from this request. This allows records-per-key statistics to be calculated and sampled, which in turn makes it possible to calculate the number of fragments that can be scanned without overflowing the batch buffers. (Bug #34768106)
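  • The lower-percentile estimate described above can be sketched as follows (an illustrative Python snippet using a normal approximation, in which the mean minus roughly 1.645 standard deviations gives the lower 95th percentile; the function name and sample values are hypothetical, not NDB code):

```python
import statistics

def parallelism_estimate(samples):
    """Lower 95th percentile of sampled parallelism, floored at 1."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return max(1, round(mean - 1.645 * sd))

# Stable samples give a usable estimate:
assert parallelism_estimate([8, 9, 8, 10, 9]) == 7
# Widely varying samples (a full first batch versus a small final
# batch) drive the estimate negative, which is floored at 1:
assert parallelism_estimate([16, 1, 16, 1, 16, 1]) == 1
```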
  • It was possible in certain cases that both the NDB binary logging thread and metadata synchronization attempted to synchronize the ndb_apply_status table, which led to a race condition. We fix this by making sure that the ndb_apply_status table is monitored and created (or re-created) by the binary logging thread only. (Bug #34750992)
  • While starting a schema operation, the client is responsible for detecting timeouts until the coordinator has received the first schema event; from that point, any schema operation timeout should be detected by the coordinator. A problem occurred while the client was checking the timeout: it mistakenly set the state indicating that a timeout had occurred, which caused the coordinator to ignore the first schema event whenever it took longer than approximately one second to receive (that is, to write, send, and handle the event in the binary logging thread). This had the effect that, in these cases, the coordinator was not involved in the schema operation.
  • We fix this by changing the schema distribution timeout checking to be atomic, and by letting it be performed by either the client or the coordinator. In addition, we remove the state variable used for keeping track of events received by the coordinator, and rely on the list of participants instead. (Bug #34741743)
  • An SQL node did not start up correctly after restoring data with ndb_restore, such that, when it was otherwise ready to accept connections, the binary log injector thread never became ready. It was found that, when a mysqld was started after a data node initial restore from which new table IDs were generated, the utility table's (ndb_*) MySQL data dictionary definition might not match the NDB dictionary definition.
  • The existing mysqld definition is dropped by name, thus removing the unique ndbcluster-ID key in the MySQL data dictionary, but the new table ID could also already be occupied by another (stale) definition. The resulting mismatch prevented setup of the binary log.
  • To fix this problem we now explicitly drop any ndbcluster-ID definitions that might clash in such cases with the table being installed. (Bug #34733051)
  • After receiving a SIGTERM signal, ndb_mgmd did not wait for all threads to shut down before exiting. (Bug #33522783)
  • References: See also: Bug #32446105.
  • When multiple operations are pending on a single row, it is not possible to commit an operation which is run concurrently with an operation which is pending abort. This could lead to data node shutdown during the commit operation in DBACC, which could manifest when a single transaction contained more than MaxDMLOperationsPerTransaction DML operations.
  • In addition, a transaction containing insert operations is rolled back if a statement that uses a locking scan on the prepared insert fails due to too many DML operations. This could lead to an unplanned data node shutdown during tuple deallocation due to a missing reference to the expected DBLQH deallocation operation.
  • We solve this issue by allowing commit of a scan operation in such cases, in order to release locks previously acquired during the transaction. We also add a new special case for this scenario, so that the deallocation is performed in a single phase, and DBACC tells DBLQH to deallocate immediately; in DBLQH, execTUP_DEALLOCREQ() is now able to handle this immediate deallocation request. (Bug #32491105)
  • References: See also: Bug #28893633, Bug #32997832.
  • Cluster nodes sometimes reported Failed to convert connection to transporter warnings in logs, even when this was not really necessary. (Bug #14784707)
  • When started with no connection string on the command line, ndb_waiter printed Connecting to mgmsrv at (null). Now in such cases, it prints Connecting to management server at nodeid=0,localhost:1186 if no other default host is specified.
  • The --help option and other ndb_waiter program output was also improved. (Bug #12380163)
  • NdbSpin_Init() calculated the wrong number of loops in NdbSpin, and contained logic errors. (Bug #108448, Bug #32497174, Bug #32594825)
  • References: See also: Bug #31765660, Bug #32413458, Bug #102506, Bug #32478388.

New in MySQL Cluster 8.0.32 (Jan 26, 2023)

  • Functionality Added or Changed:
  • Removed distutils support, which is deprecated as of Python 3.10 and removed in Python 3.12.
  • Adopted type hint enforcement for function and class attributes with mypy; this is compliant with PEP 8 for module mysql.connector. The integration includes a git pre-commit hook for mypy.
  • On Windows, added a kerberos_auth_mode connection option that can be set to either "SSPI" (default) or "GSSAPI". This allows choosing between SSPI and GSSAPI at runtime for the authentication_kerberos_client authentication plugin on Windows. Previously, only the SSPI mode was supported on Windows. For general usage information, see Kerberos Pluggable Authentication. This connection option is ignored on other platforms, such as Linux, which support only GSSAPI.
  • Limitation: GSSAPI can't be used with the pure Python implementation on Windows when authenticating with a username and password; this is a limitation of the C library used by the python-gssapi package, on which the pure Python implementation relies.
  • Bugs Fixed:
  • Using USE_TZ=True in the Django settings would raise this exception: ValueError: Not naive datetime (tzinfo is already set).
  • Removed debug messages that showed authentication data. (Bug #34695103)
  • Updated the protobuf version requirement to version >= 3.11.0, <=3.20.3.
  • Connecting to MariaDB could fail with an unsupported character set, because the default MySQL collation is MySQL 8.0 specific. Now the 5.7 character set is used by default, and is switched to an 8.0 character set if the queried server is version 8.0.
  • Incorrect MySQLCursor.statement values were returned with cursor.execute(query_string, multi=True) under the following conditions: the query string contained two or more queries separated by a semicolon, and a query other than the first used a literal or identifier containing an odd number of backticks, quotes, or double quotes.
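  • The failure mode can be illustrated with a simplified, quote-aware statement splitter (an illustration of the parsing problem only, not Connector/Python's actual implementation; it ignores escaped quotes):

```python
def split_statements(sql):
    """Split on ';' while ignoring semicolons inside quoted tokens."""
    stmts, buf, quote = [], [], None
    for ch in sql:
        if quote:                      # inside a quoted token
            buf.append(ch)
            if ch == quote:
                quote = None
        elif ch in ("'", '"', '`'):    # opening quote or backtick
            quote = ch
            buf.append(ch)
        elif ch == ';':                # statement boundary
            stmts.append(''.join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    tail = ''.join(buf).strip()
    if tail:
        stmts.append(tail)
    return stmts

# A semicolon inside a literal or a backtick-quoted identifier must
# not be treated as a statement boundary:
assert split_statements('SELECT "a;b"; SELECT `odd;name` FROM t') == \
    ['SELECT "a;b"', 'SELECT `odd;name` FROM t']
```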
  • On Windows, changed the security support provider (SSP) from Kerberos to Negotiate. Using Negotiate will then select either Kerberos or NTLM as the SSP.
  • On Windows, the Connector/Python MSI did not detect Python 3.11 and would not install with it. The workaround is to use pip install mysql-connector-python instead.
  • When using a prepared cursor, if a datetime column contained 00:00:00 as the time, a Python date object was returned instead of datetime.
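  • The distinction at issue is a type-level one: a DATETIME value whose time part happens to be 00:00:00 is still a datetime, not a date, and silently returning a date loses information. A short illustration using Python's standard library:

```python
import datetime

# A midnight timestamp is still a datetime value:
dt = datetime.datetime(2023, 1, 26, 0, 0, 0)
assert isinstance(dt, datetime.datetime)
assert dt.time() == datetime.time(0, 0, 0)

# Returning dt.date() instead would silently change the type:
assert not isinstance(dt.date(), datetime.datetime)
```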
  • The MySQLCursor.executemany() method failed to batch insert data because the regular expression (RE) sentinel did not detect batch cases correctly; this meant the one-by-one insert was used instead, which led to decreased performance.
  • Added a new init_command connection option; an SQL query that's immediately executed after the connection is established.
  • Russian characters were not handled correctly by the c-ext version's X DevAPI driver. String values are now encoded to their byte string representation before being sent to protobuf.
  • When fetching results from a prepared cursor using the pure Python implementation, it would fail if the VARBINARY column contained bytes that could not be decoded. The bytes are now returned if they cannot be decoded.
  • Fixed multiple reference leaks and removed redundant code.
  • The cursors (both pure and c-ext versions) use a single SELECT query to retrieve procedure result parameters after a procedure call, but one SET call was used per parameter when setting the input parameters. This was optimized to always use a single SET call for one or multiple parameters.
  • Improved warning handling throughout the connector.
  • Added a new MySQLCursorPreparedDict class option that is similar to MySQLCursorPrepared; the difference is that the former returns a fetched row as a dictionary where column names are used as keys while the latter returns a row as a traditional record (tuple).
  • Enabled the use of dictionaries as parameters in prepared statements, using the %(param)s format as placeholders.
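  • The placeholder style can be mimicked with Python's own %-interpolation (a self-contained sketch: the bind() helper is hypothetical, and real parameter binding escapes and quotes values properly rather than using repr()):

```python
query = "SELECT * FROM t WHERE name = %(name)s AND qty > %(qty)s"

def bind(query, params):
    """Hypothetical helper: substitute already-formatted values."""
    return query % {k: repr(v) for k, v in params.items()}

stmt = bind(query, {"name": "widget", "qty": 3})
assert stmt == "SELECT * FROM t WHERE name = 'widget' AND qty > 3"
```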
  • Using MySQLConverter.escape() on datetime objects raised this error: TypeError: an integer is required. Now it does not attempt to escape values that are not bytes or string types.
  • Not all parameters were added to the INSERT statement when using INSERT IGNORE with cursor.executemany().
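  • The batching idea behind executemany() can be sketched as follows (a simplified regular-expression sentinel, not the connector's actual pattern; batch_insert() and its repr()-based value formatting are illustrative only):

```python
import re

# Recognize a simple "INSERT [IGNORE] INTO ... VALUES (...)" statement.
INSERT_RE = re.compile(
    r"^(?P<head>INSERT\s+(?:IGNORE\s+)?INTO\s+\S+.*VALUES)\s*"
    r"(?P<vals>\(.+\))\s*$",
    re.IGNORECASE | re.DOTALL,
)

def batch_insert(stmt, rows):
    """Expand a parameterized INSERT into one multi-row INSERT."""
    m = INSERT_RE.match(stmt.strip())
    if not m:
        return None  # not batchable; fall back to row-by-row execution
    values = ", ".join(m.group("vals") % tuple(map(repr, r)) for r in rows)
    return "%s %s" % (m.group("head"), values)

out = batch_insert("INSERT IGNORE INTO t (a, b) VALUES (%s, %s)",
                   [(1, "x"), (2, "y")])
assert out == "INSERT IGNORE INTO t (a, b) VALUES (1, 'x'), (2, 'y')"
```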

New in MySQL Cluster 8.0.28 (Apr 26, 2022)

  • Functionality Added or Changed:
  • Added the ndbinfo index_stats table, which provides very basic information about NDB index statistics. It is intended primarily for use in our internal testing, but may be helpful in conjunction with ndb_index_stat and other tools. (Bug #32906654)
  • Previously, ndb_import always tried to import data into a table whose name was derived from the name of the CSV file being read. This release adds a --table option (short form: -t) for this program, which overrides this behavior and specifies the name of the target table directly. (Bug #30832382)
  • Bugs Fixed:
  • Important Change: The deprecated data node option --connect-delay has been removed. This option was a synonym for --connect-retry-delay, which was not honored in all cases; this issue has been fixed, and the option now works correctly. In addition, the short form -r for this option has been deprecated, and you should expect it to be removed in a future release. (Bug #31565810)
  • References: See also: Bug #33362935.
  • Microsoft Windows: On Windows, added missing debug and test suite binaries for MySQL Server (commercial) and MySQL NDB Cluster (commercial and community). (Bug #32713189)
  • NDB Replication: The mysqld option --slave-skip-errors can be used to allow the replication applier SQL thread to skip over certain numbered errors automatically. This is not recommended in production because it allows replicas to diverge since whole transactions in the binary log are not applied; for NDBCLUSTER with its epoch transactions, this results in entire epochs of changes not being applied, likely leading to inconsistent data.
  • Ndb also checks the sequence of epochs applied, and stops the replica applier with an error if there is a sequence problem. Where --slave-skip-errors is in use, and an error is skipped, this results in a whole epoch transaction being skipped; this is detected on any subsequent attempt to apply an epoch transaction, which results in the replica applier SQL thread being stopped.
  • A new option --ndb-applier-allow-skip-epoch is added in this release to allow users to ignore wholly skipped epoch transactions, so that they can use the --slave-skip-errors option as with other MySQL storage engines. This is intended for use in testing, and not in a production setting. Use of these options is entirely at your own risk.
  • When mysqld is started with the new option (together with --slave-skip-errors), detection of a missing epoch generates a warning, but the replica applier SQL thread continues applying. (Bug #33398973)
  • NDB Replication: The log_name column of the ndb_apply_status table was created as VARBINARY, despite being defined as VARCHAR using the latin1 character set, causing hexadecimal output when querying the table using some tools.
  • We fix this by detecting the faulty column type in ndb_apply_status and reinstalling the table definition into the data dictionary while connecting to NDB, when mysqld checks the layout of this table. (Bug #33380726)
  • NDB Cluster APIs: Several new basic example C++ NDB API programs have been added to the distribution, under storage/ndb/ndbapi-examples/ndbapi_basic/ in the source tree. These are shorter and should be easier to understand for newcomers to the NDB API than the existing API examples. They also follow recent C++ standards and practices. These examples have also been added to the NDB API documentation; see Basic NDB API Examples, for more information. (Bug #33378579, Bug #33517296)
  • NDB Cluster APIs: It is no longer possible to use the DIVERIFYREQ signal asynchronously. (Bug #33161562)
  • Timing of wait for scans log output during online reorganization was not performed correctly. As part of this fix, we change timing to generate one message every 10 seconds rather than scaling indefinitely, so as to supply regular updates. (Bug #35523977)

New in MySQL Cluster 8.0.13 Development (Oct 24, 2018)

  • Major changes and new features in NDB Cluster 8.0 which are likely to be of interest are shown in the following list:
  • INFORMATION_SCHEMA changes. The following changes are made in the display of information about Disk Data files in the INFORMATION_SCHEMA.FILES table:
  • Tablespaces and log file groups are no longer represented in the FILES table. (These constructs are not actually files.)
  • Each data file is now represented by a single row in the FILES table. Each undo log file is also now represented in this table by one row only. (Previously, a row was displayed for each copy of each of these files on each data node.)
  • For rows corresponding to data files or undo log files, node ID and undo log buffer information is no longer displayed in the EXTRA column of the FILES table.
  • In addition, INFORMATION_SCHEMA tables now are populated with tablespace statistics for MySQL Cluster tables. (Bug #27167728)
  • Error information with ndb_perror. Removed the deprecated --ndb option for perror. Use ndb_perror to obtain error message information from NDB error codes instead. (Bug #81704, Bug #81705, Bug #23523926, Bug #23523957)
  • Development in parallel with MySQL server. Beginning with this release, MySQL NDB Cluster is being developed in parallel with the standard MySQL 8.0 server under a new unified release model with the following features:
  • NDB 8.0 is developed in, built from, and released with the MySQL 8.0 source code tree.
  • The numbering scheme for NDB Cluster 8.0 releases follows the scheme for MySQL 8.0, starting with the current MySQL release (8.0.13).
  • Building the source with NDB support appends -cluster to the version string returned by mysql -V, as shown here:
  • shell> mysql -V
  • mysql Ver 8.0.13-cluster for Linux on x86_64 (Source distribution)
  • NDB binaries continue to display both the MySQL Server version and the NDB engine version, like this:
  • shell> ndb_mgm -V
  • MySQL distrib mysql-8.0.13 ndb-8.0.13-dmr, for Linux (x86_64)
  • In MySQL Cluster NDB 8.0, these two version numbers are always the same.
  • To build the MySQL 8.0.13 (or later) source with NDB Cluster support, use the CMake option -DWITH_NDBCLUSTER.
  • Offline multithreaded index builds. It is now possible to specify a set of cores to be used for I/O threads performing offline multithreaded builds of ordered indexes, as opposed to normal I/O duties such as file I/O, compression, or decompression. “Offline” in this context refers to building of ordered indexes performed when the parent table is not being written to; such building takes place when an NDB cluster performs a node or system restart, or as part of restoring a cluster from backup using ndb_restore --rebuild-indexes.
  • In addition, the default behaviour for offline index build work is modified to use all cores available to ndbmtd, rather than limiting itself to the core reserved for the I/O thread. Doing so can improve restart and restore times, performance, availability, and the user experience.
  • This enhancement is implemented as follows:
  • The default value for BuildIndexThreads is changed from 0 to 128. This means that offline ordered index builds are now multithreaded by default.
  • The default value for TwoPassInitialNodeRestartCopy is changed from false to true. This means that an initial node restart first copies all data from a “live” node to one that is starting—without creating any indexes—builds ordered indexes offline, and then again synchronizes its data with the live node; that is, it synchronizes twice, building indexes offline between the two synchronizations. This causes an initial node restart to behave more like the normal restart of a node, and reduces the time required for building indexes.
  • A new thread type (idxbld) is defined for the ThreadConfig configuration parameter, to allow locking of offline index build threads to specific CPUs.
  • In addition, NDB now distinguishes the thread types that are accessible to ThreadConfig by the following two criteria:
  • Whether the thread is an execution thread. Threads of types main, ldm, recv, rep, tc, and send are execution threads; thread types io, watchdog, and idxbld are not.
  • Whether the allocation of the thread to a given task is permanent or temporary. Currently all thread types except idxbld are permanent.
  • For additional information, see the descriptions of the parameters in the Manual. (Bug #25835748, Bug #26928111)
  • logbuffers table backup process information. When performing an NDB backup, the ndbinfo.logbuffers table now displays information regarding buffer usage by the backup process on each data node. This is implemented as rows reflecting two new log types in addition to REDO and DD-UNDO. One of these rows has the log type BACKUP-DATA, which shows the amount of data buffer used during backup to copy fragments to backup files. The other row has the log type BACKUP-LOG, which displays the amount of log buffer used during the backup to record changes made after the backup has started. One row of each of these log types is shown in the logbuffers table for each data node in the cluster. Rows having these two log types are present in the table only while an NDB backup is in progress. (Bug #25822988)
  • processes table on Windows. The process ID of the monitor process used on Windows platforms by RESTART to spawn and restart a mysqld is now shown in the ndbinfo.processes table as an angel_pid.
  • ODirectSyncFlag. Added the ODirectSyncFlag configuration parameter for data nodes. When enabled, the data node treats all completed filesystem writes to the redo log as though they had been performed using fsync.
  • Note
  • This parameter has no effect if at least one of the following conditions is true:
  • ODirect is not enabled.
  • InitFragmentLogFiles is set to SPARSE.
  • (Bug #25428560)
  • Data node log buffer size control. Added the --logbuffer-size option for ndbd and ndbmtd, for use in debugging with a large number of log messages. This controls the size of the data node log buffer; the default (32K) is intended for normal operations. (Bug #89679, Bug #27550943)
  • String hashing improvements. Prior to NDB 8.0, all string hashing was based on first transforming the string into a normalized form, then MD5-hashing the resulting binary image. This could give rise to some performance problems, for the following reasons:
  • The normalized string is always space padded to its full length. For a VARCHAR, this often involved adding more spaces than there were characters in the original string.
  • The string libraries were not optimized for this space padding, and added considerable overhead in some use cases.
  • The padding semantics varied between character sets, some of which were not padded to their full length.
  • The transformed string could become quite large, even without space padding; some Unicode 9.0 collations can transform a single code point into 100 bytes of character data or more.
  • Subsequent MD5 hashing consisted mainly of hashing space padding, and was not particularly efficient, possibly causing additional performance penalties by flushing significant portions of the L1 cache.
  • Collations provide their own hash functions, which hash the string directly without first creating a normalized string. In addition, for Unicode 9.0 collations, the hashes are computed without padding. NDB now takes advantage of this built-in function whenever hashing a string identified as using a Unicode 9.0 collation.
  • Since, for other collations, there are existing databases that are hash partitioned on the transformed string, NDB continues to employ the previous method for hashing strings that use these collations, to maintain compatibility. (Bug #89590, Bug #89604, Bug #89609, Bug #27515000, Bug #27523758, Bug #27522732)
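  • The cost of the old approach can be illustrated as follows (a Python sketch of the pad-then-MD5 scheme described above; the 24-bytes-per-character expansion is the worst case mentioned, and the values are hypothetical):

```python
import hashlib

value = b"abc"                # 3 characters actually stored
max_chars = 255               # a VARCHAR(255) column
xfrm_bytes_per_char = 24      # worst-case collation expansion

# Old scheme: space-pad the (transformed) string to full length,
# then MD5-hash the result -- mostly hashing spaces.
padded = value.ljust(max_chars * xfrm_bytes_per_char, b" ")
old_hash = hashlib.md5(padded).digest()

# New scheme (for Unicode 9.0 collations): hash the string directly,
# with no padding. Shown here simply as hashing the raw value.
new_hash = hashlib.md5(value).digest()

assert len(padded) == 6120
# The two schemes produce different hashes, which is why NDB keeps
# the old method for collations already used in hash-partitioned data:
assert old_hash != new_hash
```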
  • On-the-fly upgrades of tables using .frm files. A table created in NDB 7.6 and earlier contains metadata in the form of a compressed .frm file, which is no longer supported in MySQL 8.0. To facilitate online upgrades to NDB 8.0, NDB performs on-the-fly translation of this metadata and writes it into the MySQL Server's data dictionary, which enables the mysqld in NDB Cluster 8.0 to work with the table without preventing subsequent use of the table by a previous version of the NDB software.
  • Important
  • Once a table's structure has been modified in NDB 8.0, its metadata is stored using the Data Dictionary, and it can no longer be accessed by NDB 7.6 and earlier.
  • This enhancement also makes it possible to restore an NDB backup made using an earlier version to a cluster running NDB 8.0 (or later).
  • Schema synchronization of tablespace objects. When a MySQL Server connects as an SQL node to an NDB cluster, it synchronizes its data dictionary with the information found in the NDB dictionary.
  • Previously, the only NDB objects synchronized on connection of a new SQL node were databases and tables; MySQL NDB Cluster 8.0.14 and later also implement schema synchronization of disk data objects including tablespaces and log file groups. Among other benefits, this eliminates the possibility of a mismatch between the MySQL data dictionary and the NDB dictionary following a native backup and restore, in which tablespaces and log file groups were restored to the NDB dictionary, but not to the MySQL Server's data dictionary.
  • Handling of NO_AUTO_CREATE_USER in mysqld options file. An error now is written to the server log when the presence of the NO_AUTO_CREATE_USER value for the sql_mode option in the options file prevents mysqld from starting.
  • Handling of references to nonexistent tablespaces. It is no longer possible to issue a CREATE TABLE statement that refers to a nonexistent tablespace. Such a statement now fails with an error.
  • RESET MASTER changes. Because the MySQL Server now executes RESET MASTER with a global read lock, the behavior of this statement when used with NDB Cluster has changed in the following two respects:
  • It is no longer guaranteed to be synchronous; that is, it is now possible that a read coming immediately before RESET MASTER is issued may not be logged until after the binary log has been rotated.
  • It now behaves identically, regardless of whether the statement is issued on the same SQL node that is writing the binary log, or on a different SQL node in the same cluster.

New in MySQL Cluster 7.6.5 Development (Apr 26, 2018)

  • Bugs Fixed:
  • NDB Client Programs: On Unix platforms, the Auto-Installer failed to stop the cluster when ndb_mgmd was installed in a directory other than the default. (Bug #89624, Bug #27531186)
  • NDB Client Programs: The Auto-Installer did not provide a mechanism for setting the ServerPort parameter. (Bug #89623, Bug #27539823)
  • Writing of LCP control files was not always done correctly, which in some cases could lead to an unplanned shutdown of the cluster.
  • This fix adds the requirement that upgrades from NDB 7.6.4 (or earlier) to this release (or a later one) include initial node restarts. (Bug #26640486)

New in MySQL Cluster 7.5.10 (Apr 26, 2018)

  • Bugs Fixed:
  • NDB Cluster APIs: A previous fix for an issue, in which the failure of multiple data nodes during a partial restart could cause API nodes to fail, did not properly check the validity of the associated NdbReceiver object before proceeding. Now in such cases an invalid object triggers handling for invalid signals, rather than a node failure. (Bug #25902137)
  • References: This issue is a regression of: Bug #25092498.
  • NDB Cluster APIs: Incorrect results, usually an empty result set, were returned when setBound() was used to specify a NULL bound. This issue appears to have been caused by a problem in gcc, limited to cases using the old version of this method (which does not employ NdbRecord), and is fixed by rewriting the problematic internal logic in the old implementation. (Bug #89468, Bug #27461752)
  • MySQL NDB ClusterJ: ClusterJ quit unexpectedly as there was no error handling in the scanIndex() function of the ClusterTransactionImpl class for a null returned to it internally by the scanIndex() method of the ndbTransaction class. (Bug #27297681, Bug #88989)
  • In some circumstances, when a transaction was aborted in the DBTC block, there remained links to trigger records from operation records which were not yet reference-counted, but when such an operation record was released the trigger reference count was still decremented. (Bug #27629680)
  • ANALYZE TABLE used excessive amounts of CPU on large, low-cardinality tables. (Bug #27438963)
  • Queries using very large lists with IN were not handled correctly, which could lead to data node failures. (Bug #27397802)
  • A data node overload could in some situations lead to an unplanned shutdown of the data node, which in turn led to all data nodes disconnecting from the management and API nodes.
  • This was due to a situation in which API_FAILREQ was not the last signal received prior to the node failure.
  • As part of this fix, the transaction coordinator's handling of SCAN_TABREQ signals for an ApiConnectRecord in an incorrect state was also improved. (Bug #27381901)
  • References: See also: Bug #47039, Bug #11755287
  • In a two-node cluster, when the node having the lowest ID was started using --nostart, API clients could not connect, failing with Could not alloc node id at HOST port PORT_NO: No free node id found for mysqld(API). (Bug #27225212)
  • Race conditions sometimes occurred during asynchronous disconnection and reconnection of the transporter while other threads concurrently inserted signal data into the send buffers, leading to an unplanned shutdown of the cluster.
  • As part of the work fixing this issue, the internal templating function used by the Transporter Registry when it prepares a send is refactored to use likely-or-unlikely logic to speed up execution, and to remove a number of duplicate checks for NULL. (Bug #24444908, Bug #25128512)
  • References: See also: Bug #20112700
  • ndb_restore sometimes logged data file and log file progress values much greater than 100%. (Bug #20989106)
  • As a result of the reuse of code intended for send threads when performing an assist send, all of the local release send buffers were released to the global pool, which caused the intended level of the local send buffer pool to be ignored. Now send threads and assisting worker threads follow their own policies for maintaining their local buffer pools. (Bug #89119, Bug #27349118)
  • When sending priority A signals, we now ensure that the number of pending signals is explicitly initialized. (Bug #88986, Bug #27294856)
  • ndb_restore --print_data --hex did not print trailing 0s of LONGVARBINARY values. (Bug #65560, Bug #14198580)

New in MySQL Cluster 7.4.6 (Apr 14, 2015)

  • Bugs Fixed:
  • During backup, loading data from one SQL node followed by repeated DELETE statements on the tables just loaded from a different SQL node could lead to data node failures. (Bug #18949230)
  • When an instance of NdbEventBuffer was destroyed, any references to GCI operations that remained in the event buffer data list were not freed. Now these are freed, and items from the event buffer data list are returned to the free list when purging GCI containers. (Bug #76165, Bug #20651661)
  • When a bulk delete operation was committed early to avoid an additional round trip, while also returning the number of affected rows, but failed with a timeout error, an SQL node performed no verification that the transaction was in the Committed state. (Bug #74494, Bug #20092754)

New in MySQL Cluster 7.4.5 (Mar 21, 2015)

  • Bugs Fixed:
  • It was found during testing that problems could arise when the node registered as the arbitrator disconnected or failed during the arbitration process. In this situation, the node requesting arbitration could never receive a positive acknowledgement from the registered arbitrator; this node also lacked a stable set of members and could not initiate selection of a new arbitrator. Now in such cases, when the arbitrator fails or loses contact during arbitration, the requesting node immediately fails rather than waiting to time out. (Bug #20538179)
  • DROP DATABASE failed to remove the database when the database directory contained a .ndb file which had no corresponding table in NDB. Now, when executing DROP DATABASE, NDB performs a check specifically for leftover .ndb files, and deletes any that it finds. (Bug #20480035)
  • The maximum failure time calculation used to ensure that normal node failure handling mechanisms are given time to handle survivable cluster failures (before global checkpoint watchdog mechanisms start to kill nodes due to GCP delays) was excessively conservative, and neglected to consider that there can be at most number_of_data_nodes / NoOfReplicas node failures before the cluster can no longer survive. Now the value of NoOfReplicas is properly taken into account when performing this calculation. (Bug #20069617, Bug #20062754)
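The bound used in the corrected calculation can be illustrated with a small sketch (a hypothetical helper for illustration, not cluster code):

```python
def max_survivable_failures(number_of_data_nodes: int, no_of_replicas: int) -> int:
    """Upper bound on data node failures the cluster can survive:
    with NoOfReplicas copies of each fragment, at most
    number_of_data_nodes / NoOfReplicas nodes can fail before some
    node group may be left with no surviving replica."""
    return number_of_data_nodes // no_of_replicas

# For example, a 4-node cluster with NoOfReplicas=2 can survive
# at most 2 node failures:
print(max_survivable_failures(4, 2))  # 2
```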
  • During a node restart, if there was no global checkpoint completed between the START_LCP_REQ for a local checkpoint and its LCP_COMPLETE_REP, it was possible for a comparison of the LCP ID sent in the LCP_COMPLETE_REP signal with the internal value SYSFILE->latestLCP_ID to fail. (Bug #76113, Bug #20631645)
  • When sending LCP_FRAG_ORD signals as part of master takeover, it is possible that the master is not synchronized with complete accuracy in real time, so that some signals must be dropped. During this time, the master can send a LCP_FRAG_ORD signal with its lastFragmentFlag set even after the local checkpoint has been completed. This enhancement causes this flag to persist until the start of the next local checkpoint, which causes these signals to be dropped as well. This change affects ndbd only; the issue described did not occur with ndbmtd. (Bug #75964, Bug #20567730)
  • When reading and copying transporter short signal data, it was possible for the data to be copied back to the same signal with overlapping memory. (Bug #75930, Bug #20553247)
  • NDB node takeover code made the assumption that there would be only one takeover record when starting a takeover, based on the further assumption that the master node could never perform copying of fragments. However, this is not the case in a system restart, where a master node can have stale data and so needs to perform such copying to bring itself up to date. (Bug #75919, Bug #20546899)
  • Cluster API: A scan operation, whether it is a single table scan or a query scan used by a pushed join, stores the result set in a buffer. The maximum size of this buffer is calculated and preallocated before the scan operation is started. This buffer may consume a considerable amount of memory; in some cases we observed a 2 GB buffer footprint in tests that executed 100 parallel scans with 2 single-threaded (ndbd) data nodes. This memory consumption was found to scale linearly with additional fragments.

New in MySQL Cluster 7.4.4 (Feb 26, 2015)

  • Conflict detection and resolution enhancements:
  • A reserved column name namespace NDB$ is now employed for exceptions table metacolumns, allowing an arbitrary subset of main table columns to be recorded, even if they are not part of the original table's primary key.
  • Recording the complete original primary key is no longer required, due to the fact that matching against exceptions table columns is now done by name and type only. It is now also possible for you to record values of columns which are not part of the main table's primary key in the exceptions table.
  • Read conflict detection is now possible. All rows read by the conflicting transaction are flagged, and logged in the exceptions table. Rows inserted in the same transaction are not included among the rows read or logged. This read tracking depends on the slave having an exclusive read lock which requires setting ndb_log_exclusive_reads in advance. See Read conflict detection and resolution, for more information and examples.
  • Existing exceptions tables remain supported. For more information, see Section 18.6.11, “MySQL Cluster Replication Conflict Resolution”.
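The read tracking described above requires enabling exclusive read locking on the slave ahead of time; a minimal sketch (see the documentation for the full setup):

```sql
-- Run on the slave mysqld before conflicting transactions are replicated:
SET GLOBAL ndb_log_exclusive_reads = 1;
```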
  • Role management in circular (“active-active”) replication:
  • When using a circular or “active-active” MySQL Cluster Replication topology, you can assign the role of primary or secondary to a given MySQL Cluster using the ndb_slave_conflict_role server system variable. This variable can take any one of the values PRIMARY, SECONDARY, PASS, or NULL. The default is NULL. This can be employed when failing over from a MySQL Cluster acting as primary.
  • A passthrough state (PASS) is also supported, in which the effects of any conflict resolution function are ignored. See the description of the ndb_slave_conflict_role variable, as well as Section 18.6.11, “MySQL Cluster Replication Conflict Resolution”, for more information.
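A failover sketch using this variable (a hypothetical sequence; depending on the release, the slave SQL thread may need to be stopped before the role can be changed):

```sql
-- On the cluster being demoted from primary:
SET GLOBAL ndb_slave_conflict_role = 'SECONDARY';

-- On the cluster taking over as primary:
SET GLOBAL ndb_slave_conflict_role = 'PRIMARY';
```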
  • Per-fragment memory usage reporting:
  • You can now obtain data about memory usage by individual MySQL Cluster fragments from the memory_per_fragment view, added in MySQL Cluster NDB 7.4.1 to the ndbinfo information database. For more information, see Section 18.5.10.17, “The ndbinfo memory_per_fragment Table”.
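A query such as the following (a sketch; column names as described in the ndbinfo documentation) lists the fragments using the most fixed-size memory:

```sql
SELECT fq_name, node_id, fragment_num, fixed_elem_alloc_bytes
FROM ndbinfo.memory_per_fragment
ORDER BY fixed_elem_alloc_bytes DESC
LIMIT 10;
```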
  • Node restart improvements - MySQL Cluster NDB 7.4 includes a number of improvements which decrease the time needed for data nodes to be restarted. These are described in the following list:
  • Memory that is allocated on node startup cannot be used until it has been touched, which causes the operating system to set aside the actual physical memory required. In previous versions of MySQL Cluster, the process of touching each page of allocated memory was single-threaded, which made it relatively time-consuming. This process has now been reimplemented with multithreading. In tests with 16 threads, touch times on the order of 3 times shorter than with a single thread were observed.
  • Increased parallelization of local checkpoints; in MySQL Cluster NDB 7.4, LCPs now support 32 fragments rather than 2 as before. This greatly increases utilization of CPU power that would otherwise go unused, and can make LCPs faster by up to a factor of 10; this speedup in turn can greatly improve node restart times.
  • Reporting on disk writes is provided by new ndbinfo tables disk_write_speed_base, disk_write_speed_aggregate, and disk_write_speed_aggregate_node, which provide information about the speed of disk writes for each LDM thread that is in use.
  • This release also adds the data node configuration parameters MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart, and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs and backups when the present node, another node, or no node is currently restarting.
  • These changes are intended to supersede configuration of disk writes using the DiskCheckpointSpeed and DiskCheckpointSpeedInRestart configuration parameters. These 2 parameters have now been deprecated, and are subject to removal in a future MySQL Cluster release.
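The new parameters are set in config.ini; a hypothetical fragment (values are illustrative only, not recommendations):

```ini
[ndbd default]
MinDiskWriteSpeed=10M
MaxDiskWriteSpeed=20M
MaxDiskWriteSpeedOtherNodeRestart=50M
MaxDiskWriteSpeedOwnRestart=200M
```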
  • Faster times for restoring a MySQL Cluster from backup have been obtained by replacing delayed signals, at a point found to be critical to performance, with normal (undelayed) signals. The elimination or replacement of these unnecessary delayed signals should noticeably reduce the amount of time required to back up a MySQL Cluster, or to restore a MySQL Cluster from backup.
  • Several internal methods relating to the NDB receive thread have been optimized, to increase the efficiency of SQL processing by NDB. The receiver thread at times may have to process several million received records per second, so it is critical that it not perform unnecessary work or waste resources when retrieving records from MySQL Cluster data nodes.
  • Improved reporting of MySQL Cluster start phases:
  • Reporting of MySQL Cluster start phases provides more frequent and specific printouts during startup.
  • NDB API: new Event API:
  • MySQL Cluster NDB 7.4.3 introduces an epoch-driven Event API that supersedes the earlier GCI-based model. The new version of the API also simplifies error detection and handling. These changes are realized in the NDB API by implementing a number of new methods for Ndb and NdbEventOperation, deprecating several other methods of both classes, and adding new type values to Event::TableEvent.
  • The event handling methods added to Ndb in MySQL Cluster NDB 7.4.3 are pollEvents2(), nextEvent2(), getHighestQueuedEpoch(), and getNextEventOpInEpoch2(). The Ndb methods pollEvents(), nextEvent(), getLatestGCI(), getGCIEventOperations(), isConsistent(), and isConsistentGCI() are deprecated beginning with the same release.
  • MySQL Cluster NDB 7.4.3 adds the NdbEventOperation event handling methods getEventType2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch(); it obsoletes getEventType(), getGCI(), getLatestGCI(), isOverrun(), hasError(), and clearError().
  • While some (but not all) of the new methods are direct replacements for deprecated methods, not all of the deprecated methods map to new ones. The Event Class provides information as to which old methods correspond to new ones.
  • Error handling using the new API is no longer handled using dedicated hasError() and clearError() methods, which are now deprecated (and thus subject to removal in a future release of MySQL Cluster). To support this change, the list of TableEvent types now includes the values TE_EMPTY (empty epoch), TE_INCONSISTENT (inconsistent epoch), and TE_OUT_OF_MEMORY (event buffer out of memory).
  • Improvements in event buffer management have also been made by implementing new get_eventbuffer_free_percent(), set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage() methods. Memory buffer usage can now be represented in application code using EventBufferMemoryUsage. The ndb_eventbuffer_free_percent system variable, also implemented in MySQL Cluster NDB 7.4, makes it possible for event buffer memory usage to be checked from MySQL client applications.
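From a MySQL client, the free-space threshold can be inspected and adjusted like this (a sketch):

```sql
SHOW GLOBAL VARIABLES LIKE 'ndb_eventbuffer_free_percent';
SET GLOBAL ndb_eventbuffer_free_percent = 30;
```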
  • Per-fragment operations information:
  • In MySQL Cluster NDB 7.4.3 and later, counts of various types of operations on a given fragment or fragment replica can be obtained easily using the operations_per_fragment table in the ndbinfo information database. This includes read, write, update, and delete operations, as well as scan and index operations performed by these. Information about operations refused, and about rows scanned and returned from a given fragment replica, is also shown in operations_per_fragment. This table also provides information about interpreted programs used as attribute values, and values returned by them.
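A query against this table might look like the following (a sketch; column names as described in the ndbinfo documentation, and 'mytable' is a hypothetical table name):

```sql
SELECT fq_name, fragment_num, tot_key_reads, tot_key_writes, tot_frag_scans
FROM ndbinfo.operations_per_fragment
WHERE fq_name LIKE '%mytable%';
```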

New in MySQL Cluster 7.3.5 (Apr 8, 2014)

  • Functionality Added or Changed:
  • Handling of LongMessageBuffer shortages and statistics has been improved as follows:
  • The default value of LongMessageBuffer has been increased from 4 MB to 64 MB.
  • When this resource is exhausted, a suitable informative message is now printed in the data node log describing possible causes of the problem and suggesting possible solutions.
  • LongMessageBuffer usage information is now shown in the ndbinfo.memoryusage table. See the description of this table for an example and additional information.
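For deployments that still exhaust the larger default, the parameter can be raised in config.ini (the value shown is illustrative):

```ini
[ndbd default]
LongMessageBuffer=128M
```

Current usage then appears in the ndbinfo.memoryusage table.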
  • Bugs Fixed:
  • Important Change: The server system variables ndb_index_cache_entries and ndb_index_stat_freq, which had been deprecated in a previous MySQL Cluster release series, have now been removed. (Bug #11746486, Bug #26673)
  • When an ALTER TABLE statement changed table schemas without causing a change in the table's partitioning, the new table definition did not copy the hash map from the old definition, but used the current default hash map instead. However, the table data was not reorganized according to the new hash map, which made some rows inaccessible using a primary key lookup if the two hash maps had incompatible definitions. To keep this situation from occurring, any ALTER TABLE that entails a hash map change now triggers a reorganization of the table. In addition, when copying a table definition in such cases, the hash map is now also copied. (Bug #18436558)
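For example, a schema change that leaves partitioning untouched was the kind of statement affected (table and column names are hypothetical):

```sql
ALTER TABLE t1 ADD COLUMN c2 INT;
-- Before the fix, the new definition of t1 could silently adopt the
-- current default hash map without reorganizing the existing rows.
```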
  • When certain queries generated signals having more than 18 data words prior to a node failure, such signals were not written correctly in the trace file. (Bug #18419554)
  • The ServerPort and TcpBind_INADDR_ANY configuration parameters were not included in the output of ndb_mgmd --print-full-config. (Bug #18366909)
  • After dropping an NDB table, neither the cluster log nor the output of the REPORT MemoryUsage command showed that the IndexMemory used by that table had been freed, even though the memory had in fact been deallocated. This issue was introduced in MySQL Cluster NDB 7.3.2. (Bug #18296810)
  • -DWITH_NDBMTD=0 did not function correctly, which could cause the build to fail on platforms such as ARM and Raspberry Pi which do not define the memory barrier functions required to compile ndbmtd. (Bug #18267919)
  • ndb_show_tables sometimes failed with the error message Unable to connect to management server and immediately terminated, without providing the underlying reason for the failure. To provide more useful information in such cases, this program now also prints the most recent error from the Ndb_cluster_connection object used to instantiate the connection. (Bug #18276327)
  • The block threads managed by the multi-threading scheduler communicate by placing signals in an out queue or job buffer which is set up between all block threads. This queue has a fixed maximum size, such that when it is filled up, the worker thread must wait for the consumer to drain the queue. In a highly loaded system, multiple threads could end up in a circular wait lock due to full out buffers, such that they were preventing each other from performing any useful work. This condition eventually led to the data node being declared dead and killed by the watchdog timer. To fix this problem, we detect situations in which a circular wait lock is about to begin, and cause buffers which are otherwise held in reserve to become available for signal processing by queues which are highly loaded. (Bug #18229003)
  • The ndb_mgm client START BACKUP command (see Commands in the MySQL Cluster Management Client) could experience occasional random failures when a ping was received prior to an expected BackupCompleted event. Now the connection established by this command is not checked until it has been properly set up. (Bug #18165088)
  • An issue found when compiling the MySQL Cluster software for Solaris platforms could lead to problems when using ThreadConfig on such systems. (Bug #18181656)
  • When creating a table with foreign key referencing an index in another table, it sometimes appeared possible to create the foreign key even if the order of the columns in the indexes did not match, due to the fact that an appropriate error was not always returned internally. This fix improves the error used internally to work in most cases; however, it is still possible for this situation to occur in the event that the parent index is a unique index. (Bug #18094360)
  • Updating parent tables of foreign keys used excessive scan resources and so required unusually high settings for MaxNoOfLocalScans and MaxNoOfConcurrentScans. (Bug #18082045)
  • Dropping a nonexistent foreign key on an NDB table (using, for example, ALTER TABLE) appeared to succeed. Now in such cases, the statement fails with a relevant error message, as expected. (Bug #17232212)
  • Data nodes running ndbmtd could stall while performing an online upgrade of a MySQL Cluster containing a great many tables from a version prior to NDB 7.2.5 to version 7.2.5 or later. (Bug #16693068)
  • Replication: Log rotation events could cause group_relay_log_pos to be moved forward incorrectly within a group. This meant that, when the transaction was retried, or if the SQL thread was stopped in the middle of a transaction following one or more log rotations (such that the transaction or group spanned multiple relay log files), part or all of the group was silently skipped. This issue has been addressed by correcting a problem in the logic used to avoid touching the coordinates of the SQL thread when updating the log position as part of a relay log rotation; previously, it was possible to update the SQL thread's coordinates in the middle of a group when not using a multi-threaded slave. (Bug #18482854)
  • Cluster Replication: A slave in MySQL Cluster Replication now monitors the progression of epoch numbers received from its immediate upstream master, which can both serve as a useful check on the low-level functioning of replication, and provide a warning in the event replication is restarted accidentally at an already-applied position.
  • Cluster API: When an NDB API client application received a signal with an invalid block or signal number, NDB provided only a very brief error message that did not accurately convey the nature of the problem. Now in such cases, appropriate printouts are provided when a bad signal or message is detected. In addition, the message length is now checked to make certain that it matches the size of the embedded signal. (Bug #18426180)
  • Cluster API: Refactoring that was performed in MySQL Cluster NDB 7.3.4 inadvertently introduced a dependency in Ndb.hpp on a file that is not included in the distribution, which caused NDB API applications to fail to compile. The dependency has been removed. (Bug #18293112, Bug #71803)
  • Cluster API: An NDB API application sends a scan query to a data node; the scan is processed by the transaction coordinator (TC). The TC forwards a LQHKEYREQ request to the appropriate LDM, and aborts the transaction if it does not receive a LQHKEYCONF response within the specified time limit. After the transaction is successfully aborted, the TC sends a TCROLLBACKREP to the NDBAPI client, and the NDB API client processes this message by cleaning up any Ndb objects associated with the transaction.
  • Cluster API: The example ndbapi-examples/ndbapi_blob_ndbrecord/main.cpp included an internal header file (ndb_global.h) not found in the MySQL Cluster binary distribution. The example now uses stdlib.h and string.h instead of this file. (Bug #18096866, Bug #71409)
  • Cluster API: When Dictionary::dropTable() attempted (as a normal part of its internal operations) to drop an index used by a foreign key constraint, this led to data node failure. Now in such cases, dropTable() drops all foreign keys on the table being dropped, whether this table acts as a parent table, child table, or both. (Bug #18069680)
  • References: See also Bug #17591531.
  • Cluster API: ndb_restore could sometimes report Error 701 System busy with other schema operation unnecessarily when restoring in parallel. (Bug #17916243)

New in MySQL Cluster 7.3.3 (Dec 4, 2013)

  • Functionality Added or Changed:
  • The length of time a management node waits for a heartbeat message from another management node is now configurable using the HeartbeatIntervalMgmdMgmd management node configuration parameter added in this release. The connection is considered dead after 3 missed heartbeats. The default value is 1500 milliseconds, or a timeout of approximately 6000 ms. (Bug #17807768, Bug #16426805)
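The new parameter is set in the management node section of config.ini (the value shown matches the default, in milliseconds):

```ini
[ndb_mgmd default]
HeartbeatIntervalMgmdMgmd=1500
```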
  • The MySQL Cluster Auto-Installer now generates a my.cnf file for each mysqld in the cluster before starting it. For more information, see Using the MySQL Cluster Auto-Installer. (Bug #16994782)
  • BLOB and TEXT columns are now reorganized by the ALTER ONLINE TABLE ... REORGANIZE PARTITION statement. (Bug #13714148)
  • Bugs Fixed:
  • ndbmemcache:
  • When an attempt to start memcached with a cache_size larger than that of the available memory and with preallocate=true failed, the error message provided only a numeric code, and did not indicate what the actual source of the error was. (Bug #17509293, Bug #70403)
  • The CMakeLists.txt files for ndbmemcache wrote into the source tree, preventing out-of-source builds. (Bug #14650456)
  • ndbmemcache did not handle passed-in BINARY values correctly.
  • Performance:
  • In a number of cases found in various locations in the MySQL Cluster codebase, unnecessary iterations were performed; this was caused by failing to break out of a repeating control structure after a test condition had been met. This community-contributed fix removes the unneeded repetitions by supplying the missing breaks. (Bug #16904243, Bug #69392, Bug #16904338, Bug #69394, Bug #16778417, Bug #69171, Bug #16778494, Bug #69172, Bug #16798410, Bug #69207, Bug #16801489, Bug #69215, Bug #16904266, Bug #69393)
  • Packaging:
  • Portions of the documentation specific to MySQL Cluster and the NDB storage engine were not included when installing from RPMs. (Bug #16303451)
  • File system errors occurring during a local checkpoint could sometimes cause an LCP to hang with no obvious cause when they were not handled correctly. Now in such cases, such errors always cause the node to fail. Note that the LQH block always shuts down the node when a local checkpoint fails; the change here is to make likely node failure occur more quickly and to make the original file system error more visible. (Bug #19691443)
  • Trying to restore to a table having a BLOB column in a different position from that of the original one caused ndb_restore --restore_data to fail. (Bug #17395298)
  • It was not possible to start MySQL Cluster processes created by the Auto-Installer on a Windows host running freeSSHd. (Bug #17269626)
  • ndb_restore could abort during the last stages of a restore using attribute promotion or demotion into an existing table. This could happen if a converted attribute was nullable and the backup had been run on active database. (Bug #17275798)
  • ALTER ONLINE TABLE ... REORGANIZE PARTITION failed when run against a table having or using a reference to a foreign key. (Bug #17036744, Bug #69619)
  • The DBUTIL data node block is now less strict about the order in which it receives certain messages from other nodes. (Bug #17052422)
  • TUPKEYREQ signals are used to read data from the tuple manager block (DBTUP), and are used for all types of data access, especially for scans which read many rows. A TUPKEYREQ specifies a series of 'columns' to be read, which can be either single columns in a specific table, or pseudocolumns, two of which—READ_ALL and READ_PACKED—are aliases to read all columns in a table, or some subset of these columns. Pseudocolumns are used by modern NDB API applications as they require less space in the TUPKEYREQ to specify columns to be read, and can return the data in a more compact (packed) format.
  • This fix moves the creation and initialization of on-stack Signal objects to only those pseudocolumn reads which need to EXECUTE_DIRECT to other block instances, rather than for every read. In addition, the size of an on-stack signal is now varied to suit the requirements of each pseudocolumn, so that only reads of the INDEX_STAT pseudocolumn now require initialization (and 3KB memory each time this is performed). (Bug #17009502)
  • A race condition could sometimes occur when trying to lock receive threads to cores. (Bug #17009393)
  • RealTimeScheduler did not work correctly with data nodes running ndbmtd. (Bug #16961971)
  • Results from joins using a WHERE with an ORDER BY ... DESC clause were not sorted properly; the DESC keyword in such cases was effectively ignored. (Bug #16999886, Bug #69528)
  • The Windows error ERROR_FILE_EXISTS was not recognized by NDB, which treated it as an unknown error. (Bug #16970960)
  • Maintenance and checking of parent batch completion in the SPJ block of the NDB kernel was reimplemented. Among other improvements, the completion state of all ancestor nodes in the tree is now preserved. (Bug #16925513)
  • Dropping a column, which was not itself a foreign key, from an NDB table having foreign keys failed with ER_TABLE_DEF_CHANGED. (Bug #16912989)
  • The LCP fragment scan watchdog periodically checks for lack of progress in a fragment scan performed as part of a local checkpoint, and shuts down the node if there is no progress after a given amount of time has elapsed. This interval, formerly hard-coded as 60 seconds, can now be configured using the LcpScanProgressTimeout data node configuration parameter added in this release.
  • This configuration parameter sets the maximum time the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node. The default is 60 seconds, which provides backward compatibility with previous releases.
  • You can disable the LCP fragment scan watchdog by setting this parameter to 0. (Bug #16630410)
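A config.ini sketch (the value shown is the default, in seconds; 0 disables the watchdog):

```ini
[ndbd default]
LcpScanProgressTimeout=60
```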
  • Added the ndb_error_reporter options --connection-timeout, which makes it possible to set a timeout for connecting to nodes, --dry-scp, which disables scp connections to remote hosts, and --skip-nodegroup, which skips all nodes in a given node group. (Bug #16602002)
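An illustrative invocation combining the new options (the config file path and node group are hypothetical):

```shell
ndb_error_reporter /var/lib/mysql-cluster/config.ini \
    --connection-timeout=10 --dry-scp --skip-nodegroup=1
```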
  • ndb_mgm treated backup IDs provided to ABORT BACKUP commands as signed values, so that backup IDs greater than 2^31 wrapped around to negative values. This issue also affected out-of-range backup IDs, which wrapped around to negative values instead of causing errors as expected in such cases. The backup ID is now treated as an unsigned value, and ndb_mgm now performs proper range checking for backup ID values greater than MAX_BACKUPS (2^32). (Bug #16585497, Bug #68798)
  • After issuing START BACKUP id WAIT STARTED, if id had already been used for a backup ID, an error caused by the duplicate ID occurred as expected, but following this, the START BACKUP command never completed. (Bug #16593604, Bug #68854)
  • When trying to specify a backup ID greater than the maximum allowed, the value was silently truncated. (Bug #16585455, Bug #68796)
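The signed reinterpretation behind the backup ID wraparound described above can be sketched as follows (illustrative only, not ndb_mgm code):

```python
import struct

def as_signed_32(value: int) -> int:
    """Reinterpret the low 32 bits of value as a signed 32-bit integer,
    mimicking the old treatment of backup IDs as signed values."""
    return struct.unpack("<i", struct.pack("<I", value & 0xFFFFFFFF))[0]

# A backup ID of 2^31 became the most negative 32-bit value:
print(as_signed_32(2**31))  # -2147483648
```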
  • The unexpected shutdown of another data node as a starting data node received its node ID caused the latter to hang in Start Phase 1. (Bug #16007980)
  • The NDB receive thread waited unnecessarily for additional job buffers to become available when receiving data. This caused the receive mutex to be held during this wait, which could result in a busy wait when the receive thread was running with real-time priority.
  • This fix also handles the case where a negative return value from the initial check of the job buffer by the receive thread prevented further execution of data reception, which could possibly lead to communication blockage or configured ReceiveBufferMemory underutilization. (Bug #15907515)
  • When the available job buffers for a given thread fell below the critical threshold, the internal multi-threading job scheduler waited for job buffers for incoming rather than outgoing signals to become available, which meant that the scheduler waited the maximum timeout (1 millisecond) before resuming execution. (Bug #15907122)
  • SELECT ... WHERE ... LIKE from an NDB table could return incorrect results when using engine_condition_pushdown=ON. (Bug #15923467, Bug #67724)
  • Setting lower_case_table_names to 1 or 2 on Windows systems caused ALTER TABLE ... ADD FOREIGN KEY statements against tables with names containing uppercase letters to fail with Error 155, No such table: '(null)'. (Bug #14826778, Bug #67354)
  • When a node fails, the Distribution Handler (DBDIH kernel block) takes steps together with the Transaction Coordinator (DBTC) to make sure that all ongoing transactions involving the failed node are taken over by a surviving node and either committed or aborted. Transactions taken over which are then committed belong in the epoch that is current at the time the node failure occurs, so the surviving nodes must keep this epoch available until the transaction takeover is complete. This is needed to maintain ordering between epochs.
  • A problem was encountered in the mechanism intended to keep the current epoch open which led to a race condition between this mechanism and that normally used to declare the end of an epoch. This could cause the current epoch to be closed prematurely, leading to failure of one or more surviving data nodes. (Bug #14623333, Bug #16990394)
  • When using dynamic listening ports for accepting connections from API nodes, the port numbers were reported to the management server serially. This required a round trip for each API node, causing the time required for data nodes to connect to the management server to grow linearly with the number of API nodes. To correct this problem, each data node now reports all dynamic ports at once. (Bug #12593774)
  • Formerly, the node used as the coordinator or leader for distributed decision making between nodes (also known as the DICT manager—see The DBDICT Block) was indicated in the output of the ndb_mgm client SHOW command as the “master” node, although this node has no relationship to a master server in MySQL Replication. (It should also be noted that it is not necessary to know which node is the leader except when debugging NDBCLUSTER source code.) To avoid possible confusion, this label has been removed, and the leader node is now indicated in SHOW command output using an asterisk (*) character. (Bug #11746263, Bug #24880)
  • ndb_error_reporter did not support the --help option. (Bug #11756666, Bug #48606)
  • ABORT BACKUP in the ndb_mgm client (see Commands in the MySQL Cluster Management Client) took an excessive amount of time to return (approximately as long as the backup would have required to complete, had it not been aborted), and failed to remove the files that had been generated by the aborted backup. (Bug #68853, Bug #17719439)
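  • As a sketch of the commands involved (the backup ID 3 here is a placeholder; use the ID reported when the backup was started), an aborted backup session in the management client might look like:

```shell
# Start a backup from the command line using the management client's
# -e option, then abort it by the backup ID it reports.
ndb_mgm -e "START BACKUP"
ndb_mgm -e "ABORT BACKUP 3"
```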
  • Program execution failed to break out of a loop after meeting a desired condition in a number of internal methods, performing unneeded work in all cases where this occurred. (Bug #69610, Bug #69611, Bug #69736, Bug #17030606, Bug #17030614, Bug #17160263)
  • Attribute promotion and demotion when restoring data to NDB tables using ndb_restore --restore-data with the --promote-attributes and --lossy-conversions options has been improved as follows: Columns of types CHAR and VARCHAR can now be promoted to BINARY and VARBINARY, and columns of the latter two types can be demoted to one of the first two. Note that converted character data is not checked to conform to any character set. Any of the types CHAR, VARCHAR, BINARY, and VARBINARY can now be promoted to TEXT or BLOB. When performing such promotions, the only other sort of type conversion that can be performed at the same time is between character types and binary types.
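  • A minimal sketch of such a restore invocation follows; the node ID, backup ID, and backup path are placeholders for illustration:

```shell
# Hypothetical ndb_restore run: restore data for node 1 from backup 3,
# allowing lossless promotions (e.g. VARCHAR to TEXT) via
# --promote-attributes and lossy demotions (e.g. VARBINARY to BINARY)
# via --lossy-conversions.
ndb_restore --nodeid=1 --backupid=3 --restore-data \
    --promote-attributes --lossy-conversions \
    --backup-path=/path/to/BACKUP/BACKUP-3
```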
  • Cluster Replication:
  • Trying to use a stale .frm file and encountering a mismatch between table definitions could cause mysqld to make errors when writing to the binary log. (Bug #17250994)
  • Replaying a binary log that had been written by a mysqld from a MySQL Server distribution (and not from a MySQL Cluster distribution), and that contained DML statements, on a MySQL Cluster SQL node could lead to failure of the SQL node. (Bug #16742250)
  • Cluster API:
  • The Event::setTable() method now supports a pointer or a reference to a table as its required argument. If a null table pointer is used, the method now returns -1 to make it clear that this is what has occurred. (Bug #16329082)