Chapter 22 MySQL NDB Cluster 8.0


Table of Contents

22.1 NDB Cluster Overview
22.1.1 NDB Cluster Core Concepts
22.1.2 NDB Cluster Nodes, Node Groups, Replicas, and Partitions
22.1.3 NDB Cluster Hardware, Software, and Networking Requirements
22.1.4 What is New in NDB Cluster
22.1.5 Options, Variables, and Parameters Added, Deprecated or Removed in NDB 8.0
22.1.6 MySQL Server Using InnoDB Compared with NDB Cluster
22.1.7 Known Limitations of NDB Cluster
22.2 NDB Cluster Installation
22.2.1 The NDB Cluster Auto-Installer
22.2.2 Installation of NDB Cluster on Linux
22.2.3 Installing NDB Cluster on Windows
22.2.4 Initial Configuration of NDB Cluster
22.2.5 Initial Startup of NDB Cluster
22.2.6 NDB Cluster Example with Tables and Data
22.2.7 Safe Shutdown and Restart of NDB Cluster
22.2.8 Upgrading and Downgrading NDB Cluster
22.3 Configuration of NDB Cluster
22.3.1 Quick Test Setup of NDB Cluster
22.3.2 Overview of NDB Cluster Configuration Parameters, Options, and Variables
22.3.3 NDB Cluster Configuration Files
22.3.4 Using High-Speed Interconnects with NDB Cluster
22.4 NDB Cluster Programs
22.4.1 ndbd — The NDB Cluster Data Node Daemon
22.4.2 ndbinfo_select_all — Select From ndbinfo Tables
22.4.3 ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)
22.4.4 ndb_mgmd — The NDB Cluster Management Server Daemon
22.4.5 ndb_mgm — The NDB Cluster Management Client
22.4.6 ndb_blob_tool — Check and Repair BLOB and TEXT columns of NDB Cluster Tables
22.4.7 ndb_config — Extract NDB Cluster Configuration Information
22.4.8 ndb_delete_all — Delete All Rows from an NDB Table
22.4.9 ndb_desc — Describe NDB Tables
22.4.10 ndb_drop_index — Drop Index from an NDB Table
22.4.11 ndb_drop_table — Drop an NDB Table
22.4.12 ndb_error_reporter — NDB Error-Reporting Utility
22.4.13 ndb_import — Import CSV Data Into NDB
22.4.14 ndb_index_stat — NDB Index Statistics Utility
22.4.15 ndb_move_data — NDB Data Copy Utility
22.4.16 ndb_perror — Obtain NDB Error Message Information
22.4.17 ndb_print_backup_file — Print NDB Backup File Contents
22.4.18 ndb_print_file — Print NDB Disk Data File Contents
22.4.19 ndb_print_frag_file — Print NDB Fragment List File Contents
22.4.20 ndb_print_schema_file — Print NDB Schema File Contents
22.4.21 ndb_print_sys_file — Print NDB System File Contents
22.4.22 ndb_redo_log_reader — Check and Print Content of Cluster Redo Log
22.4.23 ndb_restore — Restore an NDB Cluster Backup
22.4.24 ndb_select_all — Print Rows from an NDB Table
22.4.25 ndb_select_count — Print Row Counts for NDB Tables
22.4.26 ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster
22.4.27 ndb_show_tables — Display List of NDB Tables
22.4.28 ndb_size.pl — NDBCLUSTER Size Requirement Estimator
22.4.29 ndb_top — View CPU usage information for NDB threads
22.4.30 ndb_waiter — Wait for NDB Cluster to Reach a Given Status
22.4.31 Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs
22.5 Management of NDB Cluster
22.5.1 Summary of NDB Cluster Start Phases
22.5.2 Commands in the NDB Cluster Management Client
22.5.3 Online Backup of NDB Cluster
22.5.4 MySQL Server Usage for NDB Cluster
22.5.5 Performing a Rolling Restart of an NDB Cluster
22.5.6 Event Reports Generated in NDB Cluster
22.5.7 NDB Cluster Log Messages
22.5.8 NDB Cluster Single User Mode
22.5.9 Quick Reference: NDB Cluster SQL Statements
22.5.10 ndbinfo: The NDB Cluster Information Database
22.5.11 INFORMATION_SCHEMA Tables for NDB Cluster
22.5.12 NDB Cluster Security Issues
22.5.13 NDB Cluster Disk Data Tables
22.5.14 Online Operations with ALTER TABLE in NDB Cluster
22.5.15 Adding NDB Cluster Data Nodes Online
22.5.16 Distributed MySQL Privileges for NDB Cluster
22.5.17 NDB API Statistics Counters and Variables
22.6 NDB Cluster Replication
22.6.1 NDB Cluster Replication: Abbreviations and Symbols
22.6.2 General Requirements for NDB Cluster Replication
22.6.3 Known Issues in NDB Cluster Replication
22.6.4 NDB Cluster Replication Schema and Tables
22.6.5 Preparing the NDB Cluster for Replication
22.6.6 Starting NDB Cluster Replication (Single Replication Channel)
22.6.7 Using Two Replication Channels for NDB Cluster Replication
22.6.8 Implementing Failover with NDB Cluster Replication
22.6.9 NDB Cluster Backups With NDB Cluster Replication
22.6.10 NDB Cluster Replication: Multi-Master and Circular Replication
22.6.11 NDB Cluster Replication Conflict Resolution
22.7 NDB Cluster Release Notes

MySQL NDB Cluster is a high-availability, high-redundancy version of MySQL adapted for the distributed computing environment. It uses the NDB storage engine (also known as NDBCLUSTER ) to enable running several computers with MySQL servers and other software in a cluster. NDB Cluster 8.0, available as a Developer Milestone Release (DMR) beginning with version 8.0.13, incorporates version 8.0 of the NDB storage engine. NDB Cluster 7.6, the current GA release, uses version 7.6 of NDB . The previous GA releases NDB Cluster 7.5 and NDB Cluster 7.4, which remain available for use in production, incorporate NDB versions 7.5 and 7.4, respectively. NDB Cluster 7.2, which uses version 7.2 of the NDB storage engine, is a previous GA release that is still maintained; 7.2 users are encouraged to upgrade to NDB 7.5 or NDB 7.6. NDB 7.1 and earlier release series are no longer supported or maintained.

Support for the NDB storage engine is not included in standard MySQL Server 8.0 binaries built by Oracle. Instead, users of NDB Cluster binaries from Oracle should upgrade to the most recent binary release of NDB Cluster for supported platforms—these include RPMs that should work with most Linux distributions. NDB Cluster 8.0 users who build from source should use the sources provided for MySQL 8.0 and build with the options required to provide NDB support. (Locations where the sources can be obtained are listed later in this section.)

Important

MySQL NDB Cluster does not support InnoDB cluster, which must be deployed using MySQL Server 8.0 with the InnoDB storage engine as well as additional applications that are not included in the NDB Cluster distribution. MySQL Server 8.0 binaries cannot be used with MySQL NDB Cluster. For more information about deploying and using InnoDB cluster, see Chapter 21, InnoDB Cluster . Section 22.1.6, “MySQL Server Using InnoDB Compared with NDB Cluster” , discusses differences between the NDB and InnoDB storage engines.

This chapter contains information about NDB Cluster 8.0 releases through 8.0.15-ndb-8.0.15, currently available as a Developer Preview. NDB Cluster 7.6 is the latest General Availability release, and is recommended for new deployments; for information about NDB Cluster 7.6, see What is New in NDB Cluster 7.6 . For similar information about NDB Cluster 7.5, see What is New in NDB Cluster 7.5 . NDB Cluster 7.4 and 7.3 are previous GA releases still supported in production; see MySQL NDB Cluster 7.3 and NDB Cluster 7.4 . NDB Cluster 7.2 is a previous GA release series which is still maintained, although we recommend that new deployments for production use NDB Cluster 7.6. For more information about NDB Cluster 7.2, see MySQL NDB Cluster 7.2 .

Supported Platforms. NDB Cluster is currently available and supported on a number of platforms. For exact levels of support available on specific combinations of operating system versions, operating system distributions, and hardware platforms, please refer to https://www.mysql.com/support/supportedplatforms/cluster.html .

Availability. NDB Cluster binary and source packages are available for supported platforms from https://dev.mysql.com/downloads/cluster/ .

NDB Cluster release numbers. NDB 8.0 follows the same release pattern as the MySQL Server 8.0 series of releases, beginning with MySQL 8.0.13 and MySQL NDB Cluster 8.0.13. In this Manual and other MySQL documentation, we identify these and later NDB Cluster releases employing a version number that begins with NDB . This version number is that of the NDBCLUSTER storage engine used in the NDB 8.0 release, and is the same as the MySQL 8.0 server version on which the NDB Cluster 8.0 release is based.

Version strings used in NDB Cluster software. The version string displayed by the mysql client supplied with the MySQL NDB Cluster distribution uses this format:

mysql-mysql_server_version-cluster
                

mysql_server_version represents the version of the MySQL Server on which the NDB Cluster release is based. For all NDB Cluster 8.0 releases, this is 8.0.n, where n is the release number. Building from source using -DWITH_NDBCLUSTER or the equivalent adds the -cluster suffix to the version string. (See Section 22.2.2.4, “Building NDB Cluster from Source on Linux” , and Section 22.2.3.2, “Compiling and Installing NDB Cluster from Source on Windows” .) You can see this format used in the mysql client, as shown here:

shell> mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 8.0.15-cluster Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> SELECT VERSION()\G
*************************** 1. row ***************************
VERSION(): 8.0.15-cluster
1 row in set (0.00 sec)
                

The first release of NDB Cluster using MySQL 8.0 was NDB 8.0.13, which used MySQL 8.0.13.

The version string displayed by other NDB Cluster programs not normally included with the MySQL 8.0 distribution uses this format:

mysql-mysql_server_version ndb-ndb_engine_version
                

mysql_server_version represents the version of the MySQL Server on which the NDB Cluster release is based. For all NDB Cluster 8.0 releases, this is 8.0.n, where n is the release number. ndb_engine_version is the version of the NDB storage engine used by this release of the NDB Cluster software. For all NDB 8.0 releases, this number is the same as the MySQL Server version. You can see this format used in the output of the SHOW command in the ndb_mgm client, like this:

ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.0.10.6  (mysql-8.0.15 ndb-8.0.15, Nodegroup: 0, *)
id=2    @10.0.10.8  (mysql-8.0.15 ndb-8.0.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=3    @10.0.10.2  (mysql-8.0.15 ndb-8.0.15)
[mysqld(API)]   2 node(s)
id=4    @10.0.10.10  (mysql-8.0.15 ndb-8.0.15)
id=5 (not connected, accepting connect from any host)
                

Compatibility with standard MySQL 8.0 releases. While many standard MySQL schemas and applications can work using NDB Cluster, it is also true that unmodified applications and database schemas may be slightly incompatible or have suboptimal performance when run using NDB Cluster (see Section 22.1.7, “Known Limitations of NDB Cluster” ). Most of these issues can be overcome, but this also means that you are very unlikely to be able to switch an existing application datastore—that currently uses, for example, MyISAM or InnoDB —to use the NDB storage engine without allowing for the possibility of changes in schemas, queries, and applications. A mysqld compiled without NDB support (that is, built without -DWITH_NDBCLUSTER_STORAGE_ENGINE or its alias -DWITH_NDBCLUSTER ) cannot function as a drop-in replacement for a mysqld that is built with it.

NDB Cluster development source trees. NDB Cluster development trees can also be accessed from https://github.com/mysql/mysql-server .

The NDB Cluster development sources maintained at https://github.com/mysql/mysql-server are licensed under the GPL. For information about obtaining MySQL sources using Git and building them yourself, see Section 2.9.3, “Installing MySQL Using a Development Source Tree” .

Note

As with MySQL Server 8.0, NDB Cluster 8.0 releases are built using CMake .

NDB Cluster 7.5 and NDB Cluster 7.6 are available as General Availability (GA) releases; NDB 7.6 is recommended for new deployments. NDB Cluster 7.4 and NDB Cluster 7.3 are previous GA releases which are still supported in production. NDB 7.2 is a previous GA release series which is still maintained; it is no longer recommended for new deployments. For an overview of major features added in NDB 7.6, see What is New in NDB Cluster 7.6 . For similar information about NDB Cluster 7.5, see What is New in NDB Cluster 7.5 . For information about previous NDB Cluster releases, see MySQL NDB Cluster 7.3 and NDB Cluster 7.4 , and MySQL NDB Cluster 7.2 .

The contents of this chapter are subject to revision as NDB Cluster continues to evolve. Additional information regarding NDB Cluster can be found on the MySQL website at http://www.mysql.com/products/cluster/ .

Additional Resources. More information about NDB Cluster can be found on the MySQL website at the URL given above, as well as in the NDB Cluster community forums and mailing lists.

22.1 NDB Cluster Overview

NDB Cluster is a technology that enables clustering of in-memory databases in a shared-nothing system. The shared-nothing architecture enables the system to work with very inexpensive hardware, and with a minimum of specific requirements for hardware or software.

NDB Cluster is designed not to have any single point of failure. In a shared-nothing system, each component is expected to have its own memory and disk, and the use of shared storage mechanisms such as network shares, network file systems, and SANs is not recommended or supported.

NDB Cluster integrates the standard MySQL server with an in-memory clustered storage engine called NDB (which stands for N etwork D ata B ase ). In our documentation, the term NDB refers to the part of the setup that is specific to the storage engine, whereas MySQL NDB Cluster refers to the combination of one or more MySQL servers with the NDB storage engine.

An NDB Cluster consists of a set of computers, known as hosts , each running one or more processes. These processes, known as nodes , may include MySQL servers (for access to NDB data), data nodes (for storage of the data), one or more management servers, and possibly other specialized data access programs. The relationship of these components in an NDB Cluster is shown here:

Figure 22.1 NDB Cluster Components

In this cluster, three MySQL servers (mysqld program) are SQL nodes that provide access to four data nodes (ndbd program) that store data. The SQL nodes and data nodes are under the control of an NDB management server (ndb_mgmd program). Various clients and APIs can interact with the SQL nodes: the mysql client, the MySQL C API, PHP, Connector/J, and Connector/NET. Custom clients can also be created using the NDB API to interact with the data nodes or the NDB management server. The NDB management client (ndb_mgm program) interacts with the NDB management server.

All these programs work together to form an NDB Cluster (see Section 22.4, “NDB Cluster Programs” ). When data is stored by the NDB storage engine, the tables (and table data) are stored in the data nodes. Such tables are directly accessible from all other MySQL servers (SQL nodes) in the cluster. Thus, in a payroll application storing data in a cluster, if one application updates the salary of an employee, all other MySQL servers that query this data can see this change immediately.

Although an NDB Cluster SQL node uses the mysqld server daemon, it differs in a number of critical respects from the mysqld binary supplied with the MySQL 8.0 distributions, and the two versions of mysqld are not interchangeable.

In addition, a MySQL server that is not connected to an NDB Cluster cannot use the NDB storage engine and cannot access any NDB Cluster data.

The data stored in the data nodes for NDB Cluster can be mirrored; the cluster can handle failures of individual data nodes with no other impact than that a small number of transactions are aborted due to losing the transaction state. Because transactional applications are expected to handle transaction failure, this should not be a source of problems.

Individual nodes can be stopped and restarted, and can then rejoin the system (cluster). Rolling restarts (in which all nodes are restarted in turn) are used in making configuration changes and software upgrades (see Section 22.5.5, “Performing a Rolling Restart of an NDB Cluster” ). Rolling restarts are also used as part of the process of adding new data nodes online (see Section 22.5.15, “Adding NDB Cluster Data Nodes Online” ). For more information about data nodes, how they are organized in an NDB Cluster, and how they handle and store NDB Cluster data, see Section 22.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions” .

Backing up and restoring NDB Cluster databases can be done using the NDB -native functionality found in the NDB Cluster management client and the ndb_restore program included in the NDB Cluster distribution. For more information, see Section 22.5.3, “Online Backup of NDB Cluster” , and Section 22.4.23, “ ndb_restore — Restore an NDB Cluster Backup” . You can also use the standard MySQL functionality provided for this purpose in mysqldump and the MySQL server. See Section 4.5.4, “ mysqldump — A Database Backup Program” , for more information.
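
As a minimal illustration, a native online backup can be started from the NDB Cluster management client as shown here; the backup files are written on each data node under the directory given by its BackupDataDir configuration parameter:

ndb_mgm> START BACKUP

Restoring such a backup is then performed with ndb_restore, as described in the sections cited above.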

NDB Cluster nodes can employ different transport mechanisms for inter-node communications; TCP/IP over standard 100 Mbps or faster Ethernet hardware is used in most real-world deployments.

22.1.1 NDB Cluster Core Concepts

NDBCLUSTER (also known as NDB ) is an in-memory storage engine offering high-availability and data-persistence features.

The NDBCLUSTER storage engine can be configured with a range of failover and load-balancing options, but it is easiest to start with the storage engine at the cluster level. NDB Cluster's NDB storage engine contains a complete set of data, dependent only on other data within the cluster itself.

The Cluster portion of NDB Cluster is configured independently of the MySQL servers. In an NDB Cluster, each part of the cluster is considered to be a node .

Note

In many contexts, the term node is used to indicate a computer, but when discussing NDB Cluster it means a process . It is possible to run multiple nodes on a single computer; for a computer on which one or more cluster nodes are being run we use the term cluster host .

There are three types of cluster nodes, and in a minimal NDB Cluster configuration, there will be at least three nodes, one of each of these types:

  • Management node : The role of this type of node is to manage the other nodes within the NDB Cluster, performing such functions as providing configuration data, starting and stopping nodes, and running backups. Because this node type manages the configuration of the other nodes, a node of this type should be started first, before any other node. An MGM node is started with the command ndb_mgmd .

  • Data node : This type of node stores cluster data. There are as many data nodes as there are replicas, times the number of fragments (see Section 22.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions” ). For example, with two replicas, each having two fragments, you need four data nodes. One replica is sufficient for data storage, but provides no redundancy; therefore, it is recommended to have 2 (or more) replicas to provide redundancy, and thus high availability. A data node is started with the command ndbd (see Section 22.4.1, “ ndbd — The NDB Cluster Data Node Daemon” ) or ndbmtd (see Section 22.4.3, “ ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)” ).

    NDB Cluster tables are normally stored completely in memory rather than on disk (this is why we refer to NDB Cluster as an in-memory database). However, some NDB Cluster data can be stored on disk; see Section 22.5.13, “NDB Cluster Disk Data Tables” , for more information.

  • SQL node : This is a node that accesses the cluster data. In the case of NDB Cluster, an SQL node is a traditional MySQL server that uses the NDBCLUSTER storage engine. An SQL node is a mysqld process started with the --ndbcluster and --ndb-connectstring options, which are explained elsewhere in this chapter, possibly with additional MySQL server options as well. (A minimal startup example is shown following this list.)

    An SQL node is actually just a specialized type of API node , which designates any application which accesses NDB Cluster data. Another example of an API node is the ndb_restore utility that is used to restore a cluster backup. It is possible to write such applications using the NDB API. For basic information about the NDB API, see Getting Started with the NDB API .
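
As a minimal startup sketch (the management server address shown is hypothetical), an SQL node can be started from the command line like this:

shell> mysqld --ndbcluster --ndb-connectstring=192.168.0.10:1186 &

or equivalently by placing these lines in the my.cnf file read by the SQL node:

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.10:1186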

Important

It is not realistic to expect to employ a three-node setup in a production environment. Such a configuration provides no redundancy; to benefit from NDB Cluster's high-availability features, you must use multiple data and SQL nodes. The use of multiple management nodes is also highly recommended.

For a brief introduction to the relationships between nodes, node groups, replicas, and partitions in NDB Cluster, see Section 22.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions” .

Configuration of a cluster involves configuring each individual node in the cluster and setting up individual communication links between nodes. NDB Cluster is currently designed with the intention that data nodes are homogeneous in terms of processor power, memory space, and bandwidth. In addition, to provide a single point of configuration, all configuration data for the cluster as a whole is located in one configuration file.
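
The following is a minimal sketch of such a config.ini file, describing one management node, two data nodes, and one SQL node; all host names shown are hypothetical examples only:

[ndbd default]
# Number of replicas of each partition; 2 provides redundancy
NoOfReplicas=2

[ndb_mgmd]
# Management node host
HostName=192.168.0.10

[ndbd]
# First data node host
HostName=192.168.0.20

[ndbd]
# Second data node host
HostName=192.168.0.30

[mysqld]
# SQL node; may connect from any host if HostName is omitted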

The management server manages the cluster configuration file and the cluster log. Each node in the cluster retrieves the configuration data from the management server, and so requires a way to determine where the management server resides. When interesting events occur in the data nodes, the nodes transfer information about these events to the management server, which then writes the information to the cluster log.

In addition, there can be any number of cluster client processes or applications. These include standard MySQL clients, NDB -specific API programs, and management clients. These are described in the next few paragraphs.

Standard MySQL clients. NDB Cluster can be used with existing MySQL applications written in PHP, Perl, C, C++, Java, Python, Ruby, and so on. Such client applications send SQL statements to and receive responses from MySQL servers acting as NDB Cluster SQL nodes in much the same way that they interact with standalone MySQL servers.

MySQL clients using an NDB Cluster as a data source can be modified to take advantage of the ability to connect with multiple MySQL servers to achieve load balancing and failover. For example, Java clients using Connector/J 5.0.6 and later can use jdbc:mysql:loadbalance:// URLs (improved in Connector/J 5.1.7) to achieve load balancing transparently; for more information about using Connector/J with NDB Cluster, see Using Connector/J with NDB Cluster .
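
For example, a load-balancing connection URL spreading connections across two hypothetical SQL nodes might look like this:

jdbc:mysql:loadbalance://sqlnode1:3306,sqlnode2:3306/mydb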

NDB client programs. Client programs can be written that access NDB Cluster data directly from the NDBCLUSTER storage engine, bypassing any MySQL Servers that may be connected to the cluster, using the NDB API , a high-level C++ API. Such applications may be useful for specialized purposes where an SQL interface to the data is not needed. For more information, see The NDB API .

NDB -specific Java applications can also be written for NDB Cluster using the NDB Cluster Connector for Java . This NDB Cluster Connector includes ClusterJ , a high-level database API similar to object-relational mapping persistence frameworks such as Hibernate and JPA that connects directly to NDBCLUSTER , and so does not require access to a MySQL Server. Support is also provided in NDB Cluster for ClusterJPA , an OpenJPA implementation for NDB Cluster that leverages the strengths of ClusterJ and JDBC; ID lookups and other fast operations are performed using ClusterJ (bypassing the MySQL Server), while more complex queries that can benefit from MySQL's query optimizer are sent through the MySQL Server, using JDBC. See Java and NDB Cluster , and The ClusterJ API and Data Object Model , for more information.

NDB Cluster also supports applications written in JavaScript using Node.js. The MySQL Connector for JavaScript includes adapters for direct access to the NDB storage engine as well as for the MySQL Server. Applications using this Connector are typically event-driven and use a domain object model similar in many ways to that employed by ClusterJ. For more information, see MySQL NoSQL Connector for JavaScript .

The Memcache API for NDB Cluster, implemented as the loadable ndbmemcache storage engine for memcached version 1.6 and later, can be used to provide a persistent NDB Cluster data store, accessed using the memcache protocol.

The standard memcached caching engine is included in the NDB Cluster 8.0 distribution. Each memcached server has direct access to data stored in NDB Cluster, but is also able to cache data locally and to serve (some) requests from this local cache.

For more information, see ndbmemcache—Memcache API for NDB Cluster .

Management clients. These clients connect to the management server and provide commands for starting and stopping nodes gracefully, starting and stopping message tracing (debug versions only), showing node versions and status, starting and stopping backups, and so on. An example of this type of program is the ndb_mgm management client supplied with NDB Cluster (see Section 22.4.5, “ ndb_mgm — The NDB Cluster Management Client” ). Such applications can be written using the MGM API , a C-language API that communicates directly with one or more NDB Cluster management servers. For more information, see The MGM API .

Oracle also makes available MySQL Cluster Manager, which provides an advanced command-line interface simplifying many complex NDB Cluster management tasks, such as restarting an NDB Cluster with a large number of nodes. The MySQL Cluster Manager client also supports commands for getting and setting the values of most node configuration parameters as well as mysqld server options and variables relating to NDB Cluster. See MySQL™ Cluster Manager 1.4.7 User Manual , for more information.

Event logs. NDB Cluster logs events by category (startup, shutdown, errors, checkpoints, and so on), priority, and severity. A complete listing of all reportable events may be found in Section 22.5.6, “Event Reports Generated in NDB Cluster” . Event logs are of the two types listed here:

  • Cluster log : Keeps a record of all desired reportable events for the cluster as a whole.

  • Node log : A separate log which is also kept for each individual node.

Note

Under normal circumstances, it is necessary and sufficient to keep and examine only the cluster log. The node logs need be consulted only for application development and debugging purposes.

Checkpoint. Generally speaking, when data is saved to disk, it is said that a checkpoint has been reached. More specific to NDB Cluster, a checkpoint is a point in time where all committed transactions are stored on disk. With regard to the NDB storage engine, there are two types of checkpoints which work together to ensure that a consistent view of the cluster's data is maintained. These are shown in the following list:

  • Local Checkpoint (LCP) : This is a checkpoint that is specific to a single node; however, LCPs take place for all nodes in the cluster more or less concurrently. An LCP usually occurs every few minutes; the precise interval varies, and depends upon the amount of data stored by the node, the level of cluster activity, and other factors.

    NDB 8.0 supports partial LCPs, which can significantly improve performance under some conditions. See the descriptions of the EnablePartialLcp and RecoveryWork configuration parameters which enable partial LCPs and control the amount of storage they use.

  • Global Checkpoint (GCP) : A GCP occurs every few seconds, when transactions for all nodes are synchronized and the redo-log is flushed to disk.

For more information about the files and directories created by local checkpoints and global checkpoints, see NDB Cluster Data Node File System Directory Files .

22.1.2 NDB Cluster Nodes, Node Groups, Replicas, and Partitions

This section discusses the manner in which NDB Cluster divides and duplicates data for storage.

A number of concepts central to an understanding of this topic are discussed in the next few paragraphs.

Data node. An ndbd or ndbmtd process, which stores one or more replicas —that is, copies of the partitions (discussed later in this section) assigned to the node group of which the node is a member.

Each data node should be located on a separate computer. While it is also possible to host multiple data node processes on a single computer, such a configuration is not usually recommended.

It is common for the terms node and data node to be used interchangeably when referring to an ndbd or ndbmtd process; where mentioned, management nodes ( ndb_mgmd processes) and SQL nodes ( mysqld processes) are specified as such in this discussion.

Node group. A node group consists of one or more nodes, and stores partitions, or sets of replicas (see next item).

The number of node groups in an NDB Cluster is not directly configurable; it is a function of the number of data nodes and of the number of replicas ( NoOfReplicas configuration parameter), as shown here:

[# of node groups] = [# of data nodes] / NoOfReplicas
                        

Thus, an NDB Cluster with 4 data nodes has 4 node groups if NoOfReplicas is set to 1 in the config.ini file, 2 node groups if NoOfReplicas is set to 2, and 1 node group if NoOfReplicas is set to 4. Replicas are discussed later in this section; for more information about NoOfReplicas , see Section 22.3.3.6, “Defining NDB Cluster Data Nodes” .

Note

All node groups in an NDB Cluster must have the same number of data nodes.

You can add new node groups (and thus new data nodes) online, to a running NDB Cluster; see Section 22.5.15, “Adding NDB Cluster Data Nodes Online” , for more information.

Partition. This is a portion of the data stored by the cluster. Each node is responsible for keeping at least one copy of any partitions assigned to it (that is, at least one replica) available to the cluster.

The number of partitions used by default by NDB Cluster depends on the number of data nodes and the number of LDM threads in use by the data nodes, as shown here:

[# of partitions] = [# of data nodes] * [# of LDM threads]
                        

When using data nodes running ndbmtd , the number of LDM threads is controlled by the setting for MaxNoOfExecutionThreads . When using ndbd there is a single LDM thread, which means that there are as many cluster partitions as nodes participating in the cluster. This is also the case when using ndbmtd with MaxNoOfExecutionThreads set to 3 or less. (You should be aware that the number of LDM threads increases with the value of this parameter, but not in a strictly linear fashion, and that there are additional constraints on setting it; see the description of MaxNoOfExecutionThreads for more information.)
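
As a worked example of this formula, a cluster of 4 data nodes whose ndbmtd processes each run 2 LDM threads uses, by default:

[# of partitions] = 4 * 2 = 8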

NDB and user-defined partitioning. NDB Cluster normally partitions NDBCLUSTER tables automatically. However, it is also possible to employ user-defined partitioning with NDBCLUSTER tables. This is subject to the following limitations:

  1. Only the KEY and LINEAR KEY partitioning schemes are supported in production with NDB tables. (An example appears following this list.)

  2. The maximum number of partitions that may be defined explicitly for any NDB table is 8 * MaxNoOfExecutionThreads * [ number of node groups ] , the number of node groups in an NDB Cluster being determined as discussed previously in this section. When using ndbd for data node processes, setting MaxNoOfExecutionThreads has no effect; in such a case, it can be treated as though it were equal to 1 for purposes of performing this calculation.

    See Section 22.4.3, “ ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)” , for more information.

For more information relating to NDB Cluster and user-defined partitioning, see Section 22.1.7, “Known Limitations of NDB Cluster” , and Section 23.6.2, “Partitioning Limitations Relating to Storage Engines” .
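
The following statement sketches the supported syntax, creating an NDB table explicitly partitioned by KEY; the table and column names are arbitrary examples:

mysql> CREATE TABLE t1 (
    ->     id INT NOT NULL PRIMARY KEY,
    ->     c VARCHAR(32)
    -> )
    -> ENGINE=NDB
    -> PARTITION BY KEY(id)
    -> PARTITIONS 8;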

Replica. This is a copy of a cluster partition. Each node in a node group stores a replica. Also sometimes known as a partition replica . The number of replicas is equal to the number of nodes per node group.

A replica belongs entirely to a single node; a node can (and usually does) store several replicas.

The following diagram illustrates an NDB Cluster with four data nodes running ndbd , arranged in two node groups of two nodes each; nodes 1 and 2 belong to node group 0, and nodes 3 and 4 belong to node group 1.

Note

Only data nodes are shown here; although a working NDB Cluster requires an ndb_mgmd process for cluster management and at least one SQL node to access the data stored by the cluster, these have been omitted from the figure for clarity.

Figure 22.2 NDB Cluster with Two Node Groups

The data stored by the cluster is divided into four partitions, numbered 0, 1, 2, and 3. Each partition is stored—in multiple copies—on the same node group. Partitions are stored on alternate node groups as follows:

  • Partition 0 is stored on node group 0; a primary replica (primary copy) is stored on node 1, and a backup replica (backup copy of the partition) is stored on node 2.

  • Partition 1 is stored on the other node group (node group 1); this partition's primary replica is on node 3, and its backup replica is on node 4.

  • Partition 2 is stored on node group 0. However, the placing of its two replicas is reversed from that of Partition 0; for Partition 2, the primary replica is stored on node 2, and the backup on node 1.

  • Partition 3 is stored on node group 1, and the placement of its two replicas are reversed from those of partition 1. That is, its primary replica is located on node 4, with the backup on node 3.

What this means regarding the continued operation of an NDB Cluster is this: so long as each node group participating in the cluster has at least one node operating, the cluster has a complete copy of all data and remains viable. This is illustrated in the next diagram.

Figure 22.3 Nodes Required for a 2x2 NDB Cluster

In this example, the cluster consists of two node groups each consisting of two data nodes. Each data node is running an instance of ndbd . Any combination of at least one node from node group 0 and at least one node from node group 1 is sufficient to keep the cluster alive . However, if both nodes from a single node group fail, the combination consisting of the remaining two nodes in the other node group is not sufficient. In this situation, the cluster has lost an entire partition and so can no longer provide access to a complete set of all NDB Cluster data.

The maximum number of node groups supported for a single NDB Cluster instance is 48.

22.1.3 NDB Cluster Hardware, Software, and Networking Requirements

One of the strengths of NDB Cluster is that it can be run on commodity hardware and has no unusual requirements in this regard, other than for large amounts of RAM, due to the fact that all live data storage is done in memory. (It is possible to reduce this requirement using Disk Data tables—see Section 22.5.13, “NDB Cluster Disk Data Tables” , for more information about these.) Naturally, multiple and faster CPUs can enhance performance. Memory requirements for other NDB Cluster processes are relatively small.

The software requirements for NDB Cluster are also modest. Host operating systems do not require any unusual modules, services, applications, or configuration to support NDB Cluster. For supported operating systems, a standard installation should be sufficient. The MySQL software requirements are simple: all that is needed is a production release of NDB Cluster. It is not strictly necessary to compile MySQL yourself merely to be able to use NDB Cluster. We assume that you are using the binaries appropriate to your platform, available from the NDB Cluster software downloads page at https://dev.mysql.com/downloads/cluster/ .

For communication between nodes, NDB Cluster supports TCP/IP networking in any standard topology, and the minimum expected for each host is a standard 100 Mbps Ethernet card, plus a switch, hub, or router to provide network connectivity for the cluster as a whole. We strongly recommend that an NDB Cluster be run on its own subnet which is not shared with machines not forming part of the cluster for the following reasons:

  • Security. Communications between NDB Cluster nodes are not encrypted or shielded in any way. The only means of protecting transmissions within an NDB Cluster is to run your NDB Cluster on a protected network. If you intend to use NDB Cluster for Web applications, the cluster should definitely reside behind your firewall and not in your network's De-Militarized Zone ( DMZ ) or elsewhere.

    See Section 22.5.12.1, “NDB Cluster Security and Networking Issues” , for more information.

  • Efficiency. Setting up an NDB Cluster on a private or protected network enables the cluster to make exclusive use of bandwidth between cluster hosts. Using a separate switch for your NDB Cluster not only helps protect against unauthorized access to NDB Cluster data, it also ensures that NDB Cluster nodes are shielded from interference caused by transmissions between other computers on the network. For enhanced reliability, you can use dual switches and dual cards to remove the network as a single point of failure; many device drivers support failover for such communication links.

Network communication and latency. NDB Cluster requires communication between data nodes and API nodes (including SQL nodes), as well as between data nodes and other data nodes, to execute queries and updates. Communication latency between these processes can directly affect the observed performance and latency of user queries. In addition, to maintain consistency and service despite the silent failure of nodes, NDB Cluster uses heartbeating and timeout mechanisms which treat an extended loss of communication from a node as node failure. This can lead to reduced redundancy. Recall that, to maintain data consistency, an NDB Cluster shuts down when the last node in a node group fails. Thus, to avoid increasing the risk of a forced shutdown, breaks in communication between nodes should be avoided wherever possible.

The failure of a data or API node results in the abort of all uncommitted transactions involving the failed node. Data node recovery requires synchronization of the failed node's data from a surviving data node, and re-establishment of disk-based redo and checkpoint logs, before the data node returns to service. This recovery can take some time, during which the Cluster operates with reduced redundancy.

Heartbeating relies on timely generation of heartbeat signals by all nodes. This may not be possible if the node is overloaded, has insufficient machine CPU due to sharing with other programs, or is experiencing delays due to swapping. If heartbeat generation is sufficiently delayed, other nodes treat the node that is slow to respond as failed.

This treatment of a slow node as a failed one may or may not be desirable in some circumstances, depending on the impact of the node's slowed operation on the rest of the cluster. When setting timeout values such as HeartbeatIntervalDbDb and HeartbeatIntervalDbApi for NDB Cluster, care must be taken to achieve quick detection, failover, and return to service, while avoiding potentially expensive false positives.

Where communication latencies between data nodes are expected to be higher than would be expected in a LAN environment (on the order of 100 µs), timeout parameters must be increased to ensure that any allowed periods of latency are well within configured timeouts. Increasing timeouts in this way has a corresponding effect on the worst-case time to detect failure and therefore time to service recovery.
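
For example, in an environment with higher communication latencies, the heartbeat intervals mentioned previously might be raised in the [ndbd default] section of config.ini along these lines (the values shown are purely illustrative, not recommendations):

[ndbd default]
HeartbeatIntervalDbDb=5000
HeartbeatIntervalDbApi=5000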

LAN environments can typically be configured with stable low latency, and such that they can provide redundancy with fast failover. Individual link failures can be recovered from with minimal and controlled latency visible at the TCP level (where NDB Cluster normally operates). WAN environments may offer a range of latencies, as well as redundancy with slower failover times. Individual link failures may require route changes to propagate before end-to-end connectivity is restored. At the TCP level this can appear as large latencies on individual channels. The worst-case observed TCP latency in these scenarios is related to the worst-case time for the IP layer to reroute around the failures.

22.1.4 What is New in NDB Cluster

The following sections describe changes in the implementation of NDB Cluster in MySQL NDB Cluster 8.0 through 8.0.15, as compared to earlier release series. NDB Cluster 8.0 is currently available in a Developer Preview release. NDB Cluster 7.6 is available as a General Availability release, as is NDB Cluster 7.5. For information about additions and other changes in NDB Cluster 7.6, see What is New in NDB Cluster 7.6 ; for information about new features and other changes in NDB Cluster 7.5, see What is New in NDB Cluster 7.5 .

NDB Cluster 7.4 and 7.3 are recent General Availability releases, and are still supported. NDB Cluster 7.2 is a previous GA release, still supported in production for existing deployments. NDB 7.1 and earlier releases series are no longer maintained or supported in production . We recommend that new deployments use NDB Cluster 7.6 or NDB Cluster 7.5. For information about NDB 7.4 and NDB 7.3, see MySQL NDB Cluster 7.3 and NDB Cluster 7.4 . For information about NDB 7.2 and previous NDB releases, see MySQL NDB Cluster 7.2 .

What is New in NDB Cluster 8.0

Major changes and new features in NDB Cluster 8.0 which are likely to be of interest are shown in the following list:

  • INFORMATION_SCHEMA changes. The following changes are made in the display of information about Disk Data files in the INFORMATION_SCHEMA.FILES table:

    • Tablespaces and log file groups are no longer represented in the FILES table. (These constructs are not actually files.)

    • Each data file is now represented by a single row in the FILES table. Each undo log file is also now represented in this table by one row only. (Previously, a row was displayed for each copy of each of these files on each data node.)

    • For rows corresponding to data files or undo log files, node ID and undo log buffer information is no longer displayed in the EXTRA column of the FILES table.

    In addition, INFORMATION_SCHEMA tables now are populated with tablespace statistics for MySQL Cluster tables. (Bug #27167728)
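
    The rows now shown for Disk Data files can be inspected with a query such as this sketch:

      mysql> SELECT FILE_NAME, FILE_TYPE, TABLESPACE_NAME, LOGFILE_GROUP_NAME
          ->     FROM INFORMATION_SCHEMA.FILES
          ->     WHERE ENGINE = 'ndbcluster';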

  • Error information with ndb_perror. Removed the deprecated --ndb option for perror . Use ndb_perror to obtain error message information from NDB error codes instead. (Bug #81704, Bug #81705, Bug #23523926, Bug #23523957)

  • Development in parallel with MySQL server. Beginning with this release, MySQL NDB Cluster is being developed in parallel with the standard MySQL 8.0 server under a new unified release model with the following features:

    • NDB 8.0 is developed in, built from, and released with the MySQL 8.0 source code tree.

    • The numbering scheme for NDB Cluster 8.0 releases follows the scheme for MySQL 8.0, starting with the current MySQL release (8.0.13).

    • Building the source with NDB support appends -cluster to the version string returned by mysql -V , as shown here:

      shell> mysql -V
      mysql  Ver 8.0.13-cluster for Linux on x86_64 (Source distribution)
                                                      

      NDB binaries continue to display both the MySQL Server version and the NDB engine version, like this:

      shell> ndb_mgm -V
      MySQL distrib mysql-8.0.13 ndb-8.0.13-dmr, for Linux (x86_64)
                                                      

      In MySQL Cluster NDB 8.0, these two version numbers are always the same.

    To build the MySQL 8.0.13 (or later) source with NDB Cluster support, use the CMake option -DWITH_NDBCLUSTER .
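
    A source build enabling NDB might thus be configured as in this sketch (other CMake options and paths omitted):

      shell> cmake .. -DWITH_NDBCLUSTER=ON
      shell> make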

  • Offline multithreaded index builds. It is now possible to specify a set of cores to be used for I/O threads performing offline multithreaded builds of ordered indexes, as opposed to normal I/O duties such as file I/O, compression, or decompression. Offline in this context refers to building of ordered indexes performed when the parent table is not being written to; such building takes place when an NDB cluster performs a node or system restart, or as part of restoring a cluster from backup using ndb_restore --rebuild-indexes .

    In addition, the default behavior for offline index build work is modified to use all cores available to ndbmtd , rather than limiting itself to the core reserved for the I/O thread. Doing so can improve restart and restore times, as well as performance, availability, and the user experience.

    This enhancement is implemented as follows:

    1. The default value for BuildIndexThreads is changed from 0 to 128. This means that offline ordered index builds are now multithreaded by default.

    2. The default value for TwoPassInitialNodeRestartCopy is changed from false to true . This means that an initial node restart first copies all data from a live node to one that is starting—without creating any indexes—builds ordered indexes offline, and then again synchronizes its data with the live node, that is, synchronizing twice and building indexes offline between the two synchronizations. This causes an initial node restart to behave more like the normal restart of a node, and reduces the time required for building indexes.

    3. A new thread type ( idxbld ) is defined for the ThreadConfig configuration parameter, to allow locking of offline index build threads to specific CPUs.

    In addition, NDB now distinguishes the thread types that are accessible to ThreadConfig by the following two criteria:

    1. Whether the thread is an execution thread. Threads of types main , ldm , recv , rep , tc , and send are execution threads; thread types io , watchdog , and idxbld are not.

    2. Whether the allocation of the thread to a given task is permanent or temporary. Currently all thread types except idxbld are permanent.

    For additional information, see the descriptions of the parameters in the Manual. (Bug #25835748, Bug #26928111)
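
    As an illustration, a hypothetical ThreadConfig value binding the index build threads to their own CPUs might look like this (all CPU numbers are arbitrary examples):

      ThreadConfig=ldm={count=4,cpubind=1,2,3,4},main={cpubind=0},idxbld={cpuset=5,6}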

  • logbuffers table backup process information. When performing an NDB backup, the ndbinfo.logbuffers table now displays information regarding buffer usage by the backup process on each data node. This is implemented as rows reflecting two new log types in addition to REDO and DD-UNDO . One of these rows has the log type BACKUP-DATA , which shows the amount of data buffer used during backup to copy fragments to backup files. The other row has the log type BACKUP-LOG , which displays the amount of log buffer used during the backup to record changes made after the backup has started. One each of these log_type rows is shown in the logbuffers table for each data node in the cluster. Rows having these two log types are present in the table only while an NDB backup is currently in progress. (Bug #25822988)
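
    While a backup is in progress, these rows can be observed with a query such as the following sketch:

      mysql> SELECT node_id, log_type, total, used
          ->     FROM ndbinfo.logbuffers
          ->     WHERE log_type IN ('BACKUP-DATA', 'BACKUP-LOG');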

  • processes table on Windows. The process ID of the monitor process used on Windows platforms by RESTART to spawn and restart a mysqld is now shown in the ndbinfo.processes table as an angel_pid .

  • ODirectSyncFlag. Added the ODirectSyncFlag configuration parameter for data nodes. When enabled, the data node treats all completed filesystem writes to the redo log as though they had been performed using fsync .

    Note

    This parameter has no effect if at least one of the following conditions is true:

    • ODirect is not enabled.

    • InitFragmentLogFiles is set to SPARSE .

    (Bug #25428560)
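
    A hypothetical data node configuration enabling this behavior might look like this:

      [ndbd default]
      ODirect=1
      ODirectSyncFlag=1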

  • Data node log buffer size control. Added the --logbuffer-size option for ndbd and ndbmtd , for use in debugging with a large number of log messages. This controls the size of the data node log buffer; the default (32K) is intended for normal operations. (Bug #89679, Bug #27550943)
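
    For example, a data node might be started for debugging with an enlarged log buffer as in this sketch (the buffer size and connection string are illustrative only):

      shell> ndbmtd --logbuffer-size=1048576 --ndb-connectstring=192.168.0.10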

  • String hashing improvements. Prior to NDB 8.0, all string hashing was based on first transforming the string into a normalized form, then MD5-hashing the resulting binary image. This could give rise to some performance problems, for the following reasons:

    • The normalized string is always space padded to its full length. For a VARCHAR , this often involved adding more spaces than there were characters in the original string.

    • The string libraries were not optimized for this space padding, and added considerable overhead in some use cases.

    • The padding semantics varied between character sets, some of which were not padded to their full length.

    • The transformed string could become quite large, even without space padding; some Unicode 9.0 collations can transform a single code point into 100 bytes of character data or more.

    • Subsequent MD5 hashing consisted mainly of padding with spaces, and was not particularly efficient, possibly causing additional performance penalties by flushing significant portions of the L1 cache.

    Collations provide their own hash functions, which hash the string directly without first creating a normalized string. In addition, for Unicode 9.0 collations, the hashes are computed without padding. NDB now takes advantage of these built-in functions whenever hashing a string identified as using a Unicode 9.0 collation.

    Since, for other collations, there are existing databases that are hash partitioned on the transformed string, NDB continues to employ the previous method for hashing strings that use those collations, to maintain compatibility. (Bug #89590, Bug #89604, Bug #89609, Bug #27515000, Bug #27523758, Bug #27522732)

  • On-the-fly upgrades of tables using .frm files. A table created in NDB 7.6 and earlier contains metadata in the form of a compressed .frm file, which is no longer supported in MySQL 8.0. To facilitate online upgrades to NDB 8.0, NDB performs on-the-fly translation of this metadata and writes it into the MySQL Server's data dictionary, which enables the mysqld in NDB Cluster 8.0 to work with the table without preventing subsequent use of the table by a previous version of the NDB software.

    Important

    Once a table's structure has been modified in NDB 8.0, its metadata is stored using the Data Dictionary, and it can no longer be accessed by NDB 7.6 and earlier.

    This enhancement also makes it possible to restore an NDB backup made using an earlier version to a cluster running NDB 8.0 (or later).

  • Schema synchronization of tablespace objects. When a MySQL Server connects as an SQL node to an NDB cluster, it synchronizes its data dictionary with the information found in the NDB dictionary.

    Previously, the only NDB objects synchronized on connection of a new SQL node were databases and tables; MySQL NDB Cluster 8.0.14 and later also implement schema synchronization of disk data objects including tablespaces and log file groups. Among other benefits, this eliminates the possibility of a mismatch between the MySQL data dictionary and the NDB dictionary following a native backup and restore, in which tablespaces and log file groups were restored to the NDB dictionary, but not to the MySQL Server's data dictionary.

  • Handling of NO_AUTO_CREATE_USER in mysqld options file. An error now is written to the server log when the presence of the NO_AUTO_CREATE_USER value for the sql_mode option in the options file prevents mysqld from starting.

  • Handling of references to nonexistent tablespaces. It is no longer possible to issue a CREATE TABLE statement that refers to a nonexistent tablespace. Such a statement now fails with an error.
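
    For example, a statement such as the following sketch now fails with an error if the tablespace ts1 has not first been created with CREATE TABLESPACE:

      mysql> CREATE TABLE t1 (
          ->     a INT PRIMARY KEY
          -> ) ENGINE=NDB TABLESPACE ts1 STORAGE DISK;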

  • RESET MASTER changes. Because the MySQL Server now executes RESET MASTER with a global read lock, the behavior of this statement when used with NDB Cluster has changed in the following two respects:

    • It is no longer guaranteed to be synchronous; that is, it is now possible that a read coming immediately before RESET MASTER is issued may not be logged until after the binary log has been rotated.

    • It now behaves identically, regardless of whether the statement is issued on the same SQL node that is writing the binary log, or on a different SQL node in the same cluster.

    Note

    SHOW BINLOG EVENTS , FLUSH LOGS , and most data definition statements continue, as they did in previous NDB versions, to operate in a synchronous fashion.

  • NDB table extra metadata changes. In NDB 8.0.14 and later, the extra metadata property of an NDB table is used for storing serialized metadata from the MySQL data dictionary rather than storing the binary representation of the table as in previous versions. (This was a .frm file, no longer used by the MySQL Server—see Chapter 14, MySQL Data Dictionary .) As part of the work to support this change, the available size of the table's extra metadata has been increased. This means that NDB tables created in NDB Cluster 8.0.14 and later are not compatible with previous NDB Cluster releases. Tables created in previous releases can be used with NDB 8.0.14 and later, but cannot be opened afterwards by an earlier version.

    For more information, see Section 22.2.8, “Upgrading and Downgrading NDB Cluster” .

  • Disk data file distribution. Beginning with NDB Cluster 8.0.14, NDB uses the MySQL data dictionary to make sure that disk data files and related constructs such as tablespaces and log file groups are correctly distributed between all connected SQL nodes.

  • ndb_restore options. Beginning with NDB 8.0.15, the --nodeid and --backupid options are both required when invoking ndb_restore .
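
    An invocation must therefore now include both options, as in this sketch (the node ID, backup ID, and path are illustrative only):

      shell> ndb_restore --nodeid=1 --backupid=1 --restore-data \
                 --backup-path=/var/lib/mysql-cluster/BACKUP/BACKUP-1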

22.1.5 Options, Variables, and Parameters Added, Deprecated or Removed in NDB 8.0

The next few sections contain information about NDB configuration parameters and NDB-specific mysqld options and variables that have been added to, deprecated in, or removed from NDB 8.0.

Node Configuration Parameters Introduced in NDB 8.0

No node configuration parameters have been added to NDB 8.0.

Node Configuration Parameters Deprecated in NDB 8.0

No node configuration parameters have been deprecated in NDB 8.0.

Node Configuration Parameters Removed in NDB 8.0

No node configuration parameters have been removed from NDB 8.0.

MySQL Server Options and Variables Introduced in NDB 8.0

No new system variables, status variables, or options have been added to NDB 8.0.

MySQL Server Options and Variables Deprecated in NDB 8.0

No system variables, status variables, or options have been deprecated in NDB 8.0.

MySQL Server Options and Variables Removed in NDB 8.0

No system variables, status variables, or options have been removed from NDB 8.0.

22.1.6 MySQL Server Using InnoDB Compared with NDB Cluster

MySQL Server offers a number of choices in storage engines. Since both NDB and InnoDB can serve as transactional MySQL storage engines, users of MySQL Server sometimes become interested in NDB Cluster. They see NDB as a possible alternative or upgrade to the default InnoDB storage engine in MySQL 8.0. While NDB and InnoDB share common characteristics, there are differences in architecture and implementation, so that some existing MySQL Server applications and usage scenarios can be a good fit for NDB Cluster, but not all of them.

In this section, we discuss and compare some characteristics of the NDB storage engine used by NDB 8.0 with InnoDB used in MySQL 8.0. The next few sections provide a technical comparison. In many instances, decisions about when and where to use NDB Cluster must be made on a case-by-case basis, taking all factors into consideration. While it is beyond the scope of this documentation to provide specifics for every conceivable usage scenario, we also attempt to offer some very general guidance on the relative suitability of some common types of applications for NDB as opposed to InnoDB back ends.

NDB Cluster 8.0 uses a mysqld based on MySQL 8.0, including support for InnoDB . While it is possible to use InnoDB tables with NDB Cluster, such tables are not clustered. It is also not possible to use programs or libraries from an NDB Cluster 8.0 distribution with MySQL Server 8.0, or the reverse.

While it is also true that some types of common business applications can be run either on NDB Cluster or on MySQL Server (most likely using the InnoDB storage engine), there are some important architectural and implementation differences. Section 22.1.6.1, “Differences Between the NDB and InnoDB Storage Engines” , provides a summary of these differences. Due to the differences, some usage scenarios are clearly more suitable for one engine or the other; see Section 22.1.6.2, “NDB and InnoDB Workloads” . This in turn has an impact on the types of applications that are better suited for use with NDB or InnoDB . See Section 22.1.6.3, “NDB and InnoDB Feature Usage Summary” , for a comparison of the relative suitability of each for use in common types of database applications.

For information about the relative characteristics of the NDB and MEMORY storage engines, see When to Use MEMORY or NDB Cluster .

See Chapter 16, Alternative Storage Engines , for additional information about MySQL storage engines.

22.1.6.1 Differences Between the NDB and InnoDB Storage Engines

The NDB storage engine is implemented using a distributed, shared-nothing architecture, which causes it to behave differently from InnoDB in a number of ways. For those unaccustomed to working with NDB , unexpected behaviors can arise due to its distributed nature with regard to transactions, foreign keys, table limits, and other characteristics. These are shown in the following table:

Table 22.1 Differences between InnoDB and NDB storage engines

Feature | InnoDB (MySQL 8.0) | NDB 8.0
MySQL Server Version | 8.0 | 8.0
InnoDB Version | InnoDB 8.0.17 | InnoDB 8.0.17
NDB Cluster Version | N/A | NDB 8.0.15
Storage Limits | 64TB | 128TB
Foreign Keys | Yes | Yes
Transactions | All standard types | READ COMMITTED
MVCC | Yes | No
Data Compression | Yes | No (NDB checkpoint and backup files can be compressed)
Large Row Support (> 14K) | Supported for VARBINARY , VARCHAR , BLOB , and TEXT columns | Supported for BLOB and TEXT columns only (Using these types to store very large amounts of data can lower NDB performance)
Replication Support | Asynchronous and semisynchronous replication using MySQL Replication; MySQL Group Replication | Automatic synchronous replication within an NDB Cluster; asynchronous replication between NDB Clusters, using MySQL Replication (Semisynchronous replication is not supported)
Scaleout for Read Operations | Yes (MySQL Replication) | Yes (Automatic partitioning in NDB Cluster; NDB Cluster Replication)
Scaleout for Write Operations | Requires application-level partitioning (sharding) | Yes (Automatic partitioning in NDB Cluster is transparent to applications)
High Availability (HA) | Built-in, from InnoDB cluster | Yes (Designed for 99.999% uptime)
Node Failure Recovery and Failover | From MySQL Group Replication | Automatic (Key element in NDB architecture)
Time for Node Failure Recovery | 30 seconds or longer | Typically < 1 second
Real-Time Performance | No | Yes
In-Memory Tables | No | Yes (Some data can optionally be stored on disk; both in-memory and disk data storage are durable)
NoSQL Access to Storage Engine | Yes | Yes (Multiple APIs, including Memcached, Node.js/JavaScript, Java, JPA, C++, and HTTP/REST)
Concurrent and Parallel Writes | Yes | Up to 48 writers, optimized for concurrent writes
Conflict Detection and Resolution (Multiple Replication Masters) | Yes (MySQL Group Replication) | Yes
Hash Indexes | No | Yes
Online Addition of Nodes | Read/write replicas using MySQL Group Replication | Yes (all node types)
Online Upgrades | Yes (using replication) | Yes
Online Schema Modifications | Yes, as part of MySQL 8.0 | Yes

22.1.6.2 NDB and InnoDB Workloads

NDB Cluster has a range of unique attributes that make it ideal for serving applications requiring high availability, fast failover, high throughput, and low latency. Due to its distributed architecture and multi-node implementation, NDB Cluster also has specific constraints that may keep some workloads from performing well. A number of major differences in behavior between the NDB and InnoDB storage engines with regard to some common types of database-driven application workloads are shown in the following table:

Table 22.2 Differences between InnoDB and NDB storage engines, common types of data-driven application workloads.

Workload | InnoDB | NDB Cluster ( NDB )
High-Volume OLTP Applications | Yes | Yes
DSS Applications (data marts, analytics) | Yes | Limited (Join operations across OLTP datasets not exceeding 3TB in size)
Custom Applications | Yes | Yes
Packaged Applications | Yes | Limited (should be mostly primary key access); NDB Cluster 8.0 supports foreign keys
In-Network Telecoms Applications (HLR, HSS, SDP) | No | Yes
Session Management and Caching | Yes | Yes
E-Commerce Applications | Yes | Yes
User Profile Management, AAA Protocol | Yes | Yes

22.1.6.3 NDB and InnoDB Feature Usage Summary

When comparing application feature requirements with the capabilities of InnoDB and NDB , some features are clearly more compatible with one storage engine than with the other.

The following table lists supported application features according to the storage engine to which each feature is typically better suited.

Table 22.3 Supported application features according to the storage engine to which each feature is typically better suited

Preferred application requirements for InnoDB :

  • Foreign keys

    Note

    NDB Cluster 8.0 supports foreign keys

  • Full table scans

  • Very large databases, rows, or transactions

  • Transactions other than READ COMMITTED

Preferred application requirements for NDB :

  • Write scaling

  • 99.999% uptime

  • Online addition of nodes and online schema operations

  • Multiple SQL and NoSQL APIs (see NDB Cluster APIs: Overview and Concepts )

  • Real-time performance

  • Limited use of BLOB columns

  • Foreign keys are supported, although their use may have an impact on performance at high throughput


22.1.7 Known Limitations of NDB Cluster

In the sections that follow, we discuss known limitations in current releases of NDB Cluster as compared with the features available when using the MyISAM and InnoDB storage engines. If you check the Cluster category in the MySQL bugs database at http://bugs.mysql.com , you can find known bugs in the following categories under MySQL Server, which we intend to correct in upcoming releases of NDB Cluster:

  • NDB Cluster

  • Cluster Direct API (NDBAPI)

  • Cluster Disk Data

  • Cluster Replication

  • ClusterJ

This information is intended to be complete with respect to the conditions just set forth. You can report any discrepancies that you encounter to the MySQL bugs database using the instructions given in Section 1.7, “How to Report Bugs or Problems” . If we do not plan to fix the problem in NDB Cluster 8.0, we will add it to the list.

See Previous NDB Cluster Issues Resolved in NDB Cluster 7.3 for a list of issues in earlier releases that have been resolved in NDB Cluster 8.0.

Note

Limitations and other issues specific to NDB Cluster Replication are described in Section 22.6.3, “Known Issues in NDB Cluster Replication” .

22.1.7.1 Noncompliance with SQL Syntax in NDB Cluster

Some SQL statements relating to certain MySQL features produce errors when used with NDB tables, as described in the following list:

  • Temporary tables. Temporary tables are not supported. Trying either to create a temporary table that uses the NDB storage engine or to alter an existing temporary table to use NDB fails with the error Table storage engine 'ndbcluster' does not support the create option 'TEMPORARY' .

  • Indexes and keys in NDB tables. Keys and indexes on NDB Cluster tables are subject to the following limitations:

    • Column width. Attempting to create an index on an NDB table column whose width is greater than 3072 bytes succeeds, but only the first 3072 bytes are actually used for the index. In such cases, a warning Specified key was too long; max key length is 3072 bytes is issued, and a SHOW CREATE TABLE statement shows the length of the index as 3072.

    • TEXT and BLOB columns. You cannot create indexes on NDB table columns that use any of the TEXT or BLOB data types.

    • FULLTEXT indexes. The NDB storage engine does not support FULLTEXT indexes, which are possible for MyISAM and InnoDB tables only.

      However, you can create indexes on VARCHAR columns of NDB tables.

    • USING HASH keys and NULL. Using nullable columns in unique keys and primary keys means that queries using these columns are handled as full table scans. To work around this issue, make the column NOT NULL , or re-create the index without the USING HASH option.
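
      For example, given a hypothetical table t1 whose unique key u1 was created over a nullable column c1 with USING HASH , either of the following statements (a sketch) avoids the full table scans:

      ALTER TABLE t1 MODIFY c1 INT NOT NULL;

      ALTER TABLE t1 DROP INDEX u1, ADD UNIQUE KEY u1 (c1);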

    • Prefixes. There are no prefix indexes; only entire columns can be indexed. (The size of an NDB column index is always the same as the width of the column in bytes, up to and including 3072 bytes, as described earlier in this section. Also see Section 22.1.7.6, “Unsupported or Missing Features in NDB Cluster” , for additional information.)

    • BIT columns. A BIT column cannot be a primary key, unique key, or index, nor can it be part of a composite primary key, unique key, or index.

    • AUTO_INCREMENT columns. Like other MySQL storage engines, the NDB storage engine can handle a maximum of one AUTO_INCREMENT column per table. However, in the case of an NDB table with no explicit primary key, an AUTO_INCREMENT column is automatically defined and used as a hidden primary key. For this reason, you cannot define a table that has an explicit AUTO_INCREMENT column unless that column is also declared using the PRIMARY KEY option. Attempting to create a table with an AUTO_INCREMENT column that is not the table's primary key, and using the NDB storage engine, fails with an error.
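
      For example, of the two statements shown here (a sketch), the first fails, while the second succeeds because the AUTO_INCREMENT column is also the primary key:

      CREATE TABLE bad_t (
          a INT NOT NULL AUTO_INCREMENT,
          b CHAR(10),
          UNIQUE KEY (a)
      ) ENGINE = NDB;    -- fails: a is not the primary key

      CREATE TABLE good_t (
          a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          b CHAR(10)
      ) ENGINE = NDB;    -- succeeds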

  • Restrictions on foreign keys. Support for foreign key constraints in NDB 8.0 is comparable to that provided by InnoDB , subject to the following restrictions:

    • Every column referenced as a foreign key requires an explicit unique key, if it is not the table's primary key.

    • ON UPDATE CASCADE is not supported when the reference is to the parent table's primary key.

      This is because an update of a primary key is implemented as a delete of the old row (containing the old primary key) plus an insert of the new row (with a new primary key). This is not visible to the NDB kernel, which views these two rows as being the same, and thus has no way of knowing that this update should be cascaded.

    • SET DEFAULT is not supported. (Also not supported by InnoDB .)

    • The NO ACTION keywords are accepted but treated as RESTRICT . (Also the same as with InnoDB .)

    • In earlier versions of NDB Cluster, when creating a table with foreign key referencing an index in another table, it sometimes appeared possible to create the foreign key even if the order of the columns in the indexes did not match, due to the fact that an appropriate error was not always returned internally. A partial fix for this issue improved the error used internally to work in most cases; however, it remains possible for this situation to occur in the event that the parent index is a unique index. (Bug #18094360)

    For more information, see Section 13.1.20.6, “Using FOREIGN KEY Constraints” , and Section 1.8.3.2, “FOREIGN KEY Constraints” .

  • NDB Cluster and geometry data types. Geometry data types ( WKT and WKB ) are supported for NDB tables. However, spatial indexes are not supported.

  • Character sets and binary log files. Currently, the ndb_apply_status and ndb_binlog_index tables are created using the latin1 (ASCII) character set. Because the names of binary logs are recorded in these tables, binary log files named using non-Latin characters are not referenced correctly in these tables. This is a known issue, which we are working to fix. (Bug #50226)

    To work around this problem, use only Latin-1 characters when naming binary log files or setting any of the --basedir , --log-bin , or --log-bin-index options.

  • Creating NDB tables with user-defined partitioning. Support for user-defined partitioning in NDB Cluster is restricted to [ LINEAR ] KEY partitioning. Using any other partitioning type with ENGINE=NDB or ENGINE=NDBCLUSTER in a CREATE TABLE statement results in an error.

    It is possible to override this restriction, but doing so is not supported for use in production settings. For details, see User-defined partitioning and the NDB storage engine (NDB Cluster) .
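
    For example, the following CREATE TABLE statement (a sketch) uses the supported KEY partitioning, with the partitioning column included in the primary key:

    CREATE TABLE pt (
        id INT NOT NULL,
        d DATE,
        PRIMARY KEY (id)
    ) ENGINE = NDB
    PARTITION BY KEY (id)
    PARTITIONS 4;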

    Default partitioning scheme. All NDB Cluster tables are by default partitioned by KEY using the table's primary key as the partitioning key. If no primary key is explicitly set for the table, the hidden primary key automatically created by the NDB storage engine is used instead. For additional discussion of these and related issues, see Section 23.2.5, “KEY Partitioning” .

    CREATE TABLE and ALTER TABLE statements that would cause a user-partitioned NDBCLUSTER table not to meet either or both of the following two requirements are not permitted, and fail with an error:

    1. The table must have an explicit primary key.

    2. All columns listed in the table's partitioning expression must be part of the primary key.

    Exception. If a user-partitioned NDBCLUSTER table is created using an empty column-list (that is, using PARTITION BY [LINEAR] KEY() ), then no explicit primary key is required.

    Maximum number of partitions for NDBCLUSTER tables. The maximum number of partitions that can be defined for an NDBCLUSTER table when employing user-defined partitioning is 8 per node group. (See Section 22.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions” , for more information about NDB Cluster node groups.)

    DROP PARTITION not supported. It is not possible to drop partitions from NDB tables using ALTER TABLE ... DROP PARTITION . The other partitioning extensions to ALTER TABLE ( ADD PARTITION , REORGANIZE PARTITION , and COALESCE PARTITION ) are supported for NDB tables, but use copying and so are not optimized. See Section 23.3.1, “Management of RANGE and LIST Partitions” and Section 13.1.9, “ALTER TABLE Syntax” .

  • Row-based replication. When using row-based replication with NDB Cluster, binary logging cannot be disabled. That is, the NDB storage engine ignores the value of sql_log_bin .

  • JSON data type. The MySQL JSON data type is supported for NDB tables in the mysqld supplied with NDB 8.0.

    An NDB table can have a maximum of 3 JSON columns.

    The NDB API has no special provision for working with JSON data, which it views simply as BLOB data. Handling data as JSON must be performed by the application.
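
    For example, the following statement (a sketch) creates an NDB table with a single JSON column:

    CREATE TABLE jt (
        a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        doc JSON
    ) ENGINE = NDB;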

22.1.7.2 Limits and Differences of NDB Cluster from Standard MySQL Limits

In this section, we list limits found in NDB Cluster that either differ from limits found in, or that are not found in, standard MySQL.

Memory usage and recovery. Memory consumed when data is inserted into an NDB table is not automatically recovered when deleted, as it is with other storage engines. Instead, the following rules hold true:

  • A DELETE statement on an NDB table makes the memory formerly used by the deleted rows available for re-use by inserts on the same table only. However, this memory can be made available for general re-use by performing OPTIMIZE TABLE .

    A rolling restart of the cluster also frees any memory used by deleted rows. See Section 22.5.5, “Performing a Rolling Restart of an NDB Cluster” .

  • A DROP TABLE or TRUNCATE TABLE operation on an NDB table frees the memory that was used by this table for re-use by any NDB table, either by the same table or by another NDB table. (Recall that TRUNCATE TABLE drops and re-creates the table.)

22.1.7.3 Limits Relating to Transaction Handling in NDB Cluster

A number of limitations exist in NDB Cluster with regard to the handling of transactions. These include the following:

  • Transaction isolation level. The NDBCLUSTER storage engine supports only the READ COMMITTED transaction isolation level. ( InnoDB , for example, supports READ COMMITTED , READ UNCOMMITTED , REPEATABLE READ , and SERIALIZABLE .) You should keep in mind that NDB implements READ COMMITTED on a per-row basis; when a read request arrives at the data node storing the row, what is returned is the last committed version of the row at that time.

    Uncommitted data is never returned, but when a transaction modifying a number of rows commits concurrently with a transaction reading the same rows, the transaction performing the read can observe before values, after values, or both, for different rows among these, due to the fact that a given row read request can be processed either before or after the commit of the other transaction.

    To ensure that a given transaction reads only before or after values, you can impose row locks using SELECT ... LOCK IN SHARE MODE . In such cases, the lock is held until the owning transaction is committed. Using row locks can also cause the following issues:

    • Increased frequency of lock wait timeout errors, and reduced concurrency

    • Increased transaction processing overhead due to reads requiring a commit phase

    • Possibility of exhausting the available number of concurrent locks, which is limited by MaxNoOfConcurrentOperations

    NDB uses READ COMMITTED for all reads unless a modifier such as LOCK IN SHARE MODE or FOR UPDATE is used. LOCK IN SHARE MODE causes shared row locks to be used; FOR UPDATE causes exclusive row locks to be used. Unique key reads have their locks upgraded automatically by NDB to ensure a self-consistent read; BLOB reads also employ extra locking for consistency.
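
    For example, a transaction such as the one shown here (a sketch assuming a hypothetical table t1 ) holds a shared lock on the row that is read until the transaction is committed:

    BEGIN;
    SELECT c FROM t1 WHERE id = 10 LOCK IN SHARE MODE;
    -- the shared row lock is held at this point
    COMMIT;    -- the lock is released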

    See Section 22.5.3.4, “NDB Cluster Backup Troubleshooting” , for information on how NDB Cluster's implementation of transaction isolation level can affect backup and restoration of NDB databases.

  • Transactions and BLOB or TEXT columns. NDBCLUSTER stores only part of a column value that uses any of MySQL's BLOB or TEXT data types in the table visible to MySQL; the remainder of the BLOB or TEXT is stored in a separate internal table that is not accessible to MySQL. This gives rise to two related issues of which you should be aware whenever executing SELECT statements on tables that contain columns of these types:

    1. For any SELECT from an NDB Cluster table: If the SELECT includes a BLOB or TEXT column, the READ COMMITTED transaction isolation level is converted to a read with read lock. This is done to guarantee consistency.

    2. For any SELECT which uses a unique key lookup to retrieve any columns that use any of the BLOB or TEXT data types and that is executed within a transaction, a shared read lock is held on the table for the duration of the transaction—that is, until the transaction is either committed or aborted.

      This issue does not occur for queries that use index or table scans, even against NDB tables having BLOB or TEXT columns.

      For example, consider the table t defined by the following CREATE TABLE statement:

      CREATE TABLE t (
          a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          b INT NOT NULL,
          c INT NOT NULL,
          d TEXT,
          INDEX i(b),
          UNIQUE KEY u(c)
      ) ENGINE = NDB;

      Either of the following queries on t causes a shared read lock, because the first query uses a primary key lookup and the second uses a unique key lookup:

      SELECT * FROM t WHERE a = 1;
      SELECT * FROM t WHERE c = 1;

      However, none of the four queries shown here causes a shared read lock:

      SELECT * FROM t WHERE b = 1;
      SELECT * FROM t WHERE d = '1';
      SELECT * FROM t;
      SELECT b,c FROM t WHERE a = 1;

      This is because, of these four queries, the first uses an index scan, the second and third use table scans, and the fourth, while using a primary key lookup, does not retrieve the value of any BLOB or TEXT columns.

      You can help minimize issues with shared read locks by avoiding queries that use unique key lookups that retrieve BLOB or TEXT columns, or, in cases where such queries are not avoidable, by committing transactions as soon as possible afterward.

  • Rollbacks. There are no partial transactions, and no partial rollbacks of transactions. A duplicate key or similar error causes the entire transaction to be rolled back.

    This behavior differs from that of other transactional storage engines such as InnoDB that may roll back individual statements.

  • Transactions and memory usage. As noted elsewhere in this chapter, NDB Cluster does not handle large transactions well; it is better to perform a number of small transactions with a few operations each than to attempt a single large transaction containing a great many operations. Among other considerations, large transactions require very large amounts of memory. Because of this, the transactional behavior of a number of MySQL statements is affected as described in the following list:

    • TRUNCATE TABLE is not transactional when used on NDB tables. If a TRUNCATE TABLE fails to empty the table, then it must be re-run until it is successful.

    • DELETE FROM (even with no WHERE clause) is transactional. For tables containing a great many rows, you may find that performance is improved by using several DELETE FROM ... LIMIT ... statements to chunk the delete operation. If your objective is to empty the table, then you may wish to use TRUNCATE TABLE instead.
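
      For example, the delete can be performed in chunks by repeating a statement such as the following (a sketch) until it reports zero rows affected:

      DELETE FROM t1 LIMIT 10000;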

    • LOAD DATA statements. LOAD DATA is not transactional when used on NDB tables.

      Important

      When executing a LOAD DATA statement, the NDB engine performs commits at irregular intervals that enable better utilization of the communication network. It is not possible to know ahead of time when such commits take place.

    • ALTER TABLE and transactions. When copying an NDB table as part of an ALTER TABLE , the creation of the copy is nontransactional. (In any case, this operation is rolled back when the copy is deleted.)

  • Transactions and the COUNT() function. When using NDB Cluster Replication, it is not possible to guarantee the transactional consistency of the COUNT() function on the slave. In other words, when performing on the master a series of statements ( INSERT , DELETE , or both) that changes the number of rows in a table within a single transaction, executing SELECT COUNT(*) FROM table queries on the slave may yield intermediate results. This is due to the fact that SELECT COUNT(...) may perform dirty reads, and is not a bug in the NDB storage engine. (See Bug #31321 for more information.)

22.1.7.4 NDB Cluster Error Handling

Starting, stopping, or restarting a node may give rise to temporary errors causing some transactions to fail. These include the following cases:

  • Temporary errors. When first starting a node, it is possible that you may see Error 1204 Temporary failure, distribution changed and similar temporary errors.

  • Errors due to node failure. The stopping or failure of any data node can result in a number of different node failure errors. (However, there should be no aborted transactions when performing a planned shutdown of the cluster.)

In either of these cases, any errors that are generated must be handled within the application. This should be done by retrying the transaction.

See also Section 22.1.7.2, “Limits and Differences of NDB Cluster from Standard MySQL Limits” .

22.1.7.5 Limits Associated with Database Objects in NDB Cluster

Some database objects such as tables and indexes have different limitations when using the NDBCLUSTER storage engine:

  • Database and table names. When using the NDB storage engine, the maximum allowed length both for database names and for table names is 63 characters. A statement using a database name or table name longer than this limit fails with an appropriate error.

  • Number of database objects. The maximum number of all NDB database objects in a single NDB Cluster—including databases, tables, and indexes—is limited to 20320.

  • Attributes per table. The maximum number of attributes (that is, columns and indexes) that can belong to a given table is 512.

  • Attributes per key. The maximum number of attributes per key is 32.

  • Row size. The maximum permitted size of any one row is 14000 bytes. Each BLOB or TEXT column contributes 256 + 8 = 264 bytes to this total.

  • BIT column storage per table. The maximum combined width for all BIT columns used in a given NDB table is 4096.

  • FIXED column storage. NDB Cluster 8.0 supports a maximum of 128 TB per fragment of data in FIXED columns.

22.1.7.6 Unsupported or Missing Features in NDB Cluster

A number of features supported by other storage engines are not supported for NDB tables. Trying to use any of these features in NDB Cluster does not cause errors in and of itself; however, errors may occur in applications that expect the features to be supported or enforced. Statements referencing such features, even if effectively ignored by NDB , must be syntactically and otherwise valid.

  • Index prefixes. Prefixes on indexes are not supported for NDB tables. If a prefix is used as part of an index specification in a statement such as CREATE TABLE , ALTER TABLE , or CREATE INDEX , the prefix is not created by NDB .

    A statement containing an index prefix, and creating or modifying an NDB table, must still be syntactically valid. For example, the following statement always fails with Error 1089 Incorrect prefix key; the used key part isn't a string, the used length is longer than the key part, or the storage engine doesn't support unique prefix keys , regardless of storage engine:

    CREATE TABLE t1 (
        c1 INT NOT NULL,
        c2 VARCHAR(100),
        INDEX i1 (c2(500))
    );

    This happens on account of the SQL syntax rule that no index may have a prefix larger than itself.

  • Savepoints and rollbacks. Savepoints and rollbacks to savepoints are ignored as in MyISAM .

  • Durability of commits. There are no durable commits on disk. Commits are replicated, but there is no guarantee that logs are flushed to disk on commit.

  • Replication. Statement-based replication is not supported. Use --binlog-format=ROW (or --binlog-format=MIXED ) when setting up cluster replication. See Section 22.6, “NDB Cluster Replication” , for more information.

    Replication using global transaction identifiers (GTIDs) is not compatible with NDB Cluster, and is not supported in NDB Cluster 8.0. Do not enable GTIDs when using the NDB storage engine, as this is very likely to cause problems up to and including failure of NDB Cluster Replication.

    Semisynchronous replication is not supported in NDB Cluster.
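
    For example, a my.cnf fragment such as this one (a minimal sketch; the file location and any other options depend on your installation) enables row-based binary logging on an SQL node:

    [mysqld]
    ndbcluster
    binlog-format=ROW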

  • Generated columns. The NDB storage engine does not support indexes on virtual generated columns.

    As with other storage engines, you can create an index on a stored generated column, but you should bear in mind that NDB uses DataMemory for storage of the generated column as well as IndexMemory for the index. See JSON columns and indirect indexing in NDB Cluster , for an example.

    NDB Cluster writes changes in stored generated columns to the binary log, but does not log those made to virtual columns. This should not affect NDB Cluster Replication or replication between NDB and other MySQL storage engines.

Note

See Section 22.1.7.3, “Limits Relating to Transaction Handling in NDB Cluster” , for more information relating to limitations on transaction handling in NDB .

22.1.7.7 Limitations Relating to Performance in NDB Cluster

The following performance issues are specific to or especially pronounced in NDB Cluster:

  • Range scans. There are query performance issues due to sequential access to the NDB storage engine; it is also relatively more expensive to do many range scans than it is with either MyISAM or InnoDB .

  • Reliability of Records in range. The Records in range statistic is available but is not completely tested or officially supported. This may result in nonoptimal query plans in some cases. If necessary, you can employ USE INDEX or FORCE INDEX to alter the execution plan. See Section 8.9.4, “Index Hints” , for more information on how to do this.

  • Unique hash indexes. Unique hash indexes created with USING HASH cannot be used for accessing a table if NULL is given as part of the key.

22.1.7.8 Issues Exclusive to NDB Cluster

The following are limitations specific to the NDB storage engine:

  • Machine architecture. All machines used in the cluster must have the same architecture. That is, all machines hosting nodes must be either big-endian or little-endian, and you cannot use a mixture of both. For example, you cannot have a management node running on a PowerPC which directs a data node that is running on an x86 machine. This restriction does not apply to machines simply running mysql or other clients that may be accessing the cluster's SQL nodes.

  • Binary logging. NDB Cluster has a number of limitations or restrictions with regard to binary logging; for example, as noted in Section 22.1.7.1, “Noncompliance with SQL Syntax in NDB Cluster” , the NDB storage engine ignores the value of sql_log_bin when row-based replication is in use.

  • Schema operations. Schema operations (DDL statements) are rejected while any data node restarts.

  • Number of replicas. The number of replicas, as determined by the NoOfReplicas data node configuration parameter, is the number of copies of all data stored by NDB Cluster. Setting this parameter to 1 means there is only a single copy; in this case, no redundancy is provided, and the loss of a data node entails loss of data. To guarantee redundancy, and thus preservation of data even if a data node fails, set this parameter to 2, which is the default and recommended value in production.

    Setting NoOfReplicas to a value greater than 2 is possible (to a maximum of 4) but unnecessary to guard against loss of data. In addition, values greater than 2 for this parameter are not supported in production .

See also Section 22.1.7.10, “Limitations Relating to Multiple NDB Cluster Nodes” .

22.1.7.9 Limitations Relating to NDB Cluster Disk Data Storage

Disk Data object maximums and minimums. Disk data objects are subject to the following maximums and minimums:

  • Maximum number of tablespaces: 2^32 (4294967296)

  • Maximum number of data files per tablespace: 2^16 (65536)

  • The minimum and maximum possible sizes of extents for tablespace data files are 32K and 2G, respectively. See Section 13.1.21, “CREATE TABLESPACE Syntax” , for more information.

In addition, when working with NDB Disk Data tables, you should be aware of the following issues regarding data files and extents:

  • Data files use DataMemory . Usage is the same as for in-memory data.

  • Data files use file descriptors. It is important to keep in mind that data files are always open, which means the file descriptors are always in use and cannot be re-used for other system tasks.

  • Extents require sufficient DiskPageBufferMemory ; you must reserve enough for this parameter to account for all memory used by all extents (number of extents times size of extents).

Disk Data tables and diskless mode. Use of Disk Data tables is not supported when running the cluster in diskless mode.

22.1.7.10 Limitations Relating to Multiple NDB Cluster Nodes

Multiple SQL nodes. The following are issues relating to the use of multiple MySQL servers as NDB Cluster SQL nodes, and are specific to the NDBCLUSTER storage engine:

  • No distributed table locks. A LOCK TABLES statement works only for the SQL node on which the lock is issued; no other SQL node in the cluster sees this lock. This is also true for a lock issued by any statement that locks tables as part of its operations. (See next item for an example.)

  • ALTER TABLE operations. ALTER TABLE is not fully locking when running multiple MySQL servers (SQL nodes). (As discussed in the previous item, NDB Cluster does not support distributed table locks.)

Multiple management nodes. When using multiple management servers:

  • If any of the management servers are running on the same host, you must give nodes explicit IDs in connection strings because automatic allocation of node IDs does not work across multiple management servers on the same host. This is not required if every management server resides on a different host.
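
    For example, a data node might be started with an explicit node ID in its connection string, as shown here (a sketch; the node ID, addresses, and ports are illustrative):

    shell> ndbd --ndb-connectstring="nodeid=5,198.51.100.10:1186,198.51.100.10:1187"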

  • When a management server starts, it first checks for any other management server in the same NDB Cluster, and upon successful connection to the other management server uses its configuration data. This means that the management server --reload and --initial startup options are ignored unless the management server is the only one running. It also means that, when performing a rolling restart of an NDB Cluster with multiple management nodes, the management server reads its own configuration file if (and only if) it is the only management server running in this NDB Cluster. See Section 22.5.5, “Performing a Rolling Restart of an NDB Cluster” , for more information.

Multiple network addresses. Multiple network addresses per data node are not supported. Use of these is liable to cause problems: In the event of a data node failure, an SQL node waits for confirmation that the data node went down but never receives it because another route to that data node remains open. This can effectively make the cluster inoperable.

Note

It is possible to use multiple network hardware interfaces (such as Ethernet cards) for a single data node, but these must be bound to the same address. This also means that it is not possible to use more than one [tcp] section per connection in the config.ini file. See Section 22.3.3.10, “NDB Cluster TCP/IP Connections” , for more information.
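
For illustration, a single [tcp] section describing the connection between two data nodes might look like this (a sketch; the node IDs and buffer size are illustrative):

[tcp]
NodeId1=2
NodeId2=3
SendBufferMemory=2M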

22.2 NDB Cluster Installation

This section describes the basics for planning, installing, configuring, and running an NDB Cluster. Whereas the examples in Section 22.3, “Configuration of NDB Cluster” provide more in-depth information on a variety of clustering options and configuration, the result of following the guidelines and procedures outlined here should be a usable NDB Cluster which meets the minimum requirements for availability and safeguarding of data.

This section covers hardware and software requirements; networking issues; installation of NDB Cluster; basic configuration issues; starting, stopping, and restarting the cluster; loading of a sample database; and performing queries.

NDB Cluster also provides the NDB Cluster Auto-Installer, a web-based graphical installer, as part of the NDB Cluster distribution. The Auto-Installer can be used to perform basic installation and setup of an NDB Cluster on one (for testing) or more host computers. See Section 22.2.1, “The NDB Cluster Auto-Installer” , for more information.

Assumptions. The following sections make a number of assumptions regarding the cluster's physical and network configuration. These assumptions are discussed in the next few paragraphs.

Cluster nodes and host computers. The cluster consists of four nodes, each on a separate host computer, and each with a fixed network address on a typical Ethernet network as shown here:

Table 22.4 Network addresses of nodes in example cluster

Node | IP Address
Management node ( mgmd ) | 198.51.100.10
SQL node ( mysqld ) | 198.51.100.20
Data node "A" ( ndbd ) | 198.51.100.30
Data node "B" ( ndbd ) | 198.51.100.40

This setup is also shown in the following diagram:

Figure 22.4 NDB Cluster Multi-Computer Setup

Most content is described in the surrounding text. The four nodes each connect to a central switch that connects to a network.
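
For reference, a config.ini fragment describing this layout might look like the following minimal sketch; it omits the data directories and memory settings that a working installation requires (the full configuration procedure is given in Section 22.2.4, “Initial Configuration of NDB Cluster”):

[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=198.51.100.10

[ndbd]
HostName=198.51.100.30

[ndbd]
HostName=198.51.100.40

[mysqld]
HostName=198.51.100.20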

Network addressing. In the interest of simplicity (and reliability), this How-To uses only numeric IP addresses. However, if DNS resolution is available on your network, it is possible to use host names in lieu of IP addresses in configuring Cluster. Alternatively, you can use the hosts file (typically /etc/hosts for Linux and other Unix-like operating systems, C:\WINDOWS\system32\drivers\etc\hosts on Windows, or your operating system's equivalent) to provide a means of doing host lookup.

Potential hosts file issues. A common problem when trying to use host names for Cluster nodes arises because of the way in which some operating systems (including some Linux distributions) set up the system's own host name in the /etc/hosts during installation. Consider two machines with the host names ndb1 and ndb2 , both in the cluster network domain. Red Hat Linux (including some derivatives such as CentOS and Fedora) places the following entries in these machines' /etc/hosts files:

#  ndb1 /etc/hosts:
127.0.0.1   ndb1.cluster ndb1 localhost.localdomain localhost
#  ndb2 /etc/hosts:
127.0.0.1   ndb2.cluster ndb2 localhost.localdomain localhost

SUSE Linux (including OpenSUSE) places these entries in the machines' /etc/hosts files:

#  ndb1 /etc/hosts:
127.0.0.1       localhost
127.0.0.2       ndb1.cluster ndb1
#  ndb2 /etc/hosts:
127.0.0.1       localhost
127.0.0.2       ndb2.cluster ndb2

In both instances, ndb1 routes ndb1.cluster to a loopback IP address, but gets a public IP address from DNS for ndb2.cluster , while ndb2 routes ndb2.cluster to a loopback address and obtains a public address for ndb1.cluster . The result is that each data node connects to the management server, but cannot tell when any other data nodes have connected, and so the data nodes appear to hang while starting.

Caution

You cannot mix localhost and other host names or IP addresses in config.ini . For these reasons, the solution in such cases (other than to use IP addresses for all config.ini HostName entries) is to remove the fully qualified host names from /etc/hosts and use these in config.ini for all cluster hosts.
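
For example, the /etc/hosts files on both machines might be corrected as shown here (a sketch; the addresses are illustrative and your own will differ):

#  ndb1 and ndb2 /etc/hosts:
127.0.0.1       localhost.localdomain localhost
198.51.100.30   ndb1.cluster ndb1
198.51.100.40   ndb2.cluster ndb2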

Host computer type. Each host computer in our installation scenario is an Intel-based desktop PC running a supported operating system installed to disk in a standard configuration, and running no unnecessary services. The core operating system with standard TCP/IP networking capabilities should be sufficient. For the sake of simplicity, we also assume that the file systems on all hosts are set up identically. In the event that they are not, you should adapt these instructions accordingly.

Network hardware. Standard 100 Mbps or 1 gigabit Ethernet cards are installed on each machine, along with the proper drivers for the cards, and all four hosts are connected through a standard-issue Ethernet networking appliance such as a switch. (All machines should use network cards with the same throughput; that is, all four machines in the cluster should have 100 Mbps cards or all four machines should have 1 Gbps cards.) NDB Cluster works in a 100 Mbps network; however, gigabit Ethernet provides better performance.

Important

NDB Cluster is not intended for use in a network for which throughput is less than 100 Mbps or which experiences a high degree of latency. For this reason (among others), attempting to run an NDB Cluster over a wide area network such as the Internet is not likely to be successful, and is not supported in production.

Sample data. We use the world database which is available for download from the MySQL website (see https://dev.mysql.com/doc/index-other.html ). We assume that each machine has sufficient memory for running the operating system, required NDB Cluster processes, and (on the data nodes) storing the database.

For general information about installing MySQL, see Chapter 2, Installing and Upgrading MySQL . For information about installation of NDB Cluster on Linux and other Unix-like operating systems, see Section 22.2.2, “Installation of NDB Cluster on Linux” . For information about installation of NDB Cluster on Windows operating systems, see Section 22.2.3, “Installing NDB Cluster on Windows” .

For general information about NDB Cluster hardware, software, and networking requirements, see Section 22.1.3, “NDB Cluster Hardware, Software, and Networking Requirements” .

22.2.1 The NDB Cluster Auto-Installer

This section describes the web-based graphical configuration installer included as part of the NDB Cluster distribution. Topics discussed include an overview of the installer and its parts, software and other requirements for running the installer, navigating the GUI, and using the installer to set up and start or stop an NDB Cluster on one or more host computers.

The NDB Cluster Auto-Installer is made up of two components. The front end is a GUI client implemented as a Web page that loads and runs in a standard Web browser such as Firefox or Microsoft Internet Explorer. The back end is a server process ( ndb_setup.py ) that runs on the local machine or on another host to which you have access.

These two components (client and server) communicate with each other using standard HTTP requests and responses. The back end can manage NDB Cluster software programs on any host where the back end user has granted access. If the NDB Cluster software is on a different host, the back end relies on SSH for access, using the Paramiko library for executing commands remotely (see Section 22.2.1.1, “NDB Cluster Auto-Installer Requirements” ).

22.2.1.1 NDB Cluster Auto-Installer Requirements

This section provides information on supported operating platforms and software, required software, and other prerequisites for running the NDB Cluster Auto-Installer.

Supported platforms. The NDB Cluster Auto-Installer is available with NDB 8.0 distributions for recent versions of Linux, Windows, Solaris, and MacOS X. For more detailed information about platform support for NDB Cluster and the NDB Cluster Auto-Installer, see https://www.mysql.com/support/supportedplatforms/cluster.html .

Supported web browsers. The web-based installer is supported with recent versions of Firefox and Microsoft Internet Explorer. It should also work with recent versions of Opera, Safari, and Chrome, although we have not thoroughly tested for compatibility with these browsers.

Required software—setup host. The following software must be installed on the host where the Auto-Installer is run:

  • Python 2.6 or higher. The Auto-Installer requires the Python interpreter and standard libraries. If these are not already installed on the system, you may be able to add them using the system's package manager. Otherwise, you can download them from http://python.org/download/ .

  • Paramiko 2 or higher. This is required to communicate with remote hosts using SSH. You can download it from http://www.lag.net/paramiko/ . Paramiko may also be available from your system's package manager.

  • Pycrypto version 1.9 or higher. This cryptography module is required by Paramiko, and can be installed using pip install cryptography . If pip is not installed, and the module is not available using your system's package manager, you can download it from https://www.dlitz.net/software/pycrypto/ .

All of the software in the preceding list is included in the Windows version of the configuration tool, and does not need to be installed separately.

The Paramiko and Pycrypto libraries are required only if you intend to deploy NDB Cluster nodes on remote hosts, and are not needed if all nodes are on the same host where the installer is run.

Required software—remote hosts. The only software required for remote hosts where you wish to deploy NDB Cluster nodes is the SSH server, which is usually installed by default on Linux and Solaris systems. Several alternatives are available for Windows; for an overview of these, see http://en.wikipedia.org/wiki/Comparison_of_SSH_servers .

An additional requirement when using multiple hosts is that it is possible to authenticate to any of the remote hosts using SSH and the proper keys or user credentials, as discussed in the next few paragraphs:

Authentication and security. Three basic security or authentication mechanisms for remote access are available to the Auto-Installer, which we list and describe here:

  • SSH. A secure shell connection is used to enable the back end to perform actions on remote hosts. For this reason, an SSH server must be running on the remote host. In addition, the system user running the installer must have access to the remote server, either with a user name and password, or by using public and private keys.

    Important

    You should never use the system root account for remote access, as this is extremely insecure. In addition, mysqld cannot normally be started by system root . For these and other reasons, you should provide SSH credentials for a regular user account on the target system, and not for system root . For more information about this issue, see Section 6.1.5, “How to Run MySQL as a Normal User” .

  • HTTPS. Remote communication between the Web browser front end and the back end is not encrypted by default, which means that information such as the user's SSH password is transmitted in clear text that is readable to anyone. For communication from a remote client to be encrypted, the back end must have a certificate, and the front end must communicate with the back end using HTTPS rather than HTTP. Enabling HTTPS is accomplished most easily through issuing a self-signed certificate. Once the certificate is issued, you must make sure that it is used. You can do this by starting ndb_setup.py from the command line with the --use-https ( -S ) and --cert-file ( -c ) options.

    A sample certificate file cfg.pem is included and is used by default. This file is located in the mcc directory under the installation share directory; on Linux, the full path to the file is normally /usr/share/mysql/mcc/cfg.pem . On Windows systems, this is usually C:\Program Files\MySQL\MySQL Server 8.0\share\mcc\cfg.pem . Letting the default be used means that, for testing purposes, you can simply start the installer with the -S option to use an HTTPS connection between the browser and the back end.
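
    For example, on a Linux host where the default certificate is in the location given previously, the back end might be started with HTTPS enabled like this (a sketch):

    shell> ndb_setup.py -S -c /usr/share/mysql/mcc/cfg.pem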

    The Auto-Installer saves the configuration file for a given cluster mycluster01 as mycluster01.mcc in the home directory of the user invoking the ndb_setup.py executable. This file is encrypted with a passphrase supplied by the user (using Fernet ); because HTTP transmits the passphrase in the clear, it is strongly recommended that you always use an HTTPS connection to access the Auto-Installer on a remote host .

  • Certificate-based authentication. The back end ndb_setup.py process can execute commands on the local host as well as remote hosts. This means that anyone connecting to the back end can take charge of how commands are executed. To reject unwanted connections to the back end, a certificate may be required for authentication of the client. In this case, a certificate must be issued by the user, installed in the browser, and made available to the back end for authentication purposes. You can enact this requirement (together with or in place of password or key authentication) by starting ndb_setup.py with the --ca-certs-file ( -a ) option.

There is no need or requirement for secure authentication when the client browser is running on the same host as the Auto-Installer back end.

See also Section 22.5.12, “NDB Cluster Security Issues” , which discusses security considerations to take into account when deploying NDB Cluster, as well as Chapter 6, Security , for more general MySQL security information.

22.2.1.2 Using the NDB Cluster Auto-Installer

The NDB Cluster Auto-Installer interface is made up of several pages, each corresponding to a step in the process used to configure and deploy an NDB Cluster. These pages are listed here, in order:

  • Welcome : Begin using the Auto-Installer by choosing either to configure a new NDB Cluster, or to continue configuring an existing one.

  • Define Cluster : Set basic information about the cluster as a whole, such as name, hosts, and load type. Here you can also set the SSH authentication type for accessing remote hosts, if needed.

  • Define Hosts : Identify the hosts where you intend to run NDB Cluster processes.

  • Define Processes : Assign one or more processes of a given type or types to each cluster host.

  • Define Parameters : Set configuration attributes for processes or types of processes.

  • Deploy Configuration : Deploy the cluster with the configuration set previously; start and stop the deployed cluster.

NDB Cluster Installer Settings and Help Menus

These menus are shown on all screens except for the Welcome screen. They provide access to installer settings and information. The Settings menu is shown here in more detail:

Figure 22.5 NDB Cluster Auto-Installer Settings menu

Content is described in the surrounding text.

The Settings menu has the following entries:

  • Automatically save configuration as cookies : Save your configuration information—such as host names, process data, and parameter values—as a cookie in the browser. When this option is chosen, all information except any SSH password is saved. This means that you can quit and restart the browser, and continue working on the same configuration from where you left off at the end of the previous session. This option is enabled by default.

    The SSH password is never saved; if you use one, you must supply it at the beginning of each new session.

  • Show advanced configuration options : Show advanced configuration parameters where available.

    Once set, the advanced parameters continue to be used in the configuration file until they are explicitly changed or reset. This is regardless of whether the advanced parameters are currently visible in the installer; in other words, disabling the menu item does not reset the values of any of these parameters.

    You can also toggle the display of advanced parameters for individual processes on the Define Parameters screen .

    This option is disabled by default.

  • Automatically get resource information for new hosts : Query new hosts automatically for hardware resource information to pre-populate a number of configuration options and values. In this case, the suggested values are not mandatory, but they are used unless explicitly changed using the appropriate editing options in the installer.

    This option is enabled by default.

The installer Help menu is shown here:

Figure 22.6 NDB Cluster Auto-Installer Help menu

Content is described in the surrounding text.

The Help menu provides several options, described in the following list:

  • Contents : Show the built-in user guide. This is opened in a separate browser window, so that it can be used simultaneously with the installer without interrupting workflow.

  • Current page : Open the built-in user guide to the section describing the page currently displayed in the installer.

  • About : Open a dialog displaying the installer name and the version number of the NDB Cluster distribution with which it was supplied.

The Auto-Installer also provides context-sensitive help in the form of tooltips for most input widgets.

In addition, the names of most NDB configuration parameters are linked to their descriptions in the online documentation. The documentation is displayed in a separate browser window.

The next section discusses starting the Auto-Installer. The sections immediately following it describe in greater detail the purpose and function of each of these pages in the order listed previously.

Starting the NDB Cluster Auto-Installer

The Auto-Installer is provided together with the NDB Cluster software. Separate RPM and .deb packages containing only the Auto-Installer are also available for many Linux distributions. (See Section 22.2, “NDB Cluster Installation” .)

The present section explains how to start the installer. You can do so by invoking the ndb_setup.py executable.

User and privileges

You should run ndb_setup.py as a normal user; no special privileges are needed to do so. You should not run this program as the mysql user, or using the system root or Administrator account; doing so may cause the installation to fail.

ndb_setup.py is found in the bin directory within the NDB Cluster installation directory; a typical location might be /usr/local/mysql/bin on a Linux system or C:\Program Files\MySQL\MySQL Server 8.0\bin on a Windows system. This can vary according to where the NDB Cluster software is installed on your system, and the installation method.

On Windows, you can also start the installer by running setup.bat in the NDB Cluster installation directory. When invoked from the command line, this batch file accepts the same options as ndb_setup.py .

ndb_setup.py can be started with any of several options that affect its operation, but it is usually sufficient to allow the default settings to be used, in which case you can start ndb_setup.py by either of the following two methods:

  1. Navigate to the NDB Cluster bin directory in a terminal and invoke it from the command line, without any additional arguments or options, like this:

    shell> ndb_setup.py
    Running out of install dir: /usr/local/mysql/bin
    Starting web server on port 8081
    URL is https://localhost:8081/welcome.html
    deathkey=627876
    Press CTRL+C to stop web server.
    The application should now be running in your browser.
    (Alternatively you can navigate to https://localhost:8081/welcome.html to start it)

    This works regardless of operating platform.

  2. Navigate to the NDB Cluster bin directory in a file browser (such as Windows Explorer on Windows, or Konqueror, Dolphin, or Nautilus on Linux) and activate (usually by double-clicking) the ndb_setup.py file icon. This works on Windows, and should work with most common Linux desktops as well.

    On Windows, you can also navigate to the NDB Cluster installation directory and activate the setup.bat file icon.

In either case, once ndb_setup.py is invoked, the Auto-Installer's Welcome screen should open in the system's default web browser. If not, you should be able to open the page http://localhost:8081/welcome.html or https://localhost:8081/welcome.html manually in the browser.

In some cases, you may wish to use non-default settings for the installer, such as specifying HTTPS for connections, or a different port for the Auto-Installer's included web server to run on; in such cases, you must invoke ndb_setup.py with one or more startup options whose values override the defaults. The same startup options can be used on Windows systems with the setup.bat file supplied for such platforms in the NDB Cluster software distribution.

This can be done using the command line. However, if you want or need to start the installer from a desktop or file browser while employing one or more of these options, you can create a script or batch file containing the proper invocation, then double-click its file icon in the file browser to start the installer. (On Linux systems, you might also need to make the script file executable first.) If you plan to use the Auto-Installer from a remote host, you should start it using the -S option. For information about this and other advanced startup options for the NDB Cluster Auto-Installer, see Section 22.4.26, “ ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster” .
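
For example, a wrapper script such as the following sketch could be saved, made executable, and then started from a file browser; the installation path and port number shown are illustrative:

#!/bin/sh
# Hypothetical wrapper for starting the Auto-Installer with non-default
# settings; option names are those documented in Section 22.4.26.
/usr/local/mysql/bin/ndb_setup.py --use-https \
    --cert-file=/usr/share/mysql/mcc/cfg.pem --port=8082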

NDB Cluster Auto-Installer Welcome Screen

The Welcome screen is loaded in the default browser when ndb_setup.py is invoked. The first time the Auto-Installer is run (or if for some other reason there are no existing configurations), this screen appears as shown here:

Figure 22.7 The NDB Cluster Auto-Installer Welcome screen, first run

Content is described in the surrounding text.

In this case, the only choice of cluster listed is for configuration of a new cluster, and both the View Cfg and Continue buttons are inactive.

To create a new configuration, enter and confirm a passphrase in the text boxes provided. When this has been done, you can click Continue to proceed to the Define Cluster screen where you can assign a name to the new cluster.

If you have previously created one or more clusters with the Auto-Installer, they are listed by name. This example shows an existing cluster named mycluster-1 :

Figure 22.8 The NDB Cluster Auto-Installer Welcome screen, with previously created cluster mycluster-1

Content is described in the surrounding text.

To view the configuration for and work with a given cluster, select the radio button next to its name in the list, then enter and confirm the passphrase that was used to create it. When you have done this correctly, you can click View Cfg to view and edit this cluster's configuration.

NDB Cluster Auto-Installer Define Cluster Screen

The Define Cluster screen appears following the Welcome screen , and is used for setting general properties of the cluster. The layout of the Define Cluster screen is shown here:

Figure 22.9 The NDB Cluster Auto-Installer Define Cluster screen

Content is described in the surrounding text.

This screen and subsequent screens also include Settings and Help menus which are described later in this section; see NDB Cluster Installer Settings and Help Menus .

The Define Cluster screen allows you to set three sorts of properties for the cluster: cluster properties, SSH properties, and installation properties.

Cluster properties that can be set on this screen are listed here:

  • Cluster name : A name that identifies the cluster; in this example, this is mycluster-1 . The name is set on the previous screen and cannot be changed here.

  • Host list : A comma-delimited list of one or more hosts where cluster processes should run. By default, this is 127.0.0.1 . If you add remote hosts to the list, you must be able to connect to them using the credentials supplied as SSH properties.

  • Application type : Choose one of the following:

    1. Simple testing : Minimal resource usage for small-scale testing. This is the default. Not intended for production environments .

    2. Web : Maximize performance for the given hardware.

    3. Real-time : Maximize performance while maximizing sensitivity to timeouts in order to minimize the time needed to detect failed cluster processes.

  • Write load : Choose a level for the anticipated number of writes for the cluster as a whole. You can choose any one of the following levels:

    1. Low : The expected load includes fewer than 100 write transactions per second.

    2. Medium : The expected load includes 100 to 1000 write transactions per second; this is the default.

    3. High : The expected load includes more than 1000 write transactions per second.

SSH properties are described in the following list:

  • Key-Based SSH : Check this box to use key-enabled login to the remote host. If checked, the key user and passphrase must also be supplied; otherwise, a user and password for a remote login account are needed.

  • User : Name of user with remote login access.

  • Password : Password for remote user.

  • Key user : Name of the user for whom the key is valid, if not the same as the login user.

  • Key passphrase : Passphrase for the key, if required.

  • Key file : Path to the key file. The default is ~/.ssh/id_rsa .

The SSH properties set on this page apply to all hosts in the cluster. They can be overridden for a given host by editing that host's properties on the Define Hosts screen.
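If you use key-based SSH, the key must already be authorized on each remote host. Assuming OpenSSH is in use, a minimal one-time setup for a single remote host might resemble the following sketch, in which the user and host names are placeholders only:

shell> ssh-keygen -t rsa -f ~/.ssh/id_rsa                 # create a key pair, if none exists yet
shell> ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote-host  # authorize the key on the remote host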

Two installation properties can also be set on this screen:

  • Install MySQL Cluster : This setting determines the source from which the Auto-Installer installs NDB Cluster software, if any, on the cluster hosts. Possible values and their effects are listed here:

    1. DOCKER : Try to install the MySQL Cluster Docker image from https://hub.docker.com/r/mysql/mysql-cluster/ on each host

    2. REPO : Try to install the NDB Cluster software from the MySQL Repositories on each host

    3. BOTH : Try to install either the Docker image or the software from the repository on each host, giving preference to the repository

    4. NONE : Do not install the NDB Cluster software on the hosts; this is the default

  • Open FW Ports : Check this checkbox to have the installer attempt to open ports required by NDB Cluster processes on all hosts.

The next figure shows the Define Cluster page with settings for a small test cluster with all nodes running on localhost :

Figure 22.10 The NDB Cluster Auto-Installer Define Cluster screen, with settings for a test cluster

Content is described in the surrounding text.

After making the desired settings, you can save them to the configuration file and proceed to the Define Hosts screen by clicking the Save & Next button.

If you exit the installer without saving, no changes are made to the configuration file.

NDB Cluster Auto-Installer Define Hosts Screen

The Define Hosts screen, shown here, provides a means of viewing and specifying several key properties of each cluster host:

Figure 22.11 NDB Cluster Define Hosts screen, start

Content is described in the surrounding text.

Properties shown include the following:

  • Host : Name or IP address of this host

  • Res.info : Shows OK if the installer was able to retrieve requested resource information from this host

  • Platform : Operating system or platform

  • Memory (MB) : Amount of RAM on this host

  • Cores : Number of CPU cores available on this host

  • MySQL Cluster install directory : Path to directory where the NDB Cluster software is installed on this host; defaults to /usr/local/bin

  • MySQL Cluster data directory : Path to directory used for data by NDB Cluster processes on this host; defaults to /var/lib/mysql-cluster .

  • DiskFree : Free disk space in bytes

    For hosts with multiple disks, only the space available on the disk used for the data directory is shown.

This screen also provides an extended view for each host that includes the following properties:

  • FQDN : This host's fully qualified domain name, used by the installer to connect with it, distribute configuration information to it, and start and stop cluster processes on it.

  • Internal IP : The IP address used for communication with cluster processes running on this host by processes running elsewhere.

  • OS Details : Detailed operating system name and version information.

  • Open FW : If this checkbox is enabled, the installer attempts to open ports in the host's firewall needed by cluster processes.

  • REPO URL : URL for MySQL NDB Cluster repository

  • DOCKER URL : URL for MySQL NDB Cluster Docker images; for NDB 8.0, this is mysql/mysql-cluster:8.0 .

  • Install : If this checkbox is enabled, the Auto-Installer attempts to install the NDB Cluster software on this host

The extended view is shown here:

Figure 22.12 NDB Cluster Define Hosts screen, extended host info view

Content is described in the surrounding text.

All cells in the display are editable, with the exceptions of those in the Host , Res.info , and FQDN columns.

Be aware that it may take some time for information to be retrieved from remote hosts. Fields for which no value could be retrieved are indicated with an ellipsis (…). You can retry fetching resource information from one or more hosts by selecting those hosts in the list and then clicking the Refresh selected host(s) button.

Adding and Removing Hosts

You can add one or more hosts by clicking the Add Host button and entering the required properties where indicated in the Add new host dialog, shown here:

Figure 22.13 NDB Cluster Add Host dialog

Content is described in the surrounding text.

This dialog includes the following fields:

  • Host name : A comma-separated list of one or more host names, IP addresses, or both. These must be accessible from the host where the Auto-Installer is running.

  • Host internal IP (VPN) : If you are setting up the cluster to run on a VPN or other internal network, enter the IP address or addresses used for contact by cluster nodes on other hosts.

  • Key-based auth : If checked, enables key-based authentication. You can enter any additional needed information in the User , Passphrase , and Key file fields.

  • Ordinary login : If accessing this host using a password-based login, enter the appropriate information in the User and Password fields.

  • Open FW ports : Selecting this checkbox allows the installer to try to open any ports needed by cluster processes in this host's firewall.

  • Configure installation : Checking this allows the Auto-Installer to attempt to install the NDB Cluster software on this host.

To save the new host and its properties, click Add . If you wish to cancel without saving any changes, click Cancel instead.

Similarly, you can remove one or more hosts using the button labelled Remove selected host(s) . When you remove a host, any process which was configured for that host is also removed .

Warning

Remove selected host(s) acts immediately. There is no confirmation dialog. If you remove a host in error, you must re-enter its name and properties manually using Add host .

If the SSH user credentials on the Define Cluster screen are changed, the Auto-Installer attempts to refresh the resource information from any hosts for which information is missing.

You can edit the host's platform name, hardware resource information, installation directory, and data directory by clicking the corresponding cell in the grid, or by selecting one or more hosts and clicking the button labelled Edit selected host(s) . This causes a dialog box to appear, in which these fields can be edited, as shown here:

Figure 22.14 NDB Cluster Auto-Installer Edit Hosts dialog

Content is described in the surrounding text.

When more than one host is selected, any edited values are applied to all selected hosts.

Once you have entered all desired host information, you can use the Save & Next button to save the information to the cluster's configuration file and proceed to the Define Processes screen , where you can set up NDB Cluster processes on one or more hosts.

NDB Cluster Auto-Installer Define Processes Screen

The Define Processes screen, shown here, provides a way to assign NDB Cluster processes (nodes) to cluster hosts:

Figure 22.15 NDB Cluster Auto-Installer Define Processes dialog

Content is described in the surrounding text. The example process tree topology includes "Any host" and "localhost", as defined earlier. The localhost tree includes the following processes: Management node 1, API node 1, API node 2, API node 3, SQL node 1, SQL node 2, Multi threaded data node 1, and Multi threaded data node 2. This panel also includes "Add process" and "Del[ete] process" buttons.

This screen contains a process tree showing cluster hosts and processes set up to run on each one, as well as a panel which displays information about the item currently selected in the tree.

When this screen is accessed for the first time for a given cluster, a default set of processes is defined for you, based on the number of hosts. If you later return to the Define Hosts screen , remove all hosts, and add new hosts, this also causes a new default set of processes to be defined.

NDB Cluster processes are of the types described in this list:

  • Management node. Performs administrative tasks such as stopping individual data nodes, querying node and cluster status, and making backups. Executable: ndb_mgmd .

  • Single-threaded data node. Stores data and executes queries. Executable: ndbd .

  • Multi threaded data node. Stores data and executes queries with multiple worker threads executing in parallel. Executable: ndbmtd .

  • SQL node. MySQL server for executing SQL queries against NDB . Executable: mysqld .

  • API node. A client accessing data in NDB by means of the NDB API or other low-level client API, rather than by using SQL. See MySQL NDB Cluster API Developer Guide , for more information.

For more information about process (node) types, see Section 22.1.1, “NDB Cluster Core Concepts” .

Processes shown in the tree are numbered sequentially by type, for each host—for example, SQL node 1 , SQL node 2 , and so on—to simplify identification.

Each management node, data node, or SQL process must be assigned to a specific host, and is not allowed to run on any other host. An API node may be assigned to a single host, but this is not required; instead, you can assign it to the special Any host entry, which the tree contains in addition to the other hosts, and which acts as a placeholder for processes that are allowed to run on any host. Only API processes may use the Any host entry.

Adding processes. To add a new process to a given host, either right-click that host's entry in the tree, then select the Add process popup when it appears, or select a host in the process tree, and press the Add process button below the process tree. Performing either of these actions opens the add process dialog, as shown here:

Figure 22.16 NDB Cluster Auto-Installer Add Process Dialog

Most content is described in the surrounding text. Shows a window titled "Add new process" with two options: "Select process type:" that shows a select box with "API node" selected, and "Enter process name:" with "API node 4" entered as plain text. Action buttons include "Cancel" and "Add".

Here you can select from among the available process types described earlier in this section; you can also enter an arbitrary process name to take the place of the suggested value, if desired.

Removing processes. To delete a process, select that process in the tree and use the Del process button.

When you select a process in the process tree, information about that process is displayed in the information panel, where you can change the process name and possibly its type. You can change a multi-threaded data node ( ndbmtd ) to a single-threaded data node ( ndbd ), or the reverse, only; no other process type changes are allowed. If you want to make a change between any other process types, you must delete the original process first, then add a new process of the desired type .

NDB Cluster Auto-Installer Define Parameters Screen

Like the Define Processes screen , this screen includes a process tree; the Define Parameters process tree is organized by process or node type, in groups labelled Management Layer , Data Layer , SQL Layer , and API Layer . An information panel displays information regarding the item currently selected. The Define Parameters screen is shown here:

Figure 22.17 NDB Cluster Auto-Installer Define Parameters screen

Content is described in the surrounding text.

The checkbox labelled Show advanced configuration , when checked, makes advanced options for data node and SQL node processes visible in the information pane. These options are set and used whether or not they are visible. You can also enable this behavior globally by checking Show advanced configuration options under Settings (see NDB Cluster Installer Settings and Help Menus ).

You can edit attributes for a single process by selecting that process from the tree, or for all processes of the same type in the cluster by selecting one of the Layer folders. A per-process value set for a given attribute overrides any per-group setting for that attribute that would otherwise apply to the process in question. An example of such an information panel (for an SQL process) is shown here:

Figure 22.18 Define Parameters—Process Attributes

Content is described in the surrounding text.

Attributes whose values can be overridden are shown in the information panel with a button bearing a plus sign. This + button activates an input widget for the attribute, enabling you to change its value. When the value has been overridden, this button changes into a button showing an X . The X button undoes any changes made to a given attribute, which immediately reverts to the predefined value.

All configuration attributes have predefined values calculated by the installer, based on such factors as host name, node ID, and node type. In most cases, these values may be left as they are. If you are not already familiar with a given attribute, it is highly recommended that you read the applicable documentation before making changes to its value. To make finding this information easier, each attribute name shown in the information panel is linked to its description in the online NDB Cluster documentation.

NDB Cluster Auto-Installer Deploy Configuration Screen

This screen allows you to perform the following tasks:

  • Review process startup commands and configuration files to be applied

  • Distribute configuration files by creating any necessary files and directories on all cluster hosts—that is, deploy the cluster as presently configured

  • Start and stop the cluster

The Deploy Configuration screen is shown here:

Figure 22.19 NDB Cluster Auto-Installer Deploy Configuration screen

Content is described in the surrounding text.

Like the Define Parameters screen , this screen features a process tree which is organized by process type. Next to each process in the tree is a status icon indicating the current status of the process: connected ( CONNECTED ), starting ( STARTING ), running ( STARTED ), stopping ( STOPPING ), or disconnected ( NO_CONTACT ). The icon shows green if the process is connected or running; yellow if it is starting or stopping; red if the process is stopped or cannot be contacted by the management server.

This screen also contains two information panels, one showing the startup command or commands needed to start the selected process. (For some processes, more than one command may be required—for example, if initialization is necessary.) The other panel shows the contents of the configuration file, if any, for the given process.

This screen also contains four buttons, labelled as and performing the functions described in the following list:

  • Install cluster : Nonfunctional in this release; implementation intended for a future release.

  • Deploy cluster : Verify that the configuration is valid. Create any directories required on the cluster hosts, and distribute the configuration files onto the hosts. A progress bar shows how far the deployment has proceeded, and a dialog is displayed when the deployment has completed, as shown here:

    Figure 22.20 Cluster Deployment Process

    Content is described in the surrounding text.

  • Start cluster : The cluster is deployed as with Deploy cluster , after which all cluster processes are started in the correct order.

    Starting these processes may take some time. If the estimated time to completion is too great, the installer provides an opportunity to cancel or to continue the startup procedure. A progress bar indicates the current status of the startup procedure, as shown here:

    Figure 22.21 Cluster Startup Process with Progress Bar

    Content is described in the surrounding text.

    The process status icons next to the items shown in the process tree also update with the status of each process.

    A confirmation dialog is shown when the startup process has completed, as shown here:

    Figure 22.22 Cluster Startup, Process Completed Dialog

    Content is described in the surrounding text.

  • Stop cluster : After the cluster has been started, you can stop it using this button. As with starting the cluster, cluster shutdown is not instantaneous, and may require some time to complete. A progress bar, similar to that displayed during cluster startup, shows the approximate current status of the cluster shutdown procedure, as do the process status icons adjoining the process tree. The progress bar is shown here:

    Figure 22.23 Cluster Shutdown Process, with Progress Bar

    Content is described in the surrounding text.

    A confirmation dialog indicates when the shutdown process is complete:

    Figure 22.24 Cluster Shutdown, Process Completed Dialog

    Content is described in the surrounding text.

The Auto-Installer generates a config.ini file containing NDB node parameters for each management node, as well as a my.cnf file containing the appropriate options for each mysqld process in the cluster. No configuration files are created for data nodes or API nodes.
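As an illustration, for a simple test cluster running entirely on a single host, the generated files might look broadly like the following sketch; the actual parameters and their values are calculated by the installer and vary with your settings:

# config.ini (one per management node) -- illustrative sketch only
[ndbd default]
NoOfReplicas=2
[ndb_mgmd]
HostName=127.0.0.1
[ndbd]
HostName=127.0.0.1
[ndbd]
HostName=127.0.0.1
[mysqld]
HostName=127.0.0.1

# my.cnf (one per mysqld process) -- illustrative sketch only
[mysqld]
ndbcluster
ndb-connectstring=127.0.0.1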

22.2.2 Installation of NDB Cluster on Linux

This section covers installation methods for NDB Cluster on Linux and other Unix-like operating systems. While the next few sections refer to a Linux operating system, the instructions and procedures given there should be easily adaptable to other supported Unix-like platforms. For manual installation and setup instructions specific to Windows systems, see Section 22.2.3, “Installing NDB Cluster on Windows” .

Each NDB Cluster host computer must have the correct executable programs installed. A host running an SQL node must have installed on it a MySQL Server binary ( mysqld ). Management nodes require the management server daemon ( ndb_mgmd ); data nodes require the data node daemon ( ndbd or ndbmtd ). It is not necessary to install the MySQL Server binary on management node hosts and data node hosts. It is recommended that you also install the management client ( ndb_mgm ) on the management server host.

Installation of NDB Cluster on Linux can be done using precompiled binaries from Oracle (downloaded as a .tar.gz archive), with RPM packages (also available from Oracle), or from source code. All three of these installation methods are described in the sections that follow.

Regardless of the method used, it is still necessary following installation of the NDB Cluster binaries to create configuration files for all cluster nodes, before you can start the cluster. See Section 22.2.4, “Initial Configuration of NDB Cluster” .

22.2.2.1 Installing an NDB Cluster Binary Release on Linux

This section covers the steps necessary to install the correct executables for each type of Cluster node from precompiled binaries supplied by Oracle.

For setting up a cluster using precompiled binaries, the first step in the installation process for each cluster host is to download the binary archive from the NDB Cluster downloads page . (For the most recent 64-bit NDB 8.0 release, this is mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64.tar.gz .) We assume that you have placed this file in each machine's /var/tmp directory.

If you require a custom binary, see Section 2.9.3, “Installing MySQL Using a Development Source Tree” .

Note

After completing the installation, do not yet start any of the binaries. We show you how to do so following the configuration of the nodes (see Section 22.2.4, “Initial Configuration of NDB Cluster” ).

SQL nodes. On each of the machines designated to host SQL nodes, perform the following steps as the system root user:

  1. Check your /etc/passwd and /etc/group files (or use whatever tools are provided by your operating system for managing users and groups) to see whether there is already a mysql group and mysql user on the system. Some OS distributions create these as part of the operating system installation process. If they are not already present, create a new mysql user group, and then add a mysql user to this group:

    shell> groupadd mysql
    shell> useradd -g mysql -s /bin/false mysql

    The syntax for useradd and groupadd may differ slightly on different versions of Unix, or they may have different names such as adduser and addgroup .

  2. Change location to the directory containing the downloaded file, unpack the archive, and create a symbolic link named mysql to the directory created by unpacking the archive.

    Note

    The actual file and directory names vary according to the NDB Cluster version number.

    shell> cd /var/tmp
    shell> tar -C /usr/local -xzvf mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64.tar.gz
    shell> ln -s /usr/local/mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64 /usr/local/mysql
  3. Change location to the mysql directory and set up the system databases using mysqld --initialize as shown here:

    shell> cd mysql
    shell> mysqld --initialize

    This generates a random password for the MySQL root account. If you do not want the random password to be generated, you can substitute the --initialize-insecure option for --initialize . In either case, you should review Section 2.10.1.1, “Initializing the Data Directory Manually Using mysqld” , for additional information before performing this step. See also Section 4.4.2, “ mysql_secure_installation — Improve MySQL Installation Security” .

  4. Set the necessary permissions for the MySQL server and data directories:

    shell> chown -R root .
    shell> chown -R mysql data
    shell> chgrp -R mysql .
  5. Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up:

    shell> cp support-files/mysql.server /etc/rc.d/init.d/
    shell> chmod +x /etc/rc.d/init.d/mysql.server
    shell> chkconfig --add mysql.server

    (The startup scripts directory may vary depending on your operating system and version—for example, in some Linux distributions, it is /etc/init.d .)

    Here we use Red Hat's chkconfig for creating links to the startup scripts; use whatever means is appropriate for this purpose on your platform, such as update-rc.d on Debian.
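    On Debian-based systems, for example, the equivalent steps might look like this (a sketch only; update-rc.d registers the script with the default runlevels):

    shell> cp support-files/mysql.server /etc/init.d/
    shell> chmod +x /etc/init.d/mysql.server
    shell> update-rc.d mysql.server defaults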

Remember that the preceding steps must be repeated on each machine where an SQL node is to reside.

Data nodes. Installation of the data nodes does not require the mysqld binary. Only the NDB Cluster data node executable ndbd (single-threaded) or ndbmtd (multithreaded) is required. These binaries can also be found in the .tar.gz archive. Again, we assume that you have placed this archive in /var/tmp .

As system root (that is, after using sudo , su root , or your system's equivalent for temporarily assuming the system administrator account's privileges), perform the following steps to install the data node binaries on the data node hosts:

  1. Change location to the /var/tmp directory, and extract the ndbd and ndbmtd binaries from the archive into a suitable directory such as /usr/local/bin :

    shell> cd /var/tmp
    shell> tar -zxvf mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64.tar.gz
    shell> cd mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64
    shell> cp bin/ndbd /usr/local/bin/ndbd
    shell> cp bin/ndbmtd /usr/local/bin/ndbmtd

    (You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndbd and ndbmtd have been copied to the executables directory.)

  2. Change location to the directory into which you copied the files, and then make both of them executable:

    shell> cd /usr/local/bin
    shell> chmod +x ndb*

The preceding steps should be repeated on each data node host.

Although only one of the data node executables is required to run an NDB Cluster data node, we have shown you how to install both ndbd and ndbmtd in the preceding instructions. We recommend that you do this when installing or upgrading NDB Cluster, even if you plan to use only one of them, since this will save time and trouble in the event that you later decide to change from one to the other.
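As a quick sanity check after copying the binaries, you can verify that each of them runs and reports the expected version; like other NDB Cluster programs, ndbd and ndbmtd support the -V ( --version ) option, which prints version information and exits:

shell> /usr/local/bin/ndbd -V
shell> /usr/local/bin/ndbmtd -V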

Note

The data directory on each machine hosting a data node is /usr/local/mysql/data . This piece of information is essential when configuring the management node. (See Section 22.2.4, “Initial Configuration of NDB Cluster” .)

Management nodes. Installation of the management node does not require the mysqld binary. Only the NDB Cluster management server ( ndb_mgmd ) is required; you most likely want to install the management client ( ndb_mgm ) as well. Both of these binaries can also be found in the .tar.gz archive. Again, we assume that you have placed this archive in /var/tmp .

As system root , perform the following steps to install ndb_mgmd and ndb_mgm on the management node host:

  1. Change location to the /var/tmp directory, and extract ndb_mgm and ndb_mgmd from the archive into a suitable directory such as /usr/local/bin :

    shell> cd /var/tmp
    shell> tar -zxvf mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64.tar.gz
    shell> cd mysql-cluster-gpl-8.0.15-linux-glibc2.12-x86_64
    shell> cp bin/ndb_mgm* /usr/local/bin

    (You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndb_mgm and ndb_mgmd have been copied to the executables directory.)

  2. Change location to the directory into which you copied the files, and then make both of them executable:

    shell> cd /usr/local/bin
    shell> chmod +x ndb_mgm*

In Section 22.2.4, “Initial Configuration of NDB Cluster” , we create configuration files for all of the nodes in our example NDB Cluster.

22.2.2.2 Installing NDB Cluster from RPM

This section covers the steps necessary to install the correct executables for each type of NDB Cluster 8.0 node using RPM packages supplied by Oracle. For information about RPMs for previous versions of NDB Cluster, see Installation using old-style RPMs (NDB 7.5.3 and earlier) .

As an alternative to the method described in this section, Oracle provides MySQL Repositories for NDB Cluster that are compatible with many common Linux distributions. For RPM-based distributions, both a Yum repository and a SLES repository are available.

RPMs are available for both 32-bit and 64-bit Linux platforms. The filenames for these RPMs use the following pattern:

mysql-cluster-community-data-node-8.0.15-1.el7.x86_64.rpm
mysql-cluster-license-component-ver-rev.distro.arch.rpm
    license:= {commercial | community}
    component: {management-server | data-node | server | client | other—see text}
    ver: major.minor.release
    rev: major[.minor]
    distro: {el6 | el7 | sles12}
    arch: {i686 | x86_64}

license indicates whether the RPM is part of a Commercial or Community release of NDB Cluster. In the remainder of this section, we assume for the examples that you are installing a Community release.

Possible values for component , with descriptions, can be found in the following table:

Table 22.5 Components of the NDB Cluster RPM distribution

Component Description
auto-installer NDB Cluster Auto Installer program; see Section 22.2.1, “The NDB Cluster Auto-Installer” , for usage
client MySQL and NDB client programs; includes mysql client, ndb_mgm client, and other client tools
common Character set and error message information needed by the MySQL server
data-node ndbd and ndbmtd data node binaries
devel Headers and library files needed for MySQL client development
embedded Embedded MySQL server
embedded-compat Backwards-compatible embedded MySQL server
embedded-devel Header and library files for developing applications for embedded MySQL
java JAR files needed for support of ClusterJ applications
libs MySQL client libraries
libs-compat Backwards-compatible MySQL client libraries
management-server The NDB Cluster management server ( ndb_mgmd )
memcached Files needed to support ndbmemcache
minimal-debuginfo Debug information for package server-minimal; useful when developing applications that use this package or when debugging this package
ndbclient NDB client library for running NDB API and MGM API applications ( libndbclient )
ndbclient-devel Header and other files needed for developing NDB API and MGM API applications
nodejs Files needed to set up Node.JS support for NDB Cluster
server The MySQL server ( mysqld ) with NDB storage engine support included, and associated MySQL server programs
server-minimal Minimal installation of the MySQL server for NDB and related tools
test mysqltest , other MySQL test programs, and support files


A single bundle ( .tar file) of all NDB Cluster RPMs for a given platform and architecture is also available. The name of this file follows the pattern shown here:

mysql-cluster-license-ver-rev.distro.arch.rpm-bundle.tar

You can extract the individual RPM files from this file using tar or your preferred tool for extracting archives.
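For example, assuming the NDB 8.0.15 Community bundle for el7 used elsewhere in this section (the exact file name varies with license, version, and platform), extraction might look like this:

shell> tar -xvf mysql-cluster-community-8.0.15-1.1.el7.x86_64.rpm-bundle.tar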

The components required to install the three major types of NDB Cluster nodes are given in the following list:

  • Management node : management-server

  • Data node : data-node

  • SQL node : server and common

In addition, the client RPM should be installed to provide the ndb_mgm management client on at least one management node. You may also wish to install it on SQL nodes, to have mysql and other MySQL client programs available on these. We discuss installation of nodes by type later in this section.

ver represents the three-part NDB storage engine version number in 8.0.x format, shown as 8.0.15 in the examples. rev provides the RPM revision number in major.minor format. In the examples shown in this section, we use 1.1 for this value.

The distro (Linux distribution) is one of rhel5 (Oracle Linux 5, Red Hat Enterprise Linux 4 and 5), el6 (Oracle Linux 6, Red Hat Enterprise Linux 6), el7 (Oracle Linux 7, Red Hat Enterprise Linux 7), or sles12 (SUSE Enterprise Linux 12). For the examples in this section, we assume that the host runs Oracle Linux 7, Red Hat Enterprise Linux 7, or the equivalent ( el7 ).

arch is i686 for 32-bit RPMs and x86_64 for 64-bit versions. In the examples shown here, we assume a 64-bit platform.

The NDB Cluster version number in the RPM file names (shown here as 8.0.15 ) can vary according to the version which you are actually using. It is very important that all of the Cluster RPMs to be installed have the same version number . The architecture should also be appropriate to the machine on which the RPM is to be installed; in particular, you should keep in mind that 64-bit RPMs ( x86_64 ) cannot be used with 32-bit operating systems (use i686 for the latter).
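One simple way to confirm that all Cluster RPMs installed on a host share the same version number is to query the RPM database, as in this sketch:

shell> rpm -qa | grep -i mysql-cluster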

Data nodes. On a computer that is to host an NDB Cluster data node it is necessary to install only the data-node RPM. To do so, copy this RPM to the data node host, and run the following command as the system root user, replacing the name shown for the RPM as necessary to match that of the RPM downloaded from the MySQL website:

shell> rpm -Uhv mysql-cluster-community-data-node-8.0.15-1.el7.x86_64.rpm

This installs the ndbd and ndbmtd data node binaries in /usr/sbin . Either of these can be used to run a data node process on this host.

SQL nodes. Copy the server and common RPMs to each machine to be used for hosting an NDB Cluster SQL node ( server requires common ). Install the server RPM by executing the following command as the system root user, replacing the name shown for the RPM as necessary to match the name of the RPM downloaded from the MySQL website:

shell> rpm -Uhv mysql-cluster-community-server-8.0.15-1.el7.x86_64.rpm

This installs the MySQL server binary ( mysqld ), with NDB storage engine support, in the /usr/sbin directory. It also installs all needed MySQL Server support files and useful MySQL server programs, including the mysql.server and mysqld_safe startup scripts (in /usr/share/mysql and /usr/bin , respectively). The RPM installer should take care of general configuration issues (such as creating the mysql user and group, if needed) automatically.

Important

You must use the versions of these RPMs released for NDB Cluster ; those released for the standard MySQL server do not provide support for the NDB storage engine.

To administer the SQL node (MySQL server), you should also install the client RPM, as shown here:

shell> rpm -Uhv mysql-cluster-community-client-8.0.15-1.el7.x86_64.rpm

This installs the mysql client and other MySQL client programs, such as mysqladmin and mysqldump , to /usr/bin .

Management nodes. To install the NDB Cluster management server, it is necessary only to use the management-server RPM. Copy this RPM to the computer intended to host the management node, and then install it by running the following command as the system root user (replace the name shown for the RPM as necessary to match that of the management-server RPM downloaded from the MySQL website):

shell> rpm -Uhv mysql-cluster-community-management-server-8.0.15-1.el7.x86_64.rpm

This RPM installs the management server binary ndb_mgmd in the /usr/sbin directory. While this is the only program actually required for running a management node, it is also a good idea to have the ndb_mgm NDB Cluster management client available as well. You can obtain this program, as well as other NDB client programs such as ndb_desc and ndb_config , by installing the client RPM as described previously.

See Section 2.5.4, “Installing MySQL on Linux Using RPM Packages from Oracle” , for general information about installing MySQL using RPMs supplied by Oracle.

After installing from RPM, you still need to configure the cluster; see Section 22.2.4, “Initial Configuration of NDB Cluster” , for the relevant information.

Installation using old-style RPMs (NDB 7.5.3 and earlier)

It is very important that all of the Cluster RPMs to be installed have the same version number . The architecture designation should also be appropriate to the machine on which the RPM is to be installed; in particular, you should keep in mind that 64-bit RPMs cannot be used with 32-bit operating systems.

Data nodes. On a computer that is to host a cluster data node it is necessary to install only the server RPM. To do so, copy this RPM to the data node host, and run the following command as the system root user, replacing the name shown for the RPM as necessary to match that of the RPM downloaded from the MySQL website:

shell> rpm -Uhv MySQL-Cluster-server-gpl-8.0.15-1.sles11.i386.rpm

Although this installs all NDB Cluster binaries, only the program ndbd or ndbmtd (both in /usr/sbin ) is actually needed to run an NDB Cluster data node.

SQL nodes. On each machine to be used for hosting a cluster SQL node, install the server RPM by executing the following command as the system root user, replacing the name shown for the RPM as necessary to match the name of the RPM downloaded from the MySQL website:

shell> rpm -Uhv MySQL-Cluster-server-gpl-8.0.15-1.sles11.i386.rpm

This installs the MySQL server binary ( mysqld ) with NDB storage engine support in the /usr/sbin directory, as well as all needed MySQL Server support files. It also installs the mysql.server and mysqld_safe startup scripts (in /usr/share/mysql and /usr/bin , respectively). The RPM installer should take care of general configuration issues (such as creating the mysql user and group, if needed) automatically.

To administer the SQL node (MySQL server), you should also install the client RPM, as shown here:

shell> rpm -Uhv MySQL-Cluster-client-gpl-8.0.15-1.sles11.i386.rpm

This installs the mysql client program.

Management nodes. To install the NDB Cluster management server, it is necessary only to use the server RPM. Copy this RPM to the computer intended to host the management node, and then install it by running the following command as the system root user (replace the name shown for the RPM as necessary to match that of the server RPM downloaded from the MySQL website):

shell> rpm -Uhv MySQL-Cluster-server-gpl-8.0.15-1.sles11.i386.rpm

Although this RPM installs many other files, only the management server binary ndb_mgmd (in the /usr/sbin directory) is actually required for running a management node. The server RPM also installs ndb_mgm , the NDB management client.

See Section 2.5.4, “Installing MySQL on Linux Using RPM Packages from Oracle” , for general information about installing MySQL using RPMs supplied by Oracle. See Section 22.2.4, “Initial Configuration of NDB Cluster” , for information about required post-installation configuration.

22.2.2.3 Installing NDB Cluster Using .deb Files

This section provides information about installing NDB Cluster on Debian and related Linux distributions such as Ubuntu, using the .deb files supplied by Oracle for this purpose.

Oracle also provides an NDB Cluster APT repository for Debian and other distributions. See Installing MySQL NDB Cluster Using the APT Repository , for instructions and additional information.

Oracle provides .deb installer files for NDB Cluster for 32-bit and 64-bit platforms. For a Debian-based system, only a single installer file is necessary. This file is named using the pattern shown here, according to the applicable NDB Cluster version, Debian version, and architecture:

mysql-cluster-gpl-ndbver-debiandebianver-arch.deb

Here, ndbver is the 3-part NDB engine version number, debianver is the major version of Debian ( 8 or 9 ), and arch is one of i686 or x86_64 . In the examples that follow, we assume you wish to install NDB 8.0.15 on a 64-bit Debian 9 system; in this case, the installer file is named mysql-cluster-gpl-8.0.15-debian9-x86_64.deb-bundle.tar .

Once you have downloaded the appropriate .deb file, you can untar it, and then install it from the command line using dpkg , like this:

shell> dpkg -i mysql-cluster-gpl-8.0.15-debian9-x86_64.deb

You can also remove it using dpkg as shown here:

shell> dpkg -r mysql

The installer file should also be compatible with most graphical package managers that work with .deb files, such as GDebi for the Gnome desktop.

The .deb file installs NDB Cluster under /opt/mysql/server- version / , where version is the 2-part release series version for the included MySQL server. For NDB 8.0, this is always 8.0 . The directory layout is the same as that for the generic Linux binary distribution (see Table 2.3, “MySQL Installation Layout for Generic Unix/Linux Binary Package” ), with the exception that startup scripts and configuration files are found in support-files instead of share . All NDB Cluster executables, such as ndb_mgm , ndbd , and ndb_mgmd , are placed in the bin directory.

22.2.2.4 Building NDB Cluster from Source on Linux

This section provides information about compiling NDB Cluster on Linux and other Unix-like platforms. Building NDB Cluster from source is similar to building the standard MySQL Server, although it differs in a few key respects discussed here. For general information about building MySQL from source, see Section 2.9, “Installing MySQL from Source” . For information about compiling NDB Cluster on Windows platforms, see Section 22.2.3.2, “Compiling and Installing NDB Cluster from Source on Windows” .

Building MySQL NDB Cluster 8.0 requires using the MySQL Server 8.0 sources. These are available from the MySQL downloads page at https://dev.mysql.com/downloads/ . The archived source file should have a name similar to mysql-8.0.15.tar.gz . You can also obtain MySQL development sources from launchpad.net .

Note

In previous versions, building of NDB Cluster from standard MySQL Server sources was not supported. In MySQL 8.0 and NDB Cluster 8.0, this is no longer the case— both products are now built from the same sources .

The WITH_NDBCLUSTER option for CMake causes the binaries for the management nodes, data nodes, and other NDB Cluster programs to be built; it also causes mysqld to be compiled with NDB storage engine support. This option (or one of its aliases WITH_NDBCLUSTER_STORAGE_ENGINE and WITH_PLUGIN_NDBCLUSTER ) is required when building NDB Cluster.

Important

The WITH_NDB_JAVA option is enabled by default. This means that, by default, if CMake cannot find the location of Java on your system, the configuration process fails; if you do not wish to enable Java and ClusterJ support, you must indicate this explicitly by configuring the build using -DWITH_NDB_JAVA=OFF . Use WITH_CLASSPATH to provide the Java classpath if needed.

For more information about CMake options specific to building NDB Cluster, see Options for Compiling NDB Cluster .
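Putting this together, a minimal configuration and build might look something like the following sketch; the source directory name, the use of a separate build directory, and any additional options (such as WITH_CLASSPATH ) depend on your environment:

shell> cd mysql-8.0.15           # unpacked source directory
shell> mkdir bld && cd bld       # build in a separate directory
shell> cmake .. -DWITH_NDBCLUSTER=ON -DWITH_NDB_JAVA=OFF
shell> make && make install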

After you have run make && make install (or your system's equivalent), the result is similar to what is obtained by unpacking a precompiled binary to the same location.

Management nodes. When building from source and running the default make install , the management server and management client binaries ( ndb_mgmd and ndb_mgm ) can be found in /usr/local/mysql/bin . Only ndb_mgmd is required to be present on a management node host; however, it is also a good idea to have ndb_mgm present on the same host machine. Neither of these executables requires a specific location on the host machine's file system.

Data nodes. The only executable required on a data node host is the data node binary ndbd or ndbmtd . ( mysqld , for example, does not have to be present on the host machine.) By default, when building from source, this file is placed in the directory /usr/local/mysql/bin . For installing on multiple data node hosts, only ndbd or ndbmtd need be copied to the other host machine or machines. (This assumes that all data node hosts use the same architecture and operating system; otherwise you may need to compile separately for each different platform.) The data node binary need not be in any particular location on the host's file system, as long as the location is known.

When compiling NDB Cluster from source, no special options are required for building multithreaded data node binaries. Configuring the build with NDB storage engine support causes ndbmtd to be built automatically; make install places the ndbmtd binary in the installation bin directory along with mysqld , ndbd , and ndb_mgm .

SQL nodes. If you compile MySQL with clustering support, and perform the default installation (using make install as the system root user), mysqld is placed in /usr/local/mysql/bin . Follow the steps given in Section 2.9, “Installing MySQL from Source” to make mysqld ready for use. If you want to run multiple SQL nodes, you can use a copy of the same mysqld executable and its associated support files on several machines. The easiest way to do this is to copy the entire /usr/local/mysql directory and all directories and files contained within it to the other SQL node host or hosts, then repeat the steps from Section 2.9, “Installing MySQL from Source” on each machine. If you configure the build with a nondefault PREFIX option, you must adjust the directory accordingly.
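For example, you might propagate the installation to a second SQL node host as shown in this sketch, where the host name sqlnode2 is a placeholder and rsync is assumed to be available on both hosts:

shell> rsync -a /usr/local/mysql/ sqlnode2:/usr/local/mysql/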

In Section 22.2.4, “Initial Configuration of NDB Cluster” , we create configuration files for all of the nodes in our example NDB Cluster.

22.2.3 Installing NDB Cluster on Windows

This section describes installation procedures for NDB Cluster on Windows hosts. NDB Cluster 8.0 binaries for Windows can be obtained from https://dev.mysql.com/downloads/cluster/ . For information about installing NDB Cluster on Windows from a binary release provided by Oracle, see Section 22.2.3.1, “Installing NDB Cluster on Windows from a Binary Release” .

It is also possible to compile and install NDB Cluster from source on Windows using Microsoft Visual Studio. For more information, see Section 22.2.3.2, “Compiling and Installing NDB Cluster from Source on Windows” .

22.2.3.1 Installing NDB Cluster on Windows from a Binary Release

This section describes a basic installation of NDB Cluster on Windows using a binary no-install NDB Cluster release provided by Oracle, using the same 4-node setup outlined in the beginning of this section (see Section 22.2, “NDB Cluster Installation” ), as shown in the following table:

Table 22.6 Network addresses of nodes in example cluster

Node IP Address
Management node ( mgmd ) 198.51.100.10
SQL node ( mysqld ) 198.51.100.20
Data node "A" ( ndbd ) 198.51.100.30
Data node "B" ( ndbd ) 198.51.100.40

As on other platforms, the NDB Cluster host computer running an SQL node must have installed on it a MySQL Server binary ( mysqld.exe ). You should also have the MySQL client ( mysql.exe ) on this host. For management nodes and data nodes, it is not necessary to install the MySQL Server binary; however, each management node requires the management server daemon ( ndb_mgmd.exe ); each data node requires the data node daemon ( ndbd.exe or ndbmtd.exe ). For this example, we refer to ndbd.exe as the data node executable, but you can install ndbmtd.exe , the multithreaded version of this program, instead, in exactly the same way. You should also install the management client ( ndb_mgm.exe ) on the management server host. This section covers the steps necessary to install the correct Windows binaries for each type of NDB Cluster node.

Note

As with other Windows programs, NDB Cluster executables are named with the .exe file extension. However, it is not necessary to include the .exe extension when invoking these programs from the command line. Therefore, we often simply refer to these programs in this documentation as mysqld , mysql , ndb_mgmd , and so on. You should understand that, whether we refer (for example) to mysqld or mysqld.exe , either name means the same thing (the MySQL Server program).

For setting up an NDB Cluster using Oracle's no-install binaries, the first step in the installation process is to download the latest NDB Cluster Windows ZIP binary archive from https://dev.mysql.com/downloads/cluster/ . This archive has a filename of the form mysql-cluster-gpl-ver-winarch.zip , where ver is the NDB storage engine version (such as 8.0.15 ), and arch is the architecture ( 32 for 32-bit binaries, and 64 for 64-bit binaries). For example, the NDB Cluster 8.0.15 archive for 64-bit Windows systems is named mysql-cluster-gpl-8.0.15-win64.zip .

You can run 32-bit NDB Cluster binaries on both 32-bit and 64-bit versions of Windows; however, 64-bit NDB Cluster binaries can be used only on 64-bit versions of Windows. If you are using a 32-bit version of Windows on a computer that has a 64-bit CPU, then you must use the 32-bit NDB Cluster binaries.

To minimize the number of files that need to be downloaded from the Internet or copied between machines, we start with the computer where you intend to run the SQL node.

SQL node. We assume that you have placed a copy of the archive in the directory C:\Documents and Settings\ username \My Documents\Downloads on the computer having the IP address 198.51.100.20, where username is the name of the current user. (You can obtain this name using ECHO %USERNAME% on the command line.) To install and run NDB Cluster executables as Windows services, this user should be a member of the Administrators group.

Extract all the files from the archive. The Extraction Wizard integrated with Windows Explorer is adequate for this task. (If you use a different archive program, be sure that it extracts all files and directories from the archive, and that it preserves the archive's directory structure.) When you are asked for a destination directory, enter C:\ , which causes the Extraction Wizard to extract the archive to the directory C:\mysql-cluster-gpl- ver -win arch . Rename this directory to C:\mysql .

It is possible to install the NDB Cluster binaries to directories other than C:\mysql\bin ; however, if you do so, you must modify the paths shown in this procedure accordingly. In particular, if the MySQL Server (SQL node) binary is installed to a location other than C:\mysql or C:\Program Files\MySQL\MySQL Server 8.0 , or if the SQL node's data directory is in a location other than C:\mysql\data or C:\Program Files\MySQL\MySQL Server 8.0\data , extra configuration options must be used on the command line or added to the my.ini or my.cnf file when starting the SQL node. For more information about configuring a MySQL Server to run in a nonstandard location, see Section 2.3.5, “Installing MySQL on Microsoft Windows Using a noinstall ZIP Archive” .
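As a sketch of what such extra configuration might look like, an SQL node installed under a hypothetical D:\mysql directory could declare its locations in the option file as shown here (note the use of forward slashes; see the Important note on directory paths later in this section):

[mysqld]
# Hypothetical nonstandard install and data directories:
basedir=D:/mysql
datadir=D:/mysql/data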

For a MySQL Server with NDB Cluster support to run as part of an NDB Cluster, it must be started with the options --ndbcluster and --ndb-connectstring . While you can specify these options on the command line, it is usually more convenient to place them in an option file. To do this, create a new text file in Notepad or another text editor. Enter the following configuration information into this file:

[mysqld]
# Options for mysqld process:
ndbcluster                       # run NDB storage engine
ndb-connectstring=198.51.100.10  # location of management server

You can add other options used by this MySQL Server if desired (see Section 2.3.5.2, “Creating an Option File” ), but the file must contain the options shown, at a minimum. Save this file as C:\mysql\my.ini . This completes the installation and setup for the SQL node.

Data nodes. An NDB Cluster data node on a Windows host requires only a single executable, one of either ndbd.exe or ndbmtd.exe . For this example, we assume that you are using ndbd.exe , but the same instructions apply when using ndbmtd.exe . On each computer where you wish to run a data node (the computers having the IP addresses 198.51.100.30 and 198.51.100.40), create the directories C:\mysql , C:\mysql\bin , and C:\mysql\cluster-data ; then, on the computer where you downloaded and extracted the no-install archive, locate ndbd.exe in the C:\mysql\bin directory. Copy this file to the C:\mysql\bin directory on each of the two data node hosts.

To function as part of an NDB Cluster, each data node must be given the address or hostname of the management server. You can supply this information on the command line using the --ndb-connectstring or -c option when starting each data node process. However, it is usually preferable to put this information in an option file. To do this, create a new text file in Notepad or another text editor and enter the following text:

[mysql_cluster]
# Options for data node process:
ndb-connectstring=198.51.100.10  # location of management server

Save this file as C:\mysql\my.ini on the data node host. Create another text file containing the same information and save it as C:\mysql\my.ini on the other data node host, or copy the my.ini file from the first data node host to the second one, making sure to place the copy in the second data node's C:\mysql directory. Both data node hosts are now ready to be used in the NDB Cluster, which leaves only the management node to be installed and configured.

Management node. The only executable program required on a computer used for hosting an NDB Cluster management node is the management server program ndb_mgmd.exe . However, in order to administer the NDB Cluster once it has been started, you should also install the NDB Cluster management client program ndb_mgm.exe on the same machine as the management server. Locate these two programs on the machine where you downloaded and extracted the no-install archive; this should be the directory C:\mysql\bin on the SQL node host. Create the directory C:\mysql\bin on the computer having the IP address 198.51.100.10, then copy both programs to this directory.

You should now create two configuration files for use by ndb_mgmd.exe :

  1. A local configuration file to supply configuration data specific to the management node itself. Typically, this file needs only to supply the location of the NDB Cluster global configuration file (see item 2).

    To create this file, start a new text file in Notepad or another text editor, and enter the following information:

    [mysql_cluster]
    # Options for management node process
    config-file=C:/mysql/bin/config.ini

    Save this file as the text file C:\mysql\bin\my.ini .

  2. A global configuration file from which the management node can obtain configuration information governing the NDB Cluster as a whole. At a minimum, this file must contain a section for each node in the NDB Cluster, and the IP addresses or hostnames for the management node and all data nodes ( HostName configuration parameter). It is also advisable to include the hostname or IP address of the SQL node, the number of replicas ( NoOfReplicas ), and the directories used for data and log files, as in the example that follows:

    Create a new text file using a text editor such as Notepad, and input the following information:

    [ndbd default]
    # Options affecting ndbd processes on all data nodes:
    NoOfReplicas=2                      # Number of replicas
    DataDir=C:/mysql/cluster-data       # Directory for each data node's data files
                                        # Forward slashes used in directory path,
                                        # rather than backslashes. This is correct;
                                        # see Important note in text
    DataMemory=80M    # Memory allocated to data storage
    IndexMemory=18M   # Memory allocated to index storage
                      # For DataMemory and IndexMemory, we have used the
                      # default values. Since the "world" database takes up
                      # only about 500KB, this should be more than enough for
                      # this example Cluster setup.
    [ndb_mgmd]
    # Management process options:
    HostName=198.51.100.10              # Hostname or IP address of management node
    DataDir=C:/mysql/bin/cluster-logs   # Directory for management node log files
    [ndbd]
    # Options for data node "A":
                                    # (one [ndbd] section per data node)
    HostName=198.51.100.30          # Hostname or IP address
    [ndbd]
    # Options for data node "B":
    HostName=198.51.100.40          # Hostname or IP address
    [mysqld]
    # SQL node options:
    HostName=198.51.100.20          # Hostname or IP address

    Save this file as the text file C:\mysql\bin\config.ini .

Important

A single backslash character ( \ ) cannot be used when specifying directory paths in program options or configuration files used by NDB Cluster on Windows. Instead, you must either escape each backslash character with a second backslash ( \\ ), or replace the backslash with a forward slash character ( / ). For example, the following line from the [ndb_mgmd] section of an NDB Cluster config.ini file does not work:

DataDir=C:\mysql\bin\cluster-logs

Instead, you may use either of the following:

DataDir=C:\\mysql\\bin\\cluster-logs  # Escaped backslashes

DataDir=C:/mysql/bin/cluster-logs     # Forward slashes

For reasons of brevity and legibility, we recommend that you use forward slashes in directory paths used in NDB Cluster program options and configuration files on Windows.

22.2.3.2 Compiling and Installing NDB Cluster from Source on Windows

Oracle provides precompiled NDB Cluster binaries for Windows which should be adequate for most users. However, if you wish, it is also possible to compile NDB Cluster for Windows from source code. The procedure for doing this is almost identical to the procedure used to compile the standard MySQL Server binaries for Windows, and uses the same tools. However, there are two major differences:

  • To build NDB Cluster 8.0, use the MySQL Server 8.0 sources, which you can obtain from https://dev.mysql.com/downloads/ .

    Formerly, NDB Cluster used its own source code. In MySQL 8.0 and NDB Cluster 8.0, this is no longer the case, and both products are now built from the same source.

  • You must configure the build using the WITH_NDBCLUSTER option in addition to any other build options you wish to use with CMake . WITH_NDBCLUSTER_STORAGE_ENGINE and WITH_PLUGIN_NDBCLUSTER are supported as aliases for WITH_NDBCLUSTER , and work in exactly the same way.

Important

The WITH_NDB_JAVA option is enabled by default. This means that, by default, if CMake cannot find the location of Java on your system, the configuration process fails; if you do not wish to enable Java and ClusterJ support, you must indicate this explicitly by configuring the build using -DWITH_NDB_JAVA=OFF . (Bug #12379735) Use WITH_CLASSPATH to provide the Java classpath if needed.

For more information about CMake options specific to building NDB Cluster, see Options for Compiling NDB Cluster .
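
For example, on a system where Visual Studio and CMake are installed, the configuration and build steps might look something like what is shown here. This is a sketch only; the source and build directory paths are illustrative, and WITH_NDB_JAVA is disabled as described in the preceding Important note (omit that option if you want Java and ClusterJ support):

C:\mysql-src> mkdir bld
C:\mysql-src> cd bld
C:\mysql-src\bld> cmake .. -DWITH_NDBCLUSTER=1 -DWITH_NDB_JAVA=OFF
C:\mysql-src\bld> cmake --build . --config Release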

Once the build process is complete, you can create a Zip archive containing the compiled binaries; Section 2.9.2, “Installing MySQL Using a Standard Source Distribution” provides the commands needed to perform this task on Windows systems. The NDB Cluster binaries can be found in the bin directory of the resulting archive, which is equivalent to the no-install archive, and which can be installed and configured in the same manner. For more information, see Section 22.2.3.1, “Installing NDB Cluster on Windows from a Binary Release” .

22.2.3.3 Initial Startup of NDB Cluster on Windows

Once the NDB Cluster executables and needed configuration files are in place, performing an initial start of the cluster is simply a matter of starting the NDB Cluster executables for all nodes in the cluster. Each cluster node process must be started separately, on the host computer where it resides. Start the management node first, followed by the data nodes, and finally any SQL nodes.

  1. On the management node host, issue the following command from the command line to start the management node process. The output should appear similar to what is shown here:

    C:\mysql\bin> ndb_mgmd
    2010-06-23 07:53:34 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-8.0.15-ndb-8.0.15
    2010-06-23 07:53:34 [MgmtSrvr] INFO -- Reading cluster configuration from 'config.ini'
                                            

    The management node process continues to print logging output to the console. This is normal, because the management node is not running as a Windows service. (If you have used NDB Cluster on a Unix-like platform such as Linux, you may notice that the management node's default behavior in this regard on Windows is effectively the opposite of its behavior on Unix systems, where it runs by default as a Unix daemon process. This behavior is also true of NDB Cluster data node processes running on Windows.) For this reason, do not close the window in which ndb_mgmd.exe is running; doing so kills the management node process. (See Section 22.2.3.4, “Installing NDB Cluster Processes as Windows Services” , where we show how to install and run NDB Cluster processes as Windows services.)

    The -f option (long form --config-file ) tells the management node where to find the global configuration file ( config.ini ). In this example it is omitted from the command line, because the my.ini file created earlier already contains a config-file line pointing to C:/mysql/bin/config.ini .

    Important

    An NDB Cluster management node caches the configuration data that it reads from config.ini ; once it has created a configuration cache, it ignores the config.ini file on subsequent starts unless forced to do otherwise. This means that, if the management node fails to start due to an error in this file, you must make the management node re-read config.ini after you have corrected any errors in it. You can do this by starting ndb_mgmd.exe with the --reload or --initial option on the command line. Either of these options works to refresh the configuration cache.

    It is not necessary or advisable to use either of these options in the management node's my.ini file.
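
    For example, after correcting an error in config.ini , you can force the management node to re-read it like this (assuming the same working directory as before):

    C:\mysql\bin> ndb_mgmd --reload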

    For additional information about options which can be used with ndb_mgmd , see Section 22.4.4, “ ndb_mgmd — The NDB Cluster Management Server Daemon” , as well as Section 22.4.31, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs” .

  2. On each of the data node hosts, run the command shown here to start the data node processes:

    C:\mysql\bin> ndbd
    2010-06-23 07:53:46 [ndbd] INFO -- Configuration fetched from 'localhost:1186', generation: 1
                                            

    In each case, the first line of output from the data node process should resemble what is shown in the preceding example, and is followed by additional lines of logging output. As with the management node process, this is normal, because the data node is not running as a Windows service. For this reason, do not close the console window in which the data node process is running; doing so kills ndbd.exe . (For more information, see Section 22.2.3.4, “Installing NDB Cluster Processes as Windows Services” .)

  3. Do not start the SQL node yet; it cannot connect to the cluster until the data nodes have finished starting, which may take some time. Instead, in a new console window on the management node host, start the NDB Cluster management client ndb_mgm.exe , which should be in C:\mysql\bin on the management node host. (Do not try to re-use the console window where ndb_mgmd.exe is running by typing CTRL + C , as this kills the management node.) The resulting output should look like this:

    C:\mysql\bin> ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm>
                                            

    When the prompt ndb_mgm> appears, this indicates that the management client is ready to receive NDB Cluster management commands. You can observe the status of the data nodes as they start by entering ALL STATUS at the management client prompt. This command produces a running report of the data nodes' startup sequence, which should look something like this:

    ndb_mgm> ALL STATUS
    Connected to Management Server at: localhost:1186
    Node 2: starting (Last completed phase 3) (mysql-8.0.15-ndb-8.0.15)
    Node 3: starting (Last completed phase 3) (mysql-8.0.15-ndb-8.0.15)
    Node 2: starting (Last completed phase 4) (mysql-8.0.15-ndb-8.0.15)
    Node 3: starting (Last completed phase 4) (mysql-8.0.15-ndb-8.0.15)
    Node 2: Started (version 8.0.15)
    Node 3: Started (version 8.0.15)
    ndb_mgm>
                                            
    Note

    Commands issued in the management client are not case-sensitive; we use uppercase as the canonical form of these commands, but you are not required to observe this convention when inputting them into the ndb_mgm client. For more information, see Section 22.5.2, “Commands in the NDB Cluster Management Client” .

    The output produced by ALL STATUS is likely to vary from what is shown here, according to the speed at which the data nodes are able to start, the release version number of the NDB Cluster software you are using, and other factors. What is significant is that, when you see that both data nodes have started, you are ready to start the SQL node.

    You can leave ndb_mgm.exe running; it has no negative impact on the performance of the NDB Cluster, and we use it in the next step to verify that the SQL node is connected to the cluster after you have started it.

  4. On the computer designated as the SQL node host, open a console window and navigate to the directory where you unpacked the NDB Cluster binaries (if you are following our example, this is C:\mysql\bin ).

    Start the SQL node by invoking mysqld.exe from the command line, as shown here:

    C:\mysql\bin> mysqld --console
                                            

    The --console option causes logging information to be written to the console, which can be helpful in the event of problems. (Once you are satisfied that the SQL node is running satisfactorily, you can stop it and restart it without the --console option, so that logging is performed normally.)

    In the console window where the management client ( ndb_mgm.exe ) is running on the management node host, enter the SHOW command, which should produce output similar to what is shown here:

    ndb_mgm> SHOW
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=2    @198.51.100.30  (Version: 8.0.15-ndb-8.0.15, Nodegroup: 0, *)
    id=3    @198.51.100.40  (Version: 8.0.15-ndb-8.0.15, Nodegroup: 0)
    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @198.51.100.10  (Version: 8.0.15-ndb-8.0.15)
    [mysqld(API)]   1 node(s)
    id=4    @198.51.100.20  (Version: 8.0.15-ndb-8.0.15)
                                            

    You can also verify that the SQL node is connected to the NDB Cluster in the mysql client ( mysql.exe ) using the SHOW ENGINE NDB STATUS statement.
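
    For example (the connection parameters shown here are illustrative; substitute those appropriate for your own SQL node):

    C:\mysql\bin> mysql -u root -p
    mysql> SHOW ENGINE NDB STATUS;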

You should now be ready to work with database objects and data using NDB Cluster's NDBCLUSTER storage engine. See Section 22.2.6, “NDB Cluster Example with Tables and Data” , for more information and examples.
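
As a quick check, you can create and query a simple NDB table in the mysql client, something like what is shown here (the database and table names are illustrative only):

mysql> CREATE DATABASE clusterdb;
mysql> USE clusterdb;
mysql> CREATE TABLE t1 (id INT NOT NULL PRIMARY KEY) ENGINE=NDBCLUSTER;
mysql> INSERT INTO t1 VALUES (1);
mysql> SELECT * FROM t1;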

You can also install ndb_mgmd.exe , ndbd.exe , and ndbmtd.exe as Windows services. For information on how to do this, see Section 22.2.3.4, “Installing NDB Cluster Processes as Windows Services” .

22.2.3.4 Installing NDB Cluster Processes as Windows Services

Once you are satisfied that NDB Cluster is running as desired, you can install the management nodes and data nodes as Windows services, so that these processes are started and stopped automatically whenever Windows is started or stopped. This also makes it possible to control these processes from the command line with the appropriate SC START and SC STOP commands, or using the Windows graphical Services utility. NET START and NET STOP commands can also be used.

Installing programs as Windows services usually must be done using an account that has Administrator rights on the system.

To install the management node as a service on Windows, invoke ndb_mgmd.exe from the command line on the machine hosting the management node, using the --install option, as shown here:

C:\> C:\mysql\bin\ndb_mgmd.exe --install
Installing service 'NDB Cluster Management Server'
  as '"C:\mysql\bin\ndb_mgmd.exe" "--service=ndb_mgmd"'
Service successfully installed.
                            
Important

When installing an NDB Cluster program as a Windows service, you should always specify the complete path; otherwise the service installation may fail with the error The system cannot find the file specified .

The --install option must be used first, ahead of any other options that might be specified for ndb_mgmd.exe . However, it is preferable to specify such options in an options file instead. If your options file is not in one of the default locations shown in the output of ndb_mgmd.exe --help , you can specify its location using the --defaults-file option.

Now you should be able to start and stop the management server like this:

C:\> SC START ndb_mgmd
C:\> SC STOP ndb_mgmd
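
You can check the service's current state at any time using SC QUERY with the service name, as shown here:

C:\> SC QUERY ndb_mgmd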
                            
Note

If using NET commands, you can also start or stop the management server as a Windows service using the descriptive name, as shown here:

C:\> NET START "NDB Cluster Management Server"
The NDB Cluster Management Server service is starting.
The NDB Cluster Management Server service was started successfully.
C:\> NET STOP "NDB Cluster Management Server"
The NDB Cluster Management Server service is stopping.
The NDB Cluster Management Server service was stopped successfully.
                                

It is usually simpler to specify a short service name or to permit the default service name to be used when installing the service, and then reference that name when starting or stopping the service. To specify a service name other than ndb_mgmd , append it to the --install option, as shown in this example:

C:\> C:\mysql\bin\ndb_mgmd.exe --install=mgmd1
Installing service 'NDB Cluster Management Server'
  as '"C:\mysql\bin\ndb_mgmd.exe" "--service=mgmd1"'
Service successfully installed.
                            

Now you should be able to start or stop the service using the name you have specified, like this:

C:\> SC START mgmd1
C:\> SC STOP mgmd1
                            

To remove the management node service, use SC DELETE service_name :

C:\> SC DELETE mgmd1
                            

Alternatively, invoke ndb_mgmd.exe with the --remove option, as shown here:

C:\> C:\mysql\bin\ndb_mgmd.exe --remove
Removing service 'NDB Cluster Management Server'
Service successfully removed.
                            

If you installed the service using a service name other than the default, pass the service name as the value of the ndb_mgmd.exe --remove option, like this:

C:\> C:\mysql\bin\ndb_mgmd.exe --remove=mgmd1
Removing service 'mgmd1'
Service successfully removed.
                            

Installation of an NDB Cluster data node process as a Windows service can be done in a similar fashion, using the --install option for ndbd.exe (or ndbmtd.exe ), as shown here:

C:\> C:\mysql\bin\ndbd.exe --install
Installing service 'NDB Cluster Data Node Daemon' as '"C:\mysql\bin\ndbd.exe" "--service=ndbd"'
Service successfully installed.
                            

Now you can start or stop the data node as shown in the following example:

C:\> SC START ndbd
C:\> SC STOP ndbd
                            

To remove the data node service, use SC DELETE service_name :

C:\> SC DELETE ndbd
                            

Alternatively, invoke ndbd.exe with the --remove option, as shown here:

C:\> C:\mysql\bin\ndbd.exe --remove
Removing service 'NDB Cluster Data Node Daemon'
Service successfully removed.
                            

As with ndb_mgmd.exe (and mysqld.exe ), when installing ndbd.exe as a Windows service, you can also specify a name for the service as the value of --install , and then use it when starting or stopping the service, like this:

C:\> C:\mysql\bin\ndbd.exe --install=dnode1
Installing service 'dnode1' as '"C:\mysql\bin\ndbd.exe" "--service=dnode1"'
Service successfully installed.
C:\> SC START dnode1
C:\> SC STOP dnode1
                            

If you specified a service name when installing the data node service, you can use this name when removing it as well, as shown here:

C:\> SC DELETE dnode1
                            

Alternatively, you can pass the service name as the value of the ndbd.exe --remove option, as shown here:

C:\> C:\mysql\bin\ndbd.exe --remove=dnode1
Removing service 'dnode1'
Service successfully removed.
                            

Installation of the SQL node as a Windows service, starting the service, stopping the service, and removing the service are done in a similar fashion, using mysqld --install , SC START , SC STOP , and