@jakewins
Created November 8, 2017 13:54
Setting Default Description
unsupported.dbms.directories.neo4j_home null Root relative to which directory settings are resolved. This is set in code and should never be configured explicitly.
dbms.read_only false Only allow read operations from this Neo4j instance. This mode still requires write access to the directory for lock purposes.
unsupported.dbms.disconnected false Disable all protocol connectors.
unsupported.dbms.report_configuration false Print out the effective Neo4j configuration after startup.
dbms.config.strict_validation false A strict configuration validation will prevent the database from starting up if unknown configuration options are specified in the neo4j settings namespace (such as dbms., ha., cypher., etc). This is currently false by default but will be true by default in 4.0.
dbms.allow_format_migration false Whether to allow a store upgrade in case the current version of the database starts against an older store version. Setting this to `true` does not guarantee successful upgrade, it just allows an upgrade to be performed.
dbms.allow_upgrade false Whether to allow an upgrade in case the current version of the database starts against an older version.
dbms.record_format Database record format. Valid values: `standard`, `high_limit`. The `high_limit` format is available for Enterprise Edition only. It is required if you have a graph that is larger than 34 billion nodes, 34 billion relationships, or 68 billion properties. A change of the record format is irreversible. Certain operations may suffer from a performance penalty of up to 10%, which is why this format is not switched on by default.
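As an illustration (values are examples, not recommendations), opting in to the Enterprise-only format would be a single neo4j.conf line; remember the change is irreversible:

```
# Use the high_limit record format on a new Enterprise Edition store
dbms.record_format=high_limit
```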
cypher.default_language_version default Set this to specify the default parser (language version).
cypher.planner default Set this to specify the default planner for the default language version.
cypher.hints_error false Set this to specify the behavior when Cypher planner or runtime hints cannot be fulfilled. If true, then non-conformance will result in an error, otherwise only a warning is generated.
cypher.forbid_exhaustive_shortestpath false This setting is associated with performance optimization. Set this to `true` in situations where it is preferable to have any queries using the 'shortestPath' function terminate as soon as possible with no answer, rather than potentially running for a long time attempting to find an answer (even if there is no path to be found). For most queries, the 'shortestPath' algorithm will return the correct answer very quickly. However there are some cases where it is possible that the fast bidirectional breadth-first search algorithm will find no results even if they exist. This can happen when the predicates in the `WHERE` clause applied to 'shortestPath' cannot be applied to each step of the traversal, and can only be applied to the entire path. When the query planner detects these special cases, it will plan to perform an exhaustive depth-first search if the fast algorithm finds no paths. However, the exhaustive search may be orders of magnitude slower than the fast algorithm. If it is critical that queries terminate as soon as possible, it is recommended that this option be set to `true`, which means that Neo4j will never consider using the exhaustive search for shortestPath queries. However, please note that if no paths are found, an error will be thrown at run time, which will need to be handled by the application.
cypher.forbid_shortestpath_common_nodes true This setting is associated with performance optimization. The shortest path algorithm does not work when the start and end nodes are the same. With this setting set to `false` no path will be returned when that happens. The default value of `true` will instead throw an exception. This can happen if you perform a shortestPath search after a cartesian product that might have the same start and end nodes for some of the rows passed to shortestPath. If it is preferable to not experience this exception, and acceptable for results to be missing for those rows, then set this to `false`. If you cannot accept missing results, and really want the shortestPath between two common nodes, then re-write the query using a standard Cypher variable length pattern expression followed by ordering by path length and limiting to one result.
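A sketch of how these two shortestPath safeguards might be combined in neo4j.conf (illustrative values only):

```
# Fail fast rather than fall back to the exhaustive depth-first search
cypher.forbid_exhaustive_shortestpath=true
# Return no row instead of throwing when start and end nodes coincide
cypher.forbid_shortestpath_common_nodes=false
```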
unsupported.cypher.runtime default Set this to specify the default runtime for the default language version.
unsupported.cypher.compiler_tracing false Enable tracing of compilation in cypher.
dbms.query_cache_size 1000 The number of Cypher query execution plans that are cached.
cypher.statistics_divergence_threshold 0.75 The threshold when a plan is considered stale. If any of the underlying statistics used to create the plan has changed more than this value, the plan is considered stale and will be replanned. A value of 0 means always replan, and 1 means never replan.
unsupported.cypher.non_indexed_label_warning_threshold 10000 The threshold at which a warning is generated if a label scan is done after a `LOAD CSV` where the label has no index.
unsupported.cypher.idp_solver_table_threshold 128 To improve IDP query planning time, we can restrict the internal planning table size, triggering compaction of candidate plans. The smaller the threshold the faster the planning, but the higher the risk of sub-optimal plans.
unsupported.cypher.idp_solver_duration_threshold 1000 To improve IDP query planning time, we can restrict the internal planning loop duration, triggering more frequent compaction of candidate plans. The smaller the threshold the faster the planning, but the higher the risk of sub-optimal plans.
cypher.min_replan_interval 10s The minimum lifetime of a query plan before a query is considered for replanning
dbms.security.allow_csv_import_from_file_urls true Determines if Cypher will allow using file URLs when loading data using `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV` clauses that load data from the file system.
dbms.directories.import null Sets the root directory for file URLs used with the Cypher `LOAD CSV` clause. This must be set to a single directory, restricting access to only those files within that directory and its subdirectories.
dbms.import.csv.legacy_quote_escaping true Selects whether to conform to the standard https://tools.ietf.org/html/rfc4180 for interpreting escaped quotation characters in CSV files loaded using `LOAD CSV`. Setting this to `false` will use the standard, interpreting repeated quotes '""' as a single in-lined quote, while `true` will use the legacy convention originally supported in Neo4j 3.0 and 3.1, allowing a backslash to include quotes in-lined in fields.
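For example, a neo4j.conf fragment restricting `LOAD CSV` to a dedicated directory and opting in to RFC 4180 quoting (the directory name is an illustrative choice):

```
# Allow file URLs, but only below the import directory
dbms.security.allow_csv_import_from_file_urls=true
dbms.directories.import=import
# Interpret repeated quotes "" as an escaped quote, per RFC 4180
dbms.import.csv.legacy_quote_escaping=false
```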
dbms.track_query_cpu_time true Enables or disables tracking of how much time a query spends actively executing on the CPU.
dbms.track_query_allocation true Enables or disables tracking of how many bytes are allocated by the execution of a query.
unsupported.dbms.transaction_start_timeout 1s The maximum amount of time to wait for the database to become available, when starting a new transaction.
unsupported.dbms.executiontime_limit.enabled false Please use dbms.transaction.timeout instead.
dbms.transaction.timeout 0 The maximum time interval of a transaction within which it should be completed.
dbms.lock.acquisition.timeout 0 The maximum time interval within which a lock should be acquired.
dbms.transaction.monitor.check.interval 2s Configures the time interval between transaction monitor checks. Determines how often the monitor thread will check transactions for timeout.
dbms.shutdown_transaction_end_timeout 10s The maximum amount of time to wait for running transactions to complete before allowing initiated database shutdown to continue
dbms.directories.plugins plugins Location of the database plugin directory. Compiled Java JAR files that contain database procedures will be loaded if they are placed in this directory.
dbms.logs.debug.rotation.size 20m Threshold for rotation of the debug log.
unsupported.dbms.logs.debug.debug_loggers org.neo4j.diagnostics,org.neo4j.cluster.protocol,org.neo4j.kernel.ha Debug log contexts that should output debug level logging
dbms.logs.debug.level INFO Debug log level threshold.
unsupported.dbms.counts_store_rotation_timeout 10m Maximum time to wait for active transaction completion when rotating the counts store.
dbms.logs.debug.rotation.delay 300s Minimum time interval after last rotation of the debug log before it may be rotated again.
dbms.logs.debug.rotation.keep_number 7 Maximum number of history files for the debug log.
dbms.checkpoint.interval.tx 100000 Configures the transaction interval between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs from which recovery would start. Longer check-point intervals typically mean that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files. The default is '100000' for a check-point every 100000 transactions.
dbms.checkpoint.interval.time 15m Configures the time interval between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs from which recovery would start. Longer check-point intervals typically mean that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files.
dbms.checkpoint.iops.limit 300 Limit the number of IOs the background checkpoint process will consume per second. This setting is advisory, is ignored in Neo4j Community Edition, and is followed to best effort in Enterprise Edition. An IO is in this case a 8 KiB (mostly sequential) write. Limiting the write IO in this way will leave more bandwidth in the IO subsystem to service random-read IOs, which is important for the response time of queries when the database cannot fit entirely in memory. The only drawback of this setting is that longer checkpoint times may lead to slightly longer recovery times in case of a database or system crash. A lower number means lower IO pressure, and consequently longer checkpoint times. The configuration can also be commented out to remove the limitation entirely, and let the checkpointer flush data as fast as the hardware will go. Set this to -1 to disable the IOPS limit.
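Taken together, a checkpoint-tuning fragment for neo4j.conf might look like this (the values shown are the documented defaults, repeated here purely as an example):

```
# Check-point at most every 15 minutes or 100000 transactions,
# and cap background checkpoint writes at 300 (8 KiB) IOs per second
dbms.checkpoint.interval.time=15m
dbms.checkpoint.interval.tx=100000
dbms.checkpoint.iops.limit=300
```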
dbms.auto_index.nodes.enabled false Controls the auto indexing feature for nodes. Setting it to `false` shuts it down, while `true` enables it by default for properties listed in the dbms.auto_index.nodes.keys setting.
dbms.auto_index.nodes.keys A list of property names (comma separated) that will be indexed by default. This applies to _nodes_ only.
dbms.auto_index.relationships.enabled false Controls the auto indexing feature for relationships. Setting it to `false` shuts it down, while `true` enables it by default for properties listed in the dbms.auto_index.relationships.keys setting.
dbms.auto_index.relationships.keys A list of property names (comma separated) that will be indexed by default. This applies to _relationships_ only.
dbms.index_sampling.background_enabled true Enable or disable background index sampling
dbms.index_sampling.buffer_size 64m Size of buffer used by index sampling. This configuration setting is no longer applicable as from Neo4j 3.0.3. Please use dbms.index_sampling.sample_size_limit instead.
dbms.index_sampling.sample_size_limit 8388608 Index sampling chunk size limit
dbms.index_sampling.update_percentage 5 Percentage of index updates of total index size required before sampling of a given index is triggered
dbms.index_searcher_cache_size 2147483647 The maximum number of open Lucene index searchers.
unsupported.dbms.multi_threaded_schema_index_population_enabled true
unsupported.dbms.enable_native_schema_index true
dbms.tx_log.rotation.retention_policy 7 days Make Neo4j keep the logical transaction logs so that the database can be backed up. Can be used for specifying the threshold after which logical logs are pruned. For example, "10 days" will prune logical logs that contain only transactions older than 10 days from the current time, while "100k txs" will keep the 100k latest transactions and prune any older transactions.
dbms.tx_log.rotation.size 250M Specifies at which file size the logical log will auto-rotate. Minimum accepted value is 1M.
unsupported.dbms.id_generator_fast_rebuild_enabled true Use a quick approach for rebuilding the ID generators. This gives quicker recovery time, but will limit the ability to reuse the space of deleted entities.
unsupported.dbms.memory.pagecache.pagesize 0 Target size for pages of mapped memory. If set to 0, then a reasonable default is chosen, depending on the storage device used.
dbms.memory.pagecache.size null The amount of memory to use for mapping the store files, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g'). If Neo4j is running on a dedicated server, then it is generally recommended to leave about 2-4 gigabytes for the operating system, give the JVM enough heap to hold all your transaction state and query context, and then leave the rest for the page cache. If no page cache memory is configured, then a heuristic setting is computed based on available system resources.
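For instance, pinning the page cache to an explicit size rather than relying on the heuristic is one line in neo4j.conf (the 4g figure is illustrative; size it to what remains after OS and heap):

```
# Explicitly size the page cache instead of using the computed heuristic
dbms.memory.pagecache.size=4g
```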
dbms.memory.pagecache.swapper null Specify which page swapper to use for doing paged IO. This is only used when integrating with proprietary storage technology.
unsupported.dbms.block_size.strings 0 Specifies the block size for storing strings. This parameter is only honored when the store is created, otherwise it is ignored. Note that each character in a string occupies two bytes, meaning that e.g. a block size of 120 will hold a 60-character string before overflowing into a second block. Also note that each block carries ~10B of overhead, so the record size on disk will be slightly larger than the configured block size.
unsupported.dbms.block_size.array_properties 0 Specifies the block size for storing arrays. This parameter is only honored when the store is created, otherwise it is ignored. Also note that each block carries ~10B of overhead, so the record size on disk will be slightly larger than the configured block size.
unsupported.dbms.block_size.labels 0 Specifies the block size for storing labels exceeding the in-lined space in the node record. This parameter is only honored when the store is created, otherwise it is ignored. Also note that each block carries ~10B of overhead, so the record size on disk will be slightly larger than the configured block size.
unsupported.dbms.record_id_batch_size 20 Specifies the size of id batches local to each transaction when committing. Committing a transaction which contains changes most often results in new data records being created. For each record a new id needs to be generated from an id generator. It's more efficient to allocate a batch of ids from the contended id generator, which the transaction holds and generates ids from while creating these new records. This setting specifies how big those batches are. Remaining ids are freed back to id generator on clean shutdown.
unsupported.dbms.kernel_id null An identifier that uniquely identifies this graph database instance within this JVM. Defaults to an auto-generated number depending on how many instances are started in this JVM.
unsupported.dbms.gc_monitor_wait_time 100ms Amount of time in ms the GC monitor thread will wait before taking another measurement.
unsupported.dbms.gc_monitor_threshold 200ms The amount of time in ms the monitor thread has to be blocked before logging a message that it was blocked.
dbms.relationship_grouping_threshold 50 Relationship count threshold for considering a node to be dense
dbms.logs.query.enabled false Log executed queries that take longer than the configured threshold, dbms.logs.query.threshold. Log entries are by default written to the file _query.log_ located in the Logs directory. For location of the Logs directory, see <<file-locations>>. This feature is available in the Neo4j Enterprise Edition.
dbms.directories.logs logs Path of the logs directory.
dbms.logs.query.path null Path to the query log file.
dbms.logs.debug.path null Path to the debug log file.
dbms.logs.query.parameter_logging_enabled true Log parameters for the executed queries being logged.
dbms.logs.query.time_logging_enabled false Log detailed time information for the executed queries being logged.
dbms.logs.query.allocation_logging_enabled false Log allocated bytes for the executed queries being logged.
dbms.logs.query.page_logging_enabled false Log page hits and page faults for the executed queries being logged.
dbms.logs.query.threshold 0s If the execution of a query takes more time than this threshold, the query is logged - provided query logging is enabled. Defaults to 0 seconds, that is, all queries are logged.
dbms.logs.query.rotation.size 20m The file size in bytes at which the query log will auto-rotate. If set to zero then no rotation will occur. Accepts a binary suffix `k`, `m` or `g`.
dbms.logs.query.rotation.keep_number 7 Maximum number of history files for the query log.
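Putting the query-log settings together, a neo4j.conf fragment for slow-query logging might look like this (the 500ms threshold is an arbitrary example, not a recommendation):

```
# Log queries slower than 500 ms, with parameters and timing details,
# rotating at 20m and keeping 7 history files
dbms.logs.query.enabled=true
dbms.logs.query.threshold=500ms
dbms.logs.query.parameter_logging_enabled=true
dbms.logs.query.time_logging_enabled=true
dbms.logs.query.rotation.size=20m
dbms.logs.query.rotation.keep_number=7
```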
unsupported.tools.batch_inserter.batch_size 10000 Specifies the number of operations that the batch inserter will try to group into one batch before flushing data into the underlying storage.
dbms.label_index NATIVE Backend to use for label --> nodes index
dbms.security.auth_enabled false Enable auth requirement to access Neo4j.
unsupported.dbms.security.auth_store.location null
unsupported.dbms.security.auth_max_failed_attempts 3
dbms.security.procedures.unrestricted A list of procedures and user defined functions (comma separated) that are allowed full access to the database. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. Note that this enables these procedures to bypass security. Use with caution.
dbms.procedures.kill_query_verbose true Specifies whether or not dbms.killQueries produces a verbose output, with information about which queries were not found
unsupported.dbms.schema.release_lock_while_building_constraint false Whether or not to release the exclusive schema lock while building uniqueness constraint indexes.
dbms.security.procedures.whitelist * A list of procedures (comma separated) that are to be loaded. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. If this setting is left empty no procedures will be loaded.
dbms.connectors.default_listen_address 127.0.0.1 Default network interface to listen for incoming connections. To listen for connections on all interfaces, use "0.0.0.0". To bind specific connectors to specific network interfaces, specify the +listen_address+ properties for the specific connector.
dbms.connectors.default_advertised_address localhost Default hostname or IP address the server uses to advertise itself to its connectors. To advertise a specific hostname or IP address for a specific connector, specify the +advertised_address+ property for the specific connector.
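As a sketch, a server reachable from outside might set these two defaults as follows (the hostname `graph.example.com` is hypothetical):

```
# Accept connections on all interfaces, but advertise a stable hostname
dbms.connectors.default_listen_address=0.0.0.0
dbms.connectors.default_advertised_address=graph.example.com
```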
unsupported.dbms.logs.bolt.enabled false
unsupported.dbms.logs.bolt.path null
unsupported.dbms.index.archive_failed false Create an archive of an index before re-creating it if failing to load on startup.
dbms.transaction.bookmark_ready_timeout 30s The maximum amount of time to wait for the database state represented by the bookmark.
dbms.udc.enabled true Enable the UDC extension.
unsupported.dbms.udc.first_delay 600000
unsupported.dbms.udc.interval 86400000
unsupported.dbms.udc.host udc.neo4j.org
unsupported.dbms.udc.source null
unsupported.dbms.udc.reg unreg
dbms.backup.enabled true Enable support for running online backups
dbms.backup.address 127.0.0.1:6362-6372 Listening server for online backups
dbms.ids.reuse.types.override RELATIONSHIP,NODE Specifies the names of id types (comma separated) that should be reused. Currently only 'node' and 'relationship' types are supported.
unsupported.dbms.security.module enterprise-security-module
dbms.windows_service_name neo4j Name of the Windows Service.
dbms.jvm.additional Additional JVM arguments.
dbms.memory.heap.initial_size Initial heap size. By default it is calculated based on available system resources.
dbms.memory.heap.max_size Maximum heap size. By default it is calculated based on available system resources.
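For example (the 8g figure is illustrative; size the heap to your transaction and query workload), fixing the heap explicitly rather than relying on the computed default:

```
# Same initial and max heap avoids resize pauses at runtime
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
```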
unsupported.dbms.ephemeral false
unsupported.dbms.lock_manager
unsupported.dbms.tracer null
unsupported.dbms.edition unknown
tools.consistency_checker.check_property_owners false This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform optional additional checking on property ownership. This can detect a theoretical inconsistency where a property could be owned by multiple entities. However, the check is very expensive in time and memory, so it is skipped by default.
tools.consistency_checker.check_label_scan_store true This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform checks on the label scan store. Checking this store is more expensive than checking the native stores, so it may be useful to turn off this check for very large databases.
tools.consistency_checker.check_indexes true This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform checks on indexes. Checking indexes is more expensive than checking the native stores, so it may be useful to turn off this check for very large databases.
tools.consistency_checker.check_graph true This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform checks between nodes, relationships, properties, types and tokens.
ha.slave_read_timeout 20s How long a slave will wait for response from master before giving up.
ha.role_switch_timeout 120s Timeout for request threads waiting for instance to become master or slave.
ha.internal_role_switch_timeout 10s Timeout for waiting for internal conditions during state switch, like for transactions to complete, before switching to master or slave.
ha.slave_lock_timeout 20s Timeout for taking remote (write) locks on slaves. Defaults to ha.slave_read_timeout.
ha.max_channels_per_slave 20 Maximum number of connections a slave can have to the master.
ha.host.data 0.0.0.0:6001-6011 Hostname and port to bind the HA server.
ha.slave_only false Whether this instance should only participate as slave in cluster. If set to `true`, it will never be elected as master.
ha.branched_data_policy keep_all Policy for how to handle branched data.
dbms.security.ha_status_auth_enabled true Require authorization for access to the HA status endpoints.
ha.data_chunk_size 2M Max size of the data chunks that flow between master and slaves in HA. A bigger size may increase throughput, but may also be more sensitive to variations in bandwidth, whereas a lower size increases tolerance for bandwidth variations.
ha.pull_interval 0s Interval of pulling updates from master.
ha.tx_push_factor 1 The number of slaves the master will ask to replicate a committed transaction.
ha.tx_push_strategy fixed_ascending Push strategy of a transaction to a slave during commit.
ha.branched_data_copying_strategy branch_then_copy Strategy for how to order handling of branched data on slaves and copying of the store from the master. The copy_then_branch strategy, when combined with the keep_last or keep_none branch handling strategies, results in safer branching, as a store is always present; a failure to copy the store (for example, because of network failure) does not leave the instance without a store.
ha.pull_batch_size 100 Size of batches of transactions applied on slaves when pulling from master
unsupported.dbms.id_reuse_safe_zone 1h Duration for which the master will buffer ids and not reuse them, to allow slaves to read consistently. Slaves will also terminate transactions running longer than this duration when applying the received transaction stream, to make sure they do not read potentially inconsistent/reused records.
dbms.active_database graph.db Name of the database to load
dbms.directories.data data Path of the data directory. You must not configure more than one Neo4j installation to use the same data directory.
unsupported.dbms.directories.database null
unsupported.dbms.directories.auth null
metrics.prefix neo4j A common prefix for the reported metrics field names. By default, this is either 'neo4j' or a computed value based on the cluster and instance names when running in an HA configuration.
metrics.enabled false The default enablement value for all the supported metrics. Set this to `false` to turn off all metrics by default. The individual settings can then be used to selectively re-enable specific metrics.
metrics.neo4j.enabled false The default enablement value for all Neo4j specific support metrics. Set this to `false` to turn off all Neo4j specific metrics by default. The individual `metrics.neo4j.*` metrics can then be turned on selectively.
metrics.neo4j.tx.enabled false Enable reporting metrics about transactions; number of transactions started, committed, etc.
metrics.neo4j.pagecache.enabled false Enable reporting metrics about the Neo4j page cache; page faults, evictions, flushes, exceptions, etc.
metrics.neo4j.counts.enabled false Enable reporting metrics about approximately how many entities are in the database; nodes, relationships, properties, etc.
metrics.neo4j.network.enabled false Enable reporting metrics about the network usage.
metrics.neo4j.causal_clustering.enabled false Enable reporting metrics about Causal Clustering mode.
metrics.neo4j.checkpointing.enabled false Enable reporting metrics about Neo4j check pointing; when it occurs and how much time it takes to complete.
metrics.neo4j.logrotation.enabled false Enable reporting metrics about the Neo4j log rotation; when it occurs and how much time it takes to complete.
metrics.neo4j.cluster.enabled false Enable reporting metrics about HA cluster info.
metrics.neo4j.server.enabled false Enable reporting metrics about Server threading info.
metrics.jvm.gc.enabled false Enable reporting metrics about the duration of garbage collections
metrics.jvm.memory.enabled false Enable reporting metrics about the memory usage.
metrics.jvm.buffers.enabled false Enable reporting metrics about the buffer pools.
metrics.jvm.threads.enabled false Enable reporting metrics about the current number of threads running.
metrics.cypher.replanning.enabled false Enable reporting metrics about number of occurred replanning events.
metrics.bolt.messages.enabled false Enable reporting metrics about Bolt Protocol message processing.
metrics.csv.enabled false Set to `true` to enable exporting metrics to CSV files
dbms.directories.metrics metrics The target location of the CSV files: a path to a directory wherein a CSV file per reported field will be written.
metrics.csv.interval 3s The reporting interval for the CSV files. That is, how often new rows with numbers are appended to the CSV files.
metrics.graphite.enabled false Set to `true` to enable exporting metrics to Graphite.
metrics.graphite.server :2003 The hostname or IP address of the Graphite server
metrics.graphite.interval 3s The reporting interval for Graphite. That is, how often to send updated metrics to Graphite.
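A fragment wiring a few of the metric groups above to Graphite might look like this (the server hostname and 10s interval are illustrative):

```
# Ship transaction and JVM GC metrics to a Graphite server every 10 seconds
metrics.enabled=true
metrics.neo4j.tx.enabled=true
metrics.jvm.gc.enabled=true
metrics.graphite.enabled=true
metrics.graphite.server=graphite.example.com:2003
metrics.graphite.interval=10s
```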
dbms.security.auth_provider native The authentication and authorization provider that contains both the users and roles. This can be one of the built-in `native` or `ldap` providers, or it can be an externally provided plugin, with a custom name prefixed by `plugin-`, i.e. `plugin-<AUTH_PROVIDER_NAME>`.
dbms.security.auth_providers null A list of security authentication and authorization providers containing the users and roles. They will be queried in the given order when login is attempted.
dbms.security.native.authentication_enabled null Enable authentication via native authentication provider.
dbms.security.native.authorization_enabled null Enable authorization via native authorization provider.
dbms.security.ldap.authentication_enabled null Enable authentication via settings configurable LDAP authentication provider.
dbms.security.ldap.authorization_enabled null Enable authorization via settings configurable LDAP authorization provider.
dbms.security.plugin.authentication_enabled null Enable authentication via plugin authentication providers.
dbms.security.plugin.authorization_enabled null Enable authorization via plugin authorization providers.
dbms.security.ldap.host localhost URL of LDAP server to use for authentication and authorization. The format of the setting is `<protocol>://<hostname>:<port>`, where hostname is the only required field. The supported values for protocol are `ldap` (default) and `ldaps`. The default port for `ldap` is 389 and for `ldaps` 636. For example: `ldaps://ldap.example.com:10389`. NOTE: You may want to consider using STARTTLS (`dbms.security.ldap.use_starttls`) instead of LDAPS for secure connections, in which case the correct protocol is `ldap`.
dbms.security.ldap.use_starttls false Use secure communication with the LDAP server using opportunistic TLS. First an initial insecure connection will be made with the LDAP server, and a STARTTLS command will be issued to negotiate an upgrade of the connection to TLS before initiating authentication.
dbms.security.ldap.referral follow The LDAP referral behavior when creating a connection. This is one of `follow` (automatically follow any referrals), `ignore` (ignore any referrals) or `throw` (throw an exception, which will lead to authentication failure).
dbms.security.ldap.connection_timeout 30s The timeout for establishing an LDAP connection. If a connection with the LDAP server cannot be established within the given time the attempt is aborted. A value of 0 means to use the network protocol's (i.e., TCP's) timeout value.
dbms.security.ldap.read_timeout 30s The timeout for an LDAP read request (i.e. search). If the LDAP server does not respond within the given time the request will be aborted. A value of 0 means wait for a response indefinitely.
dbms.security.ldap.authentication.mechanism simple LDAP authentication mechanism. This is one of `simple` or a SASL mechanism supported by JNDI, for example `DIGEST-MD5`. `simple` is basic username and password authentication and SASL is used for more advanced mechanisms. See RFC 2251 LDAPv3 documentation for more details.
dbms.security.ldap.authentication.user_dn_template uid={0},ou=users,dc=example,dc=com LDAP user DN template. An LDAP object is referenced by its distinguished name (DN), and a user DN is an LDAP fully-qualified unique user identifier. This setting is used to generate an LDAP DN that conforms with the LDAP directory's schema from the user principal that is submitted with the authentication token when logging in. The special token {0} is a placeholder where the user principal will be substituted into the DN string.
dbms.security.ldap.authentication.cache_enabled true Determines if the result of authentication via the LDAP server should be cached or not. Caching is used to limit the number of LDAP requests that have to be made over the network for users that have already been authenticated successfully. A user can be authenticated against an existing cache entry (instead of via an LDAP server) as long as it is alive (see `dbms.security.auth_cache_ttl`). An important consequence of setting this to `true` is that Neo4j then needs to cache a hashed version of the credentials in order to perform credentials matching. This hashing is done using a cryptographic hash function together with a random salt. Preferably a conscious decision should be made if this method is considered acceptable by the security standards of the organization in which this Neo4j instance is deployed.
dbms.security.ldap.authentication.use_samaccountname false Perform authentication with `sAMAccountName` instead of DN. Using this setting requires `dbms.security.ldap.authorization.system_username` and `dbms.security.ldap.authorization.system_password` to be set, since there is no way to log in through LDAP directly with the `sAMAccountName`; instead, the login name will be resolved to a DN that will be used to log in with.
dbms.security.ldap.authorization.use_system_account false Perform LDAP search for authorization info using a system account instead of the user's own account. If this is set to `false` (default), the search for group membership will be performed directly after authentication using the LDAP context bound with the user's own account. The mapped roles will be cached for the duration of `dbms.security.auth_cache_ttl`, and then expire, requiring re-authentication. To avoid frequently having to re-authenticate sessions you may want to set a relatively long auth cache expiration time together with this option. NOTE: This option will only work if the users are permitted to search for their own group membership attributes in the directory. If this is set to `true`, the search will be performed using a special system account user with read access to all the users in the directory. You need to specify the username and password using the settings `dbms.security.ldap.authorization.system_username` and `dbms.security.ldap.authorization.system_password` with this option. Note that this account only needs read access to the relevant parts of the LDAP directory and does not need to have access rights to Neo4j, or any other systems.
dbms.security.ldap.authorization.system_username null An LDAP system account username to use for authorization searches when `dbms.security.ldap.authorization.use_system_account` is `true`. Note that the `dbms.security.ldap.authentication.user_dn_template` will not be applied to this username, so you may have to specify a full DN.
dbms.security.ldap.authorization.system_password null An LDAP system account password to use for authorization searches when `dbms.security.ldap.authorization.use_system_account` is `true`.
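A sketch of the system-account setup these two settings enable, with a hypothetical service-account DN. Note that the full DN is given verbatim, since `dbms.security.ldap.authentication.user_dn_template` is not applied to the system username:

```
# Hypothetical system account used only for authorization searches.
dbms.security.ldap.authorization.use_system_account=true
# Full DN -- the user_dn_template is NOT applied to this value.
dbms.security.ldap.authorization.system_username=cn=search-svc,ou=service,dc=example,dc=com
dbms.security.ldap.authorization.system_password=changeme
```

This account only needs read access to the relevant parts of the directory; it does not need any access rights to Neo4j itself.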
dbms.security.ldap.authorization.user_search_base ou=users,dc=example,dc=com The name of the base object or named context to search for user objects when LDAP authorization is enabled. A common case is that this matches the last part of `dbms.security.ldap.authentication.user_dn_template`.
dbms.security.ldap.authorization.user_search_filter (&(objectClass=*)(uid={0})) The LDAP search filter to search for a user principal when LDAP authorization is enabled. The filter should contain the placeholder token {0} which will be substituted for the user principal.
dbms.security.ldap.authorization.group_membership_attributes memberOf A list of attribute names on a user object that contains groups to be used for mapping to roles when LDAP authorization is enabled.
dbms.security.ldap.authorization.group_to_role_mapping null An authorization mapping from LDAP group names to Neo4j role names. The map should be formatted as a semicolon separated list of key-value pairs, where the key is the LDAP group name and the value is a comma separated list of corresponding role names. For example: `group1=role1;group2=role2;group3=role3,role4,role5`. You could also use whitespace and quotes around group names to make this mapping more readable, for example:
dbms.security.ldap.authorization.group_to_role_mapping=\
    "cn=Neo4j Read Only,cn=users,dc=example,dc=com"      = reader; \
    "cn=Neo4j Read-Write,cn=users,dc=example,dc=com"     = publisher; \
    "cn=Neo4j Schema Manager,cn=users,dc=example,dc=com" = architect; \
    "cn=Neo4j Administrator,cn=users,dc=example,dc=com"  = admin
dbms.security.auth_cache_ttl 10m The time to live (TTL) for cached authentication and authorization info when using external auth providers (LDAP or plugin). Setting the TTL to 0 will disable auth caching. Disabling caching while using the LDAP auth provider requires the use of an LDAP system account for resolving authorization information.
dbms.security.auth_cache_max_capacity 10000 The maximum capacity for authentication and authorization caches (respectively).
dbms.logs.security.path null Path to the security log file.
dbms.logs.security.level INFO Security log level threshold.
dbms.security.log_successful_authentication true Set to log successful authentication events to the security log. If this is set to `false` only failed authentication events will be logged, which could be useful if you find that the successful events spam the logs too much, and you do not require full auditing capability.
dbms.logs.security.rotation.size 20m Threshold for rotation of the security log.
dbms.logs.security.rotation.delay 300s Minimum time interval after last rotation of the security log before it may be rotated again.
dbms.logs.security.rotation.keep_number 7 Maximum number of history files for the security log.
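Putting the security-log settings together, a hypothetical configuration that keeps a longer rotation history and suppresses successful-login noise might look like this (values are illustrative, not recommendations):

```
# Security log tuning -- example values only.
dbms.logs.security.level=INFO
dbms.logs.security.rotation.size=50m
dbms.logs.security.rotation.delay=600s
dbms.logs.security.rotation.keep_number=10
# Log only failed authentication attempts.
dbms.security.log_successful_authentication=false
```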
dbms.security.procedures.default_allowed The default role that can execute all procedures and user-defined functions that are not covered by the `dbms.security.procedures.roles` setting. If the `dbms.security.procedures.default_allowed` setting is the empty string (default), procedures will be executed according to the same security rules as normal Cypher statements.
dbms.security.procedures.roles This provides a finer level of control over which roles can execute procedures than the `dbms.security.procedures.default_allowed` setting. For example: `dbms.security.procedures.roles=apoc.convert.*:reader;apoc.load.json*:writer;apoc.trigger.add:TriggerHappy` will allow the role `reader` to execute all procedures in the `apoc.convert` namespace, the role `writer` to execute all procedures in the `apoc.load` namespace that start with `json`, and the role `TriggerHappy` to execute the specific procedure `apoc.trigger.add`. Procedures not matching any of these patterns will be subject to the `dbms.security.procedures.default_allowed` setting.
unsupported.dbms.security.ldap.authorization.connection_pooling true Set to true if connection pooling should be used for authorization searches using the system account.
dbms.directories.certificates certificates Directory for storing certificates to be used by Neo4j for TLS connections
unsupported.dbms.security.tls_certificate_file null Path to the X.509 public certificate to be used by Neo4j for TLS connections
unsupported.dbms.security.tls_key_file null Path to the X.509 private key to be used by Neo4j for TLS connections
causal_clustering.join_catch_up_timeout 10m Timeout for a new member to catch up.
causal_clustering.leader_election_timeout 7s The time limit within which a new leader election will occur if no messages are received.
causal_clustering.refuse_to_be_leader false Prevents the current instance from volunteering to become Raft leader. Defaults to false, and should only be used in exceptional circumstances by expert users. Using this can result in reduced availability for the cluster.
causal_clustering.catchup_batch_size 64 The maximum batch size when catching up (in unit of entries)
causal_clustering.log_shipping_max_lag 256 The maximum lag allowed before log shipping pauses (in unit of entries)
causal_clustering.raft_in_queue_size 64 Size of the RAFT in queue
causal_clustering.raft_in_queue_max_batch 64 Largest batch processed by RAFT
causal_clustering.expected_core_cluster_size 3 Expected number of Core machines in the cluster
causal_clustering.transaction_listen_address 127.0.0.1:6000 Network interface and port for the transaction shipping server to listen on.
causal_clustering.transaction_advertised_address localhost:6000 Advertised hostname/IP address and port for the transaction shipping server.
causal_clustering.raft_listen_address 127.0.0.1:7000 Network interface and port for the RAFT server to listen on.
causal_clustering.raft_advertised_address localhost:7000 Advertised hostname/IP address and port for the RAFT server.
causal_clustering.discovery_listen_address 127.0.0.1:5000 Host and port to bind the cluster member discovery management communication.
causal_clustering.discovery_advertised_address localhost:5000 Advertised cluster member discovery management communication.
causal_clustering.initial_discovery_members null A comma-separated list of other members of the cluster to join.
causal_clustering.discovery_type LIST Configure the discovery type used for cluster name resolution
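As a sketch of how the discovery settings fit together, here is a hypothetical three-core cluster using the default `LIST` discovery type; the hostnames are placeholders:

```
# Same fragment on each core member -- hostnames are illustrative.
causal_clustering.discovery_type=LIST
causal_clustering.initial_discovery_members=core1.example.com:5000,core2.example.com:5000,core3.example.com:5000
causal_clustering.expected_core_cluster_size=3
```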
causal_clustering.disable_middleware_logging true Prevents the network middleware from dumping its own logs. Defaults to true.
causal_clustering.middleware_logging.level 500 Logging level of middleware logging
hazelcast.license_key null Hazelcast license key
causal_clustering.last_applied_state_size 1000 The maximum file size before the storage file is rotated (in unit of entries)
causal_clustering.id_alloc_state_size 1000 The maximum file size before the ID allocation file is rotated (in unit of entries)
causal_clustering.raft_membership_state_size 1000 The maximum file size before the membership state file is rotated (in unit of entries)
causal_clustering.raft_vote_state_size 1000 The maximum file size before the vote state file is rotated (in unit of entries)
causal_clustering.raft_term_state_size 1000 The maximum file size before the term state file is rotated (in unit of entries)
causal_clustering.global_session_tracker_state_size 1000 The maximum file size before the global session tracker state file is rotated (in unit of entries)
causal_clustering.replicated_lock_token_state_size 1000 The maximum file size before the replicated lock token state file is rotated (in unit of entries)
causal_clustering.replication_total_size_limit 128M The maximum amount of data which can be in the replication stage concurrently.
causal_clustering.replication_retry_timeout_base 10s The initial timeout until replication is retried. The timeout will increase exponentially.
causal_clustering.replication_retry_timeout_limit 60s The upper limit for the exponentially incremented retry timeout.
causal_clustering.state_machine_flush_window_size 4096 The number of operations to be processed before the state machines flush to disk
causal_clustering.state_machine_apply_max_batch_size 16 The maximum number of operations to be batched during applications of operations in the state machines
causal_clustering.raft_log_prune_strategy 1g size RAFT log pruning strategy
causal_clustering.raft_log_implementation SEGMENTED RAFT log implementation
causal_clustering.raft_log_rotation_size 250M RAFT log rotation size
causal_clustering.raft_log_reader_pool_size 8 RAFT log reader pool size
causal_clustering.raft_log_pruning_frequency 10m RAFT log pruning frequency
causal_clustering.raft_messages_log_enable false Enable or disable the dump of all network messages pertaining to the RAFT protocol
causal_clustering.raft_messages_log_path null Path to RAFT messages log.
causal_clustering.pull_interval 1s Interval of pulling updates from cores.
causal_clustering.catch_up_client_inactivity_timeout 20s The catch up protocol times out if the given duration elapses with no network activity. Every message received by the client from the server extends the timeout duration.
causal_clustering.unknown_address_logging_throttle 10000ms Throttle limit for logging unknown cluster member address
causal_clustering.read_replica_transaction_applier_batch_size 64 Maximum transaction batch size for read replicas when applying transactions pulled from core servers.
causal_clustering.read_replica_time_to_live 1m Time To Live before read replica is considered unavailable
causal_clustering.cluster_routing_ttl 300s How long drivers should cache the data from the `dbms.cluster.routing.getServers()` procedure.
causal_clustering.cluster_allow_reads_on_followers true Configure if the `dbms.cluster.routing.getServers()` procedure should include followers as read endpoints or return only read replicas. Note: if there are no read replicas in the cluster, followers are returned as read endpoints regardless of the value of this setting. Defaults to true so that followers are available for read-only queries in a typical heterogeneous setup.
causal_clustering.node_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of NODE IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.relationship_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.property_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of PROPERTY IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.string_block_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of STRING_BLOCK IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.array_block_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of ARRAY_BLOCK IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.property_key_token_id_allocation_size 32 The size of the ID allocation requests Core servers will make when they run out of PROPERTY_KEY_TOKEN IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.property_key_token_name_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of PROPERTY_KEY_TOKEN_NAME IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.relationship_type_token_id_allocation_size 32 The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP_TYPE_TOKEN IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.relationship_type_token_name_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP_TYPE_TOKEN_NAME IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.label_token_id_allocation_size 32 The size of the ID allocation requests Core servers will make when they run out of LABEL_TOKEN IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.label_token_name_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of LABEL_TOKEN_NAME IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.neostore_block_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of NEOSTORE_BLOCK IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.schema_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of SCHEMA IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.node_labels_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of NODE_LABELS IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.relationship_group_id_allocation_size 1024 The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP_GROUP IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.
causal_clustering.cluster_topology_refresh 5s Time between scanning the cluster to refresh current server's view of topology
causal_clustering.upstream_selection_strategy default An ordered list, in descending order of preference, of the strategies which read replicas use to choose the upstream server from which to pull transactional updates.
causal_clustering.user_defined_upstream_strategy Configuration of a user-defined upstream selection strategy. The user-defined strategy is used if the list of strategies (`causal_clustering.upstream_selection_strategy`) includes the value `user_defined`.
causal_clustering.connect-randomly-to-server-group Comma separated list of groups to be used by the connect-randomly-to-server-group selection strategy. The connect-randomly-to-server-group strategy is used if the list of strategies (`causal_clustering.upstream_selection_strategy`) includes the value `connect-randomly-to-server-group`.
causal_clustering.server_groups A list of group names for the server used when configuring load balancing and replication policies.
causal_clustering.load_balancing.plugin server_policies The load balancing plugin to use.
causal_clustering.load_balancing.config The configuration must be valid for the configured plugin and usually exists under matching subkeys, e.g. ..config.server_policies.* This is just a top-level placeholder for the plugin-specific configuration.
causal_clustering.load_balancing.shuffle true Enables shuffling of the returned load balancing result.
dbms.security.causal_clustering_status_auth_enabled true Require authorization for access to the Causal Clustering status endpoints.
causal_clustering.multi_dc_license false Enable multi-data center features. Requires appropriate licensing.
causal_clustering.ssl_policy null Name of the SSL policy to be used by the clustering, as defined under the dbms.ssl.policy.* settings. If no policy is configured then the communication will not be secured.
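A sketch of how a named policy might be wired up, assuming the SSL policy framework described by the `dbms.ssl.policy.*` settings; the policy name `cluster` and the subsetting names below are illustrative assumptions, not a verified recipe:

```
# The name here must match a policy defined under dbms.ssl.policy.<name>.*
causal_clustering.ssl_policy=cluster
# Hypothetical policy definition -- check the dbms.ssl.policy.* reference for exact keys.
dbms.ssl.policy.cluster.base_directory=certificates/cluster
dbms.ssl.policy.cluster.client_auth=REQUIRE
```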
dbms.mode SINGLE Configure the operating mode of the database -- 'SINGLE' for stand-alone operation, 'HA' for operating as a member in an HA cluster, 'ARBITER' for a cluster member with no database in an HA cluster, 'CORE' for operating as a core member of a Causal Cluster, or 'READ_REPLICA' for operating as a read replica member of a Causal Cluster.
ha.server_id null Id for a cluster instance. Must be unique within the cluster.
unsupported.ha.cluster_name neo4j.ha The name of a cluster.
ha.initial_hosts null A comma-separated list of other members of the cluster to join.
ha.host.coordination 0.0.0.0:5001-5099 Host and port to bind the cluster management communication.
ha.allow_init_cluster true Whether to allow this instance to create a cluster if unable to join.
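The HA membership settings above can be sketched as a minimal member configuration; hostnames are hypothetical, and `ha.server_id` must differ on each instance:

```
# Minimal HA cluster member -- illustrative hosts, unique server_id per instance.
dbms.mode=HA
ha.server_id=1
ha.initial_hosts=neo1.example.com:5001,neo2.example.com:5001,neo3.example.com:5001
ha.host.coordination=0.0.0.0:5001
```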
ha.default_timeout 5s Default timeout used for clustering timeouts. Override specific timeout settings with proper values if necessary. This value is the default value for the ha.heartbeat_interval, ha.paxos_timeout and ha.learn_timeout settings.
ha.heartbeat_interval 5s How often heartbeat messages should be sent. Defaults to ha.default_timeout.
ha.heartbeat_timeout 40s How long to wait for heartbeats from other instances before marking them as suspects for failure. This value reflects considerations of network latency, expected duration of garbage collection pauses and other factors that can delay message sending and processing. Larger values will result in more stable masters, but will also result in longer waits before a failover in case of master failure. This value should not be set to less than twice the ha.heartbeat_interval value, otherwise there is a high risk of frequent master switches and possibly the occurrence of branched data.
ha.broadcast_timeout 30s Timeout for broadcasting values in cluster. Must consider end-to-end duration of Paxos algorithm. This value is the default value for the ha.join_timeout and ha.leave_timeout settings.
ha.join_timeout 30s Timeout for joining a cluster. Defaults to ha.broadcast_timeout. Note that if the timeout expires during cluster formation, the operator may have to restart the instance or instances.
ha.configuration_timeout 1s Timeout for waiting for configuration from an existing cluster member during cluster join.
ha.leave_timeout 30s Timeout for waiting for cluster leave to finish. Defaults to ha.broadcast_timeout.
ha.paxos_timeout 5s Default value for all Paxos timeouts. This setting controls the default value for the ha.phase1_timeout, ha.phase2_timeout and ha.election_timeout settings. If it is not given a value it defaults to ha.default_timeout and will implicitly change if ha.default_timeout changes. This is an advanced parameter which should only be changed if specifically advised by Neo4j Professional Services.
ha.phase1_timeout 5s Timeout for Paxos phase 1. If it is not given a value it defaults to ha.paxos_timeout and will implicitly change if ha.paxos_timeout changes. This is an advanced parameter which should only be changed if specifically advised by Neo4j Professional Services.
ha.phase2_timeout 5s Timeout for Paxos phase 2. If it is not given a value it defaults to ha.paxos_timeout and will implicitly change if ha.paxos_timeout changes. This is an advanced parameter which should only be changed if specifically advised by Neo4j Professional Services.
ha.learn_timeout 5s Timeout for learning values. Defaults to ha.default_timeout.
ha.election_timeout 5s Timeout for waiting for other members to finish a role election. Defaults to ha.paxos_timeout.
unsupported.ha.instance_name null
ha.max_acceptors 21 Maximum number of servers to involve when agreeing to membership changes. In very large clusters, the probability of half the cluster failing is low, but protecting against any arbitrary half failing is expensive. Therefore you may wish to set this parameter to a value less than the cluster size.
ha.strict_initial_hosts false
dbms.netty.ssl.provider JDK Netty SSL provider