This file has been truncated, but you can view the full file.
diff --git a/activerecord/.gitignore b/activerecord/.gitignore
index 8d747034f6..f2516307fa 100644
--- a/activerecord/.gitignore
+++ b/activerecord/.gitignore
@@ -1,4 +1,5 @@
/sqlnet.log
/test/config.yml
/test/db/
+/test/storage/
/test/fixtures/*.sqlite*
diff --git a/activerecord/CHANGELOG.md b/activerecord/CHANGELOG.md
index 0fdf704683..563112b636 100644
--- a/activerecord/CHANGELOG.md
+++ b/activerecord/CHANGELOG.md
@@ -1,1827 +1,2468 @@
-## Rails 6.1.7.7 (February 21, 2024) ##
+## Rails 7.1.3.2 (February 21, 2024) ##
* No changes.
-## Rails 6.1.7.6 (August 22, 2023) ##
+## Rails 7.1.3.1 (February 21, 2024) ##
* No changes.
-## Rails 6.1.7.5 (August 22, 2023) ##
+## Rails 7.1.3 (January 16, 2024) ##
-* No changes.
+* Fix Migrations with versions older than 7.1 validating options given to
+ `add_reference`.
+ *Hartley McGuire*
-## Rails 6.1.7.4 (June 26, 2023) ##
+* Ensure `reload` sets correct owner for each association.
-* No changes.
+ *Dmytro Savochkin*
+* Fix view runtime for controllers with async queries.
-## Rails 6.1.7.3 (March 13, 2023) ##
+ *fatkodima*
-* No changes.
+* Fix `load_async` to work with query cache.
+ *fatkodima*
-## Rails 6.1.7.2 (January 24, 2023) ##
+* Fix polymorphic `belongs_to` to correctly use parent's `query_constraints`.
-* No changes.
+ *fatkodima*
+* Fix `Preloader` to not generate a query for already loaded association with `query_constraints`.
-## Rails 6.1.7.1 (January 17, 2023) ##
+ *fatkodima*
-* Make sanitize_as_sql_comment more strict
+* Fix multi-database polymorphic preloading with equivalent table names.
- Though this method was likely never meant to take user input, it was
- attempting sanitization. That sanitization could be bypassed with
- carefully crafted input.
+ When preloading polymorphic associations, if two models pointed to two
+ tables with the same name but located in different databases, the
+ preloader would only load one.
- This commit makes the sanitization more robust by replacing any
- occurrances of "/*" or "*/" with "/ *" or "* /". It also performs a
- first pass to remove one surrounding comment to avoid compatibility
- issues for users relying on the existing removal.
+ *Ari Summer*
- This also clarifies in the documentation of annotate that it should not
- be provided user input.
+* Fix `encrypted_attribute?` to take into account context properties passed to `encrypts`.
- [CVE-2023-22794]
+ *Maxime Réty*
-* Added integer width check to PostgreSQL::Quoting
+* Fix `find_by` to work correctly in presence of composite primary keys.
- Given a value outside the range for a 64bit signed integer type
- PostgreSQL will treat the column type as numeric. Comparing
- integer values against numeric values can result in a slow
- sequential scan.
+ *fatkodima*
- This behavior is configurable via
- ActiveRecord::Base.raise_int_wider_than_64bit which defaults to true.
+* Fix async queries sometimes returning a raw result if they hit the query cache.
- [CVE-2022-44566]
+ `ShipPart.async_count` could return a raw integer rather than a Promise
+ if it found the result in the query cache.
-## Rails 6.1.7 (September 09, 2022) ##
+ *fatkodima*
-* Symbol is allowed by default for YAML columns
+* Fix `Relation#transaction` to not apply a default scope.
- *Étienne Barrié*
+ The method was incorrectly setting a default scope around its block:
-* Fix `ActiveRecord::Store` to serialize as a regular Hash
+ ```ruby
+ Post.where(published: true).transaction do
+ Post.count # SELECT COUNT(*) FROM posts WHERE published = TRUE;
+ end
+ ```
- Previously it would serialize as an `ActiveSupport::HashWithIndifferentAccess`
- which is wasteful and cause problem with YAML safe_load.
+ *Jean Boussier*
+
+* Fix calling `async_pluck` on a `none` relation.
+
+ `Model.none.async_pluck(:id)` was returning a naked value
+ instead of a promise.
*Jean Boussier*
-* Fix PG.connect keyword arguments deprecation warning on ruby 2.7
+* Fix calling `load_async` on a `none` relation.
- Fixes #44307.
+ `Model.none.load_async` was returning a broken result.
- *Nikita Vasilevsky*
+ *Lucas Mazza*
-## Rails 6.1.6.1 (July 12, 2022) ##
+* TrilogyAdapter: ignore `host` if `socket` parameter is set.
-* Change ActiveRecord::Coders::YAMLColumn default to safe_load
+ This allows configuring a connection on a UNIX socket via DATABASE_URL:
- This adds two new configuration options The configuration options are as
- follows:
-
- * `config.active_storage.use_yaml_unsafe_load`
-
- When set to true, this configuration option tells Rails to use the old
- "unsafe" YAML loading strategy, maintaining the existing behavior but leaving
- the possible escalation vulnerability in place. Setting this option to true
- is *not* recommended, but can aid in upgrading.
-
- * `config.active_record.yaml_column_permitted_classes`
-
- The "safe YAML" loading method does not allow all classes to be deserialized
- by default. This option allows you to specify classes deemed "safe" in your
- application. For example, if your application uses Symbol and Time in
- serialized data, you can add Symbol and Time to the allowed list as follows:
-
```
- config.active_record.yaml_column_permitted_classes = [Symbol, Date, Time]
+ DATABASE_URL=trilogy://does-not-matter/my_db_production?socket=/var/run/mysql.sock
```
- [CVE-2022-32224]
+ *Jean Boussier*
+
+* Fix `has_secure_token` calling the setter method on initialize.
+ *Abeid Ahmed*
-## Rails 6.1.6 (May 09, 2022) ##
+* Allow using `object_id` as a database column name.
+ It was available before Rails 7.1 and may be used as part of a polymorphic relationship to `object`, where `object` can be any other database record.
-* No changes.
+ *Mikhail Doronin*
+
+* Fix `rails db:create:all` to not touch databases before they are created.
+
+ *fatkodima*
+
+
+## Rails 7.1.2 (November 10, 2023) ##
+
+* Fix renaming primary key index when renaming a table with a UUID primary key
+ in PostgreSQL.
+
+ *fatkodima*
+
+* Fix `where(field: values)` queries when `field` is a serialized attribute
+ (for example, when `field` uses `ActiveRecord::Base.serialize` or is a JSON
+ column).
+
+ *João Alves*
+
+* Prevent marking broken connections as verified.
+
+ *Daniel Colson*
+
+* Don't mark `Float::INFINITY` as changed when reassigning it
+
+ When saving a record with an infinite float value, it shouldn't be marked as changed
+
+ *Maicol Bentancor*
+
+* `ActiveRecord::Base.table_name` now returns `nil` instead of raising
+ "undefined method `abstract_class?` for Object:Class".
+
+ *a5-stable*
+
+* Fix upserting for custom `:on_duplicate` and `:unique_by` consisting of all
+ inserts keys.
+
+ *fatkodima*
+
+* Fixed an [issue](https://github.com/rails/rails/issues/49809) where saving a
+ record could inappropriately `dup` its attributes.
+
+ *Jonathan Hefner*
+
+* Dump schema only for a specific db for rollback/up/down tasks for multiple dbs.
+
+ *fatkodima*
+
+* Fix `NoMethodError` when casting a PostgreSQL `money` value that uses a
+ comma as its radix point and has no leading currency symbol. For example,
+ when casting `"3,50"`.
+
+ *Andreas Reischuck* and *Jonathan Hefner*
+* Re-enable support for using `enum` with non-column-backed attributes.
+ Non-column-backed attributes must be previously declared with an explicit
+ type. For example:
-## Rails 6.1.5.1 (April 26, 2022) ##
+ ```ruby
+ class Post < ActiveRecord::Base
+ attribute :topic, :string
+ enum topic: %i[science tech engineering math]
+ end
+ ```
+
+ *Jonathan Hefner*
+
+* Raise on `foreign_key:` being passed as an array in associations
+
+ *Nikita Vasilevsky*
+
+* Restore the maximum allowed PostgreSQL table name length of 63 characters.
+
+ *fatkodima*
+
+* Fix detecting `IDENTITY` columns for PostgreSQL < 10.
+
+ *fatkodima*
+
+
+## Rails 7.1.1 (October 11, 2023) ##
+
+* Fix auto populating IDENTITY columns for PostgreSQL.
+
+ *fatkodima*
+
+* Fix "ArgumentError: wrong number of arguments (given 3, expected 2)" when
+ down migrating `rename_table` in older migrations.
+
+ *fatkodima*
+
+* Do not require the Action Text, Active Storage and Action Mailbox tables
+ to be present when running tests on CI.
+
+ *Rafael Mendonça França*
+
+
+## Rails 7.1.0 (October 05, 2023) ##
* No changes.
-## Rails 6.1.5 (March 09, 2022) ##
+## Rails 7.1.0.rc2 (October 01, 2023) ##
-* Fix `ActiveRecord::ConnectionAdapters::SchemaCache#deep_deduplicate` for Ruby 2.6.
+* Remove -shm and -wal SQLite files when `rails db:drop` is run.
- Ruby 2.6 and 2.7 have slightly different implementations of the `String#-@` method.
- In Ruby 2.6, the receiver of the `String#-@` method is modified under certain circumstances.
- This was later identified as a bug (https://bugs.ruby-lang.org/issues/15926) and only
- fixed in Ruby 2.7.
+ *Niklas Häusele*
- Before the changes in this commit, the
- `ActiveRecord::ConnectionAdapters::SchemaCache#deep_deduplicate` method, which internally
- calls the `String#-@` method, could also modify an input string argument in Ruby 2.6 --
- changing a tainted, unfrozen string into a tainted, frozen string.
+* Revert the change to raise an `ArgumentError` when `#accepts_nested_attributes_for` is declared more than once for
+ an association in the same class.
- Fixes #43056
+ The reverted behavior broke the case where `#accepts_nested_attributes_for` was defined in a concern and
+ overridden in the class that included the concern.
- *Eric O'Hanlon*
+ *Rafael Mendonça França*
-* Fix migration compatibility to create SQLite references/belongs_to column as integer when
- migration version is 6.0.
- `reference`/`belongs_to` in migrations with version 6.0 were creating columns as
- bigint instead of integer for the SQLite Adapter.
+## Rails 7.1.0.rc1 (September 27, 2023) ##
- *Marcelo Lauxen*
+* Better naming for unique constraints support.
-* Fix dbconsole for 3-tier config.
+ Naming these "unique keys" led to the misunderstanding that they are a shorthand for unique indexes.
+ Naming them "unique constraints" avoids that confusion.
- *Eileen M. Uchitelle*
+ In Rails 7.1.0.beta1 or before:
-* Better handle SQL queries with invalid encoding.
+ ```ruby
+ add_unique_key :sections, [:position], deferrable: :deferred, name: "unique_section_position"
+ remove_unique_key :sections, name: "unique_section_position"
+ ```
+
+ Now:
```ruby
- Post.create(name: "broken \xC8 UTF-8")
+ add_unique_constraint :sections, [:position], deferrable: :deferred, name: "unique_section_position"
+ remove_unique_constraint :sections, name: "unique_section_position"
```
- Would cause all adapters to fail in a non controlled way in the code
- responsible to detect write queries.
+ *Ryuta Kamizono*
- The query is now properly passed to the database connection, which might or might
- not be able to handle it, but will either succeed or failed in a more correct way.
+* Fix duplicate quoting for check constraint expressions in schema dump when using MySQL
- *Jean Boussier*
+ A check constraint with an expression that already contains quotes led to an invalid schema
+ dump with the mysql2 adapter.
-* Ignore persisted in-memory records when merging target lists.
+ Fixes #42424.
- *Kevin Sjöberg*
+ *Felix Tscheulin*
-* Fix regression bug that caused ignoring additional conditions for preloading
- `has_many` through relations.
+* Performance tune the SQLite3 adapter connection configuration
- Fixes #43132
+ For Rails applications, using the Write-Ahead-Log in normal syncing mode with a capped journal size, a healthy shared memory buffer, and a shared cache will perform, on average, 2× better.
- *Alexander Pauly*
+ *Stephen Margheim*
-* Fix `ActiveRecord::InternalMetadata` to not be broken by
- `config.active_record.record_timestamps = false`
+* Allow SQLite3 `busy_handler` to be configured with simple max number of `retries`
- Since the model always create the timestamp columns, it has to set them, otherwise it breaks
- various DB management tasks.
+ Retrying busy connections without delay is a preferred practice for performance-sensitive applications. Add support for a `database.yml` `retries` integer, which is used in a simple `busy_handler` function to retry busy connections without exponential backoff up to the max number of `retries`.
- Fixes #42983
+ *Stephen Margheim*
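A hypothetical `database.yml` sketch of the `retries` setting described above (the environment name, database path, and retry count are illustrative; only the `retries` key comes from the entry):

```yaml
development:
  adapter: sqlite3
  database: storage/development.sqlite3
  # Retry busy connections up to 1000 times, without exponential backoff
  retries: 1000
```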
- *Jean Boussier*
+* The SQLite3 adapter now implements `supports_insert_returning?`
-* Fix duplicate active record objects on `inverse_of`.
+ Implementing the full `supports_insert_returning?` contract means the SQLite3 adapter supports auto-populated columns (#48241) as well as custom primary keys.
- *Justin Carvalho*
+ *Stephen Margheim*
-* Fix duplicate objects stored in has many association after save.
+* Ensure the SQLite3 adapter handles default functions with the `||` concatenation operator
- Fixes #42549.
+ Previously, this default function would produce the static string `"'Ruby ' || 'on ' || 'Rails'"`.
+ Now, the adapter will appropriately receive and use `"Ruby on Rails"`.
- *Alex Ghiculescu*
+ ```ruby
+ change_column_default "test_models", "ruby_on_rails", -> { "('Ruby ' || 'on ' || 'Rails')" }
+ ```
+
+ *Stephen Margheim*
+
+* Dump PostgreSQL schemas as part of the schema dump.
-* Fix performance regression in `CollectionAssocation#build`.
+ *Lachlan Sylvester*
+
+
+## Rails 7.1.0.beta1 (September 13, 2023) ##
+
+* Encryption now supports `support_unencrypted_data` being set per-attribute.
+
+ You can now opt out of `support_unencrypted_data` on a specific encrypted attribute.
+ This only has an effect if `ActiveRecord::Encryption.config.support_unencrypted_data == true`.
+
+ ```ruby
+ class User < ActiveRecord::Base
+ encrypts :name, deterministic: true, support_unencrypted_data: false
+ encrypts :email, deterministic: true
+ end
+ ```
*Alex Ghiculescu*
-* Fix retrieving default value for text column for MariaDB.
+* Add instrumentation for Active Record transactions
+
+ Allows subscribing to transaction events for tracking/instrumentation. The event payload contains the connection and the outcome (commit, rollback, restart, incomplete), as well as timing details.
+
+ ```ruby
+ ActiveSupport::Notifications.subscribe("transaction.active_record") do |event|
+ puts "Transaction event occurred!"
+ connection = event.payload[:connection]
+ puts "Connection: #{connection.inspect}"
+ end
+ ```
+
+ *Daniel Colson*, *Ian Candy*
+
+* Support composite foreign keys via migration helpers.
+
+ ```ruby
+ # Assuming "carts" table has "(shop_id, user_id)" as a primary key.
+
+ add_foreign_key(:orders, :carts, primary_key: [:shop_id, :user_id])
+
+ remove_foreign_key(:orders, :carts, primary_key: [:shop_id, :user_id])
+ foreign_key_exists?(:orders, :carts, primary_key: [:shop_id, :user_id])
+ ```
*fatkodima*
+* Adds support for `if_not_exists` when adding a check constraint.
-## Rails 6.1.4.7 (March 08, 2022) ##
+ ```ruby
+ add_check_constraint :posts, "post_type IN ('blog', 'comment', 'share')", if_not_exists: true
+ ```
-* No changes.
+ *Cody Cutrer*
+* Raise an `ArgumentError` when `#accepts_nested_attributes_for` is declared more than once for an association in
+ the same class. Previously, the last declaration would silently override the previous one. Overriding in a subclass
+ is still allowed.
-## Rails 6.1.4.6 (February 11, 2022) ##
+ *Joshua Young*
-* No changes.
+* Deprecate `rewhere` argument on `#merge`.
+ The `rewhere` argument on `#merge` is deprecated without replacement and
+ will be removed in Rails 7.2.
-## Rails 6.1.4.5 (February 11, 2022) ##
+ *Adam Hess*
-* No changes.
+* Deprecate aliasing non-attributes with `alias_attribute`.
+ *Ian Candy*
-## Rails 6.1.4.4 (December 15, 2021) ##
+* Fix `unscope` not working in a specific case
-* No changes.
+ Before:
+ ```ruby
+ Post.where(id: 1...3).unscope(where: :id).to_sql # "SELECT `posts`.* FROM `posts` WHERE `posts`.`id` >= 1 AND `posts`.`id` < 3"
+ ```
-## Rails 6.1.4.3 (December 14, 2021) ##
+ After:
+ ```ruby
+ Post.where(id: 1...3).unscope(where: :id).to_sql # "SELECT `posts`.* FROM `posts`"
+ ```
-* No changes.
+ Fixes #48094.
+ *Kazuya Hatanaka*
-## Rails 6.1.4.2 (December 14, 2021) ##
+* Change `has_secure_token` default to `on: :initialize`
-* No changes.
+ The default value changes from `on: :create` to `on: :initialize`
+
+ Can be controlled by the `config.active_record.generate_secure_token_on`
+ configuration:
+ ```ruby
+ config.active_record.generate_secure_token_on = :create
+ ```
-## Rails 6.1.4.1 (August 19, 2021) ##
+ *Sean Doyle*
-* No changes.
+* Fix `change_column` not setting `precision: 6` on `datetime` columns when
+ using 7.0+ Migrations and SQLite.
+ *Hartley McGuire*
-## Rails 6.1.4 (June 24, 2021) ##
+* Support composite identifiers in `to_key`
-* Do not try to rollback transactions that failed due to a `ActiveRecord::TransactionRollbackError`.
+ `to_key` avoids wrapping the `#id` value in an `Array` if `#id` is already an array
- *Jamie McCarthy*
+ *Nikita Vasilevsky*
-* Raise an error if `pool_config` is `nil` in `set_pool_config`.
+* Add validation option for `enum`
- *Eileen M. Uchitelle*
+ ```ruby
+ class Contract < ApplicationRecord
+ enum :status, %w[in_progress completed], validate: true
+ end
+ Contract.new(status: "unknown").valid? # => false
+ Contract.new(status: nil).valid? # => false
+ Contract.new(status: "completed").valid? # => true
-* Fix compatibility with `psych >= 4`.
+ class Contract < ApplicationRecord
+ enum :status, %w[in_progress completed], validate: { allow_nil: true }
+ end
+ Contract.new(status: "unknown").valid? # => false
+ Contract.new(status: nil).valid? # => true
+ Contract.new(status: "completed").valid? # => true
+ ```
- Starting in Psych 4.0.0 `YAML.load` behaves like `YAML.safe_load`. To preserve compatibility
- Active Record's schema cache loader and `YAMLColumn` now uses `YAML.unsafe_load` if available.
+ *Edem Topuzov*, *Ryuta Kamizono*
- *Jean Boussier*
+* Allow batching methods to use already loaded relation if available
-* Support using replicas when using `rails dbconsole`.
+ Calling batch methods on already loaded relations will use the records previously loaded instead of retrieving
+ them from the database again.
- *Christopher Thornton*
+ *Adam Hess*
-* Restore connection pools after transactional tests.
+* Deprecate `read_attribute(:id)` returning the primary key if the primary key is not `:id`.
- *Eugene Kenny*
+ Starting in Rails 7.2, `read_attribute(:id)` will return the value of the id column, regardless of the model's
+ primary key. To retrieve the value of the primary key, use `#id` instead. `read_attribute(:id)` for composite
+ primary key models will now return the value of the id column.
-* Change `upsert_all` to fails cleanly for MySQL when `:unique_by` is used.
+ *Adrianna Chang*
- *Bastian Bartmann*
+* Fix `change_table` setting datetime precision for 6.1 Migrations
-* Fix user-defined `self.default_scope` to respect table alias.
+ *Hartley McGuire*
- *Ryuta Kamizono*
+* Fix change_column setting datetime precision for 6.1 Migrations
-* Clear `@cache_keys` cache after `update_all`, `delete_all`, `destroy_all`.
+ *Hartley McGuire*
- *Ryuta Kamizono*
+* Add `ActiveRecord::Base#id_value` alias to access the raw value of a record's id column.
-* Changed Arel predications `contains` and `overlaps` to use
- `quoted_node` so that PostgreSQL arrays are quoted properly.
+ This alias is only provided for models that declare an `:id` column.
- *Bradley Priest*
+ *Adrianna Chang*
-* Fix `merge` when the `where` clauses have string contents.
+* Fix previous change tracking for `ActiveRecord::Store` when using a column with JSON structured database type
- *Ryuta Kamizono*
+ Before, the methods to access the changes made during the last save (`#saved_change_to_key?`, `#saved_change_to_key`, and `#key_before_last_save`) did not work if the store was defined as a `store_accessor` on a column with a JSON structured database type.
+
+ *Robert DiMartino*
+
+* Fully support `NULLS [NOT] DISTINCT` for PostgreSQL 15+ indexes.
+
+ Previous work was done to allow the index to be created in a migration, but it was not
+ supported in schema.rb. Additionally, the matching for `NULLS [NOT] DISTINCT` was not
+ in the correct order, which could have resulted in inconsistent schema detection.
+
+ *Gregory Jones*
+
+* Allow escaping of literal colon characters in `sanitize_sql_*` methods when named bind variables are used
+
+ *Justin Bull*
+
+* Fix `#previously_new_record?` to return false for destroyed records.
+
+ Before, if a record was created and then destroyed, `#previously_new_record?` would return true.
+ Now, any UPDATE or DELETE to a record is considered a change, and will result in `#previously_new_record?`
+ returning false.
+
+ *Adrianna Chang*
+
+* Specify callback in `has_secure_token`
+
+ ```ruby
+ class User < ApplicationRecord
+ has_secure_token on: :initialize
+ end
+
+ User.new.token # => "abc123...."
+ ```
+
+ *Sean Doyle*
-* Fix rollback of parent destruction with nested `dependent: :destroy`.
+* Fix incrementation of in-memory counter caches when associations overlap
- *Jacopo Beschi*
+ When two associations had a similarly named counter cache column, Active Record
+ could sometimes increment the wrong one.
-* Fix binds logging for `"WHERE ... IN ..."` statements.
+ *Jacopo Beschi*, *Jean Boussier*
- *Ricardo Díaz*
+* Don't show secrets for Active Record's `Cipher::Aes256Gcm#inspect`.
-* Handle `false` in relation strict loading checks.
+ Before:
- Previously when a model had strict loading set to true and then had a
- relation set `strict_loading` to false the false wasn't considered when
- deciding whether to raise/warn about strict loading.
+ ```ruby
+ ActiveRecord::Encryption::Cipher::Aes256Gcm.new(secret).inspect
+ "#<ActiveRecord::Encryption::Cipher::Aes256Gcm:0x0000000104888038 ... @secret=\"\\xAF\\bFh]LV}q\\nl\\xB2U\\xB3 ... >"
+ ```
+ After:
+
+ ```ruby
+ ActiveRecord::Encryption::Cipher::Aes256Gcm.new(secret).inspect
+ "#<ActiveRecord::Encryption::Cipher::Aes256Gcm:0x0000000104888038>"
```
- class Dog < ActiveRecord::Base
- self.strict_loading_by_default = true
- has_many :treats, strict_loading: false
+ *Petrik de Heus*
+
+* Bring back the historical behavior of committing transaction on non-local return.
+
+ ```ruby
+ Model.transaction do
+ model.save
+ return
+ other_model.save # not executed
end
```
- In the example, `dog.treats` would still raise even though
- `strict_loading` was set to false. This is a bug affecting more than
- Active Storage which is why I made this PR superseding #41461. We need
- to fix this for all applications since the behavior is a little
- surprising. I took the test from #41461 and the code suggestion from #41453
- with some additions.
+ Historically only raised errors would trigger a rollback, but in Ruby `2.3`, the `timeout` library
+ started using `throw` to interrupt execution which had the adverse effect of committing open transactions.
- *Eileen M. Uchitelle*, *Radamés Roriz*
+ To solve this, in Active Record 6.1 the behavior was changed to instead rollback the transaction as it was safer
+ than to potentially commit an incomplete transaction.
-* Fix numericality validator without precision.
+ Using `return`, `break` or `throw` inside a `transaction` block was essentially deprecated from Rails 6.1 onwards.
- *Ryuta Kamizono*
+ However with the release of `timeout 0.4.0`, `Timeout.timeout` now raises an error again, and Active Record is able
+ to return to its original, less surprising, behavior.
-* Fix aggregate attribute on Enum types.
+ This historical behavior can now be opted into via:
- *Ryuta Kamizono*
+ ```
+ Rails.application.config.active_record.commit_transaction_on_non_local_return = true
+ ```
-* Fix `CREATE INDEX` statement generation for PostgreSQL.
+ And is the default for new applications created in Rails 7.1.
- *eltongo*
+ *Jean Boussier*
-* Fix where clause on enum attribute when providing array of strings.
+* Deprecate `name` argument on `#remove_connection`.
- *Ryuta Kamizono*
+ The `name` argument is deprecated on `#remove_connection` without replacement. `#remove_connection` should be called directly on the class that established the connection.
-* Fix `unprepared_statement` to work it when nesting.
+ *Eileen M. Uchitelle*
- *Ryuta Kamizono*
+* Fix `has_one` through singular building with inverse.
+ Allows building of records from an association with a has_one through a
+ singular association with inverse. For belongs_to through associations,
+ linking the foreign key to the primary key model isn't needed.
+ For has_one, we cannot build records due to the association not being mutable.
-## Rails 6.1.3.2 (May 05, 2021) ##
+ *Gannon McGibbon*
-* No changes.
+* Disable database prepared statements when query logs are enabled
+ Prepared Statements and Query Logs are incompatible features due to query logs making every query unique.
-## Rails 6.1.3.1 (March 26, 2021) ##
+ *zzak, Jean Boussier*
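For context, query logs are switched on through a standard Rails configuration flag; a hypothetical `config/application.rb` sketch (the flag name is the usual query log tags setting, not taken from this entry):

```ruby
# config/application.rb (sketch) -- enabling query log tags makes every
# query unique, so Active Record disables prepared statements alongside it.
config.active_record.query_log_tags_enabled = true
```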
-* No changes.
+* Support decrypting data encrypted non-deterministically with a SHA1 hash digest.
+
+ This adds a new Active Record encryption option to support decrypting data encrypted
+ non-deterministically with a SHA1 hash digest:
+ ```
+ Rails.application.config.active_record.encryption.support_sha1_for_non_deterministic_encryption = true
+ ```
-## Rails 6.1.3 (February 17, 2021) ##
+ The new option addresses a problem when upgrading from 7.0 to 7.1. Due to a bug in how Active Record
+ Encryption was getting initialized, the key provider used for non-deterministic encryption was using
+ SHA-1 as its digest class, instead of the one configured globally by Rails via
+ `Rails.application.config.active_support.key_generator_hash_digest_class`.
-* Fix the MySQL adapter to always set the right collation and charset
- to the connection session.
+ *Cadu Ribeiro and Jorge Manrubia*
- *Rafael Mendonça França*
+* Added PostgreSQL migration commands for enum rename, add value, and rename value.
-* Fix MySQL adapter handling of time objects when prepared statements
- are enabled.
+ `rename_enum` and `rename_enum_value` are reversible. Due to a PostgreSQL
+ limitation, `add_enum_value` is not reversible since you cannot delete enum
+ values. As an alternative, you should drop and recreate the enum entirely.
- *Rafael Mendonça França*
+ ```ruby
+ rename_enum :article_status, to: :article_state
+ ```
-* Fix scoping in enum fields using conditions that would generate
- an `IN` clause.
+ ```ruby
+ add_enum_value :article_state, "archived" # will be at the end of existing values
+ add_enum_value :article_state, "in review", before: "published"
+ add_enum_value :article_state, "approved", after: "in review"
+ ```
- *Ryuta Kamizono*
+ ```ruby
+ rename_enum_value :article_state, from: "archived", to: "deleted"
+ ```
-* Skip optimised #exist? query when #include? is called on a relation
- with a having clause
+ *Ray Faddis*
- Relations that have aliased select values AND a having clause that
- references an aliased select value would generate an error when
- #include? was called, due to an optimisation that would generate
- call #exists? on the relation instead, which effectively alters
- the select values of the query (and thus removes the aliased select
- values), but leaves the having clause intact. Because the having
- clause is then referencing an aliased column that is no longer
- present in the simplified query, an ActiveRecord::InvalidStatement
- error was raised.
+* Allow composite primary key to be derived from schema
- An sample query affected by this problem:
+ Booting an application with a schema that contains composite primary keys
+ will no longer issue a warning or `nil`ify the `ActiveRecord::Base#primary_key` value.
+ Given a `travel_routes` table definition and a `TravelRoute` model like:
```ruby
- Author.select('COUNT(*) as total_posts', 'authors.*')
- .joins(:posts)
- .group(:id)
- .having('total_posts > 2')
- .include?(Author.first)
+ create_table :travel_routes, primary_key: [:origin, :destination], force: true do |t|
+ t.string :origin
+ t.string :destination
+ end
+
+ class TravelRoute < ActiveRecord::Base; end
```
+ The `TravelRoute.primary_key` value will be automatically derived as `["origin", "destination"]`.
+
+ *Nikita Vasilevsky*
+
+* Include the `connection_pool` with exceptions raised from an adapter.
+
+ The `connection_pool` provides added context such as the connection used
+ that led to the exception, as well as which role and shard were in use.
- This change adds an addition check to the condition that skips the
- simplified #exists? query, which simply checks for the presence of
- a having clause.
+ *Luan Vieira*
- Fixes #41417
+* Support multiple column ordering for `find_each`, `find_in_batches` and `in_batches`.
- *Michael Smart*
+ When `find_each`/`find_in_batches`/`in_batches` are performed on a table with composite primary keys, ascending or descending order can be selected for each key.
-* Increment postgres prepared statement counter before making a prepared statement, so if the statement is aborted
- without Rails knowledge (e.g., if app gets kill -9d during long-running query or due to Rack::Timeout), app won't end
- up in perpetual crash state for being inconsistent with Postgres.
+ ```ruby
+ Person.find_each(order: [:desc, :asc]) do |person|
+ person.party_all_night!
+ end
+ ```
- *wbharding*, *Martin Tepper*
+ *Takuya Kurimoto*
+* Fix `where` on associations with `has_one`/`has_many` polymorphic relations.
-## Rails 6.1.2.1 (February 10, 2021) ##
+ Before:
+ ```ruby
+ Treasure.where(price_estimates: PriceEstimate.all)
+ #=> SELECT (...) WHERE "treasures"."id" IN (SELECT "price_estimates"."estimate_of_id" FROM "price_estimates")
+ ```
-* Fix possible DoS vector in PostgreSQL money type
+ After:
+ ```ruby
+ Treasure.where(price_estimates: PriceEstimate.all)
+ #=> SELECT (...) WHERE "treasures"."id" IN (SELECT "price_estimates"."estimate_of_id" FROM "price_estimates" WHERE "price_estimates"."estimate_of_type" = 'Treasure')
+ ```
- Carefully crafted input can cause a DoS via the regular expressions used
- for validating the money format in the PostgreSQL adapter. This patch
- fixes the regexp.
+ *Lázaro Nixon*
- Thanks to @dee-see from Hackerone for this patch!
+* Assign auto populated columns on Active Record record creation.
- [CVE-2021-22880]
+ Changes record creation logic to allow for the `auto_increment` column to be assigned
+ immediately after creation, regardless of its relation to the model's primary key.
- *Aaron Patterson*
+ The PostgreSQL adapter benefits the most from the change, allowing any number of auto-populated
+ columns to be assigned on the object immediately after row insertion by utilizing the `RETURNING` statement.
+
+ *Nikita Vasilevsky*
+
+* Use the first key in the `shards` hash from `connects_to` for the `default_shard`.
+ Some applications may not want to use `:default` as a shard name in their connection model. Unfortunately Active Record expects there to be a `:default` shard because it must assume a shard to get the right connection from the pool manager. Rather than force applications to manually set this, `connects_to` can infer the default shard name from the hash of shards and will now assume that the first shard is your default.
-## Rails 6.1.2 (February 09, 2021) ##
+ For example if your model looked like this:
-* Fix timestamp type for sqlite3.
+ ```ruby
+ class ShardRecord < ApplicationRecord
+ self.abstract_class = true
+
+ connects_to shards: {
+ shard_one: { writing: :shard_one },
+ shard_two: { writing: :shard_two }
+ }
+ end
+ ```
+
+ Then the `default_shard` for this class would be set to `shard_one`.
+
+ Fixes: #45390
*Eileen M. Uchitelle*
-* Make destroy async transactional.
+* Fix mutation detection for serialized attributes backed by binary columns.
- An active record rollback could occur while enqueuing a job. In this
- case the job would enqueue even though the database deletion
- rolledback putting things in a funky state.
+ *Jean Boussier*
- Now the jobs are only enqueued until after the db transaction has been committed.
+* Add `ActiveRecord.disconnect_all!` method to immediately close all connections from all pools.
- *Cory Gwin*
+ *Jean Boussier*
-* Fix malformed packet error in MySQL statement for connection configuration.
+* Discard connections which may have been left in a transaction.
- *robinroestenburg*
+ There are cases where, due to an error, `within_new_transaction` may unexpectedly leave a connection in an open transaction. In these cases the connection may be reused, and the following may occur:
+ - Writes appear to fail when they actually succeed.
+ - Writes appear to succeed when they actually fail.
+ - Reads return stale or uncommitted data.
-* Connection specification now passes the "url" key as a configuration for the
- adapter if the "url" protocol is "jdbc", "http", or "https". Previously only
- urls with the "jdbc" prefix were passed to the Active Record Adapter, others
- are assumed to be adapter specification urls.
+ Previously, the following case was detected:
+ - An error is encountered during the transaction, then another error is encountered while attempting to roll it back.
- Fixes #41137.
+ Now, the following additional cases are detected:
+ - An error is encountered just after successfully beginning a transaction.
+ - An error is encountered while committing a transaction, then another error is encountered while attempting to roll it back.
+ - An error is encountered while rolling back a transaction.
- *Jonathan Bracy*
+ *Nick Dower*
-* Fix granular connection swapping when there are multiple abstract classes.
+* Active Record query cache now evicts least recently used entries
- *Eileen M. Uchitelle*
+ By default it only keeps the `100` most recently used queries.
-* Fix `find_by` with custom primary key for belongs_to association.
+ The cache size can be configured via `database.yml`
- *Ryuta Kamizono*
+ ```yaml
+ development:
+ adapter: mysql2
+ query_cache: 200
+ ```
-* Add support for `rails console --sandbox` for multiple database applications.
+ It can also be entirely disabled:
- *alpaca-tc*
+ ```yaml
+ development:
+ adapter: mysql2
+ query_cache: false
+ ```
-* Fix `where` on polymorphic association with empty array.
+ *Jean Boussier*
- *Ryuta Kamizono*
+* Deprecate `check_pending!` in favor of `check_all_pending!`.
-* Fix preventing writes for `ApplicationRecord`.
+ `check_pending!` will only check for pending migrations on the current database connection or the one passed in. This has been deprecated in favor of `check_all_pending!` which will find all pending migrations for the database configurations in a given environment.
*Eileen M. Uchitelle*
+* Make `increment_counter`/`decrement_counter` accept an amount argument
-## Rails 6.1.1 (January 07, 2021) ##
+ ```ruby
+ Post.increment_counter(:comments_count, 5, by: 3)
+ ```
-* Fix fixtures loading when strict loading is enabled for the association.
+ *fatkodima*
- *Alex Ghiculescu*
+* Add support for `Array#intersect?` to `ActiveRecord::Relation`.
-* Fix `where` with custom primary key for belongs_to association.
+ `Array#intersect?` is only available on Ruby 3.1 or later.
- *Ryuta Kamizono*
+ This allows the Rubocop `Style/ArrayIntersect` cop to work with `ActiveRecord::Relation` objects.
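
  A runnable sketch of the plain-`Array` semantics that the relation method mirrors (`Array#intersect?` itself requires Ruby 3.1+):

```ruby
a = [1, 2, 3]

# True when the receiver and the argument share at least one element.
puts a.intersect?([3, 4])   # => true
puts a.intersect?([5, 6])   # => false
```

  With this change, a relation on the left-hand side loads its records and compares them the same way.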
-* Fix `where` with aliased associations.
+ *John Harry Kelly*
- *Ryuta Kamizono*
+* The deferrable foreign key can be passed to `t.references`.
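
  A minimal sketch, assuming the `foreign_key` options hash on `t.references` is forwarded to `add_foreign_key` (table and column names here are illustrative):

  ```ruby
  create_table :accounts do |t|
    t.references :person, foreign_key: { deferrable: :deferred }
  end
  ```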
-* Fix `composed_of` with symbol mapping.
+ *Hiroyuki Ishii*
- *Ryuta Kamizono*
+* Deprecate `deferrable: true` option of `add_foreign_key`.
-* Don't skip money's type cast for pluck and calculations.
+ `deferrable: true` is deprecated in favor of `deferrable: :immediate`, and
+ will be removed in Rails 7.2.
- *Ryuta Kamizono*
+ `deferrable: true` and `deferrable: :deferred` are easy to confuse, since
+ both values are truthy. This matches the `deferrable` option of the
+ `add_unique_key` method added in #46192.
-* Fix `where` on polymorphic association with non Active Record object.
+ *Hiroyuki Ishii*
- *Ryuta Kamizono*
+* `AbstractAdapter#execute` and `#exec_query` now clear the query cache
-* Make sure `db:prepare` works even the schema file doesn't exist.
+ If you need to perform a read only SQL query without clearing the query
+ cache, use `AbstractAdapter#select_all`.
- *Rafael Mendonça França*
+ *Jean Boussier*
-* Fix complicated `has_many :through` with nested where condition.
+* Make `.joins` / `.left_outer_joins` work with CTEs.
- *Ryuta Kamizono*
+ For example:
-* Handle STI models for `has_many dependent: :destroy_async`.
+ ```ruby
+ Post
+ .with(commented_posts: Comment.select(:post_id).distinct)
+ .joins(:commented_posts)
+ #=> WITH (...) SELECT ... INNER JOIN commented_posts on posts.id = commented_posts.post_id
+ ```
- *Muhammad Usman*
+ *Vladimir Dementyev*
-* Restore possibility of passing `false` to :polymorphic option of `belongs_to`.
+* Add a load hook for `ActiveRecord::ConnectionAdapters::Mysql2Adapter`
+ (named `active_record_mysql2adapter`) to allow for overriding aspects of the
+ `ActiveRecord::ConnectionAdapters::Mysql2Adapter` class. This makes `Mysql2Adapter`
+ consistent with `PostgreSQLAdapter` and `SQLite3Adapter` that already have load hooks.
- Previously, passing `false` would trigger the option validation logic
- to throw an error saying :polymorphic would not be a valid option.
+ *fatkodima*
- *glaszig*
+* Introduce adapter for Trilogy database client
-* Allow adding nonnamed expression indexes to be revertible.
+ Trilogy is a MySQL-compatible database client. Rails applications can use Trilogy
+ by configuring their `config/database.yml`:
- Fixes #40732.
+ ```yaml
+ development:
+ adapter: trilogy
+ database: blog_development
+ pool: 5
+ ```
- Previously, the following code would raise an error, when executed while rolling back,
- and the index name should be specified explicitly. Now, the index name is inferred
- automatically.
+ Or by using the `DATABASE_URL` environment variable:
```ruby
- add_index(:items, "to_tsvector('english', description)")
+ ENV['DATABASE_URL'] # => "trilogy://localhost/blog_development?pool=5"
```
- *fatkodima*
+ *Adrianna Chang*
+* `after_commit` callbacks defined on models now execute in the correct order.
-## Rails 6.1.0 (December 09, 2020) ##
+ ```ruby
+ class User < ActiveRecord::Base
+ after_commit { puts("this gets called first") }
+ after_commit { puts("this gets called second") }
+ end
+ ```
-* Only warn about negative enums if a positive form that would cause conflicts exists.
+ Previously, the callbacks executed in the reverse order. To opt in to the new behaviour:
- Fixes #39065.
+ ```ruby
+ config.active_record.run_after_transaction_callbacks_in_order_defined = true
+ ```
+
+ This is the default for new apps.
*Alex Ghiculescu*
-* Change `attribute_for_inspect` to take `filter_attributes` in consideration.
+* Infer `foreign_key` when `inverse_of` is present on `has_one` and `has_many` associations.
- *Rafael Mendonça França*
+ ```ruby
+ has_many :citations, foreign_key: "book1_id", inverse_of: :book
+ ```
+
+ can be simplified to
-* Fix odd behavior of inverse_of with multiple belongs_to to same class.
+ ```ruby
+ has_many :citations, inverse_of: :book
+ ```
+
+ and the foreign_key will be read from the corresponding `belongs_to` association.
+
+ *Daniel Whitney*
+
+* Limit max length of auto generated index names
+
+ Auto generated index names are now limited to 62 bytes, which fits within
+ the default index name length limits for MySQL, Postgres and SQLite.
+
+ Any index name over the limit will fallback to the new short format.
+
+ Before (too long):
+ ```
+ index_testings_on_foo_and_bar_and_first_name_and_last_name_and_administrator
+ ```
+
+ After (short format):
+ ```
+ idx_on_foo_bar_first_name_last_name_administrator_5939248142
+ ```
- Fixes #35204.
+ The short format includes a hash to ensure the name is unique database-wide.
- *Tomoyuki Kai*
+ *Mike Coutermarsh*
-* Build predicate conditions with objects that delegate `#id` and primary key:
+* Introduce a more stable and optimized Marshal serializer for Active Record models.
+
+ Can be enabled with `config.active_record.marshalling_format_version = 7.1`.
+
+ *Jean Boussier*
+
+* Allow specifying where clauses with column-tuple syntax.
+
+ Querying through `#where` now accepts a new tuple syntax: the key is an
+ array of columns, and the value is an array of ordered tuples that conform
+ to that column list.
+
+ For instance:
```ruby
- class AdminAuthor
- delegate_missing_to :@author
+ # Cpk::Book => Cpk::Book(author_id: integer, number: integer, title: string, revision: integer)
+ # Cpk::Book.primary_key => ["author_id", "number"]
- def initialize(author)
- @author = author
- end
+ book = Cpk::Book.create!(author_id: 1, number: 1)
+ Cpk::Book.where(Cpk::Book.primary_key => [[1, 2]]) # => [book]
+
+ # Topic => Topic(id: integer, title: string, author_name: string...)
+
+ Topic.where([:title, :author_name] => [["The Alchemist", "Paulo Coelho"], ["Harry Potter", "J.K Rowling"]])
+ ```
+
+ *Paarth Madan*
+
+* Allow warning codes to be ignored when reporting SQL warnings.
+
+ Active Record can now be configured to ignore specific warning codes:
+
+ ```ruby
+ # Configure allowlist of warnings that should always be ignored
+ config.active_record.db_warnings_ignore = [
+ "1062", # MySQL Error 1062: Duplicate entry
+ ]
+ ```
+
+ This is supported for the MySQL and PostgreSQL adapters.
+
+ *Nick Borromeo*
+
+* Introduce `:active_record_fixtures` lazy load hook.
+
+ Hooks defined with this name will be run whenever `TestFixtures` is included
+ in a class.
+
+ ```ruby
+ ActiveSupport.on_load(:active_record_fixtures) do
+ self.fixture_paths << "test/fixtures"
end
- Post.where(author: AdminAuthor.new(author))
+ klass = Class.new
+ klass.include(ActiveRecord::TestFixtures)
+
+ klass.fixture_paths # => ["test/fixtures"]
```
- *Sean Doyle*
+ *Andrew Novoselac*
-* Add `connected_to_many` API.
+* Introduce `TestFixtures#fixture_paths`.
- This API allows applications to connect to multiple databases at once without switching all of them or implementing a deeply nested stack.
+ Multiple fixture paths can now be specified using the `#fixture_paths` accessor.
+ Apps will continue to have `test/fixtures` as their one fixture path by default,
+ but additional fixture paths can be specified.
- Before:
+ ```ruby
+ ActiveSupport::TestCase.fixture_paths << "component1/test/fixtures"
+ ActiveSupport::TestCase.fixture_paths << "component2/test/fixtures"
+ ```
- AnimalsRecord.connected_to(role: :reading) do
- MealsRecord.connected_to(role: :reading) do
- Dog.first # read from animals replica
- Dinner.first # read from meals replica
- Person.first # read from primary writer
- end
- end
+ `TestFixtures#fixture_path` is now deprecated.
- After:
+ *Andrew Novoselac*
- ActiveRecord::Base.connected_to_many([AnimalsRecord, MealsRecord], role: :reading) do
- Dog.first # read from animals replica
- Dinner.first # read from meals replica
- Person.first # read from primary writer
- end
+* Adds support for deferrable exclude constraints in PostgreSQL.
- *Eileen M. Uchitelle*, *John Crepezzi*
+ By default, exclude constraints in PostgreSQL are checked after each statement.
+ This works for most use cases, but becomes a major limitation when replacing
+ records with overlapping ranges by using multiple statements.
-* Add option to raise or log for `ActiveRecord::StrictLoadingViolationError`.
+ ```ruby
+ exclusion_constraint :users, "daterange(valid_from, valid_to) WITH &&", deferrable: :immediate
+ ```
- Some applications may not want to raise an error in production if using `strict_loading`. This would allow an application to set strict loading to log for the production environment while still raising in development and test environments.
+ Passing `deferrable: :immediate` checks the constraint after each statement,
+ but allows manually deferring the check using `SET CONSTRAINTS ALL DEFERRED`
+ within a transaction. This will cause the excludes to be checked after the transaction.
- Set `config.active_record.action_on_strict_loading_violation` to `:log` errors instead of raising.
+ It's also possible to change the default behavior from an immediate check
+ (after the statement), to a deferred check (after the transaction):
- *Eileen M. Uchitelle*
+ ```ruby
+ exclusion_constraint :users, "daterange(valid_from, valid_to) WITH &&", deferrable: :deferred
+ ```
-* Allow the inverse of a `has_one` association that was previously autosaved to be loaded.
+ *Hiroyuki Ishii*
- Fixes #34255.
+* Respect `foreign_type` option to `delegated_type` for `{role}_class` method.
- *Steven Weber*
+ Usage of `delegated_type` with non-conventional `{role}_type` column names can now be specified with `foreign_type` option.
+ This option is the same as `foreign_type` as forwarded to the underlying `belongs_to` association that `delegated_type` wraps.
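
  A sketch of the option, using hypothetical model and column names:

  ```ruby
  class Entry < ApplicationRecord
    # `entryable_class` is a non-conventional type column standing in for
    # the default `entryable_type`.
    delegated_type :entryable, types: %w[Message Comment], foreign_type: :entryable_class
  end
  ```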
-* Optimise the length of index names for polymorphic references by using the reference name rather than the type and id column names.
+ *Jason Karns*
- Because the default behaviour when adding an index with multiple columns is to use all column names in the index name, this could frequently lead to overly long index names for polymorphic references which would fail the migration if it exceeded the database limit.
+* Add support for unique constraints (PostgreSQL-only).
- This change reduces the chance of that happening by using the reference name, e.g. `index_my_table_on_my_reference`.
+ ```ruby
+ add_unique_key :sections, [:position], deferrable: :deferred, name: "unique_section_position"
+ remove_unique_key :sections, name: "unique_section_position"
+ ```
- Fixes #38655.
+ See PostgreSQL's [Unique Constraints](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-UNIQUE-CONSTRAINTS) documentation for more on unique constraints.
- *Luke Redpath*
+ By default, unique constraints in PostgreSQL are checked after each statement.
+ This works for most use cases, but becomes a major limitation when replacing
+ records with unique column by using multiple statements.
-* MySQL: Uniqueness validator now respects default database collation,
- no longer enforce case sensitive comparison by default.
+ An example of swapping unique columns between records.
- *Ryuta Kamizono*
+ ```ruby
+ # position is unique column
+ old_item = Item.create!(position: 1)
+ new_item = Item.create!(position: 2)
-* Remove deprecated methods from `ActiveRecord::ConnectionAdapters::DatabaseLimits`.
+ Item.transaction do
+ old_item.update!(position: 2)
+ new_item.update!(position: 1)
+ end
+ ```
- `column_name_length`
- `table_name_length`
- `columns_per_table`
- `indexes_per_table`
- `columns_per_multicolumn_index`
- `sql_query_length`
- `joins_per_query`
+ Using the default behavior, the transaction would fail when executing the
+ first `UPDATE` statement.
- *Rafael Mendonça França*
+ By passing the `:deferrable` option to the `add_unique_key` statement in
+ migrations, it's possible to defer this check.
-* Remove deprecated `ActiveRecord::ConnectionAdapters::AbstractAdapter#supports_multi_insert?`.
+ ```ruby
+ add_unique_key :items, [:position], deferrable: :immediate
+ ```
- *Rafael Mendonça França*
+ Passing `deferrable: :immediate` does not change the behaviour of the previous example,
+ but allows manually deferring the check using `SET CONSTRAINTS ALL DEFERRED` within a transaction.
+ This will cause the unique constraints to be checked after the transaction.
-* Remove deprecated `ActiveRecord::ConnectionAdapters::AbstractAdapter#supports_foreign_keys_in_create?`.
+ It's also possible to adjust the default behavior from an immediate
+ check (after the statement), to a deferred check (after the transaction):
- *Rafael Mendonça França*
+ ```ruby
+ add_unique_key :items, [:position], deferrable: :deferred
+ ```
-* Remove deprecated `ActiveRecord::ConnectionAdapters::PostgreSQLAdapter#supports_ranges?`.
+ If you want to change an existing unique index to deferrable, you can use
+ `:using_index` to create deferrable unique constraints.
- *Rafael Mendonça França*
+ ```ruby
+ add_unique_key :items, deferrable: :deferred, using_index: "index_items_on_position"
+ ```
-* Remove deprecated `ActiveRecord::Base#update_attributes` and `ActiveRecord::Base#update_attributes!`.
+ *Hiroyuki Ishii*
+
+* Remove deprecated `Tasks::DatabaseTasks.schema_file_type`.
*Rafael Mendonça França*
-* Remove deprecated `migrations_path` argument in `ActiveRecord::ConnectionAdapter::SchemaStatements#assume_migrated_upto_version`.
+* Remove deprecated `config.active_record.partial_writes`.
*Rafael Mendonça França*
-* Remove deprecated `config.active_record.sqlite3.represent_boolean_as_integer`.
+* Remove deprecated `ActiveRecord::Base` config accessors.
*Rafael Mendonça França*
-* `relation.create` does no longer leak scope to class level querying methods
- in initialization block and callbacks.
+* Remove the `:include_replicas` argument from `configs_for`. Use `:include_hidden` argument instead.
- Before:
+ *Eileen M. Uchitelle*
- User.where(name: "John").create do |john|
- User.find_by(name: "David") # => nil
- end
+* Allow applications to lookup a config via a custom hash key.
- After:
+ If you have registered a custom config or want to find configs where the hash matches a specific key, now you can pass `config_key` to `configs_for`. For example if you have a `db_config` with the key `vitess` you can look up a database configuration hash by matching that key.
- User.where(name: "John").create do |john|
- User.find_by(name: "David") # => #<User name: "David", ...>
- end
+ ```ruby
+ ActiveRecord::Base.configurations.configs_for(env_name: "development", name: "primary", config_key: :vitess)
+ ActiveRecord::Base.configurations.configs_for(env_name: "development", config_key: :vitess)
+ ```
- *Ryuta Kamizono*
+ *Eileen M. Uchitelle*
+
+* Allow applications to register a custom database configuration handler.
+
+ Adds a mechanism for registering a custom handler for cases where you want database configurations to respond to custom methods. This is useful for non-Rails database adapters or tools like Vitess that you may want to configure differently from a standard `HashConfig` or `UrlConfig`.
-* Named scope chain does no longer leak scope to class level querying methods.
+ Given the following database YAML we want the `animals` db to create a `CustomConfig` object instead while the `primary` database will be a `UrlConfig`:
- class User < ActiveRecord::Base
- scope :david, -> { User.where(name: "David") }
+ ```yaml
+ development:
+ primary:
+ url: postgres://localhost/primary
+ animals:
+ url: postgres://localhost/animals
+ custom_config:
+ sharded: 1
+ ```
+
+ To register a custom handler first make a class that has your custom methods:
+
+ ```ruby
+ class CustomConfig < ActiveRecord::DatabaseConfigurations::UrlConfig
+ def sharded?
+ custom_config.fetch("sharded", false)
+ end
+
+ private
+ def custom_config
+ configuration_hash.fetch(:custom_config)
end
+ end
+ ```
- Before:
+ Then register the config in an initializer:
- User.where(name: "John").david
- # SELECT * FROM users WHERE name = 'John' AND name = 'David'
+ ```ruby
+ ActiveRecord::DatabaseConfigurations.register_db_config_handler do |env_name, name, url, config|
+ next unless config.key?(:custom_config)
+ CustomConfig.new(env_name, name, url, config)
+ end
+ ```
- After:
+ When the application is booted, configuration hashes with the `:custom_config` key will be `CustomConfig` objects and respond to `sharded?`. Applications must handle the condition in which Active Record should use their custom handler.
- User.where(name: "John").david
- # SELECT * FROM users WHERE name = 'David'
+ *Eileen M. Uchitelle and John Crepezzi*
- *Ryuta Kamizono*
+* `ActiveRecord::Base.serialize` no longer uses YAML by default.
-* Remove deprecated methods from `ActiveRecord::DatabaseConfigurations`.
+ YAML isn't particularly performant and can lead to security issues
+ if not used carefully.
- `fetch`
- `each`
- `first`
- `values`
- `[]=`
+ Unfortunately there aren't really any good serializers in Ruby's stdlib
+ to replace it.
- *Rafael Mendonça França*
+ The obvious choice would be JSON, which is a fine format for this use case,
+ however the JSON serializer in Ruby's stdlib isn't strict enough: it falls back
+ to casting unknown types to strings, which could lead to corrupted data.
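
  The fallback can be demonstrated with plain Ruby (the class used here is arbitrary):

```ruby
require "json"

# The stdlib generator silently serializes unknown types via #to_s,
# so round-tripping yields a String rather than raising an error.
payload = JSON.dump(Object.new)
puts JSON.parse(payload).class  # => String
```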
-* `where.not` now generates NAND predicates instead of NOR.
+ Some third party JSON libraries like `Oj` have a suitable strict mode.
- Before:
+ So it's preferable that users choose a serializer based on their own constraints.
- User.where.not(name: "Jon", role: "admin")
- # SELECT * FROM users WHERE name != 'Jon' AND role != 'admin'
+ The original default can be restored by setting `config.active_record.default_column_serializer = YAML`.
- After:
+ *Jean Boussier*
- User.where.not(name: "Jon", role: "admin")
- # SELECT * FROM users WHERE NOT (name == 'Jon' AND role == 'admin')
+* `ActiveRecord::Base.serialize` signature changed.
- *Rafael Mendonça França*
+ Rather than a single positional argument that accepts two possible
+ types of values, `serialize` now accepts two distinct keyword arguments.
-* Remove deprecated `ActiveRecord::Result#to_hash` method.
+ Before:
- *Rafael Mendonça França*
+ ```ruby
+ serialize :content, JSON
+ serialize :backtrace, Array
+ ```
-* Deprecate `ActiveRecord::Base.allow_unsafe_raw_sql`.
+ After:
- *Rafael Mendonça França*
+ ```ruby
+ serialize :content, coder: JSON
+ serialize :backtrace, type: Array
+ ```
-* Remove deprecated support for using unsafe raw SQL in `ActiveRecord::Relation` methods.
+ *Jean Boussier*
- *Rafael Mendonça França*
+* YAML columns use `YAML.safe_dump` if available.
-* Allow users to silence the "Rails couldn't infer whether you are using multiple databases..."
- message using `config.active_record.suppress_multiple_database_warning`.
+ As of `psych 5.1.0`, `YAML.safe_dump` can now apply the same permitted-type
+ restrictions as `YAML.safe_load`.
- *Omri Gabay*
+ It's preferable to ensure the payload only use allowed types when we first
+ try to serialize it, otherwise you may end up with invalid records in the
+ database.
-* Connections can be granularly switched for abstract classes when `connected_to` is called.
+ *Jean Boussier*
- This change allows `connected_to` to switch a `role` and/or `shard` for a single abstract class instead of all classes globally. Applications that want to use the new feature need to set `config.active_record.legacy_connection_handling` to `false` in their application configuration.
+* `ActiveRecord::QueryLogs` better handle broken encoding.
- Example usage:
+ It's not uncommon for queries over BLOB fields to contain binary data.
+ Unless the caller carefully encodes the string as ASCII-8BIT, it generally
+ ends up encoded as `UTF-8`, and `QueryLogs` would fail on it.
- Given an application we have a `User` model that inherits from `ApplicationRecord` and a `Dog` model that inherits from `AnimalsRecord`. `AnimalsRecord` and `ApplicationRecord` have writing and reading connections as well as shard `default`, `one`, and `two`.
+ `ActiveRecord::QueryLogs` no longer depends on the query being properly encoded.
- ```ruby
- ActiveRecord::Base.connected_to(role: :reading) do
- User.first # reads from default replica
- Dog.first # reads from default replica
+ *Jean Boussier*
- AnimalsRecord.connected_to(role: :writing, shard: :one) do
- User.first # reads from default replica
- Dog.first # reads from shard one primary
- end
+* Fix a bug where `ActiveRecord::Generators::ModelGenerator` would not respect create_table_migration template overrides.
- User.first # reads from default replica
- Dog.first # reads from default replica
+ ```
+ rails g model create_books title:string content:text
+ ```
+ will now read from the create_table_migration.rb.tt template in the following locations in order:
+ ```
+ lib/templates/active_record/model/create_table_migration.rb
+ lib/templates/active_record/migration/create_table_migration.rb
+ ```
- ApplicationRecord.connected_to(role: :writing, shard: :two) do
- User.first # reads from shard two primary
- Dog.first # reads from default replica
- end
- end
+ *Spencer Neste*
+
+* `ActiveRecord::Relation#explain` now accepts options.
+
+ For databases and adapters which support them (currently PostgreSQL
+ and MySQL), options can be passed to `explain` to provide more
+ detailed query plan analysis:
+
+ ```ruby
+ Customer.where(id: 1).joins(:orders).explain(:analyze, :verbose)
```
- *Eileen M. Uchitelle*, *John Crepezzi*
+ *Reid Lynch*
+
+* Multiple `Arel::Nodes::SqlLiteral` nodes can now be added together to
+ form `Arel::Nodes::Fragments` nodes. This allows joining several pieces
+ of SQL.
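
  For example, assuming the addition operator described above (identifiers are illustrative):

  ```ruby
  fragment = Arel.sql("SELECT title") + Arel.sql(" FROM posts")
  # => an Arel::Nodes::Fragments node joining both literals
  ```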
-* Allow double-dash comment syntax when querying read-only databases
+ *Matthew Draper*, *Ole Friis*
- *James Adam*
+* `ActiveRecord::Base#signed_id` raises if called on a new record.
-* Add `values_at` method.
+ Previously it would return an ID that was not usable, since it was based on `id = nil`.
- Returns an array containing the values associated with the given methods.
+ *Alex Ghiculescu*
+
+* Allow SQL warnings to be reported.
+
+ Active Record configs can be set to enable SQL warning reporting.
```ruby
- topic = Topic.first
- topic.values_at(:title, :author_name)
- # => ["Budget", "Jason"]
+ # Configure action to take when SQL query produces warning
+ config.active_record.db_warnings_action = :raise
+
+ # Configure allowlist of warnings that should always be ignored
+ config.active_record.db_warnings_ignore = [
+ /Invalid utf8mb4 character string/,
+ "An exact warning message",
+ ]
```
- Similar to `Hash#values_at` but on an Active Record instance.
-
- *Guillaume Briday*
+ This is supported for the MySQL and PostgreSQL adapters.
-* Fix `read_attribute_before_type_cast` to consider attribute aliases.
+ *Adrianna Chang*, *Paarth Madan*
- *Marcelo Lauxen*
+* Add `#regroup` query method as a short-hand for `.unscope(:group).group(fields)`
-* Support passing record to uniqueness validator `:conditions` callable:
+ Example:
```ruby
- class Article < ApplicationRecord
- validates_uniqueness_of :title, conditions: ->(article) {
- published_at = article.published_at
- where(published_at: published_at.beginning_of_year..published_at.end_of_year)
- }
- end
+ Post.group(:title).regroup(:author)
+ # SELECT `posts`.`*` FROM `posts` GROUP BY `posts`.`author`
```
- *Eliot Sykes*
+ *Danielius Visockas*
+
+* PostgreSQL adapter method `enable_extension` now allows parameter to be `[schema_name.]<extension_name>`
+ if the extension must be installed on another schema.
+
+ Example: `enable_extension('heroku_ext.hstore')`
+
+ *Leonardo Luarte*
-* `BatchEnumerator#update_all` and `BatchEnumerator#delete_all` now return the
- total number of rows affected, just like their non-batched counterparts.
+* Add `:include` option to `add_index`.
+
+ Add support for including non-key columns in indexes for PostgreSQL
+ with the `INCLUDE` parameter.
```ruby
- Person.in_batches.update_all("first_name = 'Eugene'") # => 42
- Person.in_batches.delete_all # => 42
+ add_index(:users, :email, include: [:id, :created_at])
```
- Fixes #40287.
+ will result in:
- *Eugene Kenny*
+ ```sql
+ CREATE INDEX index_users_on_email USING btree (email) INCLUDE (id, created_at)
+ ```
-* Add support for PostgreSQL `interval` data type with conversion to
- `ActiveSupport::Duration` when loading records from database and
- serialization to ISO 8601 formatted duration string on save.
- Add support to define a column in migrations and get it in a schema dump.
- Optional column precision is supported.
+ *Steve Abrams*
- To use this in 6.1, you need to place the next string to your model file:
+* `ActiveRecord::Relation`’s `#any?`, `#none?`, and `#one?` methods take an optional pattern
+ argument, more closely matching their `Enumerable` equivalents.
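
  As with `Enumerable`, the pattern is matched against each element via `===`. A plain-Ruby sketch of the semantics (the relation versions behave the same way over the matching records):

```ruby
names = ["alice", "amy", "bob"]

puts names.any?(/^a/)    # => true  (at least one element matches)
puts names.none?(/^z/)   # => true  (no element matches)
puts names.one?(/^b/)    # => true  (exactly one element matches)
```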
- attribute :duration, :interval
+ *George Claghorn*
- To keep old behavior until 7.0 is released:
+* Add `ActiveRecord::Base.normalizes` for declaring attribute normalizations.
- attribute :duration, :string
+ An attribute normalization is applied when the attribute is assigned or
+ updated, and the normalized value will be persisted to the database. The
+ normalization is also applied to the corresponding keyword argument of query
+ methods, allowing records to be queried using unnormalized values.
- Example:
+ For example:
- create_table :events do |t|
- t.string :name
- t.interval :duration
- end
+ ```ruby
+ class User < ActiveRecord::Base
+ normalizes :email, with: -> email { email.strip.downcase }
+ normalizes :phone, with: -> phone { phone.delete("^0-9").delete_prefix("1") }
+ end
- class Event < ApplicationRecord
- attribute :duration, :interval
- end
+ user = User.create(email: " CRUISE-CONTROL@EXAMPLE.COM\n")
+ user.email # => "cruise-control@example.com"
+
+ user = User.find_by(email: "\tCRUISE-CONTROL@EXAMPLE.COM ")
+ user.email # => "cruise-control@example.com"
+ user.email_before_type_cast # => "cruise-control@example.com"
+
+ User.where(email: "\tCRUISE-CONTROL@EXAMPLE.COM ").count # => 1
+ User.where(["email = ?", "\tCRUISE-CONTROL@EXAMPLE.COM "]).count # => 0
- Event.create!(name: 'Rock Fest', duration: 2.days)
- Event.last.duration # => 2 days
- Event.last.duration.iso8601 # => "P2D"
- Event.new(duration: 'P1DT12H3S').duration # => 1 day, 12 hours, and 3 seconds
- Event.new(duration: '1 day') # Unknown value will be ignored and NULL will be written to database
+ User.exists?(email: "\tCRUISE-CONTROL@EXAMPLE.COM ") # => true
+ User.exists?(["email = ?", "\tCRUISE-CONTROL@EXAMPLE.COM "]) # => false
- *Andrey Novikov*
+ User.normalize_value_for(:phone, "+1 (555) 867-5309") # => "5558675309"
+ ```
-* Allow associations supporting the `dependent:` key to take `dependent: :destroy_async`.
+ *Jonathan Hefner*
+
+* Hide changes to before_committed! callback behaviour behind flag.
+
+ In #46525, behavior around before_committed! callbacks was changed so that callbacks
+ would run on every enrolled record in a transaction, not just the first copy of a record.
+ This change in behavior is now controlled by a configuration option,
+ `config.active_record.before_committed_on_all_records`. It will be enabled by default on Rails 7.1.
+
+ *Adrianna Chang*
+
+* The `namespaced_controller` Query Log tag now matches the `controller` format
+
+ For example, a request processed by `NameSpaced::UsersController` will now log as:
+
+ ```
+ :controller # "users"
+ :namespaced_controller # "name_spaced/users"
+ ```
+
+ *Alex Ghiculescu*
+
+* Return only unique ids from ActiveRecord::Calculations#ids
+
+ Updated `ActiveRecord::Calculations#ids` to only return the unique ids of the base model
+ when using `eager_load`, `preload` and `includes`.
```ruby
- class Account < ActiveRecord::Base
- belongs_to :supplier, dependent: :destroy_async
- end
+ Post.find_by(id: 1).comments.count
+ # => 5
+ Post.includes(:comments).where(id: 1).pluck(:id)
+ # => [1, 1, 1, 1, 1]
+ Post.includes(:comments).where(id: 1).ids
+ # => [1]
```
- `:destroy_async` will enqueue a job to destroy associated records in the background.
+ *Joshua Young*
- *DHH*, *George Claghorn*, *Cory Gwin*, *Rafael Mendonça França*, *Adrianna Chang*
+* Stop using `LOWER()` for case-insensitive queries on `citext` columns
-* Add `SKIP_TEST_DATABASE` environment variable to disable modifying the test database when `rails db:create` and `rails db:drop` are called.
+ Previously, `LOWER()` was added for e.g. uniqueness validations with
+ `case_sensitive: false`.
+ It wasn't mentioned in the documentation that an index without `LOWER()`
+ wouldn't be used in this case.
- *Jason Schweier*
+ *Phil Pirozhkov*
-* `connects_to` can only be called on `ActiveRecord::Base` or abstract classes.
+* Extract `#sync_timezone_changes` method in AbstractMysqlAdapter to enable subclasses
+ to sync database timezone changes without overriding `#raw_execute`.
- Ensure that `connects_to` can only be called from `ActiveRecord::Base` or abstract classes. This protects the application from opening duplicate or too many connections.
+ *Adrianna Chang*, *Paarth Madan*
- *Eileen M. Uchitelle*, *John Crepezzi*
+* Do not write additional new lines when dumping sql migration versions
-* All connection adapters `execute` now raises `ActiveRecord::ConnectionNotEstablished` rather than
- `ActiveRecord::StatementInvalid` when they encounter a connection error.
+ This change updates the `insert_versions_sql` function so that the database insert string containing the current database migration versions does not end with two additional new lines.
- *Jean Boussier*
+ *Misha Schwartz*
-* `Mysql2Adapter#quote_string` now raises `ActiveRecord::ConnectionNotEstablished` rather than
- `ActiveRecord::StatementInvalid` when it can't connect to the MySQL server.
+* Fix `composed_of` value freezing and duplication.
- *Jean Boussier*
+ Previously composite values exhibited two confusing behaviors:
-* Add support for check constraints that are `NOT VALID` via `validate: false` (PostgreSQL-only).
+ - When reading a composite value it'd _NOT_ be frozen, allowing it to get out of sync with its underlying database
+ columns.
+ - When writing a composite value the argument would be frozen, potentially confusing the caller.
- *Alex Robbin*
+ Currently, composite values instantiated based on database columns are frozen (addressing the first issue) and
+ assigned composite values are duplicated and the duplicate is frozen (addressing the second issue).
-* Ensure the default configuration is considered primary or first for an environment
+ *Greg Navis*
- If a multiple database application provides a configuration named primary, that will be treated as default. In applications that do not have a primary entry, the default database configuration will be the first configuration for an environment.
+* Fix redundant updates to the column insensitivity cache
- *Eileen M. Uchitelle*
+ Fixed redundant queries checking column capability for insensitive
+ comparison.
-* Allow `where` references association names as joined table name aliases.
+ *Phil Pirozhkov*
- ```ruby
- class Comment < ActiveRecord::Base
- enum label: [:default, :child]
- has_many :children, class_name: "Comment", foreign_key: :parent_id
- end
+* Allow disabling methods generated by `ActiveRecord.enum`.
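+
+  For example, a sketch assuming the `instance_methods: false` option this
+  change introduces:
+
+  ```ruby
+  class Post < ActiveRecord::Base
+    # No post.draft?, post.draft!, etc. helpers are generated
+    enum :status, [:draft, :published], instance_methods: false
+  end
+  ```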
- # ... FROM comments LEFT OUTER JOIN comments children ON ... WHERE children.label = 1
- Comment.includes(:children).where("children.label": "child")
- ```
+ *Alfred Dominic*
- *Ryuta Kamizono*
+* Avoid validating `belongs_to` association if it has not changed.
-* Support storing demodulized class name for polymorphic type.
+ Previously, when updating a record, Active Record would perform an extra query to check for the presence of
+ `belongs_to` associations (if the presence is configured to be mandatory), even if that attribute hasn't changed.
- Before Rails 6.1, storing demodulized class name is supported only for STI type
- by `store_full_sti_class` class attribute.
+ Currently, only `belongs_to`-related columns are checked for presence. It is possible to have orphaned records with
+ this approach. To avoid this problem, you need to use a foreign key.
- Now `store_full_class_name` class attribute can handle both STI and polymorphic types.
+ This behavior can be controlled by configuration:
- *Ryuta Kamizono*
+ ```ruby
+ config.active_record.belongs_to_required_validates_foreign_key = false
+ ```
-* Deprecate `rails db:structure:{load, dump}` tasks and extend
- `rails db:schema:{load, dump}` tasks to work with either `:ruby` or `:sql` format,
- depending on `config.active_record.schema_format` configuration value.
+ and will be disabled by default with `config.load_defaults 7.1`.
*fatkodima*
-* Respect the `select` values for eager loading.
+* `has_one` and `belongs_to` associations now define a `reset_association` method
+ on the owner model (where `association` is the name of the association). This
+ method unloads the cached associate record, if any, and causes the next access
+ to query it from the database.
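+
+  For example, with a hypothetical `Account` model that declares
+  `has_one :supplier`:
+
+  ```ruby
+  account.supplier       # loads and caches the associated record
+  account.reset_supplier # unloads the cached record
+  account.supplier       # queries the database again
+  ```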
- ```ruby
- post = Post.select("UPPER(title) AS title").first
- post.title # => "WELCOME TO THE WEBLOG"
- post.body # => ActiveModel::MissingAttributeError
+ *George Claghorn*
- # Rails 6.0 (ignore the `select` values)
- post = Post.select("UPPER(title) AS title").eager_load(:comments).first
- post.title # => "Welcome to the weblog"
- post.body # => "Such a lovely day"
+* Allow per attribute setting of YAML permitted classes (safe load) and unsafe load.
- # Rails 6.1 (respect the `select` values)
- post = Post.select("UPPER(title) AS title").eager_load(:comments).first
- post.title # => "WELCOME TO THE WEBLOG"
- post.body # => ActiveModel::MissingAttributeError
- ```
+ *Carlos Palhares*
- *Ryuta Kamizono*
+* Add a build persistence method
+
+ Provides a wrapper for `new`, to provide feature parity with `create`'s
+ ability to create multiple records from an array of hashes, using the
+ same notation as the `build` method on associations.
+
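+  For example, with a hypothetical `User` model:
+
+  ```ruby
+  # Builds unsaved records, like `new`, but also accepts an array of hashes:
+  users = User.build([{ name: "Alice" }, { name: "Bob" }])
+  users.map(&:persisted?) # => [false, false]
+  ```
+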
+ *Sean Denny*
-* Allow attribute's default to be configured but keeping its own type.
+* Raise on assignment to readonly attributes
```ruby
class Post < ActiveRecord::Base
- attribute :written_at, default: -> { Time.now.utc }
+ attr_readonly :content
end
-
- # Rails 6.0
- Post.type_for_attribute(:written_at) # => #<Type::Value ... precision: nil, ...>
-
- # Rails 6.1
- Post.type_for_attribute(:written_at) # => #<Type::DateTime ... precision: 6, ...>
+ Post.create!(content: "cannot be updated")
+ post.content # "cannot be updated"
+ post.content = "something else" # => ActiveRecord::ReadonlyAttributeError
```
- *Ryuta Kamizono*
+ Previously, assignment would succeed but silently not write to the database.
-* Allow default to be configured for Enum.
+ This behavior can be controlled by configuration:
```ruby
- class Book < ActiveRecord::Base
- enum status: [:proposed, :written, :published], _default: :published
- end
-
- Book.new.status # => "published"
+ config.active_record.raise_on_assign_to_attr_readonly = true
```
- *Ryuta Kamizono*
+ and will be enabled by default with `config.load_defaults 7.1`.
-* Deprecate YAML loading from legacy format older than Rails 5.0.
+ *Alex Ghiculescu*, *Hartley McGuire*
- *Ryuta Kamizono*
+* Allow unscoping of preload and eager_load associations
-* Added the setting `ActiveRecord::Base.immutable_strings_by_default`, which
- allows you to specify that all string columns should be frozen unless
- otherwise specified. This will reduce memory pressure for applications which
- do not generally mutate string properties of Active Record objects.
+ Added the ability to unscope preload and eager_load associations just like
+ includes, joins, etc. See ActiveRecord::QueryMethods::VALID_UNSCOPING_VALUES
+ for the full list of supported unscopable scopes.
- *Sean Griffin*, *Ryuta Kamizono*
+ ```ruby
+ query.unscope(:eager_load, :preload).group(:id).select(:id)
+ ```
-* Deprecate `map!` and `collect!` on `ActiveRecord::Result`.
+ *David Morehouse*
- *Ryuta Kamizono*
+* Add automatic filtering of encrypted attributes on inspect
-* Support `relation.and` for intersection as Set theory.
+ This feature is enabled by default but can be disabled with
```ruby
- david_and_mary = Author.where(id: [david, mary])
- mary_and_bob = Author.where(id: [mary, bob])
-
- david_and_mary.merge(mary_and_bob) # => [mary, bob]
-
- david_and_mary.and(mary_and_bob) # => [mary]
- david_and_mary.or(mary_and_bob) # => [david, mary, bob]
+ config.active_record.encryption.add_to_filter_parameters = false
```
- *Ryuta Kamizono*
-
-* Merging conditions on the same column no longer maintain both conditions,
- and will be consistently replaced by the latter condition in Rails 7.0.
- To migrate to Rails 7.0's behavior, use `relation.merge(other, rewhere: true)`.
-
- ```ruby
- # Rails 6.1 (IN clause is replaced by merger side equality condition)
- Author.where(id: [david.id, mary.id]).merge(Author.where(id: bob)) # => [bob]
+ *Hartley McGuire*
- # Rails 6.1 (both conflict conditions exists, deprecated)
- Author.where(id: david.id..mary.id).merge(Author.where(id: bob)) # => []
+* Clear locking column on #dup
- # Rails 6.1 with rewhere to migrate to Rails 7.0's behavior
- Author.where(id: david.id..mary.id).merge(Author.where(id: bob), rewhere: true) # => [bob]
+ This change ensures `#dup` does not copy the locking column, just as it does not copy id and timestamps.
- # Rails 7.0 (same behavior with IN clause, mergee side condition is consistently replaced)
- Author.where(id: [david.id, mary.id]).merge(Author.where(id: bob)) # => [bob]
- Author.where(id: david.id..mary.id).merge(Author.where(id: bob)) # => [bob]
+ ```
+ car = Car.create!
+ car.touch
+ car.lock_version #=> 1
+ car.dup.lock_version #=> 0
```
- *Ryuta Kamizono*
+ *Shouichi Kamiya*, *Seonggi Yang*, *Ryohei UEDA*
-* Do not mark Postgresql MAC address and UUID attributes as changed when the assigned value only varies by case.
+* Invalidate transaction as early as possible
- *Peter Fry*
+ After rescuing a `TransactionRollbackError` exception Rails invalidates transactions earlier in the flow
+ allowing the framework to skip issuing the `ROLLBACK` statement in more cases.
+ Only affects adapters that have `savepoint_errors_invalidate_transactions?` configured as `true`,
+ which at this point is only applicable to the `mysql2` adapter.
-* Resolve issue with insert_all unique_by option when used with expression index.
+ *Nikita Vasilevsky*
- When the `:unique_by` option of `ActiveRecord::Persistence.insert_all` and
- `ActiveRecord::Persistence.upsert_all` was used with the name of an expression index, an error
- was raised. Adding a guard around the formatting behavior for the `:unique_by` corrects this.
+* Allow configuring columns list to be used in SQL queries issued by an `ActiveRecord::Base` object
- Usage:
+ It is now possible to configure columns list that will be used to build an SQL query clauses when
+ updating, deleting or reloading an `ActiveRecord::Base` object
```ruby
- create_table :books, id: :integer, force: true do |t|
- t.column :name, :string
- t.index "lower(name)", unique: true
+ class Developer < ActiveRecord::Base
+ query_constraints :company_id, :id
end
-
- Book.insert_all [{ name: "MyTest" }], unique_by: :index_books_on_lower_name
+ developer = Developer.first.update(name: "Bob")
+ # => UPDATE "developers" SET "name" = 'Bob' WHERE "developers"."company_id" = 1 AND "developers"."id" = 1
```
- Fixes #39516.
+ *Nikita Vasilevsky*
- *Austen Madden*
+* Adds `validate` to foreign keys and check constraints in schema.rb
-* Add basic support for CHECK constraints to database migrations.
+ Previously, `schema.rb` would not record if `validate: false` had been used when adding a foreign key or check
+ constraint, so restoring a database from the schema could result in foreign keys or check constraints being
+ incorrectly validated.
- Usage:
+ *Tommy Graves*
- ```ruby
- add_check_constraint :products, "price > 0", name: "price_check"
- remove_check_constraint :products, name: "price_check"
- ```
+* Adapter `#execute` methods now accept an `allow_retry` option. When set to `true`, the SQL statement will be
+ retried, up to the database's configured `connection_retries` value, upon encountering connection-related errors.
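+
+  For example, assuming the `allow_retry` keyword described above:
+
+  ```ruby
+  ActiveRecord::Base.connection.execute("SELECT 1", allow_retry: true)
+  ```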
- *fatkodima*
+ *Adrianna Chang*
-* Add `ActiveRecord::Base.strict_loading_by_default` and `ActiveRecord::Base.strict_loading_by_default=`
- to enable/disable strict_loading mode by default for a model. The configuration's value is
- inheritable by subclasses, but they can override that value and it will not impact parent class.
+* Only trigger `after_commit :destroy` callbacks when a database row is deleted.
- Usage:
+ This prevents `after_commit :destroy` callbacks from being triggered again
+ when `destroy` is called multiple times on the same record.
- ```ruby
- class Developer < ApplicationRecord
- self.strict_loading_by_default = true
+ *Ben Sheldon*
- has_many :projects
- end
+* Fix `ciphertext_for` for yet-to-be-encrypted values.
- dev = Developer.first
- dev.projects.first
- # => ActiveRecord::StrictLoadingViolationError Exception: Developer is marked as strict_loading and Project cannot be lazily loaded.
- ```
+ Previously, `ciphertext_for` returned the cleartext of values that had not
+ yet been encrypted, such as with an unpersisted record:
- *bogdanvlviv*
+ ```ruby
+ Post.encrypts :body
-* Deprecate passing an Active Record object to `quote`/`type_cast` directly.
+ post = Post.create!(body: "Hello")
+ post.ciphertext_for(:body)
+ # => "{\"p\":\"abc..."
- *Ryuta Kamizono*
+ post.body = "World"
+ post.ciphertext_for(:body)
+ # => "World"
+ ```
-* Default engine `ENGINE=InnoDB` is no longer dumped to make schema more agnostic.
+ Now, `ciphertext_for` will always return the ciphertext of encrypted
+ attributes:
- Before:
+ ```ruby
+ Post.encrypts :body
- ```ruby
- create_table "accounts", options: "ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci", force: :cascade do |t|
- end
- ```
+ post = Post.create!(body: "Hello")
+ post.ciphertext_for(:body)
+ # => "{\"p\":\"abc..."
- After:
+ post.body = "World"
+ post.ciphertext_for(:body)
+ # => "{\"p\":\"xyz..."
+ ```
- ```ruby
- create_table "accounts", charset: "utf8mb4", collation: "utf8mb4_0900_ai_ci", force: :cascade do |t|
- end
- ```
+ *Jonathan Hefner*
- *Ryuta Kamizono*
+* Fix a bug where using groups and counts with long table names would return incorrect results.
-* Added delegated type as an alternative to single-table inheritance for representing class hierarchies.
- See ActiveRecord::DelegatedType for the full description.
+ *Shota Toguchi*, *Yusaku Ono*
- *DHH*
+* Fix encryption of column default values.
-* Deprecate aggregations with group by duplicated fields.
+ Previously, encrypted attributes that used column default values appeared to
+ be encrypted on create, but were not:
- To migrate to Rails 7.0's behavior, use `uniq!(:group)` to deduplicate group fields.
+ ```ruby
+ Book.encrypts :name
- ```ruby
- accounts = Account.group(:firm_id)
+ book = Book.create!
+ book.name
+ # => "<untitled>"
+ book.name_before_type_cast
+ # => "{\"p\":\"abc..."
+ book.reload.name_before_type_cast
+ # => "<untitled>"
+ ```
- # duplicated group fields, deprecated.
- accounts.merge(accounts.where.not(credit_limit: nil)).sum(:credit_limit)
- # => {
- # [1, 1] => 50,
- # [2, 2] => 60
- # }
+ Now, attributes with column default values are encrypted:
- # use `uniq!(:group)` to deduplicate group fields.
- accounts.merge(accounts.where.not(credit_limit: nil)).uniq!(:group).sum(:credit_limit)
- # => {
- # 1 => 50,
- # 2 => 60
- # }
- ```
+ ```ruby
+ Book.encrypts :name
- *Ryuta Kamizono*
+ book = Book.create!
+ book.name
+ # => "<untitled>"
+ book.name_before_type_cast
+ # => "{\"p\":\"abc..."
+ book.reload.name_before_type_cast
+ # => "{\"p\":\"abc..."
+ ```
-* Deprecate duplicated query annotations.
+ *Jonathan Hefner*
- To migrate to Rails 7.0's behavior, use `uniq!(:annotate)` to deduplicate query annotations.
+* Deprecate delegation from `Base` to `connection_handler`.
- ```ruby
- accounts = Account.where(id: [1, 2]).annotate("david and mary")
+ Calling `Base.clear_all_connections!`, `Base.clear_active_connections!`, `Base.clear_reloadable_connections!` and `Base.flush_idle_connections!` is deprecated. Please call these methods on the connection handler directly. In future Rails versions, the delegation from `Base` to the `connection_handler` will be removed.
- # duplicated annotations, deprecated.
- accounts.merge(accounts.rewhere(id: 3))
- # SELECT accounts.* FROM accounts WHERE accounts.id = 3 /* david and mary */ /* david and mary */
+ *Eileen M. Uchitelle*
- # use `uniq!(:annotate)` to deduplicate annotations.
- accounts.merge(accounts.rewhere(id: 3)).uniq!(:annotate)
- # SELECT accounts.* FROM accounts WHERE accounts.id = 3 /* david and mary */
- ```
+* Allow ActiveRecord::QueryMethods#reselect to receive hash values, similar to ActiveRecord::QueryMethods#select
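+
+  For example (hypothetical columns), the hash form mirrors `select`:
+
+  ```ruby
+  Post.select(:title, :body).reselect(posts: [:id, :created_at])
+  # roughly: SELECT "posts"."id", "posts"."created_at" FROM "posts"
+  ```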
- *Ryuta Kamizono*
+ *Sampat Badhe*
-* Resolve conflict between counter cache and optimistic locking.
+* Validate options when managing columns and tables in migrations.
- Bump an Active Record instance's lock version after updating its counter
- cache. This avoids raising an unnecessary `ActiveRecord::StaleObjectError`
- upon subsequent transactions by maintaining parity with the corresponding
- database record's `lock_version` column.
+ If an invalid option is passed to a migration method like `create_table` or `add_column`, an error will be raised
+ instead of the option being silently ignored. Option validation only applies to newly created migrations.
- Fixes #16449.
+ *Guo Xiang Tan*, *George Wambold*
- *Aaron Lipman*
+* Update query log tags to use the [SQLCommenter](https://open-telemetry.github.io/opentelemetry-sqlcommenter/) format by default. See [#46179](https://github.com/rails/rails/issues/46179)
-* Support merging option `:rewhere` to allow mergee side condition to be replaced exactly.
+ To opt out of SQLCommenter-formatted query log tags, set `config.active_record.query_log_tags_format = :legacy`. By default, this is set to `:sqlcommenter`.
- ```ruby
- david_and_mary = Author.where(id: david.id..mary.id)
+ *Modulitos* and *Iheanyi*
- # both conflict conditions exists
- david_and_mary.merge(Author.where(id: bob)) # => []
+* Allow any ERB in the database.yml when creating rake tasks.
- # mergee side condition is replaced by rewhere
- david_and_mary.merge(Author.rewhere(id: bob)) # => [bob]
+ Any ERB can be used in `database.yml` even if it accesses environment
+ configurations.
- # mergee side condition is replaced by rewhere option
- david_and_mary.merge(Author.where(id: bob), rewhere: true) # => [bob]
- ```
+ Deprecates `config.active_record.suppress_multiple_database_warning`.
- *Ryuta Kamizono*
+ *Eike Send*
-* Add support for finding records based on signed ids, which are tamper-proof, verified ids that can be
- set to expire and scoped with a purpose. This is particularly useful for things like password reset
- or email verification, where you want the bearer of the signed id to be able to interact with the
- underlying record, but usually only within a certain time period.
+* Add table to error for duplicate column definitions.
- ```ruby
- signed_id = User.first.signed_id expires_in: 15.minutes, purpose: :password_reset
+ If a migration defines duplicate columns for a table, the error message
+ shows which table it concerns.
- User.find_signed signed_id # => nil, since the purpose does not match
+ *Petrik de Heus*
- travel 16.minutes
- User.find_signed signed_id, purpose: :password_reset # => nil, since the signed id has expired
+* Fix erroneous nil default precision on virtual datetime columns.
- travel_back
- User.find_signed signed_id, purpose: :password_reset # => User.first
+ Prior to this change, virtual datetime columns did not have the same
+ default precision as regular datetime columns, resulting in the following
+ being erroneously equivalent:
- User.find_signed! "bad data" # => ActiveSupport::MessageVerifier::InvalidSignature
- ```
+ t.virtual :name, type: :datetime, as: "expression"
+ t.virtual :name, type: :datetime, precision: nil, as: "expression"
- *DHH*
+ This change fixes the default precision lookup, so virtual and regular
+ datetime column default precisions match.
-* Support `ALGORITHM = INSTANT` DDL option for index operations on MySQL.
+ *Sam Bostock*
- *Ryuta Kamizono*
+* Use connection from `#with_raw_connection` in `#quote_string`.
-* Fix index creation to preserve index comment in bulk change table on MySQL.
+ This ensures that the string quoting is wrapped in the reconnect and retry logic
+ that `#with_raw_connection` offers.
- *Ryuta Kamizono*
+ *Adrianna Chang*
-* Allow `unscope` to be aware of table name qualified values.
+* Add `expires_at` option to `signed_id`.
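+
+  For example:
+
+  ```ruby
+  # Previously only a relative expires_in: was supported
+  user.signed_id(expires_at: 1.week.from_now, purpose: :password_reset)
+  ```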
- It is possible to unscope only the column in the specified table.
+ *Shouichi Kamiya*
- ```ruby
- posts = Post.joins(:comments).group(:"posts.hidden")
- posts = posts.where("posts.hidden": false, "comments.hidden": false)
+* Allow applications to set retry deadline for query retries.
- posts.count
- # => { false => 10 }
+ Building on the work done in #44576 and #44591, we extend the logic that automatically
+ reconnects database connections to take into account a timeout limit. We won't retry
+ a query if a given amount of time has elapsed since the query was first attempted. This
+ value defaults to nil, meaning that all retryable queries are retried regardless of time elapsed,
+ but this can be changed via the `retry_deadline` option in the database config.
- # unscope both hidden columns
- posts.unscope(where: :hidden).count
- # => { false => 11, true => 1 }
+ *Adrianna Chang*
- # unscope only comments.hidden column
- posts.unscope(where: :"comments.hidden").count
- # => { false => 11 }
- ```
+* Fix a case where the query cache can return wrong values. See #46044
- *Ryuta Kamizono*, *Slava Korolev*
+ *Aaron Patterson*
-* Fix `rewhere` to truly overwrite collided where clause by new where clause.
+* Support MySQL's ssl-mode option for MySQLDatabaseTasks.
- ```ruby
- steve = Person.find_by(name: "Steve")
- david = Author.find_by(name: "David")
+ Verifying the identity of the database server requires setting the ssl-mode
+ option to VERIFY_CA or VERIFY_IDENTITY. This option was previously ignored
+ for MySQL database tasks like creating a database and dumping the structure.
- relation = Essay.where(writer: steve)
+ *Petrik de Heus*
- # Before
- relation.rewhere(writer: david).to_a # => []
+* Move `ActiveRecord::InternalMetadata` to an independent object.
- # After
- relation.rewhere(writer: david).to_a # => [david]
- ```
+ `ActiveRecord::InternalMetadata` no longer inherits from `ActiveRecord::Base` and is now an independent object that should be instantiated with a `connection`. This class is private and should not be used by applications directly. If you want to interact with the schema migrations table, please access it on the connection directly, for example: `ActiveRecord::Base.connection.schema_migration`.
- *Ryuta Kamizono*
+ *Eileen M. Uchitelle*
-* Inspect time attributes with subsec and time zone offset.
+* Deprecate quoting `ActiveSupport::Duration` as an integer
- ```ruby
- p Knot.create
- => #<Knot id: 1, created_at: "2016-05-05 01:29:47.116928000 +0000">
+ Using ActiveSupport::Duration as an interpolated bind parameter in a SQL
+ string template is deprecated. To avoid this warning, you should explicitly
+ convert the duration to a more specific database type. For example, if you
+ want to use a duration as an integer number of seconds:
+ ```
+ Record.where("duration = ?", 1.hour.to_i)
+ ```
+ If you want to use a duration as an ISO 8601 string:
+ ```
+ Record.where("duration = ?", 1.hour.iso8601)
```
- *akinomaeni*, *Jonathan Hefner*
+ *Aram Greenman*
-* Deprecate passing a column to `type_cast`.
+* Allow `QueryMethods#in_order_of` to order by a string column name.
- *Ryuta Kamizono*
+ ```ruby
+ Post.in_order_of("id", [4, 2, 3, 1]).to_a
+ Post.joins(:author).in_order_of("authors.name", ["Bob", "Anna", "John"]).to_a
+ ```
-* Deprecate `in_clause_length` and `allowed_index_name_length` in `DatabaseLimits`.
+ *Igor Kasyanchuk*
- *Ryuta Kamizono*
+* Move `ActiveRecord::SchemaMigration` to an independent object.
-* Support bulk insert/upsert on relation to preserve scope values.
+ `ActiveRecord::SchemaMigration` no longer inherits from `ActiveRecord::Base` and is now an independent object that should be instantiated with a `connection`. This class is private and should not be used by applications directly. If you want to interact with the schema migrations table, please access it on the connection directly, for example: `ActiveRecord::Base.connection.schema_migration`.
- *Josef Šimánek*, *Ryuta Kamizono*
+ *Eileen M. Uchitelle*
+
+* Deprecate `all_connection_pools` and make `connection_pool_list` more explicit.
-* Preserve column comment value on changing column name on MySQL.
+ Following on #45924 `all_connection_pools` is now deprecated. `connection_pool_list` will either take an explicit role or applications can opt into the new behavior by passing `:all`.
- *Islam Taha*
+ *Eileen M. Uchitelle*
-* Add support for `if_exists` option for removing an index.
+* Fix connection handler methods to operate on all pools.
- The `remove_index` method can take an `if_exists` option. If this is set to true an error won't be raised if the index doesn't exist.
+ `active_connections?`, `clear_active_connections!`, `clear_reloadable_connections!`, `clear_all_connections!`, and `flush_idle_connections!` now operate on all pools by default. Previously they would default to using the `current_role` or `:writing` role unless specified.
*Eileen M. Uchitelle*
-* Remove ibm_db, informix, mssql, oracle, and oracle12 Arel visitors which are not used in the code base.
- *Ryuta Kamizono*
+* Allow ActiveRecord::QueryMethods#select to receive hash values.
-* Prevent `build_association` from `touching` a parent record if the record isn't persisted for `has_one` associations.
+ Currently, `select` might receive only raw sql and symbols to define columns and aliases to select.
- Fixes #38219.
+ With this change we can provide `hash` as argument, for example:
- *Josh Brody*
+ ```ruby
+ Post.joins(:comments).select(posts: [:id, :title, :created_at], comments: [:id, :body, :author_id])
+ #=> "SELECT \"posts\".\"id\", \"posts\".\"title\", \"posts\".\"created_at\", \"comments\".\"id\", \"comments\".\"body\", \"comments\".\"author_id\"
+ # FROM \"posts\" INNER JOIN \"comments\" ON \"comments\".\"post_id\" = \"posts\".\"id\""
-* Add support for `if_not_exists` option for adding index.
+ Post.joins(:comments).select(posts: { id: :post_id, title: :post_title }, comments: { id: :comment_id, body: :comment_body })
+ #=> "SELECT posts.id as post_id, posts.title as post_title, comments.id as comment_id, comments.body as comment_body
+ # FROM \"posts\" INNER JOIN \"comments\" ON \"comments\".\"post_id\" = \"posts\".\"id\""
+ ```
+ *Oleksandr Holubenko*, *Josef Šimánek*, *Jean Boussier*
- The `add_index` method respects `if_not_exists` option. If it is set to true
- index won't be added.
+* Adapts virtual attributes on `ActiveRecord::Persistence#becomes`.
- Usage:
+ When the source and target classes have a different set of attributes, the
+ attributes are adapted so that the extra attributes from the target are added.
```ruby
- add_index :users, :account_id, if_not_exists: true
- ```
+ class Person < ApplicationRecord
+ end
- The `if_not_exists` option passed to `create_table` also gets propagated to indexes
- created within that migration so that if table and its indexes exist then there is no
- attempt to create them again.
+ class WebUser < Person
+ attribute :is_admin, :boolean
+ after_initialize :set_admin
- *Prathamesh Sonpatki*
+ def set_admin
+ write_attribute(:is_admin, email =~ /@ourcompany\.com$/)
+ end
+ end
-* Add `ActiveRecord::Base#previously_new_record?` to show if a record was new before the last save.
+ person = Person.find_by(email: "email@ourcompany.com")
+ person.respond_to? :is_admin
+ # => false
+ person.becomes(WebUser).is_admin?
+ # => true
+ ```
- *Tom Ward*
+ *Jacopo Beschi*, *Sampson Crowley*
-* Support descending order for `find_each`, `find_in_batches`, and `in_batches`.
+* Fix `ActiveRecord::QueryMethods#in_order_of` to include `nil`s, to match the
+ behavior of `Enumerable#in_order_of`.
- Batch processing methods allow you to work with the records in batches, greatly reducing memory consumption, but records are always batched from oldest id to newest.
+ For example, `Post.in_order_of(:title, [nil, "foo"])` will now include posts
+ with `nil` titles, the same as `Post.all.to_a.in_order_of(:title, [nil, "foo"])`.
- This change allows reversing the order, batching from newest to oldest. This is useful when you need to process newer batches of records first.
+ *fatkodima*
- Pass `order: :desc` to yield batches in descending order. The default remains `order: :asc`.
+* Optimize `add_timestamps` to use a single SQL statement.
```ruby
- Person.find_each(order: :desc) do |person|
- person.party_all_night!
- end
+ add_timestamps :my_table
```
- *Alexey Vasiliev*
-
-* Fix `insert_all` with enum values.
+ Now results in the following SQL:
- Fixes #38716.
-
- *Joel Blum*
+ ```sql
+ ALTER TABLE "my_table" ADD COLUMN "created_at" datetime(6) NOT NULL, ADD COLUMN "updated_at" datetime(6) NOT NULL
+ ```
-* Add support for `db:rollback:name` for multiple database applications.
+ *Iliana Hadzhiatanasova*
- Multiple database applications will now raise if `db:rollback` is call and recommend using the `db:rollback:[NAME]` to rollback migrations.
+* Add `drop_enum` migration command for PostgreSQL
- *Eileen M. Uchitelle*
+ This does the inverse of `create_enum`. Before dropping an enum, ensure you have
+ dropped columns that depend on it.
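+
+  For example, reversing a hypothetical `create_enum`:
+
+  ```ruby
+  create_enum :mood, %w[happy sad]
+  # ...after removing any columns that depend on it:
+  drop_enum :mood
+  ```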
-* `Relation#pick` now uses already loaded results instead of making another query.
+ *Alex Ghiculescu*
- *Eugene Kenny*
+* Adds support for `if_exists` option when removing a check constraint.
-* Deprecate using `return`, `break` or `throw` to exit a transaction block after writes.
+ The `remove_check_constraint` method now accepts an `if_exists` option. If set
+ to true an error won't be raised if the check constraint doesn't exist.
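+
+  For example:
+
+  ```ruby
+  remove_check_constraint :products, name: "price_check", if_exists: true
+  ```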
- *Dylan Thacker-Smith*
+ *Margaret Parsa* and *Aditya Bhutani*
-* Dump the schema or structure of a database when calling `db:migrate:name`.
+* `find_or_create_by` now tries to find a second time if it hits a uniqueness constraint.
- In previous versions of Rails, `rails db:migrate` would dump the schema of the database. In Rails 6, that holds true (`rails db:migrate` dumps all databases' schemas), but `rails db:migrate:name` does not share that behavior.
+ `find_or_create_by` has always been inherently racy, either creating multiple
+ duplicate records or failing with `ActiveRecord::RecordNotUnique`, depending on
+ whether a proper uniqueness constraint was set.
- Going forward, calls to `rails db:migrate:name` will dump the schema (or structure) of the database being migrated.
+ `create_or_find_by` was introduced for this use case, however it's quite wasteful
+ when the record is expected to exist most of the time, as an INSERT requires sending
+ more data than a SELECT and requires more work from the database. Also, on some
+ databases it can actually consume a primary key increment, which is undesirable.
- *Kyle Thompson*
+ So for cases where the record is expected to exist most of the time, `find_or_create_by`
+ can be made race-condition free by retrying the `find` if the `create` failed
+ with `ActiveRecord::RecordNotUnique`. This assumes that the table has the proper
+ uniqueness constraints; if not, `find_or_create_by` will still lead to duplicated records.
-* Reset the `ActiveRecord::Base` connection after `rails db:migrate:name`.
+ *Jean Boussier*, *Alex Kitchens*
- When `rails db:migrate` has finished, it ensures the `ActiveRecord::Base` connection is reset to its original configuration. Going forward, `rails db:migrate:name` will have the same behavior.
+* Introduce a simpler constructor API for ActiveRecord database adapters.
- *Kyle Thompson*
+ Previously the adapter had to know how to build a new raw connection to
+ support reconnect, but also expected to be passed an initial already-
+ established connection.
-* Disallow calling `connected_to` on subclasses of `ActiveRecord::Base`.
+ When manually creating an adapter instance, it will now accept a single
+ config hash, and only establish the real connection on demand.
- Behavior has not changed here but the previous API could be misleading to people who thought it would switch connections for only that class. `connected_to` switches the context from which we are getting connections, not the connections themselves.
+ *Matthew Draper*
- *Eileen M. Uchitelle*, *John Crepezzi*
+* Avoid redundant `SELECT 1` connection-validation query during DB pool
+ checkout when possible.
-* Add support for horizontal sharding to `connects_to` and `connected_to`.
+ If the first query run during a request is known to be idempotent, it can be
+ used directly to validate the connection, saving a network round-trip.
- Applications can now connect to multiple shards and switch between their shards in an application. Note that the shard swapping is still a manual process as this change does not include an API for automatic shard swapping.
+ *Matthew Draper*
- Usage:
+* Automatically reconnect broken database connections when safe, even
+ mid-request.
- Given the following configuration:
+ When an error occurs while attempting to run a known-idempotent query, and
+ not inside a transaction, it is safe to immediately reconnect to the
+ database server and try again, so this is now the default behavior.
- ```yaml
- # config/database.yml
- production:
- primary:
- database: my_database
- primary_shard_one:
- database: my_database_shard_one
- ```
+ This new default should always be safe -- to support that, it's consciously
+ conservative about which queries are considered idempotent -- but if
+ necessary it can be disabled by setting the `connection_retries` connection
+ option to `0`.
- Connect to multiple shards:
+ *Matthew Draper*
- ```ruby
- class ApplicationRecord < ActiveRecord::Base
- self.abstract_class = true
+* Avoid removing a PostgreSQL extension when there are dependent objects.
- connects_to shards: {
- default: { writing: :primary },
- shard_one: { writing: :primary_shard_one }
- }
- ```
+ Previously, removing an extension also implicitly removed dependent objects. Now, this will raise an error.
- Swap between shards in your controller / model code:
+ You can force removing the extension:
```ruby
- ActiveRecord::Base.connected_to(shard: :shard_one) do
- # Read from shard one
- end
+ disable_extension :citext, force: :cascade
```
- The horizontal sharding API also supports read replicas. See guides for more details.
+ Fixes #29091.
- *Eileen M. Uchitelle*, *John Crepezzi*
+ *fatkodima*
-* Deprecate `spec_name` in favor of `name` on database configurations.
+* Allow nested functions as a safe SQL string
- The accessors for `spec_name` on `configs_for` and `DatabaseConfig` are deprecated. Please use `name` instead.
+ *Michael Siegfried*
- Deprecated behavior:
+* Allow `destroy_association_async_job=` to be configured with a class string instead of a constant.
- ```ruby
- db_config = ActiveRecord::Base.configurations.configs_for(env_name: "development", spec_name: "primary")
- db_config.spec_name
- ```
+ Defers an autoloading dependency between `ActiveRecord::Base` and `ActiveJob::Base`
+ and moves the configuration of `ActiveRecord::DestroyAssociationAsyncJob`
+ from ActiveJob to ActiveRecord.
- New behavior:
+ Deprecates `ActiveRecord::ActiveJobRequiredError` and now raises a `NameError`
+ if the job class is unloadable or an `ActiveRecord::ConfigurationError` if
+ `dependent: :destroy_async` is declared on an association but there is no job
+ class configured.
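+
+ For example (the job class name here is hypothetical):
+
+ ```ruby
+ # config/application.rb
+ # The job can now be given as a string, so the class is only
+ # resolved (and autoloaded) when it is first needed.
+ config.active_record.destroy_association_async_job = "CustomDestroyAssociationJob"
+ ```
+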
- ```ruby
- db_config = ActiveRecord::Base.configurations.configs_for(env_name: "development", name: "primary")
- db_config.name
- ```
+ *Ben Sheldon*
- *Eileen M. Uchitelle*
+* Fix `ActiveRecord::Store` to serialize as a regular Hash
+
+ Previously it would serialize as an `ActiveSupport::HashWithIndifferentAccess`
+ which is wasteful and causes problems with YAML safe_load.
-* Add additional database-specific rake tasks for multi-database users.
+ *Jean Boussier*
- Previously, `rails db:create`, `rails db:drop`, and `rails db:migrate` were the only rails tasks that could operate on a single
- database. For example:
+* Add `timestamptz` as a time zone aware type for PostgreSQL
- ```
- rails db:create
- rails db:create:primary
- rails db:create:animals
- rails db:drop
- rails db:drop:primary
- rails db:drop:animals
- rails db:migrate
- rails db:migrate:primary
- rails db:migrate:animals
- ```
+ This is required for correctly parsing `timestamp with time zone` values in your database.
- With these changes, `rails db:schema:dump`, `rails db:schema:load`, `rails db:structure:dump`, `rails db:structure:load` and
- `rails db:test:prepare` can additionally operate on a single database. For example:
+ If you don't want this, you can opt out by adding this initializer:
- ```
- rails db:schema:dump
- rails db:schema:dump:primary
- rails db:schema:dump:animals
- rails db:schema:load
- rails db:schema:load:primary
- rails db:schema:load:animals
- rails db:structure:dump
- rails db:structure:dump:primary
- rails db:structure:dump:animals
- rails db:structure:load
- rails db:structure:load:primary
- rails db:structure:load:animals
- rails db:test:prepare
- rails db:test:prepare:primary
- rails db:test:prepare:animals
+ ```ruby
+ ActiveRecord::Base.time_zone_aware_types -= [:timestamptz]
```
- *Kyle Thompson*
+ *Alex Ghiculescu*
-* Add support for `strict_loading` mode on association declarations.
+* Add new `ActiveRecord::Base.generates_token_for` API.
- Raise an error if attempting to load a record from an association that has been marked as `strict_loading` unless it was explicitly eager loaded.
+ Currently, `signed_id` fulfills the role of generating tokens for e.g.
+ resetting a password. However, signed IDs cannot reflect record state, so
+ if a token is intended to be single-use, it must be tracked in a database at
+ least until it expires.
- Usage:
+ With `generates_token_for`, a token can embed data from a record. When
+ using the token to fetch the record, the data from the token and the current
+ data from the record will be compared. If the two do not match, the token
+ will be treated as invalid, the same as if it had expired. For example:
```ruby
- class Developer < ApplicationRecord
- has_many :projects, strict_loading: true
+ class User < ActiveRecord::Base
+ has_secure_password
+
+ generates_token_for :password_reset, expires_in: 15.minutes do
+ # A password's BCrypt salt changes when the password is updated.
+ # By embedding (part of) the salt in a token, the token will
+ # expire when the password is updated.
+ BCrypt::Password.new(password_digest).salt[-10..]
+ end
end
- dev = Developer.first
- dev.projects.first
- # => ActiveRecord::StrictLoadingViolationError: The projects association is marked as strict_loading and cannot be lazily loaded.
+ user = User.first
+ token = user.generate_token_for(:password_reset)
+
+ User.find_by_token_for(:password_reset, token) # => user
+
+ user.update!(password: "new password")
+ User.find_by_token_for(:password_reset, token) # => nil
```
- *Kevin Deisz*
+ *Jonathan Hefner*
-* Add support for `strict_loading` mode to prevent lazy loading of records.
+* Optimize Active Record batching for whole table iterations.
- Raise an error if a parent record is marked as `strict_loading` and attempts to lazily load its associations. This is useful for finding places you may want to preload an association and avoid additional queries.
+ Previously, `in_batches` got all the ids and constructed an `IN`-based query for each batch.
+ When iterating over whole tables, this approach is not optimal, as it loads unneeded ids, and
+ `IN` queries with lots of items are slow.
- Usage:
+ Now, whole table iterations use range iteration (`id >= x AND id <= y`) by default which can make iteration
+ several times faster. E.g., tested on a PostgreSQL table with 10 million records: querying (`253s` vs `30s`),
+ updating (`288s` vs `124s`), deleting (`268s` vs `83s`).
+
+ Only whole table iterations use this style of iteration by default. You can disable this behavior by passing `use_ranges: false`.
+ If you iterate over the table and the only condition is, e.g., `archived_at: nil` (and only a tiny fraction
+ of the records are archived), it makes sense to opt in to this approach:
```ruby
- dev = Developer.strict_loading.first
- dev.audit_logs.to_a
- # => ActiveRecord::StrictLoadingViolationError: Developer is marked as strict_loading and AuditLog cannot be lazily loaded.
+ Project.where(archived_at: nil).in_batches(use_ranges: true) do |relation|
+ # do something
+ end
```
- *Eileen M. Uchitelle*, *Aaron Patterson*
+ See #45414 for more details.
-* Add support for PostgreSQL 11+ partitioned indexes when using `upsert_all`.
+ *fatkodima*
- *Sebastián Palma*
+* `.with` query method added. Construct common table expressions with ease and get `ActiveRecord::Relation` back.
-* Adds support for `if_not_exists` to `add_column` and `if_exists` to `remove_column`.
+ ```ruby
+ Post.with(posts_with_comments: Post.where("comments_count > ?", 0))
+ # => ActiveRecord::Relation
+ # WITH posts_with_comments AS (SELECT * FROM posts WHERE (comments_count > 0)) SELECT * FROM posts
+ ```
- Applications can set their migrations to ignore exceptions raised when adding a column that already exists or when removing a column that does not exist.
+ *Vlado Cingel*
- Example Usage:
+* Don't establish a new connection if an identical pool exists already.
- ```ruby
- class AddColumnTitle < ActiveRecord::Migration[6.1]
- def change
- add_column :posts, :title, :string, if_not_exists: true
- end
- end
- ```
+ Previously, if `establish_connection` was called on a class that already had an established connection, the existing connection would be removed regardless of whether it was the same config. Now if a pool is found with the same values as the new connection, the existing connection will be returned instead of creating a new one.
- ```ruby
- class RemoveColumnTitle < ActiveRecord::Migration[6.1]
- def change
- remove_column :posts, :title, if_exists: true
- end
- end
- ```
+ This has a slight change in behavior if application code is depending on a new connection being established regardless of whether it's identical to an existing connection. If the old behavior is desirable, applications should call `ActiveRecord::Base#remove_connection` before establishing a new one. Calling `establish_connection` with a different config works the same way as it did previously.
*Eileen M. Uchitelle*
-* Regexp-escape table name for MS SQL Server.
+* Update `db:prepare` task to load schema when an uninitialized database exists, and dump schema after migrations.
- Add `Regexp.escape` to one method in ActiveRecord, so that table names with regular expression characters in them work as expected. Since MS SQL Server uses "[" and "]" to quote table and column names, and those characters are regular expression characters, methods like `pluck` and `select` fail in certain cases when used with the MS SQL Server adapter.
+ *Ben Sheldon*
- *Larry Reid*
+* Fix supporting timezone awareness for `tsrange` and `tstzrange` array columns.
-* Store advisory locks on their own named connection.
+ ```ruby
+ # In database migrations
+ add_column :shops, :open_hours, :tsrange, array: true
+ # In app config
+ ActiveRecord::Base.time_zone_aware_types += [:tsrange]
+ # In the code times are properly converted to app time zone
+ Shop.create!(open_hours: [Time.current..8.hour.from_now])
+ ```
- Previously advisory locks were taken out against a connection when a migration started. This works fine in single database applications but doesn't work well when migrations need to open new connections which results in the lock getting dropped.
+ *Wojciech Wnętrzak*
- In order to fix this we are storing the advisory lock on a new connection with the connection specification name `AdvisoryLockBase`. The caveat is that we need to maintain at least 2 connections to a database while migrations are running in order to do this.
+* Introduce strategy pattern for executing migrations.
- *Eileen M. Uchitelle*, *John Crepezzi*
+ By default, migrations will use a strategy object that delegates the method
+ to the connection adapter. Consumers can implement custom strategy objects
+ to change how their migrations run.
-* Allow schema cache path to be defined in the database configuration file.
+ *Adrianna Chang*
- For example:
+* Add adapter option disallowing foreign keys
+
+ This adds a new option to `database.yml` which enables skipping
+ foreign key constraints even if the underlying database supports them.
+ Usage:
```yaml
development:
- adapter: postgresql
- database: blog_development
- pool: 5
- schema_cache_path: tmp/schema/main.yml
+ <<: *default
+ database: storage/development.sqlite3
+ foreign_keys: false
```
- *Katrina Owen*
+ *Paulo Barros*
-* Deprecate `#remove_connection` in favor of `#remove_connection_pool` when called on the handler.
+* Add configurable deprecation warning for singular associations
- `#remove_connection` is deprecated in order to support returning a `DatabaseConfig` object instead of a `Hash`. Use `#remove_connection_pool`, `#remove_connection` will be removed in Rails 7.0.
+ This adds a deprecation warning when using the plural name of a singular association in `where`.
+ It is possible to opt into the new, more performant behavior with `config.active_record.allow_deprecated_singular_associations_name = false`.
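+
+ An illustrative sketch (the model and association names are hypothetical):
+
+ ```ruby
+ class Post < ApplicationRecord
+   has_one :comment
+ end
+
+ Post.where(comments: { body: "hi" }) # plural name: deprecation warning
+ Post.where(comment: { body: "hi" })  # singular name: preferred
+ ```
+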
- *Eileen M. Uchitelle*, *John Crepezzi*
+ *Adam Hess*
-* Deprecate `#default_hash` and it's alias `#[]` on database configurations.
+* Run transactional callbacks on the freshest instance to save a given
+ record within a transaction.
- Applications should use `configs_for`. `#default_hash` and `#[]` will be removed in Rails 7.0.
+ When multiple Active Record instances change the same record within a
+ transaction, Rails runs `after_commit` or `after_rollback` callbacks for
+ only one of them. `config.active_record.run_commit_callbacks_on_first_saved_instances_in_transaction`
+ was added to specify how Rails chooses which instance receives the
+ callbacks. The framework defaults were changed to use the new logic.
- *Eileen M. Uchitelle*, *John Crepezzi*
+ When `config.active_record.run_commit_callbacks_on_first_saved_instances_in_transaction`
+ is `true`, transactional callbacks are run on the first instance to save,
+ even though its instance state may be stale.
-* Add scale support to `ActiveRecord::Validations::NumericalityValidator`.
+ When it is `false`, which is the new framework default starting with version
+ 7.1, transactional callbacks are run on the instances with the freshest
+ instance state. Those instances are chosen as follows:
- *Gannon McGibbon*
+ - In general, run transactional callbacks on the last instance to save a
+ given record within the transaction.
+ - There are two exceptions:
+ - If the record is created within the transaction, then updated by
+ another instance, `after_create_commit` callbacks will be run on the
+ second instance. This is instead of the `after_update_commit`
+ callbacks that would naively be run based on that instance’s state.
+ - If the record is destroyed within the transaction, then
+ `after_destroy_commit` callbacks will be fired on the last destroyed
+ instance, even if a stale instance subsequently performed an update
+ (which will have affected 0 rows).
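+
+ The behavior is controlled by a single setting, for example:
+
+ ```ruby
+ # config/application.rb
+ # false (the 7.1 framework default) runs transactional callbacks on the
+ # instance with the freshest state; true restores the legacy behavior.
+ config.active_record.run_commit_callbacks_on_first_saved_instances_in_transaction = false
+ ```
+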
-* Find orphans by looking for missing relations through chaining `where.missing`:
+ *Cameron Bothner and Mitch Vollebregt*
- Before:
+* Enable strict strings mode for `SQLite3Adapter`.
+
+ Configures SQLite with a strict strings mode, which disables double-quoted string literals.
+
+ SQLite has some quirks around double-quoted string literals.
+ It first tries to consider double-quoted strings as identifier names, but if they don't exist
+ it then considers them as string literals. Because of this, typos can silently go unnoticed.
+ For example, it is possible to create an index for a non-existent column.
+ See [SQLite documentation](https://www.sqlite.org/quirks.html#double_quoted_string_literals_are_accepted) for more details.
+
+ If you don't want this behavior, you can disable it via:
```ruby
- Post.left_joins(:author).where(authors: { id: nil })
+ # config/application.rb
+ config.active_record.sqlite3_adapter_strict_strings_by_default = false
```
- After:
+ Fixes #27782.
+
+ *fatkodima*, *Jean Boussier*
+
+* Resolve issue where a relation cache_version could be left stale.
+
+ Previously, when `reset` was called on a relation object, it did not reset the cache_versions
+ ivar. This led to a confusing situation where, despite having the correct data, the relation
+ still reported a stale cache_version.
+
+ Usage:
```ruby
- Post.where.missing(:author)
+ developers = Developer.all
+ developers.cache_version
+
+ Developer.update_all(updated_at: Time.now.utc + 1.second)
+
+ developers.cache_version # Stale cache_version
+ developers.reset
+ developers.cache_version # Returns the current correct cache_version
```
- *Tom Rossi*
+ Fixes #45341.
-* Ensure `:reading` connections always raise if a write is attempted.
+ *Austen Madden*
- Now Rails will raise an `ActiveRecord::ReadOnlyError` if any connection on the reading handler attempts to make a write. If your reading role needs to write you should name the role something other than `:reading`.
+* Add support for exclusion constraints (PostgreSQL-only).
- *Eileen M. Uchitelle*
+ ```ruby
+ add_exclusion_constraint :invoices, "daterange(start_date, end_date) WITH &&", using: :gist, name: "invoices_date_overlap"
+ remove_exclusion_constraint :invoices, name: "invoices_date_overlap"
+ ```
-* Deprecate `"primary"` as the `connection_specification_name` for `ActiveRecord::Base`.
+ See PostgreSQL's [`CREATE TABLE ... EXCLUDE ...`](https://www.postgresql.org/docs/12/sql-createtable.html#SQL-CREATETABLE-EXCLUDE) documentation for more on exclusion constraints.
- `"primary"` has been deprecated as the `connection_specification_name` for `ActiveRecord::Base` in favor of using `"ActiveRecord::Base"`. This change affects calls to `ActiveRecord::Base.connection_handler.retrieve_connection` and `ActiveRecord::Base.connection_handler.remove_connection`. If you're calling these methods with `"primary"`, please switch to `"ActiveRecord::Base"`.
+ *Alex Robbin*
- *Eileen M. Uchitelle*, *John Crepezzi*
+* `change_column_null` raises if a non-boolean argument is provided
-* Add `ActiveRecord::Validations::NumericalityValidator` with
- support for casting floats using a database columns' precision value.
+ Previously, if you provided a non-boolean argument, `change_column_null` would
+ treat it as truthy and make your column nullable. This could be surprising, so now
+ the input must be either `true` or `false`.
- *Gannon McGibbon*
+ ```ruby
+ change_column_null :table, :column, true # good
+ change_column_null :table, :column, false # good
+ change_column_null :table, :column, from: true, to: false # raises (previously this made the column nullable)
+ ```
+
+ *Alex Ghiculescu*
-* Enforce fresh ETag header after a collection's contents change by adding
- ActiveRecord::Relation#cache_key_with_version. This method will be used by
- ActionController::ConditionalGet to ensure that when collection cache versioning
- is enabled, requests using ConditionalGet don't return the same ETag header
- after a collection is modified.
+* Enforce limit on table names length.
- Fixes #38078.
+ Fixes #45130.
- *Aaron Lipman*
+ *fatkodima*
-* Skip test database when running `db:create` or `db:drop` in development
- with `DATABASE_URL` set.
+* Adjust the minimum MariaDB version for check constraints support.
- *Brian Buchalter*
+ *Eddie Lebow*
-* Don't allow mutations on the database configurations hash.
+* Fix Hstore deserialize regression.
- Freeze the configurations hash to disallow directly changing it. If applications need to change the hash, for example to create databases for parallelization, they should use the `DatabaseConfig` object directly.
+ *edsharp*
- Before:
+* Add validity for PostgreSQL indexes.
```ruby
- @db_config = ActiveRecord::Base.configurations.configs_for(env_name: "test", spec_name: "primary")
- @db_config.configuration_hash.merge!(idle_timeout: "0.02")
+ connection.index_exists?(:users, :email, valid: true)
+ connection.indexes(:users).select(&:valid?)
```
- After:
+ *fatkodima*
+
+* Fix eager loading for models without primary keys.
+
+ *Anmol Chopra*, *Matt Lawrence*, and *Jonathan Hefner*
+
+* Avoid validating a unique field if it has not changed and is backed by a unique index.
+
+ Previously, when saving a record, Active Record would perform an extra query to check for the
+ uniqueness of each attribute having a `uniqueness` validation, even if that attribute hasn't changed.
+ If the database has the corresponding unique index, then this validation can never fail for persisted
+ records, so it can safely be skipped.
+
+ *fatkodima*
+
+* Stop setting `sql_auto_is_null`
+
+ Since version 5.5 the default has been off, so we no longer have to turn it off manually.
+
+ *Adam Hess*
+
+* Fix `touch` to raise an error for readonly columns.
+
+ *fatkodima*
+
+* Add ability to ignore tables by regexp for SQL schema dumps.
```ruby
- @db_config = ActiveRecord::Base.configurations.configs_for(env_name: "test", spec_name: "primary")
- config = @db_config.configuration_hash.merge(idle_timeout: "0.02")
- db_config = ActiveRecord::DatabaseConfigurations::HashConfig.new(@db_config.env_name, @db_config.spec_name, config)
+ ActiveRecord::SchemaDumper.ignore_tables = [/^_/]
```
- *Eileen M. Uchitelle*, *John Crepezzi*
-
-* Remove `:connection_id` from the `sql.active_record` notification.
+ *fatkodima*
- *Aaron Patterson*, *Rafael Mendonça França*
+* Avoid queries when performing calculations on contradictory relations.
-* The `:name` key will no longer be returned as part of `DatabaseConfig#configuration_hash`. Please use `DatabaseConfig#owner_name` instead.
+ Previously calculations would make a query even when passed a
+ contradiction, such as `User.where(id: []).count`. We no longer perform a
+ query in that scenario.
- *Eileen M. Uchitelle*, *John Crepezzi*
+ This applies to the following calculations: `count`, `sum`, `average`,
+ `minimum`, and `maximum`.
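+
+ For example (assuming a `User` model):
+
+ ```ruby
+ User.where(id: []).count     # => 0, returned without hitting the database
+ User.where(id: []).sum(:age) # => 0, no query either
+ ```
+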
-* ActiveRecord's `belongs_to_required_by_default` flag can now be set per model.
+ *Luan Vieira, John Hawthorn and Daniel Colson*
- You can now opt-out/opt-in specific models from having their associations required
- by default.
+* Allow using aliased attributes with `insert_all`/`upsert_all`.
- This change is meant to ease the process of migrating all your models to have
- their association required.
+ ```ruby
+ class Book < ApplicationRecord
+ alias_attribute :title, :name
+ end
- *Edouard Chin*
+ Book.insert_all [{ title: "Remote", author_id: 1 }], returning: :title
+ ```
-* The `connection_config` method has been deprecated, please use `connection_db_config` instead which will return a `DatabaseConfigurations::DatabaseConfig` instead of a `Hash`.
+ *fatkodima*
- *Eileen M. Uchitelle*, *John Crepezzi*
+* Support encrypted attributes on columns with default db values.
-* Retain explicit selections on the base model after applying `includes` and `joins`.
+ This adds support for encrypted attributes defined on columns with default values.
+ It will encrypt those values at creation time. Before, it would raise an
+ error unless `config.active_record.encryption.support_unencrypted_data` was true.
- Resolves #34889.
+ *Jorge Manrubia* and *Dima Fatko*
- *Patrick Rebsch*
+* Allow overriding `reading_request?` in `DatabaseSelector::Resolver`
-* The `database` kwarg is deprecated without replacement because it can't be used for sharding and creates an issue if it's used during a request. Applications that need to create new connections should use `connects_to` instead.
+ The default implementation checks if a request is a `get?` or `head?`,
+ but you can now change it to anything you like. If the method returns true,
+ `Resolver#read` gets called meaning the request could be served by the
+ replica database.
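+
+ A sketch of a custom resolver; the subclass name and the path check are
+ illustrative, and the override mirrors the default `request.get? || request.head?` check:
+
+ ```ruby
+ class ReplicaResolver < ActiveRecord::Middleware::DatabaseSelector::Resolver
+   def reading_request?(request)
+     # Route admin traffic to the primary, even for GET requests.
+     request.get? && !request.path.start_with?("/admin")
+   end
+ end
+
+ # config/application.rb
+ config.active_record.database_selector = { delay: 2.seconds }
+ config.active_record.database_resolver = ReplicaResolver
+ ```
+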
- *Eileen M. Uchitelle*, *John Crepezzi*
+ *Alex Ghiculescu*
-* Allow attributes to be fetched from Arel node groupings.
+* Remove `ActiveRecord.legacy_connection_handling`.
- *Jeff Emminger*, *Gannon McGibbon*
+ *Eileen M. Uchitelle*
-* A database URL can now contain a querystring value that contains an equal sign. This is needed to support passing PostgreSQL `options`.
+* `rails db:schema:{dump,load}` now checks `ENV["SCHEMA_FORMAT"]` before config
- *Joshua Flanagan*
+ Since `rails db:structure:{dump,load}` was deprecated there wasn't a simple
+ way to dump a schema to both SQL and Ruby formats. You can now do this with
+ an environment variable. For example:
-* Calling methods like `establish_connection` with a `Hash` which is invalid (eg: no `adapter`) will now raise an error the same way as connections defined in `config/database.yml`.
+ ```
+ SCHEMA_FORMAT=sql rake db:schema:dump
+ ```
- *John Crepezzi*
+ *Alex Ghiculescu*
-* Specifying `implicit_order_column` now subsorts the records by primary key if available to ensure deterministic results.
+* Fixed MariaDB default function support.
- *Paweł Urbanek*
+ Defaults would be written incorrectly in `db/schema.rb` and would not work
+ if using `db:schema:load`. Furthermore, the function name would be
+ added as string content when saving new records.
-* `where(attr => [])` now loads an empty result without making a query.
+ *kaspernj*
- *John Hawthorn*
+* Add `active_record.destroy_association_async_batch_size` configuration
-* Fixed the performance regression for `primary_keys` introduced MySQL 8.0.
+ This allows applications to specify the maximum number of records that will
+ be destroyed in a single background job by the `dependent: :destroy_async`
+ association option. By default, the current behavior will remain the same:
+ when a parent record is destroyed, all dependent records will be destroyed
+ in a single background job. If the number of dependent records is greater
+ than this configuration, the records will be destroyed in multiple
+ background jobs.
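+
+ For example (the value here is only illustrative):
+
+ ```ruby
+ # config/application.rb
+ # Destroy at most 1000 dependent records per background job.
+ config.active_record.destroy_association_async_batch_size = 1000
+ ```
+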
- *Hiroyuki Ishii*
+ *Nick Holden*
-* Add support for `belongs_to` to `has_many` inversing.
+* Fix `remove_foreign_key` with `:if_exists` option when foreign key actually exists.
- *Gannon McGibbon*
+ *fatkodima*
-* Allow length configuration for `has_secure_token` method. The minimum length
- is set at 24 characters.
+* Remove `--no-comments` flag in structure dumps for PostgreSQL
- Before:
+ This broke some apps that used custom schema comments. If you don't want
+ comments in your structure dump, you can use:
```ruby
- has_secure_token :auth_token
+ ActiveRecord::Tasks::DatabaseTasks.structure_dump_flags = ['--no-comments']
```
- After:
+ *Alex Ghiculescu*
- ```ruby
- has_secure_token :default_token # 24 characters
- has_secure_token :auth_token, length: 36 # 36 characters
- has_secure_token :invalid_token, length: 12 # => ActiveRecord::SecureToken::MinimumLengthError
- ```
+* Reduce the memory footprint of fixtures accessors.
- *Bernardo de Araujo*
+ Until now fixtures accessors were eagerly defined using `define_method`.
+ So the memory usage was directly dependent on the number of fixtures and
+ test suites.
-* Deprecate `DatabaseConfigurations#to_h`. These connection hashes are still available via `ActiveRecord::Base.configurations.configs_for`.
+ Instead fixtures accessors are now implemented with `method_missing`,
+ so they incur much less memory and CPU overhead.
- *Eileen Uchitelle*, *John Crepezzi*
+ *Jean Boussier*
-* Add `DatabaseConfig#configuration_hash` to return database configuration hashes with symbol keys, and use all symbol-key configuration hashes internally. Deprecate `DatabaseConfig#config` which returns a String-keyed `Hash` with the same values.
+* Fix `config.active_record.destroy_association_async_job` configuration
- *John Crepezzi*, *Eileen Uchitelle*
+ `config.active_record.destroy_association_async_job` should allow
+ applications to specify the job that will be used to destroy associated
+ records in the background for `has_many` associations with the
+ `dependent: :destroy_async` option. Previously, that was ignored, which
+ meant the default `ActiveRecord::DestroyAssociationAsyncJob` always
+ destroyed records in the background.
-* Allow column names to be passed to `remove_index` positionally along with other options.
+ *Nick Holden*
- Passing other options can be necessary to make `remove_index` correctly reversible.
+* Fix `change_column_comment` to preserve column's AUTO_INCREMENT in the MySQL adapter
- Before:
+ *fatkodima*
- add_index :reports, :report_id # => works
- add_index :reports, :report_id, unique: true # => works
- remove_index :reports, :report_id # => works
- remove_index :reports, :report_id, unique: true # => ArgumentError
+* Fix quoting of `ActiveSupport::Duration` and `Rational` numbers in the MySQL adapter.
- After:
+ *Kevin McPhillips*
- remove_index :reports, :report_id, unique: true # => works
+* Allow a column name with COLLATE (e.g., title COLLATE "C") as a safe SQL string
- *Eugene Kenny*
+ *Shugo Maeda*
-* Allow bulk `ALTER` statements to drop and recreate indexes with the same name.
+* Permit underscores in the VERSION argument to database rake tasks.
- *Eugene Kenny*
+ *Eddie Lebow*
-* `insert`, `insert_all`, `upsert`, and `upsert_all` now clear the query cache.
+* Reversed the order of `INSERT` statements in `structure.sql` dumps
- *Eugene Kenny*
+ This should decrease the likelihood of merge conflicts. New migrations
+ will now be added at the top of the list.
-* Call `while_preventing_writes` directly from `connected_to`.
+ For existing apps, there will be a large diff the next time `structure.sql`
+ is generated.
- In some cases application authors want to use the database switching middleware and make explicit calls with `connected_to`. It's possible for an app to turn off writes and not turn them back on by the time we call `connected_to(role: :writing)`.
+ *Alex Ghiculescu*, *Matt Larraz*
- This change allows apps to fix this by assuming if a role is writing we want to allow writes, except in the case it's explicitly turned off.
+* Fix PG.connect keyword arguments deprecation warning on Ruby 2.7
- *Eileen M. Uchitelle*
+ Fixes #44307.
+
+ *Nikita Vasilevsky*
-* Improve detection of ActiveRecord::StatementTimeout with mysql2 adapter in the edge case when the query is terminated during filesort.
+* Fix dropping DB connections after serialization failures and deadlocks.
- *Kir Shatrov*
+ Prior to 6.1.4, serialization failures and deadlocks caused rollbacks to be
+ issued for both real transactions and savepoints. This breaks MySQL which
+ disallows rollbacks of savepoints following a deadlock.
-* Stop trying to read yaml file fixtures when loading Active Record fixtures.
+ 6.1.4 removed these rollbacks, for both transactions and savepoints, causing
+ the DB connection to be left in an unknown state and thus discarded.
- *Gannon McGibbon*
+ These rollbacks are now restored, except for savepoints on MySQL.
-* Deprecate `.reorder(nil)` with `.first` / `.first!` taking non-deterministic result.
+ *Thomas Morgan*
- To continue taking non-deterministic result, use `.take` / `.take!` instead.
+* Make `ActiveRecord::ConnectionPool` Fiber-safe
- *Ryuta Kamizono*
+ When `ActiveSupport::IsolatedExecutionState.isolation_level` is set to `:fiber`,
+ the connection pool now supports multiple Fibers from the same Thread checking
+ out connections from the pool.
-* Preserve user supplied joins order as much as possible.
+ *Alex Matchneer*
- Fixes #36761, #34328, #24281, #12953.
+* Add `update_attribute!` to `ActiveRecord::Persistence`
- *Ryuta Kamizono*
+ Similar to `update_attribute`, but raises `ActiveRecord::RecordNotSaved` when a `before_*` callback throws `:abort`.
-* Allow `matches_regex` and `does_not_match_regexp` on the MySQL Arel visitor.
+ ```ruby
+ class Topic < ActiveRecord::Base
+ before_save :check_title
- *James Pearson*
+ def check_title
+ throw(:abort) if title == "abort"
+ end
+ end
-* Allow specifying fixtures to be ignored by setting `ignore` in YAML file's '_fixture' section.
+ topic = Topic.create(title: "Test Title")
+ # #=> #<Topic title: "Test Title">
+ topic.update_attribute!(:title, "Another Title")
+ # #=> #<Topic title: "Another Title">
+ topic.update_attribute!(:title, "abort")
+ # raises ActiveRecord::RecordNotSaved
+ ```
- *Tongfei Gao*
+ *Drew Tempelmeyer*
-* Make the DATABASE_URL env variable only affect the primary connection. Add new env variables for multiple databases.
+* Avoid loading every record in `ActiveRecord::Relation#pretty_print`
- *John Crepezzi*, *Eileen Uchitelle*
+ ```ruby
+ # Before
+ pp Foo.all # Loads the whole table.
-* Add a warning for enum elements with 'not_' prefix.
+ # After
+ pp Foo.all # Shows 10 items and an ellipsis.
+ ```
- class Foo
- enum status: [:sent, :not_sent]
- end
+ *Ulysse Buonomo*
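
The truncation idea above can be sketched with a toy collection that renders only its first few entries plus an ellipsis. `Truncating` and its limit are made-up names, not `Relation#pretty_print`'s actual implementation (which hooks into PP and shows 10 items):

```ruby
# Toy collection that never renders more than LIMIT entries when inspected,
# avoiding realizing the whole collection in output. Sketch only.
class Truncating
  LIMIT = 3 # Relation uses a larger limit; 3 keeps the demo short

  def initialize(items)
    @items = items
  end

  def inspect
    shown = @items.first(LIMIT).map(&:inspect)
    shown << "..." if @items.size > LIMIT
    "[#{shown.join(", ")}]"
  end
end

puts Truncating.new((1..10).to_a).inspect # => [1, 2, 3, ...]
puts Truncating.new([1, 2]).inspect       # => [1, 2]
```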
- *Edu Depetris*
+* Change `QueryMethods#in_order_of` to drop records not listed in values.
-* Make currency symbols optional for money column type in PostgreSQL.
+ `in_order_of` now filters down to the values provided, to match the behavior of the `Enumerable` version.
- *Joel Schneider*
+ *Kevin Newton*
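
The filtering semantics described above can be approximated in plain Ruby: records whose attribute is not in the given values are dropped, and the rest come back in the order the values were listed. This is a sketch of the behavior, not Active Record's implementation:

```ruby
# Approximates the post-change semantics of in_order_of(:attr, values).
Record = Struct.new(:id, :status)

records = [
  Record.new(1, "shipped"),
  Record.new(2, "pending"),
  Record.new(3, "draft"),
]

def in_order_of(records, attr, values)
  by_value = records.group_by { |r| r.public_send(attr) }
  # Records with values not listed (here, "draft") are filtered out.
  values.flat_map { |v| by_value.fetch(v, []) }
end

ordered = in_order_of(records, :status, ["pending", "shipped"])
p ordered.map(&:id) # => [2, 1]
```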
-* Add support for beginless ranges, introduced in Ruby 2.7.
+* Allow named expression indexes to be revertible.
- *Josh Goodall*
+ Previously, the following code would raise an error in a reversible migration executed while rolling back, due to the index name not being used in the index removal.
-* Add `database_exists?` method to connection adapters to check if a database exists.
+ ```ruby
+ add_index(:settings, "(data->'property')", using: :gin, name: :index_settings_data_property)
+ ```
- *Guilherme Mansur*
+ Fixes #43331.
-* Loading the schema for a model that has no `table_name` raises a `TableNotSpecified` error.
+ *Oliver Günther*
- *Guilherme Mansur*, *Eugene Kenny*
+* Fix incorrect argument in PostgreSQL structure dump tasks.
-* PostgreSQL: Fix GROUP BY with ORDER BY virtual count attribute.
+ Updating the `--no-comment` argument added in Rails 7 to the correct `--no-comments` argument.
- Fixes #36022.
+ *Alex Dent*
- *Ryuta Kamizono*
+* Fix migration compatibility to create SQLite references/belongs_to column as integer when migration version is 6.0.
-* Make ActiveRecord `ConnectionPool.connections` method thread-safe.
+ Reference/belongs_to in migrations with version 6.0 were creating columns as
+ bigint instead of integer for the SQLite Adapter.
+
+ *Marcelo Lauxen*
+
+* Fix `QueryMethods#in_order_of` to handle empty order list.
- Fixes #36465.
+ ```ruby
+ Post.in_order_of(:id, []).to_a
+ ```
- *Jeff Doering*
+ Also more explicitly set the column as secondary order, so that any other
+ value is still ordered.
+
+ *Jean Boussier*
-* Add support for multiple databases to `rails db:abort_if_pending_migrations`.
+* Fix quoting of column aliases generated by calculation methods.
- *Mark Lee*
+ Since the alias is derived from the table name, we can't assume the result
+ is a valid identifier.
-* Fix sqlite3 collation parsing when using decimal columns.
+ ```ruby
+ class Test < ActiveRecord::Base
+ self.table_name = '1abc'
+ end
+ Test.group(:id).count
+ # syntax error at or near "1" (ActiveRecord::StatementInvalid)
+ # LINE 1: SELECT COUNT(*) AS count_all, "1abc"."id" AS 1abc_id FROM "1...
+ ```
- *Martin R. Schuster*
+ *Jean Boussier*
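
The fix above quotes the derived alias like any other identifier. A simplified sketch of standard SQL double-quote identifier quoting (the style PostgreSQL and SQLite use; `quote_column_alias` is an illustrative helper, not the adapter's actual method):

```ruby
# Quote a column alias as a SQL identifier so aliases derived from unusual
# table names (e.g. "1abc") stay syntactically valid. Embedded double
# quotes are doubled, per standard SQL. Simplified sketch.
def quote_column_alias(name)
  %Q{"#{name.gsub('"', '""')}"}
end

p quote_column_alias("1abc_id") # => "\"1abc_id\""
p "SELECT COUNT(*) AS count_all, \"1abc\".\"id\" AS #{quote_column_alias('1abc_id')}"
```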
-* Fix invalid schema when primary key column has a comment.
+* Add `authenticate_by` when using `has_secure_password`.
- Fixes #29966.
+ `authenticate_by` is intended to replace code like the following, which
+ returns early when a user with a matching email is not found:
- *Guilherme Goettems Schneider*
+ ```ruby
+ User.find_by(email: "...")&.authenticate("...")
+ ```
-* Fix table comment also being applied to the primary key column.
+ Such code is vulnerable to timing-based enumeration attacks, wherein an
+ attacker can determine if a user account with a given email exists. After
+ confirming that an account exists, the attacker can try passwords associated
+ with that email address from other leaked databases, in case the user
+ re-used a password across multiple sites (a common practice). Additionally,
+ knowing an account email address allows the attacker to attempt a targeted
+ phishing ("spear phishing") attack.
- *Guilherme Goettems Schneider*
+ `authenticate_by` addresses the vulnerability by taking the same amount of
+ time regardless of whether a user with a matching email is found:
-* Allow generated `create_table` migrations to include or skip timestamps.
+ ```ruby
+ User.authenticate_by(email: "...", password: "...")
+ ```
- *Michael Duchemin*
+ *Jonathan Hefner*
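
The timing-equalization idea behind `authenticate_by` can be sketched in plain Ruby: when no user matches, compare against a dummy digest anyway, so the found and not-found paths do comparable work. The user store, SHA-256 digests, and `secure_compare` helper below are illustrative only; real applications use `has_secure_password` (bcrypt):

```ruby
require "digest"

# Hypothetical user store: email => password digest.
USERS = { "alice@example.com" => Digest::SHA256.hexdigest("s3cret") }
DUMMY_DIGEST = Digest::SHA256.hexdigest("dummy")

# Constant-time comparison for equal-length digests.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end

def authenticate_by(email:, password:)
  # Unknown emails still pay for a digest comparison, against DUMMY_DIGEST,
  # so response time does not reveal whether the account exists.
  stored = USERS.fetch(email, DUMMY_DIGEST)
  match = secure_compare(stored, Digest::SHA256.hexdigest(password))
  (USERS.key?(email) && match) ? email : nil
end

p authenticate_by(email: "alice@example.com", password: "s3cret")   # => "alice@example.com"
p authenticate_by(email: "missing@example.com", password: "s3cret") # => nil
```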
-Please check [6-0-stable](https://github.com/rails/rails/blob/6-0-stable/activerecord/CHANGELOG.md) for previous changes.
+Please check [7-0-stable](https://github.com/rails/rails/blob/7-0-stable/activerecord/CHANGELOG.md) for previous changes.
diff --git a/activerecord/MIT-LICENSE b/activerecord/MIT-LICENSE
index 508e65ed03..5b86109107 100644
--- a/activerecord/MIT-LICENSE
+++ b/activerecord/MIT-LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) 2004-2022 David Heinemeier Hansson
+Copyright (c) David Heinemeier Hansson
Arel originally copyright (c) 2007-2016 Nick Kallen, Bryan Helmkamp, Emilio Tagua, Aaron Patterson
diff --git a/activerecord/README.rdoc b/activerecord/README.rdoc
index 306982d17b..9d06a11623 100644
--- a/activerecord/README.rdoc
+++ b/activerecord/README.rdoc
@@ -1,4 +1,4 @@
-= Active Record -- Object-relational mapping in Rails
+= Active Record -- Object-relational mapping in \Rails
Active Record connects classes to relational database tables to establish an
almost zero-configuration persistence layer for applications. The library
@@ -13,29 +13,28 @@ columns. Although these mappings can be defined explicitly, it's recommended
to follow naming conventions, especially when getting started with the
library.
-You can read more about Active Record in the {Active Record Basics}[https://edgeguides.rubyonrails.org/active_record_basics.html] guide.
+You can read more about Active Record in the {Active Record Basics}[https://guides.rubyonrails.org/active_record_basics.html] guide.
A short rundown of some of the major features:
* Automated mapping between classes and tables, attributes and columns.
- class Product < ActiveRecord::Base
- end
-
- {Learn more}[link:classes/ActiveRecord/Base.html]
+ class Product < ActiveRecord::Base
+ end
-The Product class is automatically mapped to the table named "products",
-which might look like this:
+ The Product class is automatically mapped to the table named "products",
+ which might look like this:
- CREATE TABLE products (
- id bigint NOT NULL auto_increment,
- name varchar(255),
- PRIMARY KEY (id)
- );
+ CREATE TABLE products (
+ id bigint NOT NULL auto_increment,
+ name varchar(255),
+ PRIMARY KEY (id)
+ );
-This would also define the following accessors: <tt>Product#name</tt> and
-<tt>Product#name=(new_name)</tt>.
+ This would also define the following accessors: <tt>Product#name</tt> and
+ <tt>Product#name=(new_name)</tt>.
+ {Learn more}[link:classes/ActiveRecord/Base.html]
* Associations between objects defined by simple class methods.
@@ -140,7 +139,7 @@ This would also define the following accessors: <tt>Product#name</tt> and
* Database agnostic schema management with Migrations.
- class AddSystemSettings < ActiveRecord::Migration[6.0]
+ class AddSystemSettings < ActiveRecord::Migration[7.1]
def up
create_table :system_settings do |t|
t.string :name
@@ -167,6 +166,7 @@ Active Record is an implementation of the object-relational mapping (ORM)
pattern[https://www.martinfowler.com/eaaCatalog/activeRecord.html] by the same
name described by Martin Fowler:
+>>>
"An object that wraps a row in a database table or view,
encapsulates the database access, and adds domain logic on that data."
@@ -192,7 +192,7 @@ The latest version of Active Record can be installed with RubyGems:
$ gem install activerecord
-Source code can be downloaded as part of the Rails project on GitHub:
+Source code can be downloaded as part of the \Rails project on GitHub:
* https://github.com/rails/rails/tree/main/activerecord
@@ -210,7 +210,7 @@ API documentation is at:
* https://api.rubyonrails.org
-Bug reports for the Ruby on Rails project can be filed here:
+Bug reports for the Ruby on \Rails project can be filed here:
* https://github.com/rails/rails/issues
diff --git a/activerecord/RUNNING_UNIT_TESTS.rdoc b/activerecord/RUNNING_UNIT_TESTS.rdoc
index 37473c37c6..a048463edd 100644
--- a/activerecord/RUNNING_UNIT_TESTS.rdoc
+++ b/activerecord/RUNNING_UNIT_TESTS.rdoc
@@ -21,6 +21,7 @@ example:
Simply executing <tt>bundle exec rake test</tt> is equivalent to the following:
$ bundle exec rake test:mysql2
+ $ bundle exec rake test:trilogy
$ bundle exec rake test:postgresql
$ bundle exec rake test:sqlite3
@@ -33,6 +34,14 @@ There should be tests available for each database backend listed in the {Config
File}[rdoc-label:label-Config+File]. (the exact set of available tests is
defined in +Rakefile+)
+There are some performance tests for the encryption system that can be run with.
+
+ $ rake test:encryption:performance:mysql2
+ $ rake test:encryption:performance:postgresql
+ $ rake test:encryption:performance:sqlite3
+
+These performance tests are not executed as part of the regular testing tasks.
+
== Config File
If +test/config.yml+ is present, then its parameters are obeyed; otherwise, the
diff --git a/activerecord/Rakefile b/activerecord/Rakefile
index ef9ab27813..98039dbff7 100644
--- a/activerecord/Rakefile
+++ b/activerecord/Rakefile
@@ -18,16 +18,16 @@ def run_without_aborting(*tasks)
abort "Errors running #{errors.join(', ')}" if errors.any?
end
-desc "Run mysql2, sqlite, and postgresql tests by default"
+desc "Run mysql2, trilogy, sqlite, and postgresql tests by default"
task default: :test
task :package
-desc "Run mysql2, sqlite, and postgresql tests"
+desc "Run mysql2, trilogy, sqlite, and postgresql tests"
task :test do
tasks = defined?(JRUBY_VERSION) ?
%w(test_jdbcmysql test_jdbcsqlite3 test_jdbcpostgresql) :
- %w(test_mysql2 test_sqlite3 test_postgresql)
+ %w(test_mysql2 test_trilogy test_sqlite3 test_postgresql)
run_without_aborting(*tasks)
end
@@ -35,9 +35,17 @@ namespace :test do
task :isolated do
tasks = defined?(JRUBY_VERSION) ?
%w(isolated_test_jdbcmysql isolated_test_jdbcsqlite3 isolated_test_jdbcpostgresql) :
- %w(isolated_test_mysql2 isolated_test_sqlite3 isolated_test_postgresql)
+ %w(isolated_test_mysql2 isolated_test_trilogy isolated_test_sqlite3 isolated_test_postgresql)
run_without_aborting(*tasks)
end
+
+ Rake::TestTask.new(:arel) do |t|
+ t.libs << "test"
+ t.test_files = FileList["test/cases/arel/**/*_test.rb"]
+
+ t.warning = true
+ t.verbose = true
+ end
end
namespace :db do
@@ -48,19 +56,23 @@ namespace :db do
task drop: ["db:mysql:drop", "db:postgresql:drop"]
end
-%w( mysql2 postgresql sqlite3 sqlite3_mem oracle jdbcmysql jdbcpostgresql jdbcsqlite3 jdbcderby jdbch2 jdbchsqldb ).each do |adapter|
+%w( mysql2 trilogy postgresql sqlite3 sqlite3_mem oracle jdbcmysql jdbcpostgresql jdbcsqlite3 jdbcderby jdbch2 jdbchsqldb ).each do |adapter|
namespace :test do
- Rake::TestTask.new(adapter => "#{adapter}:env") { |t|
+ Rake::TestTask.new(adapter => "#{adapter}:env") do |t|
adapter_short = adapter[/^[a-z0-9]+/]
t.libs << "test"
- t.test_files = (FileList["test/cases/**/*_test.rb"].reject {
- |x| x.include?("/adapters/")
+ files = (FileList["test/cases/**/*_test.rb"].reject {
+ |x| x.include?("/adapters/") || x.include?("/encryption/performance")
} + FileList["test/cases/adapters/#{adapter_short}/**/*_test.rb"])
+ files = files + FileList["test/cases/adapters/abstract_mysql_adapter/**/*_test.rb"] if ["mysql2", "trilogy"].include?(adapter)
+
+ t.test_files = files
t.warning = true
t.verbose = true
t.ruby_opts = ["--dev"] if defined?(JRUBY_VERSION)
- }
+ end
namespace :integration do
# Active Job Integration Tests
@@ -99,13 +111,14 @@ end
# We need to dance around minitest autorun, though.
require "minitest"
- Minitest.instance_eval do
- alias _original_autorun autorun
+ Minitest.singleton_class.class_eval do
+ alias_method :_original_autorun, :autorun
def autorun
# no-op
end
require "cases/helper"
- alias autorun _original_autorun
+ alias_method :autorun, :_original_autorun
end
failing_files = []
@@ -113,14 +126,14 @@ end
test_options = ENV["TESTOPTS"].to_s.split(/[\s]+/)
test_files = (Dir["test/cases/**/*_test.rb"].reject {
- |x| x.include?("/adapters/")
+ |x| x.include?("/adapters/") || x.include?("/encryption/performance")
} + Dir["test/cases/adapters/#{adapter_short}/**/*_test.rb"]).sort
if ENV["BUILDKITE_PARALLEL_JOB_COUNT"]
n = ENV["BUILDKITE_PARALLEL_JOB"].to_i
m = ENV["BUILDKITE_PARALLEL_JOB_COUNT"].to_i
- test_files = test_files.each_slice(m).map { |slice| slice[n] }.compact
+ test_files = test_files.each_slice(m).filter_map { |slice| slice[n] }
end
test_files.each do |file|
@@ -165,6 +178,20 @@ end
end
end
end
+
+ namespace :encryption do
+ namespace :performance do
+ Rake::TestTask.new(adapter => "#{adapter}:env") do |t|
+ t.description = "Encryption performance tests for #{adapter}"
+ t.libs << "test"
+ t.test_files = FileList["test/cases/encryption/performance/*_test.rb"]
+
+ t.warning = true
+ t.verbose = true
+ t.ruby_opts = ["--dev"] if defined?(JRUBY_VERSION)
+ end
+ end
+ end
end
namespace adapter do
@@ -182,23 +209,52 @@ end
namespace :db do
namespace :mysql do
- connection_arguments = lambda do |connection_name|
- config = ARTest.config["connections"]["mysql2"][connection_name]
- ["--user=#{config["username"]}", "--password=#{config["password"]}", ("--host=#{config["host"]}" if config["host"])].join(" ")
+ mysql2_config = ARTest.config["connections"]["mysql2"]
+ mysql2_connection_arguments = lambda do |connection_name|
+ mysql2_connection = mysql2_config[connection_name]
+ ["--user=#{mysql2_connection["username"]}", ("--password=#{mysql2_connection["password"]}" if mysql2_connection["password"]), ("--host=#{mysql2_connection["host"]}" if mysql2_connection["host"]), ("--socket=#{mysql2_connection["socket"]}" if mysql2_connection["socket"])].join(" ")
+ end
+
+ trilogy_config = ARTest.config["connections"]["trilogy"]
+ trilogy_connection_arguments = lambda do |connection_name|
+ trilogy_connection = trilogy_config[connection_name]
+ ["--user=#{trilogy_connection["username"]}", ("--password=#{trilogy_connection["password"]}" if trilogy_connection["password"]), ("--host=#{trilogy_connection["host"]}" if trilogy_connection["host"]), ("--socket=#{trilogy_connection["socket"]}" if trilogy_connection["socket"])].join(" ")
+ end
+
+ mysql_configs = [mysql2_config, trilogy_config]
+
+ desc "Create the MySQL Rails User"
+ task :build_user do
+ if ENV["MYSQL_CODESPACES"]
+ mysql_command = "mysql -uroot -proot -e"
+ else
+ mysql_command = "mysql -uroot -e"
+ end
+
+ mysql_configs.each do |config|
+ %x( #{mysql_command} "CREATE USER IF NOT EXISTS '#{config["arunit"]["username"]}'@'%';" )
+ %x( #{mysql_command} "CREATE USER IF NOT EXISTS '#{config["arunit2"]["username"]}'@'%';" )
+ %x( #{mysql_command} "GRANT ALL PRIVILEGES ON #{config["arunit"]["database"]}.* to '#{config["arunit"]["username"]}'@'%'" )
+ %x( #{mysql_command} "GRANT ALL PRIVILEGES ON #{config["arunit2"]["database"]}.* to '#{config["arunit2"]["username"]}'@'%'" )
+ %x( #{mysql_command} "GRANT ALL PRIVILEGES ON inexistent_activerecord_unittest.* to '#{config["arunit"]["username"]}'@'%';" )
+ end
end
desc "Build the MySQL test databases"
- task :build do
- config = ARTest.config["connections"]["mysql2"]
- %x( mysql #{connection_arguments["arunit"]} -e "create DATABASE #{config["arunit"]["database"]} DEFAULT CHARACTER SET utf8mb4" )
- %x( mysql #{connection_arguments["arunit2"]} -e "create DATABASE #{config["arunit2"]["database"]} DEFAULT CHARACTER SET utf8mb4" )
+ task build: ["db:mysql:build_user"] do
+ %x( mysql #{mysql2_connection_arguments["arunit"]} -e "create DATABASE IF NOT EXISTS #{mysql2_config["arunit"]["database"]} DEFAULT CHARACTER SET utf8mb4" )
+ %x( mysql #{mysql2_connection_arguments["arunit2"]} -e "create DATABASE IF NOT EXISTS #{mysql2_config["arunit2"]["database"]} DEFAULT CHARACTER SET utf8mb4" )
+ %x( mysql #{trilogy_connection_arguments["arunit"]} -e "create DATABASE IF NOT EXISTS #{trilogy_config["arunit"]["database"]} DEFAULT CHARACTER SET utf8mb4" )
+ %x( mysql #{trilogy_connection_arguments["arunit2"]} -e "create DATABASE IF NOT EXISTS #{trilogy_config["arunit2"]["database"]} DEFAULT CHARACTER SET utf8mb4" )
end
desc "Drop the MySQL test databases"
- task :drop do
- config = ARTest.config["connections"]["mysql2"]
- %x( mysqladmin #{connection_arguments["arunit"]} -f drop #{config["arunit"]["database"]} )
- %x( mysqladmin #{connection_arguments["arunit2"]} -f drop #{config["arunit2"]["database"]} )
+ task drop: ["db:mysql:build_user"] do
+ %x( mysql #{mysql2_connection_arguments["arunit"]} -e "drop database IF EXISTS #{mysql2_config["arunit"]["database"]}" )
+ %x( mysql #{mysql2_connection_arguments["arunit2"]} -e "drop database IF EXISTS #{mysql2_config["arunit2"]["database"]}" )
+
+ %x( mysql #{trilogy_connection_arguments["arunit"]} -e "drop database IF EXISTS #{trilogy_config["arunit"]["database"]}" )
+ %x( mysql #{trilogy_connection_arguments["arunit2"]} -e "drop database IF EXISTS #{trilogy_config["arunit2"]["database"]}" )
end
desc "Rebuild the MySQL test databases"
@@ -209,15 +265,15 @@ namespace :db do
desc "Build the PostgreSQL test databases"
task :build do
config = ARTest.config["connections"]["postgresql"]
- %x( createdb -E UTF8 -T template0 #{config["arunit"]["database"]} )
- %x( createdb -E UTF8 -T template0 #{config["arunit2"]["database"]} )
+ %x( createdb -E UTF8 -T template0 #{config["arunit"]["database"]} --lc-collate en_US.UTF-8 )
+ %x( createdb -E UTF8 -T template0 #{config["arunit2"]["database"]} --lc-collate en_US.UTF-8 )
end
desc "Drop the PostgreSQL test databases"
task :drop do
config = ARTest.config["connections"]["postgresql"]
- %x( dropdb #{config["arunit"]["database"]} )
- %x( dropdb #{config["arunit2"]["database"]} )
+ %x( dropdb --if-exists #{config["arunit"]["database"]} )
+ %x( dropdb --if-exists #{config["arunit2"]["database"]} )
end
desc "Rebuild the PostgreSQL test databases"
diff --git a/activerecord/activerecord.gemspec b/activerecord/activerecord.gemspec
index 412dcf91be..016528bfdc 100644
--- a/activerecord/activerecord.gemspec
+++ b/activerecord/activerecord.gemspec
@@ -9,7 +9,7 @@
s.summary = "Object-relational mapper framework (part of Rails)."
s.description = "Databases on Rails. Build a persistent domain model by mapping database tables to Ruby classes. Strong conventions for associations, validations, aggregations, migrations, and testing come baked-in."
- s.required_ruby_version = ">= 2.5.0"
+ s.required_ruby_version = ">= 2.7.0"
s.license = "MIT"
@@ -37,4 +37,5 @@
s.add_dependency "activesupport", version
s.add_dependency "activemodel", version
+ s.add_dependency "timeout", ">= 0.4.0"
end
diff --git a/activerecord/bin/test b/activerecord/bin/test
index 9ecf27ce67..872f80fa99 100755
--- a/activerecord/bin/test
+++ b/activerecord/bin/test
@@ -8,14 +8,14 @@ if adapter_index
end
COMPONENT_ROOT = File.expand_path("..", __dir__)
 require_relative "../../tools/test"
module Minitest
def self.plugin_active_record_options(opts, options)
opts.separator ""
opts.separator "Active Record options:"
opts.on("-a", "--adapter [ADAPTER]",
- "Run tests using a specific adapter (sqlite3, sqlite3_mem, mysql2, postgresql)") do |adapter|
+ "Run tests using a specific adapter (sqlite3, sqlite3_mem, mysql2, trilogy, postgresql)") do |adapter|
ENV["ARCONN"] = adapter.strip
end
diff --git a/activerecord/lib/active_record.rb b/activerecord/lib/active_record.rb
index 9b41ba8f47..85dfd20fae 100644
--- a/activerecord/lib/active_record.rb
+++ b/activerecord/lib/active_record.rb
@@ -1,7 +1,7 @@
# frozen_string_literal: true
#--
-# Copyright (c) 2004-2022 David Heinemeier Hansson
+# Copyright (c) David Heinemeier Hansson
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
@@ -30,32 +30,40 @@
require "yaml"
require "active_record/version"
+require "active_record/deprecator"
require "active_model/attribute_set"
require "active_record/errors"
+# :include: activerecord/README.rdoc
module ActiveRecord
extend ActiveSupport::Autoload
autoload :Base
autoload :Callbacks
- autoload :Core
autoload :ConnectionHandling
+ autoload :Core
autoload :CounterCache
- autoload :DynamicMatchers
autoload :DelegatedType
+ autoload :DestroyAssociationAsyncJob
+ autoload :DynamicMatchers
+ autoload :Encryption
autoload :Enum
- autoload :InternalMetadata
autoload :Explain
+ autoload :FixtureSet, "active_record/fixtures"
autoload :Inheritance
autoload :Integration
+ autoload :InternalMetadata
+ autoload :LogSubscriber
+ autoload :Marshalling
autoload :Migration
autoload :Migrator, "active_record/migration"
autoload :ModelSchema
autoload :NestedAttributes
autoload :NoTouching
- autoload :TouchLater
+ autoload :Normalization
autoload :Persistence
autoload :QueryCache
+ autoload :QueryLogs
autoload :Querying
autoload :ReadonlyAttributes
autoload :RecordInvalid, "active_record/validations"
@@ -66,51 +74,55 @@ module ActiveRecord
autoload :SchemaDumper
autoload :SchemaMigration
autoload :Scoping
+ autoload :SecurePassword
+ autoload :SecureToken
autoload :Serialization
- autoload :StatementCache
- autoload :Store
autoload :SignedId
+ autoload :Store
autoload :Suppressor
+ autoload :TestDatabases
+ autoload :TestFixtures, "active_record/fixtures"
autoload :Timestamp
+ autoload :TokenFor
+ autoload :TouchLater
autoload :Transactions
autoload :Translation
autoload :Validations
- autoload :SecureToken
- autoload :DestroyAssociationAsyncJob
eager_autoload do
- autoload :ConnectionAdapters
-
autoload :Aggregations
+ autoload :AssociationRelation
autoload :Associations
+ autoload :AsynchronousQueriesTracker
autoload :AttributeAssignment
autoload :AttributeMethods
autoload :AutosaveAssociation
-
+ autoload :ConnectionAdapters
+ autoload :DisableJoinsAssociationRelation
+ autoload :FutureResult
autoload :LegacyYamlAdapter
-
+ autoload :Promise
autoload :Relation
- autoload :AssociationRelation
- autoload :NullRelation
+ autoload :Result
+ autoload :StatementCache
+ autoload :TableMetadata
+ autoload :Type
autoload_under "relation" do
- autoload :QueryMethods
- autoload :FinderMethods
+ autoload :Batches
autoload :Calculations
+ autoload :Delegation
+ autoload :FinderMethods
autoload :PredicateBuilder
+ autoload :QueryMethods
autoload :SpawnMethods
- autoload :Batches
- autoload :Delegation
end
-
- autoload :Result
- autoload :TableMetadata
- autoload :Type
end
module Coders
- autoload :YAMLColumn, "active_record/coders/yaml_column"
+ autoload :ColumnSerializer, "active_record/coders/column_serializer"
autoload :JSON, "active_record/coders/json"
+ autoload :YAMLColumn, "active_record/coders/yaml_column"
end
module AttributeMethods
@@ -122,9 +134,9 @@ module AttributeMethods
autoload :PrimaryKey
autoload :Query
autoload :Read
+ autoload :Serialization
autoload :TimeZoneConversion
autoload :Write
- autoload :Serialization
end
end
@@ -141,29 +153,314 @@ module Scoping
extend ActiveSupport::Autoload
eager_autoload do
- autoload :Named
autoload :Default
+ autoload :Named
end
end
module Middleware
extend ActiveSupport::Autoload
- autoload :DatabaseSelector, "active_record/middleware/database_selector"
+ autoload :DatabaseSelector
+ autoload :ShardSelector
end
module Tasks
extend ActiveSupport::Autoload
autoload :DatabaseTasks
- autoload :SQLiteDatabaseTasks, "active_record/tasks/sqlite_database_tasks"
autoload :MySQLDatabaseTasks, "active_record/tasks/mysql_database_tasks"
- autoload :PostgreSQLDatabaseTasks,
- "active_record/tasks/postgresql_database_tasks"
+ autoload :PostgreSQLDatabaseTasks, "active_record/tasks/postgresql_database_tasks"
+ autoload :SQLiteDatabaseTasks, "active_record/tasks/sqlite_database_tasks"
end
- autoload :TestDatabases, "active_record/test_databases"
- autoload :TestFixtures, "active_record/fixtures"
+ singleton_class.attr_accessor :disable_prepared_statements
+ self.disable_prepared_statements = false
+
+ # Lazily load the schema cache. This option will load the schema cache
+ # when a connection is established rather than on boot. If set,
+ # +config.active_record.use_schema_cache_dump+ will be set to false.
+ singleton_class.attr_accessor :lazily_load_schema_cache
+ self.lazily_load_schema_cache = false
+
+ # A list of tables or regex's to match tables to ignore when
+ # dumping the schema cache. For example if this is set to +[/^_/]+
+ # the schema cache will not dump tables named with an underscore.
+ singleton_class.attr_accessor :schema_cache_ignored_tables
+ self.schema_cache_ignored_tables = []
+
+ singleton_class.attr_reader :default_timezone
+
+ # Determines whether to use Time.utc (using :utc) or Time.local (using :local) when pulling
+ # dates and times from the database. This is set to :utc by default.
+ def self.default_timezone=(default_timezone)
+ unless %i(local utc).include?(default_timezone)
+ raise ArgumentError, "default_timezone must be either :utc (default) or :local."
+ end
+
+ @default_timezone = default_timezone
+ end
+
+ self.default_timezone = :utc
+
+ # The action to take when database query produces warning.
+ # Must be one of :ignore, :log, :raise, :report, or a custom proc.
+ # The default is :ignore.
+ singleton_class.attr_reader :db_warnings_action
+
+ def self.db_warnings_action=(action)
+ @db_warnings_action =
+ case action
+ when :ignore
+ nil
+ when :log
+ ->(warning) do
+ warning_message = "[#{warning.class}] #{warning.message}"
+ warning_message += " (#{warning.code})" if warning.code
+ ActiveRecord::Base.logger.warn(warning_message)
+ end
+ when :raise
+ ->(warning) { raise warning }
+ when :report
+ ->(warning) { Rails.error.report(warning, handled: true) }
+ when Proc
+ action
+ else
+ raise ArgumentError, "db_warnings_action must be one of :ignore, :log, :raise, :report, or a custom proc."
+ end
+ end
+
+ self.db_warnings_action = :ignore
+
+ # Specify allowlist of database warnings.
+ singleton_class.attr_accessor :db_warnings_ignore
+ self.db_warnings_ignore = []
+
+ singleton_class.attr_accessor :writing_role
+ self.writing_role = :writing
+
+ singleton_class.attr_accessor :reading_role
+ self.reading_role = :reading
+
+ def self.legacy_connection_handling=(_)
+ raise ArgumentError, <<~MSG.squish
+ The `legacy_connection_handling` setter was deprecated in 7.0 and removed in 7.1,
+ but is still defined in your configuration. Please remove this call as it no longer
+      has any effect.
+ MSG
+ end
+
+ # Sets the async_query_executor for an application. By default the thread pool executor
+ # set to +nil+ which will not run queries in the background. Applications must configure
+ # a thread pool executor to use this feature. Options are:
+ #
+ # * nil - Does not initialize a thread pool executor. Any async calls will be
+ # run in the foreground.
+ # * :global_thread_pool - Initializes a single +Concurrent::ThreadPoolExecutor+
+ # that uses the +async_query_concurrency+ for the +max_threads+ value.
+ # * :multi_thread_pool - Initializes a +Concurrent::ThreadPoolExecutor+ for each
+ # database connection. The initializer values are defined in the configuration hash.
+ singleton_class.attr_accessor :async_query_executor
+ self.async_query_executor = nil
+
+ def self.global_thread_pool_async_query_executor # :nodoc:
+ concurrency = global_executor_concurrency || 4
+ @global_thread_pool_async_query_executor ||= Concurrent::ThreadPoolExecutor.new(
+ min_threads: 0,
+ max_threads: concurrency,
+ max_queue: concurrency * 4,
+ fallback_policy: :caller_runs
+ )
+ end
+
+ # Set the +global_executor_concurrency+. This configuration value can only be used
+ # with the global thread pool async query executor.
+ def self.global_executor_concurrency=(global_executor_concurrency)
+ if self.async_query_executor.nil? || self.async_query_executor == :multi_thread_pool
+      raise ArgumentError, "`global_executor_concurrency` cannot be set when the executor is nil or set to multi_thread_pool. For multiple thread pools, please set the concurrency in your database configuration."
+ end
+
+ @global_executor_concurrency = global_executor_concurrency
+ end
+
+ def self.global_executor_concurrency # :nodoc:
+ @global_executor_concurrency ||= nil
+ end
+
+ singleton_class.attr_accessor :index_nested_attribute_errors
+ self.index_nested_attribute_errors = false
+
+ ##
+ # :singleton-method:
+ #
+ # Specifies if the methods calling database queries should be logged below
+ # their relevant queries. Defaults to false.
+ singleton_class.attr_accessor :verbose_query_logs
+ self.verbose_query_logs = false
+
+ ##
+ # :singleton-method:
+ #
+ # Specifies the names of the queues used by background jobs.
+ singleton_class.attr_accessor :queues
+ self.queues = {}
+
+ singleton_class.attr_accessor :maintain_test_schema
+ self.maintain_test_schema = nil
+
+ singleton_class.attr_accessor :raise_on_assign_to_attr_readonly
+ self.raise_on_assign_to_attr_readonly = false
+
+ singleton_class.attr_accessor :belongs_to_required_validates_foreign_key
+ self.belongs_to_required_validates_foreign_key = true
+
+ singleton_class.attr_accessor :before_committed_on_all_records
+ self.before_committed_on_all_records = false
+
+ singleton_class.attr_accessor :run_after_transaction_callbacks_in_order_defined
+ self.run_after_transaction_callbacks_in_order_defined = false
+
+ singleton_class.attr_accessor :commit_transaction_on_non_local_return
+ self.commit_transaction_on_non_local_return = false
+
+ ##
+ # :singleton-method:
+ # Specify a threshold for the size of query result sets. If the number of
+ # records in the set exceeds the threshold, a warning is logged. This can
+ # be used to identify queries which load thousands of records and
+ # potentially cause memory bloat.
+ singleton_class.attr_accessor :warn_on_records_fetched_greater_than
+ self.warn_on_records_fetched_greater_than = false
+
+ singleton_class.attr_accessor :application_record_class
+ self.application_record_class = nil
+
+ ##
+ # :singleton-method:
+ # Set the application to log or raise when an association violates strict loading.
+ # Defaults to :raise.
+ singleton_class.attr_accessor :action_on_strict_loading_violation
+ self.action_on_strict_loading_violation = :raise
+
+ ##
+ # :singleton-method:
+ # Specifies the format to use when dumping the database schema with Rails'
+ # Rakefile. If :sql, the schema is dumped as (potentially database-
+ # specific) SQL statements. If :ruby, the schema is dumped as an
+ # ActiveRecord::Schema file which can be loaded into any database that
+ # supports migrations. Use :ruby if you want to have different database
+ # adapters for, e.g., your development and test environments.
+ singleton_class.attr_accessor :schema_format
+ self.schema_format = :ruby
+
+ ##
+ # :singleton-method:
+ # Specifies if an error should be raised if the query has an order being
+ # ignored when doing batch queries. Useful in applications where the
+ # scope being ignored is error-worthy, rather than a warning.
+ singleton_class.attr_accessor :error_on_ignored_order
+ self.error_on_ignored_order = false
+
+ ##
+ # :singleton-method:
+ # Specify whether or not to use timestamps for migration versions
+ singleton_class.attr_accessor :timestamped_migrations
+ self.timestamped_migrations = true
+
+ ##
+ # :singleton-method:
+ # Specify strategy to use for executing migrations.
+ singleton_class.attr_accessor :migration_strategy
+ self.migration_strategy = Migration::DefaultStrategy
+
+ ##
+ # :singleton-method:
+ # Specify whether schema dump should happen at the end of the
+ # bin/rails db:migrate command. This is true by default, which is useful for the
+ # development environment. This should ideally be false in the production
+ # environment where dumping schema is rarely needed.
+ singleton_class.attr_accessor :dump_schema_after_migration
+ self.dump_schema_after_migration = true
+
+ ##
+ # :singleton-method:
+ # Specifies which database schemas to dump when calling db:schema:dump.
+ # If the value is :schema_search_path (the default), any schemas listed in
+ # schema_search_path are dumped. Use :all to dump all schemas regardless
+ # of schema_search_path, or a string of comma-separated schemas for a
+ # custom list.
+ singleton_class.attr_accessor :dump_schemas
+ self.dump_schemas = :schema_search_path
+
+ def self.suppress_multiple_database_warning
+ ActiveRecord.deprecator.warn(<<-MSG.squish)
+ config.active_record.suppress_multiple_database_warning is deprecated and will be removed in Rails 7.2.
+ It no longer has any effect and should be removed from the configuration file.
+ MSG
+ end
+
+ def self.suppress_multiple_database_warning=(value)
+ ActiveRecord.deprecator.warn(<<-MSG.squish)
+ config.active_record.suppress_multiple_database_warning= is deprecated and will be removed in Rails 7.2.
+ It no longer has any effect and should be removed from the configuration file.
+ MSG
+ end
+
+ ##
+ # :singleton-method:
+ # If true, Rails will verify all foreign keys in the database after loading fixtures.
+ # An error will be raised if there are any foreign key violations, indicating incorrectly
+ # written fixtures.
+ # Supported by PostgreSQL and SQLite.
+ singleton_class.attr_accessor :verify_foreign_keys_for_fixtures
+ self.verify_foreign_keys_for_fixtures = false
+
+ ##
+ # :singleton-method:
+ # If true, Rails will continue allowing plural association names in where clauses on singular associations.
+ # This behavior will be removed in Rails 7.2.
+ singleton_class.attr_accessor :allow_deprecated_singular_associations_name
+ self.allow_deprecated_singular_associations_name = true
+
+ singleton_class.attr_accessor :query_transformers
+ self.query_transformers = []
+
+ ##
+ # :singleton-method:
+ # Application configurable boolean that instructs the YAML Coder to use
+ # an unsafe load if set to true.
+ singleton_class.attr_accessor :use_yaml_unsafe_load
+ self.use_yaml_unsafe_load = false
+
+ ##
+ # :singleton-method:
+ # Application configurable boolean that denotes whether or not to raise
+ # an exception when the PostgreSQLAdapter is provided with an integer that
+ # is wider than signed 64bit representation.
+ singleton_class.attr_accessor :raise_int_wider_than_64bit
+ self.raise_int_wider_than_64bit = true
+
+ ##
+ # :singleton-method:
+ # Application configurable array that provides additional permitted classes
+ # to Psych safe_load in the YAML Coder
+ singleton_class.attr_accessor :yaml_column_permitted_classes
+ self.yaml_column_permitted_classes = [Symbol]
+
+ ##
+ # :singleton-method:
+ # Controls when to generate a value for <tt>has_secure_token</tt>
+ # declarations. Defaults to <tt>:create</tt>.
+ singleton_class.attr_accessor :generate_secure_token_on
+ self.generate_secure_token_on = :create
+
+ def self.marshalling_format_version
+ Marshalling.format_version
+ end
+
+ def self.marshalling_format_version=(value)
+ Marshalling.format_version = value
+ end
def self.eager_load!
super
@@ -172,6 +469,12 @@ def self.eager_load!
ActiveRecord::Associations.eager_load!
ActiveRecord::AttributeMethods.eager_load!
ActiveRecord::ConnectionAdapters.eager_load!
+ ActiveRecord::Encryption.eager_load!
+ end
+
+ # Explicitly closes all database connections in all pools.
+ def self.disconnect_all!
+ ConnectionAdapters::PoolConfig.disconnect_all!
end
end
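
All of the settings added above follow the same `singleton_class.attr_accessor` + default pattern. A minimal, framework-free sketch of that pattern (the `Config` module and the `schema_format` default here are illustrative stand-ins, not Rails itself):

```ruby
# Illustrative module showing the singleton-level accessor pattern used
# for class-wide settings (module and attribute names are hypothetical).
module Config
  # Defines Config.schema_format and Config.schema_format= on the
  # singleton class, so the setting lives on the module itself.
  singleton_class.attr_accessor :schema_format
  self.schema_format = :ruby
end

Config.schema_format        # reads the default, :ruby
Config.schema_format = :sql # overrides it application-wide
```

Because the accessor is defined on the singleton class, no instances are involved; the value is a single process-wide setting, which is exactly how these `ActiveRecord` flags behave.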
diff --git a/activerecord/lib/active_record/aggregations.rb b/activerecord/lib/active_record/aggregations.rb
index 0379b8d6dc..64f91f3b67 100644
--- a/activerecord/lib/active_record/aggregations.rb
+++ b/activerecord/lib/active_record/aggregations.rb
@@ -4,7 +4,7 @@ module ActiveRecord
# See ActiveRecord::Aggregations::ClassMethods for documentation
module Aggregations
def initialize_dup(*) # :nodoc:
- @aggregation_cache = {}
+ @aggregation_cache = @aggregation_cache.dup
super
end
@@ -19,10 +19,12 @@ def clear_aggregation_cache
end
def init_internals
- @aggregation_cache = {}
super
+ @aggregation_cache = {}
end
+ # = Active Record \Aggregations
+ #
# Active Record implements aggregation through a macro-like class method called #composed_of
# for representing attributes as value objects. It expresses relationships like "Account [is]
# composed of Money [among other things]" or "Person [is] composed of [an] address". Each call
@@ -32,8 +34,8 @@ def init_internals
# the database).
#
# class Customer < ActiveRecord::Base
- # composed_of :balance, class_name: "Money", mapping: %w(balance amount)
- # composed_of :address, mapping: [ %w(address_street street), %w(address_city city) ]
+ # composed_of :balance, class_name: "Money", mapping: { balance: :amount }
+ # composed_of :address, mapping: { address_street: :street, address_city: :city }
# end
#
# The customer class now has the following methods to manipulate the value objects:
@@ -150,7 +152,7 @@ def init_internals
# class NetworkResource < ActiveRecord::Base
# composed_of :cidr,
# class_name: 'NetAddr::CIDR',
- # mapping: [ %w(network_address network), %w(cidr_range bits) ],
+ # mapping: { network_address: :network, cidr_range: :bits },
# allow_nil: true,
# constructor: Proc.new { |network_address, cidr_range| NetAddr::CIDR.create("#{network_address}/#{cidr_range}") },
# converter: Proc.new { |value| NetAddr::CIDR.create(value.is_a?(Array) ? value.join('/') : value) }
@@ -188,10 +190,10 @@ module ClassMethods
# to the Address class, but if the real class name is +CompanyAddress+, you'll have to specify it
# with this option.
# * <tt>:mapping</tt> - Specifies the mapping of entity attributes to attributes of the value
- # object. Each mapping is represented as an array where the first item is the name of the
- # entity attribute and the second item is the name of the attribute in the value object. The
+ # object. Each mapping is represented as a key-value pair where the key is the name of the
+ # entity attribute and the value is the name of the attribute in the value object. The
# order in which mappings are defined determines the order in which attributes are sent to the
- # value class constructor.
+ # value class constructor. The mapping can be written as a hash or as an array of pairs.
# * <tt>:allow_nil</tt> - Specifies that the value object will not be instantiated when all mapped
# attributes are +nil+. Setting the value object to +nil+ has the effect of writing +nil+ to all
# mapped attributes.
@@ -208,14 +210,15 @@ module ClassMethods
# can return +nil+ to skip the assignment.
#
# Option examples:
- # composed_of :temperature, mapping: %w(reading celsius)
- # composed_of :balance, class_name: "Money", mapping: %w(balance amount)
+ # composed_of :temperature, mapping: { reading: :celsius }
+ # composed_of :balance, class_name: "Money", mapping: { balance: :amount }
+ # composed_of :address, mapping: { address_street: :street, address_city: :city }
# composed_of :address, mapping: [ %w(address_street street), %w(address_city city) ]
# composed_of :gps_location
# composed_of :gps_location, allow_nil: true
# composed_of :ip_address,
# class_name: 'IPAddr',
- # mapping: %w(ip to_i),
+ # mapping: { ip: :to_i },
# constructor: Proc.new { |ip| IPAddr.new(ip, Socket::AF_INET) },
# converter: Proc.new { |ip| ip.is_a?(Integer) ? IPAddr.new(ip, Socket::AF_INET) : IPAddr.new(ip.to_s) }
#
@@ -249,7 +252,7 @@ def reader_method(name, class_name, mapping, allow_nil, constructor)
object = constructor.respond_to?(:call) ?
constructor.call(*attrs) :
class_name.constantize.send(constructor, *attrs)
- @aggregation_cache[name] = object
+ @aggregation_cache[name] = object.freeze
end
@aggregation_cache[name]
end
@@ -264,7 +267,7 @@ def writer_method(name, class_name, mapping, allow_nil, converter)
end
hash_from_multiparameter_assignment = part.is_a?(Hash) &&
- part.each_key.all? { |k| k.is_a?(Integer) }
+ part.keys.all?(Integer)
if hash_from_multiparameter_assignment
raise ArgumentError unless part.size == part.each_key.max
part = klass.new(*part.sort.map(&:last))
@@ -275,7 +278,7 @@ def writer_method(name, class_name, mapping, allow_nil, converter)
@aggregation_cache[name] = nil
else
mapping.each { |key, value| write_attribute(key, part.send(value)) }
- @aggregation_cache[name] = part.freeze
+ @aggregation_cache[name] = part.dup.freeze
end
end
end
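
The hash-style `mapping` documented above pairs an entity attribute with a value-object attribute. A framework-free sketch of what such a mapping drives (the `Customer`/`Money` classes and method names here are illustrative, not the Active Record implementation):

```ruby
# Value object: immutable, compared by value.
Money = Struct.new(:amount, :currency)

class Customer
  # Equivalent of mapping: { balance: :amount } --
  # entity attribute => value-object attribute.
  MAPPING = { balance: :amount }.freeze

  attr_accessor :balance

  # Reader builds (and freezes) the value object from the mapped attribute.
  def balance_money
    Money.new(balance, "USD").freeze
  end

  # Writer copies each mapped attribute back onto the entity.
  def balance_money=(money)
    MAPPING.each do |entity_attr, value_attr|
      public_send("#{entity_attr}=", money.public_send(value_attr))
    end
  end
end

customer = Customer.new
customer.balance_money = Money.new(150, "USD")
customer.balance # => 150
```

Freezing the cached value object, as the diff does in `reader_method`, keeps callers from mutating it behind the record's back.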
diff --git a/activerecord/lib/active_record/association_relation.rb b/activerecord/lib/active_record/association_relation.rb
index 41571857b3..3f71fa8345 100644
--- a/activerecord/lib/active_record/association_relation.rb
+++ b/activerecord/lib/active_record/association_relation.rb
@@ -16,7 +16,7 @@ def ==(other)
end
%w(insert insert_all insert! insert_all! upsert upsert_all).each do |method|
- class_eval <<~RUBY
+ class_eval <<~RUBY, __FILE__, __LINE__ + 1
def #{method}(attributes, **kwargs)
if @association.reflection.through_reflection?
raise ArgumentError, "Bulk insert or upsert is currently not supported for has_many through association"
@@ -27,16 +27,6 @@ def #{method}(attributes, **kwargs)
RUBY
end
- def build(attributes = nil, &block)
- if attributes.is_a?(Array)
- attributes.collect { |attr| build(attr, &block) }
- else
- block = current_scope_restoring_block(&block)
- scoping { _new(attributes, &block) }
- end
- end
- alias new build
-
private
def _new(attributes, &block)
@association.build(attributes, &block)
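
The `class_eval <<~RUBY, __FILE__, __LINE__ + 1` change above matters for debugging: without the file/line arguments, backtraces from the generated methods point at `(eval)` instead of the defining file. A small sketch of the same metaprogramming pattern (the `Widget` class and method bodies are illustrative):

```ruby
class Widget
  %w(insert upsert).each do |method|
    # Passing __FILE__ and __LINE__ + 1 makes backtraces and
    # Method#source_location point at this file, not "(eval)".
    class_eval <<~RUBY, __FILE__, __LINE__ + 1
      def #{method}(attrs)
        "#{method}: " + attrs.to_s
      end
    RUBY
  end
end

Widget.new.insert("a=1") # => "insert: a=1"
Widget.instance_method(:upsert).source_location # [__FILE__, <line>]
```

Note that `#{method}` interpolates when the heredoc string is built, so each iteration defines a distinct, plainly named method.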
diff --git a/activerecord/lib/active_record/associations.rb b/activerecord/lib/active_record/associations.rb
index b7545b6e20..d6539d4df1 100644
--- a/activerecord/lib/active_record/associations.rb
+++ b/activerecord/lib/active_record/associations.rb
@@ -1,11 +1,9 @@
# frozen_string_literal: true
-require "active_support/core_ext/enumerable"
-require "active_support/core_ext/string/conversions"
-
module ActiveRecord
- class AssociationNotFoundError < ConfigurationError #:nodoc:
+ class AssociationNotFoundError < ConfigurationError # :nodoc:
attr_reader :record, :association_name
+
def initialize(record = nil, association_name = nil)
@record = record
@association_name = association_name
@@ -16,32 +14,25 @@ def initialize(record = nil, association_name = nil)
end
end
- class Correction
- def initialize(error)
- @error = error
- end
+ if defined?(DidYouMean::Correctable) && defined?(DidYouMean::SpellChecker)
+ include DidYouMean::Correctable
def corrections
- if @error.association_name
- maybe_these = @error.record.class.reflections.keys
-
- maybe_these.sort_by { |n|
- DidYouMean::Jaro.distance(@error.association_name.to_s, n)
- }.reverse.first(4)
+ if record && association_name
+ @corrections ||= begin
+ maybe_these = record.class.reflections.keys
+ DidYouMean::SpellChecker.new(dictionary: maybe_these).correct(association_name)
+ end
else
[]
end
end
end
-
- # We may not have DYM, and DYM might not let us register error handlers
- if defined?(DidYouMean) && DidYouMean.respond_to?(:correct_error)
- DidYouMean.correct_error(self, Correction)
- end
end
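
The rewritten `corrections` above delegates to the stdlib spell checker instead of hand-sorting by `DidYouMean::Jaro` distance. A standalone sketch of that call (the dictionary contents are illustrative stand-ins for `record.class.reflections.keys`):

```ruby
require "did_you_mean"

# Candidate association names, as reflections.keys would return them.
reflections = %w[comments author tags]

checker = DidYouMean::SpellChecker.new(dictionary: reflections)
checker.correct("coments") # suggests close matches, e.g. ["comments"]
```

Including `DidYouMean::Correctable` in the error class, as the diff does, wires these suggestions into the exception message automatically.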
- class InverseOfAssociationNotFoundError < ActiveRecordError #:nodoc:
+ class InverseOfAssociationNotFoundError < ActiveRecordError # :nodoc:
attr_reader :reflection, :associated_class
+
def initialize(reflection = nil, associated_class = nil)
if reflection
@reflection = reflection
@@ -52,31 +43,35 @@ def initialize(reflection = nil, associated_class = nil)
end
end
- class Correction
- def initialize(error)
- @error = error
- end
+ if defined?(DidYouMean::Correctable) && defined?(DidYouMean::SpellChecker)
+ include DidYouMean::Correctable
def corrections
- if @error.reflection && @error.associated_class
- maybe_these = @error.associated_class.reflections.keys
-
- maybe_these.sort_by { |n|
- DidYouMean::Jaro.distance(@error.reflection.options[:inverse_of].to_s, n)
- }.reverse.first(4)
+ if reflection && associated_class
+ @corrections ||= begin
+ maybe_these = associated_class.reflections.keys
+ DidYouMean::SpellChecker.new(dictionary: maybe_these).correct(reflection.options[:inverse_of].to_s)
+ end
else
[]
end
end
end
+ end
- # We may not have DYM, and DYM might not let us register error handlers
- if defined?(DidYouMean) && DidYouMean.respond_to?(:correct_error)
- DidYouMean.correct_error(self, Correction)
+ class InverseOfAssociationRecursiveError < ActiveRecordError # :nodoc:
+ attr_reader :reflection
+ def initialize(reflection = nil)
+ if reflection
+ @reflection = reflection
+ super("Inverse association #{reflection.name} (#{reflection.options[:inverse_of].inspect} in #{reflection.class_name}) is recursive.")
+ else
+ super("Inverse association is recursive.")
+ end
end
end
- class HasManyThroughAssociationNotFoundError < ActiveRecordError #:nodoc:
+ class HasManyThroughAssociationNotFoundError < ActiveRecordError # :nodoc:
attr_reader :owner_class, :reflection
def initialize(owner_class = nil, reflection = nil)
@@ -89,32 +84,24 @@ def initialize(owner_class = nil, reflection = nil)
end
end
- class Correction
- def initialize(error)
- @error = error
- end
+ if defined?(DidYouMean::Correctable) && defined?(DidYouMean::SpellChecker)
+ include DidYouMean::Correctable
def corrections
- if @error.reflection && @error.owner_class
- maybe_these = @error.owner_class.reflections.keys
- maybe_these -= [@error.reflection.name.to_s] # remove failing reflection
-
- maybe_these.sort_by { |n|
- DidYouMean::Jaro.distance(@error.reflection.options[:through].to_s, n)
- }.reverse.first(4)
+ if owner_class && reflection
+ @corrections ||= begin
+ maybe_these = owner_class.reflections.keys
+ maybe_these -= [reflection.name.to_s] # remove failing reflection
+ DidYouMean::SpellChecker.new(dictionary: maybe_these).correct(reflection.options[:through].to_s)
+ end
else
[]
end
end
end
-
- # We may not have DYM, and DYM might not let us register error handlers
- if defined?(DidYouMean) && DidYouMean.respond_to?(:correct_error)
- DidYouMean.correct_error(self, Correction)
- end
end
- class HasManyThroughAssociationPolymorphicSourceError < ActiveRecordError #:nodoc:
+ class HasManyThroughAssociationPolymorphicSourceError < ActiveRecordError # :nodoc:
def initialize(owner_class_name = nil, reflection = nil, source_reflection = nil)
if owner_class_name && reflection && source_reflection
super("Cannot have a has_many :through association '#{owner_class_name}##{reflection.name}' on the polymorphic object '#{source_reflection.class_name}##{source_reflection.name}' without 'source_type'. Try adding 'source_type: \"#{reflection.name.to_s.classify}\"' to 'has_many :through' definition.")
@@ -124,7 +111,7 @@ def initialize(owner_class_name = nil, reflection = nil, source_reflection = nil
end
end
- class HasManyThroughAssociationPolymorphicThroughError < ActiveRecordError #:nodoc:
+ class HasManyThroughAssociationPolymorphicThroughError < ActiveRecordError # :nodoc:
def initialize(owner_class_name = nil, reflection = nil)
if owner_class_name && reflection
super("Cannot have a has_many :through association '#{owner_class_name}##{reflection.name}' which goes through the polymorphic association '#{owner_class_name}##{reflection.through_reflection.name}'.")
@@ -134,7 +121,7 @@ def initialize(owner_class_name = nil, reflection = nil)
end
end
- class HasManyThroughAssociationPointlessSourceTypeError < ActiveRecordError #:nodoc:
+ class HasManyThroughAssociationPointlessSourceTypeError < ActiveRecordError # :nodoc:
def initialize(owner_class_name = nil, reflection = nil, source_reflection = nil)
if owner_class_name && reflection && source_reflection
super("Cannot have a has_many :through association '#{owner_class_name}##{reflection.name}' with a :source_type option if the '#{reflection.through_reflection.class_name}##{source_reflection.name}' is not polymorphic. Try removing :source_type on your association.")
@@ -144,7 +131,7 @@ def initialize(owner_class_name = nil, reflection = nil, source_reflection = nil
end
end
- class HasOneThroughCantAssociateThroughCollection < ActiveRecordError #:nodoc:
+ class HasOneThroughCantAssociateThroughCollection < ActiveRecordError # :nodoc:
def initialize(owner_class_name = nil, reflection = nil, through_reflection = nil)
if owner_class_name && reflection && through_reflection
super("Cannot have a has_one :through association '#{owner_class_name}##{reflection.name}' where the :through association '#{owner_class_name}##{through_reflection.name}' is a collection. Specify a has_one or belongs_to association in the :through option instead.")
@@ -154,7 +141,7 @@ def initialize(owner_class_name = nil, reflection = nil, through_reflection = ni
end
end
- class HasOneAssociationPolymorphicThroughError < ActiveRecordError #:nodoc:
+ class HasOneAssociationPolymorphicThroughError < ActiveRecordError # :nodoc:
def initialize(owner_class_name = nil, reflection = nil)
if owner_class_name && reflection
super("Cannot have a has_one :through association '#{owner_class_name}##{reflection.name}' which goes through the polymorphic association '#{owner_class_name}##{reflection.through_reflection.name}'.")
@@ -164,7 +151,7 @@ def initialize(owner_class_name = nil, reflection = nil)
end
end
- class HasManyThroughSourceAssociationNotFoundError < ActiveRecordError #:nodoc:
+ class HasManyThroughSourceAssociationNotFoundError < ActiveRecordError # :nodoc:
def initialize(reflection = nil)
if reflection
through_reflection = reflection.through_reflection
@@ -177,7 +164,7 @@ def initialize(reflection = nil)
end
end
- class HasManyThroughOrderError < ActiveRecordError #:nodoc:
+ class HasManyThroughOrderError < ActiveRecordError # :nodoc:
def initialize(owner_class_name = nil, reflection = nil, through_reflection = nil)
if owner_class_name && reflection && through_reflection
super("Cannot have a has_many :through association '#{owner_class_name}##{reflection.name}' which goes through '#{owner_class_name}##{through_reflection.name}' before the through association is defined.")
@@ -187,7 +174,7 @@ def initialize(owner_class_name = nil, reflection = nil, through_reflection = ni
end
end
- class ThroughCantAssociateThroughHasOneOrManyReflection < ActiveRecordError #:nodoc:
+ class ThroughCantAssociateThroughHasOneOrManyReflection < ActiveRecordError # :nodoc:
def initialize(owner = nil, reflection = nil)
if owner && reflection
super("Cannot modify association '#{owner.class.name}##{reflection.name}' because the source reflection class '#{reflection.source_reflection.class_name}' is associated to '#{reflection.through_reflection.class_name}' via :#{reflection.source_reflection.macro}.")
@@ -197,6 +184,22 @@ def initialize(owner = nil, reflection = nil)
end
end
+ class CompositePrimaryKeyMismatchError < ActiveRecordError # :nodoc:
+ attr_reader :reflection
+
+ def initialize(reflection = nil)
+ if reflection
+ if reflection.has_one? || reflection.collection?
+ super("Association #{reflection.active_record}##{reflection.name} primary key #{reflection.active_record_primary_key} doesn't match with foreign key #{reflection.foreign_key}. Please specify query_constraints, or primary_key and foreign_key values.")
+ else
+ super("Association #{reflection.active_record}##{reflection.name} primary key #{reflection.association_primary_key} doesn't match with foreign key #{reflection.foreign_key}. Please specify query_constraints, or primary_key and foreign_key values.")
+ end
+ else
+ super("Association primary key doesn't match with foreign key.")
+ end
+ end
+ end
+
class AmbiguousSourceReflectionForThroughAssociation < ActiveRecordError # :nodoc:
def initialize(klass, macro, association_name, options, possible_sources)
example_options = options.dup
@@ -212,13 +215,13 @@ def initialize(klass, macro, association_name, options, possible_sources)
end
end
- class HasManyThroughCantAssociateThroughHasOneOrManyReflection < ThroughCantAssociateThroughHasOneOrManyReflection #:nodoc:
+ class HasManyThroughCantAssociateThroughHasOneOrManyReflection < ThroughCantAssociateThroughHasOneOrManyReflection # :nodoc:
end
- class HasOneThroughCantAssociateThroughHasOneOrManyReflection < ThroughCantAssociateThroughHasOneOrManyReflection #:nodoc:
+ class HasOneThroughCantAssociateThroughHasOneOrManyReflection < ThroughCantAssociateThroughHasOneOrManyReflection # :nodoc:
end
- class ThroughNestedAssociationsAreReadonly < ActiveRecordError #:nodoc:
+ class ThroughNestedAssociationsAreReadonly < ActiveRecordError # :nodoc:
def initialize(owner = nil, reflection = nil)
if owner && reflection
super("Cannot modify association '#{owner.class.name}##{reflection.name}' because it goes through more than one other association.")
@@ -228,10 +231,10 @@ def initialize(owner = nil, reflection = nil)
end
end
- class HasManyThroughNestedAssociationsAreReadonly < ThroughNestedAssociationsAreReadonly #:nodoc:
+ class HasManyThroughNestedAssociationsAreReadonly < ThroughNestedAssociationsAreReadonly # :nodoc:
end
- class HasOneThroughNestedAssociationsAreReadonly < ThroughNestedAssociationsAreReadonly #:nodoc:
+ class HasOneThroughNestedAssociationsAreReadonly < ThroughNestedAssociationsAreReadonly # :nodoc:
end
# This error is raised when trying to eager load a polymorphic association using a JOIN.
@@ -250,7 +253,7 @@ def initialize(reflection = nil)
# This error is raised when trying to destroy a parent instance in N:1 or 1:1 associations
# (has_many, has_one) when there is at least 1 child associated instance.
# ex: if @project.tasks.size > 0, DeleteRestrictionError will be raised when trying to destroy @project
- class DeleteRestrictionError < ActiveRecordError #:nodoc:
+ class DeleteRestrictionError < ActiveRecordError # :nodoc:
def initialize(name = nil)
if name
super("Cannot delete record because of dependent #{name}")
@@ -274,7 +277,7 @@ module Associations # :nodoc:
autoload :CollectionProxy
autoload :ThroughAssociation
- module Builder #:nodoc:
+ module Builder # :nodoc:
autoload :Association, "active_record/associations/builder/association"
autoload :SingularAssociation, "active_record/associations/builder/singular_association"
autoload :CollectionAssociation, "active_record/associations/builder/collection_association"
@@ -296,16 +299,18 @@ module Builder #:nodoc:
autoload :Preloader
autoload :JoinDependency
autoload :AssociationScope
+ autoload :DisableJoinsAssociationScope
autoload :AliasTracker
end
def self.eager_load!
super
Preloader.eager_load!
+ JoinDependency.eager_load!
end
# Returns the association instance for the given name, instantiating it if it doesn't already exist
- def association(name) #:nodoc:
+ def association(name) # :nodoc:
association = association_instance_get(name)
if association.nil?
@@ -328,20 +333,10 @@ def initialize_dup(*) # :nodoc:
super
end
- def reload(*) # :nodoc:
- clear_association_cache
- super
- end
-
private
- # Clears out the association cache.
- def clear_association_cache
- @association_cache.clear if persisted?
- end
-
def init_internals
- @association_cache = {}
super
+ @association_cache = {}
end
# Returns the specified association instance if it exists, +nil+ otherwise.
@@ -354,6 +349,8 @@ def association_instance_set(name, association)
@association_cache[name] = association
end
+ # = Active Record \Associations
+ #
# \Associations are a set of macro-like class methods for tying objects together through
# foreign keys. They express relationships like "Project has one Project Manager"
# or "Project belongs to a Portfolio". Each macro adds a number of methods to the
@@ -370,23 +367,42 @@ def association_instance_set(name, association)
#
# The project class now has the following methods (and more) to ease the traversal and
# manipulation of its relationships:
- # * <tt>Project#portfolio</tt>, <tt>Project#portfolio=(portfolio)</tt>, <tt>Project#reload_portfolio</tt>
- # * <tt>Project#project_manager</tt>, <tt>Project#project_manager=(project_manager)</tt>, <tt>Project#reload_project_manager</tt>
- # * <tt>Project#milestones.empty?</tt>, <tt>Project#milestones.size</tt>, <tt>Project#milestones</tt>, <tt>Project#milestones<<(milestone)</tt>,
- # <tt>Project#milestones.delete(milestone)</tt>, <tt>Project#milestones.destroy(milestone)</tt>, <tt>Project#milestones.find(milestone_id)</tt>,
- # <tt>Project#milestones.build</tt>, <tt>Project#milestones.create</tt>
- # * <tt>Project#categories.empty?</tt>, <tt>Project#categories.size</tt>, <tt>Project#categories</tt>, <tt>Project#categories<<(category1)</tt>,
- # <tt>Project#categories.delete(category1)</tt>, <tt>Project#categories.destroy(category1)</tt>
+ #
+ # project = Project.first
+ # project.portfolio
+ # project.portfolio = Portfolio.first
+ # project.reload_portfolio
+ #
+ # project.project_manager
+ # project.project_manager = ProjectManager.first
+ # project.reload_project_manager
+ #
+ # project.milestones.empty?
+ # project.milestones.size
+ # project.milestones
+ # project.milestones << Milestone.first
+ # project.milestones.delete(Milestone.first)
+ # project.milestones.destroy(Milestone.first)
+ # project.milestones.find(Milestone.first.id)
+ # project.milestones.build
+ # project.milestones.create
+ #
+ # project.categories.empty?
+ # project.categories.size
+ # project.categories
+ # project.categories << Category.first
+ # project.categories.delete(Category.first)
+ # project.categories.destroy(Category.first)
#
# === A word of warning
#
# Don't create associations that have the same name as {instance methods}[rdoc-ref:ActiveRecord::Core] of
- # <tt>ActiveRecord::Base</tt>. Since the association adds a method with that name to
- # its model, using an association with the same name as one provided by <tt>ActiveRecord::Base</tt> will override the method inherited through <tt>ActiveRecord::Base</tt> and will break things.
- # For instance, +attributes+ and +connection+ would be bad choices for association names, because those names already exist in the list of <tt>ActiveRecord::Base</tt> instance methods.
+ # +ActiveRecord::Base+. Since the association adds a method with that name to
+ # its model, using an association with the same name as one provided by +ActiveRecord::Base+ will override the method inherited through +ActiveRecord::Base+ and will break things.
+ # For instance, +attributes+ and +connection+ would be bad choices for association names, because those names already exist in the list of +ActiveRecord::Base+ instance methods.
#
# == Auto-generated methods
- # See also Instance Public methods below for more details.
+ # See also "Instance Public methods" below (from #belongs_to) for more details.
#
# === Singular associations (one-to-one)
# | | belongs_to |
@@ -398,6 +414,8 @@ def association_instance_set(name, association)
# create_other(attributes={}) | X | | X
# create_other!(attributes={}) | X | | X
# reload_other | X | X | X
+ # other_changed? | X | X |
+ # other_previously_changed? | X | X |
#
# === Collection associations (one-to-many / many-to-many)
# | | | has_many
@@ -451,7 +469,7 @@ def association_instance_set(name, association)
#
# == Cardinality and associations
#
- # Active Record associations can be used to describe one-to-one, one-to-many and many-to-many
+ # Active Record associations can be used to describe one-to-one, one-to-many, and many-to-many
# relationships between models. Each model uses an association to describe its role in
# the relation. The #belongs_to association is always used in the model that has
# the foreign key.
@@ -605,8 +623,11 @@ def association_instance_set(name, association)
# has_many :birthday_events, ->(user) { where(starts_on: user.birthday) }, class_name: 'Event'
# end
#
- # Note: Joining, eager loading and preloading of these associations is not possible.
- # These operations happen before instance creation and the scope will be called with a +nil+ argument.
+ # Note: Joining or eager loading such associations is not possible because
+ # those operations happen before instance creation. Such associations
+ # _can_ be preloaded, but doing so will perform N+1 queries because there
+ # will be a different scope for each record (similar to preloading
+ # polymorphic scopes).
#
# == Association callbacks
#
@@ -614,22 +635,31 @@ def association_instance_set(name, association)
# you can also define callbacks that get triggered when you add an object to or remove an
# object from an association collection.
#
- # class Project
- # has_and_belongs_to_many :developers, after_add: :evaluate_velocity
+ # class Firm < ActiveRecord::Base
+ # has_many :clients,
+ # dependent: :destroy,
+ # after_add: :congratulate_client,
+ # after_remove: :log_after_remove
+ #
+ # def congratulate_client(record)
+ # # ...
+ # end
#
- # def evaluate_velocity(developer)
- # ...
+ # def log_after_remove(record)
+ # # ...
# end
# end
#
# It's possible to stack callbacks by passing them as an array. Example:
#
- # class Project
- # has_and_belongs_to_many :developers,
- # after_add: [:evaluate_velocity, Proc.new { |p, d| p.shipping_date = Time.now}]
+ # class Firm < ActiveRecord::Base
+ # has_many :clients,
+ # dependent: :destroy,
+ # after_add: [:congratulate_client, -> (firm, record) { firm.log << "after_adding#{record.id}" }],
+ # after_remove: :log_after_remove
# end
#
- # Possible callbacks are: +before_add+, +after_add+, +before_remove+ and +after_remove+.
+ # Possible callbacks are: +before_add+, +after_add+, +before_remove+, and +after_remove+.
#
# If any of the +before_add+ callbacks throw an exception, the object will not be
# added to the collection.
@@ -637,6 +667,18 @@ def association_instance_set(name, association)
# Similarly, if any of the +before_remove+ callbacks throw an exception, the object
# will not be removed from the collection.
#
+ # Note: To trigger remove callbacks, you must use +destroy+ / +destroy_all+ methods. For example:
+ #
+ # * <tt>firm.clients.destroy(client)</tt>
+ # * <tt>firm.clients.destroy(*clients)</tt>
+ # * <tt>firm.clients.destroy_all</tt>
+ #
+ # +delete+ / +delete_all+ methods like the following do *not* trigger remove callbacks:
+ #
+ # * <tt>firm.clients.delete(client)</tt>
+ # * <tt>firm.clients.delete(*clients)</tt>
+ # * <tt>firm.clients.delete_all</tt>
+ #
# == Association extensions
#
# The proxy objects that control the access to associations can be extended through anonymous
@@ -780,9 +822,10 @@ def association_instance_set(name, association)
# inverse detection only works on #has_many, #has_one, and
# #belongs_to associations.
#
- # <tt>:foreign_key</tt> and <tt>:through</tt> options on the associations,
- # or a custom scope, will also prevent the association's inverse
- # from being found automatically.
+ # <tt>:foreign_key</tt> and <tt>:through</tt> options on the associations
+ # will also prevent the association's inverse from being found automatically,
+ # as will custom scopes in some cases. See further details in the
+ # {Active Record Associations guide}[https://guides.rubyonrails.org/association_basics.html#bi-directional-associations].
#
# The automatic guessing of the inverse association uses a heuristic based
# on the name of the class, so it may not work for all associations,
@@ -1007,7 +1050,7 @@ def association_instance_set(name, association)
# query per addressable type.
# For example, if all the addressables are either of class Person or Company, then a total
# of 3 queries will be executed. The list of addressable types to load is determined on
- # the back of the addresses loaded. This is not supported if Active Record has to fallback
+ # the back of the addresses loaded. This is not supported if Active Record has to fall back
# to the previous implementation of eager loading and will raise ActiveRecord::EagerLoadPolymorphicError.
# The reason is that the parent model's type is a column value so its corresponding table
# name cannot be put in the +FROM+/+JOIN+ clauses of that query.
@@ -1020,45 +1063,45 @@ def association_instance_set(name, association)
# Indexes are appended for any more successive uses of the table name.
#
# Post.joins(:comments)
- # # => SELECT ... FROM posts INNER JOIN comments ON ...
+ # # SELECT ... FROM posts INNER JOIN comments ON ...
# Post.joins(:special_comments) # STI
- # # => SELECT ... FROM posts INNER JOIN comments ON ... AND comments.type = 'SpecialComment'
+ # # SELECT ... FROM posts INNER JOIN comments ON ... AND comments.type = 'SpecialComment'
# Post.joins(:comments, :special_comments) # special_comments is the reflection name, posts is the parent table name
- # # => SELECT ... FROM posts INNER JOIN comments ON ... INNER JOIN comments special_comments_posts
+ # # SELECT ... FROM posts INNER JOIN comments ON ... INNER JOIN comments special_comments_posts
#
# Acts as tree example:
#
# TreeMixin.joins(:children)
- # # => SELECT ... FROM mixins INNER JOIN mixins childrens_mixins ...
+ # # SELECT ... FROM mixins INNER JOIN mixins childrens_mixins ...
# TreeMixin.joins(children: :parent)
- # # => SELECT ... FROM mixins INNER JOIN mixins childrens_mixins ...
- # INNER JOIN parents_mixins ...
+ # # SELECT ... FROM mixins INNER JOIN mixins childrens_mixins ...
+ # # INNER JOIN parents_mixins ...
# TreeMixin.joins(children: {parent: :children})
- # # => SELECT ... FROM mixins INNER JOIN mixins childrens_mixins ...
- # INNER JOIN parents_mixins ...
- # INNER JOIN mixins childrens_mixins_2
+ # # SELECT ... FROM mixins INNER JOIN mixins childrens_mixins ...
+ # # INNER JOIN parents_mixins ...
+ # # INNER JOIN mixins childrens_mixins_2
#
# Has and Belongs to Many join tables use the same idea, but add a <tt>_join</tt> suffix:
#
# Post.joins(:categories)
- # # => SELECT ... FROM posts INNER JOIN categories_posts ... INNER JOIN categories ...
+ # # SELECT ... FROM posts INNER JOIN categories_posts ... INNER JOIN categories ...
# Post.joins(categories: :posts)
- # # => SELECT ... FROM posts INNER JOIN categories_posts ... INNER JOIN categories ...
- # INNER JOIN categories_posts posts_categories_join INNER JOIN posts posts_categories
+ # # SELECT ... FROM posts INNER JOIN categories_posts ... INNER JOIN categories ...
+ # # INNER JOIN categories_posts posts_categories_join INNER JOIN posts posts_categories
# Post.joins(categories: {posts: :categories})
- # # => SELECT ... FROM posts INNER JOIN categories_posts ... INNER JOIN categories ...
- # INNER JOIN categories_posts posts_categories_join INNER JOIN posts posts_categories
- # INNER JOIN categories_posts categories_posts_join INNER JOIN categories categories_posts_2
+ # # SELECT ... FROM posts INNER JOIN categories_posts ... INNER JOIN categories ...
+ # # INNER JOIN categories_posts posts_categories_join INNER JOIN posts posts_categories
+ # # INNER JOIN categories_posts categories_posts_join INNER JOIN categories categories_posts_2
#
# If you wish to specify your own custom joins using ActiveRecord::QueryMethods#joins method, those table
# names will take precedence over the eager associations:
#
# Post.joins(:comments).joins("inner join comments ...")
- # # => SELECT ... FROM posts INNER JOIN comments_posts ON ... INNER JOIN comments ...
+ # # SELECT ... FROM posts INNER JOIN comments_posts ON ... INNER JOIN comments ...
# Post.joins(:comments, :special_comments).joins("inner join comments ...")
- # # => SELECT ... FROM posts INNER JOIN comments comments_posts ON ...
- # INNER JOIN comments special_comments_posts ...
- # INNER JOIN comments ...
+ # # SELECT ... FROM posts INNER JOIN comments comments_posts ON ...
+ # # INNER JOIN comments special_comments_posts ...
+ # # INNER JOIN comments ...
#
# Table aliases are automatically truncated according to the maximum length of table identifiers
      #      imposed by the specific database.
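The alias-numbering scheme described above (first use keeps the base alias, successive uses get an appended index) can be sketched as follows. This helper is an illustrative simplification, not Active Record's internal implementation, and it omits the identifier-length truncation mentioned above:

```ruby
# Hypothetical sketch of successive table-alias generation:
# "childrens_mixins", then "childrens_mixins_2", and so on.
def table_aliases(base_alias, uses)
  (1..uses).map { |i| i == 1 ? base_alias : "#{base_alias}_#{i}" }
end

table_aliases("childrens_mixins", 3)
# => ["childrens_mixins", "childrens_mixins_2", "childrens_mixins_3"]
```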
@@ -1139,7 +1182,8 @@ def association_instance_set(name, association)
# belongs_to :dungeon, inverse_of: :evil_wizard
# end
#
- # For more information, see the documentation for the +:inverse_of+ option.
+ # For more information, see the documentation for the +:inverse_of+ option and the
+ # {Active Record Associations guide}[https://guides.rubyonrails.org/association_basics.html#bi-directional-associations].
#
# == Deleting from associations
#
@@ -1161,7 +1205,7 @@ def association_instance_set(name, association)
# specific association types. When no option is given, the behavior is to do nothing
# with the associated records when destroying a record.
#
- # Note that <tt>:dependent</tt> is implemented using Rails' callback
+ # Note that <tt>:dependent</tt> is implemented using \Rails' callback
# system, which works by processing callbacks in order. Therefore, other
# callbacks declared either before or after the <tt>:dependent</tt> option
# can affect what it does.
@@ -1232,15 +1276,15 @@ module ClassMethods
# +collection+ is a placeholder for the symbol passed as the +name+ argument, so
# <tt>has_many :clients</tt> would add among others <tt>clients.empty?</tt>.
#
- # [collection]
+ # [<tt>collection</tt>]
# Returns a Relation of all the associated objects.
# An empty Relation is returned if none are found.
- # [collection<<(object, ...)]
+ # [<tt>collection<<(object, ...)</tt>]
# Adds one or more objects to the collection by setting their foreign keys to the collection's primary key.
# Note that this operation instantly fires update SQL without waiting for the save or update call on the
# parent object, unless the parent object is a new record.
# This will also run validations and callbacks of associated object(s).
- # [collection.delete(object, ...)]
+ # [<tt>collection.delete(object, ...)</tt>]
# Removes one or more objects from the collection by setting their foreign keys to +NULL+.
# Objects will be in addition destroyed if they're associated with <tt>dependent: :destroy</tt>,
# and deleted if they're associated with <tt>dependent: :delete_all</tt>.
@@ -1248,75 +1292,84 @@ module ClassMethods
# If the <tt>:through</tt> option is used, then the join records are deleted (rather than
# nullified) by default, but you can specify <tt>dependent: :destroy</tt> or
# <tt>dependent: :nullify</tt> to override this.
- # [collection.destroy(object, ...)]
+ # [<tt>collection.destroy(object, ...)</tt>]
# Removes one or more objects from the collection by running <tt>destroy</tt> on
# each record, regardless of any dependent option, ensuring callbacks are run.
#
# If the <tt>:through</tt> option is used, then the join records are destroyed
# instead, not the objects themselves.
- # [collection=objects]
+ # [<tt>collection=objects</tt>]
      #   Replaces the collection's content by deleting and adding objects as appropriate. If the <tt>:through</tt>
# option is true callbacks in the join models are triggered except destroy callbacks, since deletion is
# direct by default. You can specify <tt>dependent: :destroy</tt> or
# <tt>dependent: :nullify</tt> to override this.
- # [collection_singular_ids]
+ # [<tt>collection_singular_ids</tt>]
      #   Returns an array of the associated objects' ids.
- # [collection_singular_ids=ids]
+ # [<tt>collection_singular_ids=ids</tt>]
      #   Replaces the collection with the objects identified by the primary keys in +ids+. This
# method loads the models and calls <tt>collection=</tt>. See above.
- # [collection.clear]
+ # [<tt>collection.clear</tt>]
# Removes every object from the collection. This destroys the associated objects if they
# are associated with <tt>dependent: :destroy</tt>, deletes them directly from the
# database if <tt>dependent: :delete_all</tt>, otherwise sets their foreign keys to +NULL+.
# If the <tt>:through</tt> option is true no destroy callbacks are invoked on the join models.
# Join models are directly deleted.
- # [collection.empty?]
+ # [<tt>collection.empty?</tt>]
# Returns +true+ if there are no associated objects.
- # [collection.size]
+ # [<tt>collection.size</tt>]
# Returns the number of associated objects.
- # [collection.find(...)]
+ # [<tt>collection.find(...)</tt>]
# Finds an associated object according to the same rules as ActiveRecord::FinderMethods#find.
- # [collection.exists?(...)]
+ # [<tt>collection.exists?(...)</tt>]
# Checks whether an associated object with the given conditions exists.
# Uses the same rules as ActiveRecord::FinderMethods#exists?.
- # [collection.build(attributes = {}, ...)]
+ # [<tt>collection.build(attributes = {}, ...)</tt>]
# Returns one or more new objects of the collection type that have been instantiated
# with +attributes+ and linked to this object through a foreign key, but have not yet
# been saved.
- # [collection.create(attributes = {})]
+ # [<tt>collection.create(attributes = {})</tt>]
# Returns a new object of the collection type that has been instantiated
# with +attributes+, linked to this object through a foreign key, and that has already
# been saved (if it passed the validation). *Note*: This only works if the base model
# already exists in the DB, not if it is a new (unsaved) record!
- # [collection.create!(attributes = {})]
+ # [<tt>collection.create!(attributes = {})</tt>]
# Does the same as <tt>collection.create</tt>, but raises ActiveRecord::RecordInvalid
# if the record is invalid.
- # [collection.reload]
+ # [<tt>collection.reload</tt>]
# Returns a Relation of all of the associated objects, forcing a database read.
# An empty Relation is returned if none are found.
#
- # === Example
- #
- # A <tt>Firm</tt> class declares <tt>has_many :clients</tt>, which will add:
- # * <tt>Firm#clients</tt> (similar to <tt>Client.where(firm_id: id)</tt>)
- # * <tt>Firm#clients<<</tt>
- # * <tt>Firm#clients.delete</tt>
- # * <tt>Firm#clients.destroy</tt>
- # * <tt>Firm#clients=</tt>
- # * <tt>Firm#client_ids</tt>
- # * <tt>Firm#client_ids=</tt>
- # * <tt>Firm#clients.clear</tt>
- # * <tt>Firm#clients.empty?</tt> (similar to <tt>firm.clients.size == 0</tt>)
- # * <tt>Firm#clients.size</tt> (similar to <tt>Client.count "firm_id = #{id}"</tt>)
- # * <tt>Firm#clients.find</tt> (similar to <tt>Client.where(firm_id: id).find(id)</tt>)
- # * <tt>Firm#clients.exists?(name: 'ACME')</tt> (similar to <tt>Client.exists?(name: 'ACME', firm_id: firm.id)</tt>)
- # * <tt>Firm#clients.build</tt> (similar to <tt>Client.new(firm_id: id)</tt>)
- # * <tt>Firm#clients.create</tt> (similar to <tt>c = Client.new(firm_id: id); c.save; c</tt>)
- # * <tt>Firm#clients.create!</tt> (similar to <tt>c = Client.new(firm_id: id); c.save!</tt>)
- # * <tt>Firm#clients.reload</tt>
+ # ==== Example
+ #
+ # class Firm < ActiveRecord::Base
+ # has_many :clients
+ # end
+ #
+ # Declaring <tt>has_many :clients</tt> adds the following methods (and more):
+ #
+ # firm = Firm.find(2)
+ # client = Client.find(6)
+ #
+ # firm.clients # similar to Client.where(firm_id: 2)
+ # firm.clients << client
+ # firm.clients.delete(client)
+ # firm.clients.destroy(client)
+ # firm.clients = [client]
+ # firm.client_ids
+ # firm.client_ids = [6]
+ # firm.clients.clear
+ # firm.clients.empty? # similar to firm.clients.size == 0
+ # firm.clients.size # similar to Client.count "firm_id = 2"
+ # firm.clients.find # similar to Client.where(firm_id: 2).find(6)
+ # firm.clients.exists?(name: 'ACME') # similar to Client.exists?(name: 'ACME', firm_id: 2)
+ # firm.clients.build # similar to Client.new(firm_id: 2)
+ # firm.clients.create # similar to Client.create(firm_id: 2)
+ # firm.clients.create! # similar to Client.create!(firm_id: 2)
+ # firm.clients.reload
+ #
# The declaration can also include an +options+ hash to specialize the behavior of the association.
#
- # === Scopes
+ # ==== Scopes
#
# You can pass a second argument +scope+ as a callable (i.e. proc or
# lambda) to retrieve a specific set of records or customize the generated
@@ -1327,10 +1380,10 @@ module ClassMethods
# has_many :employees, -> { joins(:address) }
# has_many :posts, ->(blog) { where("max_post_length > ?", blog.max_post_length) }
#
- # === Extensions
+ # ==== Extensions
#
# The +extension+ argument allows you to pass a block into a has_many
- # association. This is useful for adding new finders, creators and other
+ # association. This is useful for adding new finders, creators, and other
# factory-type methods to be used as part of the association.
#
# Extension examples:
@@ -1341,31 +1394,31 @@ module ClassMethods
# end
# end
#
- # === Options
- # [:class_name]
+ # ==== Options
+ # [+:class_name+]
# Specify the class name of the association. Use it only if that name can't be inferred
# from the association name. So <tt>has_many :products</tt> will by default be linked
# to the +Product+ class, but if the real class name is +SpecialProduct+, you'll have to
# specify it with this option.
- # [:foreign_key]
+ # [+:foreign_key+]
# Specify the foreign key used for the association. By default this is guessed to be the name
# of this class in lower-case and "_id" suffixed. So a Person class that makes a #has_many
# association will use "person_id" as the default <tt>:foreign_key</tt>.
#
- # If you are going to modify the association (rather than just read from it), then it is
- # a good idea to set the <tt>:inverse_of</tt> option.
- # [:foreign_type]
+ # Setting the <tt>:foreign_key</tt> option prevents automatic detection of the association's
+ # inverse, so it is generally a good idea to set the <tt>:inverse_of</tt> option as well.
+ # [+:foreign_type+]
# Specify the column used to store the associated object's type, if this is a polymorphic
# association. By default this is guessed to be the name of the polymorphic association
# specified on "as" option with a "_type" suffix. So a class that defines a
# <tt>has_many :tags, as: :taggable</tt> association will use "taggable_type" as the
# default <tt>:foreign_type</tt>.
- # [:primary_key]
+ # [+:primary_key+]
# Specify the name of the column to use as the primary key for the association. By default this is +id+.
- # [:dependent]
+ # [+:dependent+]
# Controls what happens to the associated objects when
# their owner is destroyed. Note that these are implemented as
- # callbacks, and Rails executes callbacks in order. Therefore, other
+ # callbacks, and \Rails executes callbacks in order. Therefore, other
# similar callbacks may affect the <tt>:dependent</tt> behavior, and the
# <tt>:dependent</tt> behavior may affect other callbacks.
#
@@ -1377,7 +1430,7 @@ module ClassMethods
# * <tt>:delete_all</tt> causes all the associated objects to be deleted directly from the database (so callbacks will not be executed).
# * <tt>:nullify</tt> causes the foreign keys to be set to +NULL+. Polymorphic type will also be nullified
# on polymorphic associations. Callbacks are not executed.
- # * <tt>:restrict_with_exception</tt> causes an <tt>ActiveRecord::DeleteRestrictionError</tt> exception to be raised if there are any associated records.
+ # * <tt>:restrict_with_exception</tt> causes an ActiveRecord::DeleteRestrictionError exception to be raised if there are any associated records.
# * <tt>:restrict_with_error</tt> causes an error to be added to the owner if there are any associated objects.
#
# If using with the <tt>:through</tt> option, the association on the join model must be
@@ -1389,12 +1442,12 @@ module ClassMethods
# <tt>has_many :comments, -> { where published: true }, dependent: :destroy</tt> and <tt>destroy</tt> is
# called on a post, only published comments are destroyed. This means that any unpublished comments in the
# database would still contain a foreign key pointing to the now deleted post.
- # [:counter_cache]
+ # [+:counter_cache+]
      #   This option can be used to configure a custom named <tt>:counter_cache</tt>. You only need this option
      #   when you have customized the name of your <tt>:counter_cache</tt> on the #belongs_to association.
- # [:as]
+ # [+:as+]
# Specifies a polymorphic interface (See #belongs_to).
- # [:through]
+ # [+:through+]
# Specifies an association through which to perform the query. This can be any other type
# of association, including other <tt>:through</tt> associations. Options for <tt>:class_name</tt>,
# <tt>:primary_key</tt> and <tt>:foreign_key</tt> are ignored, as the association uses the
@@ -1409,19 +1462,24 @@ module ClassMethods
# a good idea to set the <tt>:inverse_of</tt> option on the source association on the
# join model. This allows associated records to be built which will automatically create
# the appropriate join model records when they are saved. (See the 'Association Join Models'
- # section above.)
- # [:source]
+ # and 'Setting Inverses' sections above.)
+ # [+:disable_joins+]
+ # Specifies whether joins should be skipped for an association. If set to true, two or more queries
+ # will be generated. Note that in some cases, if order or limit is applied, it will be done in-memory
+ # due to database limitations. This option is only applicable on <tt>has_many :through</tt> associations as
+ # +has_many+ alone does not perform a join.
+ # [+:source+]
# Specifies the source association name used by #has_many <tt>:through</tt> queries.
# Only use it if the name cannot be inferred from the association.
# <tt>has_many :subscribers, through: :subscriptions</tt> will look for either <tt>:subscribers</tt> or
# <tt>:subscriber</tt> on Subscription, unless a <tt>:source</tt> is given.
- # [:source_type]
+ # [+:source_type+]
# Specifies type of the source association used by #has_many <tt>:through</tt> queries where the source
# association is a polymorphic #belongs_to.
- # [:validate]
+ # [+:validate+]
# When set to +true+, validates new objects added to association when saving the parent object. +true+ by default.
# If you want to ensure associated objects are revalidated on every update, use +validates_associated+.
- # [:autosave]
+ # [+:autosave+]
# If true, always save the associated objects or destroy them if marked for destruction,
# when saving the parent object. If false, never save or destroy the associated objects.
# By default, only save associated objects that are new records. This option is implemented as a
@@ -1430,19 +1488,24 @@ module ClassMethods
#
# Note that NestedAttributes::ClassMethods#accepts_nested_attributes_for sets
# <tt>:autosave</tt> to <tt>true</tt>.
- # [:inverse_of]
+ # [+:inverse_of+]
# Specifies the name of the #belongs_to association on the associated object
# that is the inverse of this #has_many association.
# See ActiveRecord::Associations::ClassMethods's overview on Bi-directional associations for more detail.
- # [:extend]
+ # [+:extend+]
# Specifies a module or array of modules that will be extended into the association object returned.
# Useful for defining methods on associations, especially when they should be shared between multiple
# association objects.
- # [:strict_loading]
- # Enforces strict loading every time the associated record is loaded through this association.
- # [:ensuring_owner_was]
+ # [+:strict_loading+]
+ # When set to +true+, enforces strict loading every time the associated record is loaded through this
+ # association.
+ # [+:ensuring_owner_was+]
# Specifies an instance method to be called on the owner. The method must return true in order for the
# associated records to be deleted in a background job.
+ # [+:query_constraints+]
+ # Serves as a composite foreign key. Defines the list of columns to be used to query the associated object.
+ # This option is optional. By default \Rails will attempt to derive the value automatically.
+ # When the value is set, the Array size must match the associated model's primary key or +query_constraints+ size.
#
# Option examples:
# has_many :comments, -> { order("posted_on") }
@@ -1453,7 +1516,9 @@ module ClassMethods
# has_many :tags, as: :taggable
# has_many :reports, -> { readonly }
# has_many :subscribers, through: :subscriptions, source: :user
+ # has_many :subscribers, through: :subscriptions, disable_joins: true
# has_many :comments, strict_loading: true
+ # has_many :comments, query_constraints: [:blog_id, :post_id]
def has_many(name, scope = nil, **options, &extension)
reflection = Builder::HasMany.build(self, name, scope, options, &extension)
Reflection.add_reflection self, name, reflection
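The default <tt>:foreign_key</tt> and <tt>:foreign_type</tt> derivations documented above can be sketched in plain Ruby. These helpers are illustrative simplifications (they ignore the underscoring of multi-word class names, namespaces, and custom inflections that real Active Record handles):

```ruby
# Hypothetical sketch: owning class name, lower-cased, with "_id" appended.
def default_foreign_key(owner_class_name)
  "#{owner_class_name.downcase}_id"
end

# Hypothetical sketch: the :as name with a "_type" suffix.
def default_foreign_type(as_name)
  "#{as_name}_type"
end

default_foreign_key("Person")    # => "person_id"
default_foreign_type("taggable") # => "taggable_type"
```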
@@ -1469,37 +1534,48 @@ def has_many(name, scope = nil, **options, &extension)
# +association+ is a placeholder for the symbol passed as the +name+ argument, so
# <tt>has_one :manager</tt> would add among others <tt>manager.nil?</tt>.
#
- # [association]
+ # [<tt>association</tt>]
# Returns the associated object. +nil+ is returned if none is found.
- # [association=(associate)]
+ # [<tt>association=(associate)</tt>]
# Assigns the associate object, extracts the primary key, sets it as the foreign key,
# and saves the associate object. To avoid database inconsistencies, permanently deletes an existing
# associated object when assigning a new one, even if the new one isn't saved to database.
- # [build_association(attributes = {})]
+ # [<tt>build_association(attributes = {})</tt>]
# Returns a new object of the associated type that has been instantiated
# with +attributes+ and linked to this object through a foreign key, but has not
# yet been saved.
- # [create_association(attributes = {})]
+ # [<tt>create_association(attributes = {})</tt>]
# Returns a new object of the associated type that has been instantiated
# with +attributes+, linked to this object through a foreign key, and that
# has already been saved (if it passed the validation).
- # [create_association!(attributes = {})]
+ # [<tt>create_association!(attributes = {})</tt>]
# Does the same as <tt>create_association</tt>, but raises ActiveRecord::RecordInvalid
# if the record is invalid.
- # [reload_association]
+ # [<tt>reload_association</tt>]
# Returns the associated object, forcing a database read.
+ # [<tt>reset_association</tt>]
+ # Unloads the associated object. The next access will query it from the database.
+ #
+ # ==== Example
+ #
+ # class Account < ActiveRecord::Base
+ # has_one :beneficiary
+ # end
+ #
+ # Declaring <tt>has_one :beneficiary</tt> adds the following methods (and more):
#
- # === Example
+ # account = Account.find(5)
+ # beneficiary = Beneficiary.find(8)
#
- # An Account class declares <tt>has_one :beneficiary</tt>, which will add:
- # * <tt>Account#beneficiary</tt> (similar to <tt>Beneficiary.where(account_id: id).first</tt>)
- # * <tt>Account#beneficiary=(beneficiary)</tt> (similar to <tt>beneficiary.account_id = account.id; beneficiary.save</tt>)
- # * <tt>Account#build_beneficiary</tt> (similar to <tt>Beneficiary.new(account_id: id)</tt>)
- # * <tt>Account#create_beneficiary</tt> (similar to <tt>b = Beneficiary.new(account_id: id); b.save; b</tt>)
- # * <tt>Account#create_beneficiary!</tt> (similar to <tt>b = Beneficiary.new(account_id: id); b.save!; b</tt>)
- # * <tt>Account#reload_beneficiary</tt>
+ # account.beneficiary # similar to Beneficiary.find_by(account_id: 5)
+ # account.beneficiary = beneficiary # similar to beneficiary.update(account_id: 5)
+ # account.build_beneficiary # similar to Beneficiary.new(account_id: 5)
+ # account.create_beneficiary # similar to Beneficiary.create(account_id: 5)
+ # account.create_beneficiary! # similar to Beneficiary.create!(account_id: 5)
+ # account.reload_beneficiary
+ # account.reset_beneficiary
#
- # === Scopes
+ # ==== Scopes
#
# You can pass a second argument +scope+ as a callable (i.e. proc or
# lambda) to retrieve a specific record or customize the generated query
@@ -1510,16 +1586,16 @@ def has_many(name, scope = nil, **options, &extension)
# has_one :employer, -> { joins(:company) }
# has_one :latest_post, ->(blog) { where("created_at > ?", blog.enabled_at) }
#
- # === Options
+ # ==== Options
#
# The declaration can also include an +options+ hash to specialize the behavior of the association.
#
# Options are:
- # [:class_name]
+ # [+:class_name+]
# Specify the class name of the association. Use it only if that name can't be inferred
# from the association name. So <tt>has_one :manager</tt> will by default be linked to the Manager class, but
# if the real class name is Person, you'll have to specify it with this option.
- # [:dependent]
+ # [+:dependent+]
# Controls what happens to the associated object when
# its owner is destroyed:
#
@@ -1531,66 +1607,89 @@ def has_many(name, scope = nil, **options, &extension)
# * <tt>:delete</tt> causes the associated object to be deleted directly from the database (so callbacks will not execute)
# * <tt>:nullify</tt> causes the foreign key to be set to +NULL+. Polymorphic type column is also nullified
# on polymorphic associations. Callbacks are not executed.
- # * <tt>:restrict_with_exception</tt> causes an <tt>ActiveRecord::DeleteRestrictionError</tt> exception to be raised if there is an associated record
+ # * <tt>:restrict_with_exception</tt> causes an ActiveRecord::DeleteRestrictionError exception to be raised if there is an associated record
# * <tt>:restrict_with_error</tt> causes an error to be added to the owner if there is an associated object
#
# Note that <tt>:dependent</tt> option is ignored when using <tt>:through</tt> option.
- # [:foreign_key]
+ # [+:foreign_key+]
# Specify the foreign key used for the association. By default this is guessed to be the name
# of this class in lower-case and "_id" suffixed. So a Person class that makes a #has_one association
# will use "person_id" as the default <tt>:foreign_key</tt>.
#
- # If you are going to modify the association (rather than just read from it), then it is
- # a good idea to set the <tt>:inverse_of</tt> option.
- # [:foreign_type]
+ # Setting the <tt>:foreign_key</tt> option prevents automatic detection of the association's
+ # inverse, so it is generally a good idea to set the <tt>:inverse_of</tt> option as well.
+ # [+:foreign_type+]
# Specify the column used to store the associated object's type, if this is a polymorphic
# association. By default this is guessed to be the name of the polymorphic association
# specified on "as" option with a "_type" suffix. So a class that defines a
# <tt>has_one :tag, as: :taggable</tt> association will use "taggable_type" as the
# default <tt>:foreign_type</tt>.
- # [:primary_key]
+ # [+:primary_key+]
# Specify the method that returns the primary key used for the association. By default this is +id+.
- # [:as]
+ # [+:as+]
# Specifies a polymorphic interface (See #belongs_to).
- # [:through]
+ # [+:through+]
# Specifies a Join Model through which to perform the query. Options for <tt>:class_name</tt>,
# <tt>:primary_key</tt>, and <tt>:foreign_key</tt> are ignored, as the association uses the
# source reflection. You can only use a <tt>:through</tt> query through a #has_one
# or #belongs_to association on the join model.
#
+ # If the association on the join model is a #belongs_to, the collection can be modified
+ # and the records on the <tt>:through</tt> model will be automatically created and removed
+ # as appropriate. Otherwise, the collection is read-only, so you should manipulate the
+ # <tt>:through</tt> association directly.
+ #
# If you are going to modify the association (rather than just read from it), then it is
- # a good idea to set the <tt>:inverse_of</tt> option.
- # [:source]
+ # a good idea to set the <tt>:inverse_of</tt> option on the source association on the
+ # join model. This allows associated records to be built which will automatically create
+ # the appropriate join model records when they are saved. (See the 'Association Join Models'
+ # and 'Setting Inverses' sections above.)
+ # [+:disable_joins+]
+ # Specifies whether joins should be skipped for an association. If set to true, two or more queries
+ # will be generated. Note that in some cases, if order or limit is applied, it will be done in-memory
+ # due to database limitations. This option is only applicable on <tt>has_one :through</tt> associations as
+ # +has_one+ alone does not perform a join.
+ # [+:source+]
# Specifies the source association name used by #has_one <tt>:through</tt> queries.
# Only use it if the name cannot be inferred from the association.
# <tt>has_one :favorite, through: :favorites</tt> will look for a
# <tt>:favorite</tt> on Favorite, unless a <tt>:source</tt> is given.
- # [:source_type]
+ # [+:source_type+]
# Specifies type of the source association used by #has_one <tt>:through</tt> queries where the source
# association is a polymorphic #belongs_to.
- # [:validate]
+ # [+:validate+]
# When set to +true+, validates new objects added to association when saving the parent object. +false+ by default.
# If you want to ensure associated objects are revalidated on every update, use +validates_associated+.
- # [:autosave]
+ # [+:autosave+]
# If true, always save the associated object or destroy it if marked for destruction,
# when saving the parent object. If false, never save or destroy the associated object.
# By default, only save the associated object if it's a new record.
#
# Note that NestedAttributes::ClassMethods#accepts_nested_attributes_for sets
# <tt>:autosave</tt> to <tt>true</tt>.
- # [:inverse_of]
+ # [+:touch+]
+ # If true, the associated object will be touched (the +updated_at+ / +updated_on+ attributes set to current time)
+ # when this record is either saved or destroyed. If you specify a symbol, that attribute
+ # will be updated with the current time in addition to the +updated_at+ / +updated_on+ attribute.
+ # Please note that no validation will be performed when touching, and only the +after_touch+,
+ # +after_commit+, and +after_rollback+ callbacks will be executed.
+ # [+:inverse_of+]
# Specifies the name of the #belongs_to association on the associated object
# that is the inverse of this #has_one association.
# See ActiveRecord::Associations::ClassMethods's overview on Bi-directional associations for more detail.
- # [:required]
+ # [+:required+]
# When set to +true+, the association will also have its presence validated.
# This will validate the association itself, not the id. You can use
# +:inverse_of+ to avoid an extra query during validation.
- # [:strict_loading]
+ # [+:strict_loading+]
# Enforces strict loading every time the associated record is loaded through this association.
- # [:ensuring_owner_was]
+ # [+:ensuring_owner_was+]
# Specifies an instance method to be called on the owner. The method must return true in order for the
# associated records to be deleted in a background job.
+ # [+:query_constraints+]
+ # Serves as a composite foreign key. Defines the list of columns to be used to query the associated object.
+ # This option is optional. By default \Rails will attempt to derive the value automatically.
+ # When the value is set, the Array size must match the associated model's primary key or +query_constraints+ size.
#
# Option examples:
# has_one :credit_card, dependent: :destroy # destroys the associated credit card
@@ -1601,9 +1700,11 @@ def has_many(name, scope = nil, **options, &extension)
# has_one :attachment, as: :attachable
# has_one :boss, -> { readonly }
# has_one :club, through: :membership
+ # has_one :club, through: :membership, disable_joins: true
# has_one :primary_address, -> { where(primary: true) }, through: :addressables, source: :addressable
# has_one :credit_card, required: true
# has_one :credit_card, strict_loading: true
+ # has_one :employment_record_book, query_constraints: [:organization_id, :employee_id]
def has_one(name, scope = nil, **options)
reflection = Builder::HasOne.build(self, name, scope, options)
Reflection.add_reflection self, name, reflection
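The <tt>query_constraints: [:organization_id, :employee_id]</tt> example above matches the associated record on every listed column, not a single foreign key. A database-free sketch of that composite lookup, with illustrative hash-based records:

```ruby
# Hypothetical stand-in for rows of the associated table.
RECORDS = [
  { organization_id: 1, employee_id: 7, note: "old" },
  { organization_id: 2, employee_id: 7, note: "current" }
].freeze

# Sketch: a record matches only when ALL constraint columns agree
# with the owner's values.
def find_by_query_constraints(records, owner, columns)
  records.find { |row| columns.all? { |col| row[col] == owner[col] } }
end

owner = { organization_id: 2, employee_id: 7 }
find_by_query_constraints(RECORDS, owner, [:organization_id, :employee_id])
# => { organization_id: 2, employee_id: 7, note: "current" }
```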
@@ -1620,36 +1721,52 @@ def has_one(name, scope = nil, **options)
# +association+ is a placeholder for the symbol passed as the +name+ argument, so
# <tt>belongs_to :author</tt> would add among others <tt>author.nil?</tt>.
#
- # [association]
+ # [<tt>association</tt>]
# Returns the associated object. +nil+ is returned if none is found.
- # [association=(associate)]
+ # [<tt>association=(associate)</tt>]
# Assigns the associate object, extracts the primary key, and sets it as the foreign key.
# No modification or deletion of existing records takes place.
- # [build_association(attributes = {})]
+ # [<tt>build_association(attributes = {})</tt>]
# Returns a new object of the associated type that has been instantiated
# with +attributes+ and linked to this object through a foreign key, but has not yet been saved.
- # [create_association(attributes = {})]
+ # [<tt>create_association(attributes = {})</tt>]
# Returns a new object of the associated type that has been instantiated
# with +attributes+, linked to this object through a foreign key, and that
# has already been saved (if it passed the validation).
- # [create_association!(attributes = {})]
+ # [<tt>create_association!(attributes = {})</tt>]
# Does the same as <tt>create_association</tt>, but raises ActiveRecord::RecordInvalid
# if the record is invalid.
- # [reload_association]
+ # [<tt>reload_association</tt>]
# Returns the associated object, forcing a database read.
+ # [<tt>reset_association</tt>]
+ # Unloads the associated object. The next access will query it from the database.
+ # [<tt>association_changed?</tt>]
+ # Returns true if a new associate object has been assigned and the next save will update the foreign key.
+ # [<tt>association_previously_changed?</tt>]
+ # Returns true if the previous save updated the association to reference a new associate object.
#
- # === Example
+ # ==== Example
#
- # A Post class declares <tt>belongs_to :author</tt>, which will add:
- # * <tt>Post#author</tt> (similar to <tt>Author.find(author_id)</tt>)
- # * <tt>Post#author=(author)</tt> (similar to <tt>post.author_id = author.id</tt>)
- # * <tt>Post#build_author</tt> (similar to <tt>post.author = Author.new</tt>)
- # * <tt>Post#create_author</tt> (similar to <tt>post.author = Author.new; post.author.save; post.author</tt>)
- # * <tt>Post#create_author!</tt> (similar to <tt>post.author = Author.new; post.author.save!; post.author</tt>)
- # * <tt>Post#reload_author</tt>
- # The declaration can also include an +options+ hash to specialize the behavior of the association.
+ # class Post < ActiveRecord::Base
+ # belongs_to :author
+ # end
+ #
+ # Declaring <tt>belongs_to :author</tt> adds the following methods (and more):
#
- # === Scopes
+ # post = Post.find(7)
+ # author = Author.find(19)
+ #
+ # post.author # similar to Author.find(post.author_id)
+ # post.author = author # similar to post.author_id = author.id
+ # post.build_author # similar to post.author = Author.new
+ # post.create_author # similar to post.author = Author.new; post.author.save; post.author
+ # post.create_author! # similar to post.author = Author.new; post.author.save!; post.author
+ # post.reload_author
+ # post.reset_author
+ # post.author_changed?
+ # post.author_previously_changed?
+ #
+ # ==== Scopes
#
# You can pass a second argument +scope+ as a callable (i.e. proc or
# lambda) to retrieve a specific record or customize the generated query
@@ -1660,37 +1777,39 @@ def has_one(name, scope = nil, **options)
# belongs_to :user, -> { joins(:friends) }
# belongs_to :level, ->(game) { where("game_level > ?", game.current_level) }
#
- # === Options
+ # ==== Options
#
- # [:class_name]
+ # The declaration can also include an +options+ hash to specialize the behavior of the association.
+ #
+ # [+:class_name+]
# Specify the class name of the association. Use it only if that name can't be inferred
# from the association name. So <tt>belongs_to :author</tt> will by default be linked to the Author class, but
# if the real class name is Person, you'll have to specify it with this option.
- # [:foreign_key]
+ # [+:foreign_key+]
# Specify the foreign key used for the association. By default this is guessed to be the name
# of the association with an "_id" suffix. So a class that defines a <tt>belongs_to :person</tt>
# association will use "person_id" as the default <tt>:foreign_key</tt>. Similarly,
# <tt>belongs_to :favorite_person, class_name: "Person"</tt> will use a foreign key
# of "favorite_person_id".
#
- # If you are going to modify the association (rather than just read from it), then it is
- # a good idea to set the <tt>:inverse_of</tt> option.
- # [:foreign_type]
+ # Setting the <tt>:foreign_key</tt> option prevents automatic detection of the association's
+ # inverse, so it is generally a good idea to set the <tt>:inverse_of</tt> option as well.
+ # [+:foreign_type+]
# Specify the column used to store the associated object's type, if this is a polymorphic
# association. By default this is guessed to be the name of the association with a "_type"
# suffix. So a class that defines a <tt>belongs_to :taggable, polymorphic: true</tt>
# association will use "taggable_type" as the default <tt>:foreign_type</tt>.
- # [:primary_key]
+ # [+:primary_key+]
# Specify the method that returns the primary key of associated object used for the association.
# By default this is +id+.
- # [:dependent]
+ # [+:dependent+]
# If set to <tt>:destroy</tt>, the associated object is destroyed when this object is. If set to
# <tt>:delete</tt>, the associated object is deleted *without* calling its destroy method. If set to
# <tt>:destroy_async</tt>, the associated object is scheduled to be destroyed in a background job.
# This option should not be specified when #belongs_to is used in conjunction with
# a #has_many relationship on another class because of the potential to leave
# orphaned records behind.
- # [:counter_cache]
+ # [+:counter_cache+]
# Caches the number of belonging objects on the associate class through the use of CounterCache::ClassMethods#increment_counter
# and CounterCache::ClassMethods#decrement_counter. The counter cache is incremented when an object of this
# class is created and decremented when it's destroyed. This requires that a column
@@ -1702,14 +1821,14 @@ def has_one(name, scope = nil, **options)
# option (e.g., <tt>counter_cache: :my_custom_counter</tt>.)
# Note: Specifying a counter cache will add it to that model's list of readonly attributes
# using +attr_readonly+.
- # [:polymorphic]
+ # [+:polymorphic+]
# Specify this association is a polymorphic association by passing +true+.
# Note: If you've enabled the counter cache, then you may want to add the counter cache attribute
# to the +attr_readonly+ list in the associated classes (e.g. <tt>class Post; attr_readonly :comments_count; end</tt>).
- # [:validate]
+ # [+:validate+]
# When set to +true+, validates new objects added to association when saving the parent object. +false+ by default.
# If you want to ensure associated objects are revalidated on every update, use +validates_associated+.
- # [:autosave]
+ # [+:autosave+]
# If true, always save the associated object or destroy it if marked for destruction, when
# saving the parent object.
# If false, never save or destroy the associated object.
@@ -1717,32 +1836,37 @@ def has_one(name, scope = nil, **options)
#
# Note that NestedAttributes::ClassMethods#accepts_nested_attributes_for
# sets <tt>:autosave</tt> to <tt>true</tt>.
- # [:touch]
- # If true, the associated object will be touched (the updated_at/on attributes set to current time)
+ # [+:touch+]
+ # If true, the associated object will be touched (the +updated_at+ / +updated_on+ attributes set to current time)
# when this record is either saved or destroyed. If you specify a symbol, that attribute
- # will be updated with the current time in addition to the updated_at/on attribute.
- # Please note that with touching no validation is performed and only the +after_touch+,
- # +after_commit+ and +after_rollback+ callbacks are executed.
- # [:inverse_of]
+ # will be updated with the current time in addition to the +updated_at+ / +updated_on+ attribute.
+ # Please note that no validation will be performed when touching, and only the +after_touch+,
+ # +after_commit+, and +after_rollback+ callbacks will be executed.
+ # [+:inverse_of+]
# Specifies the name of the #has_one or #has_many association on the associated
# object that is the inverse of this #belongs_to association.
# See ActiveRecord::Associations::ClassMethods's overview on Bi-directional associations for more detail.
- # [:optional]
+ # [+:optional+]
# When set to +true+, the association will not have its presence validated.
- # [:required]
+ # [+:required+]
# When set to +true+, the association will also have its presence validated.
# This will validate the association itself, not the id. You can use
# +:inverse_of+ to avoid an extra query during validation.
# NOTE: <tt>required</tt> is set to <tt>true</tt> by default and is deprecated. If
# you don't want to have association presence validated, use <tt>optional: true</tt>.
- # [:default]
+ # [+:default+]
# Provide a callable (i.e. proc or lambda) to specify that the association should
# be initialized with a particular record before validation.
- # [:strict_loading]
+      #   Please note that the callable won't be executed if the record exists.
+ # [+:strict_loading+]
# Enforces strict loading every time the associated record is loaded through this association.
- # [:ensuring_owner_was]
+ # [+:ensuring_owner_was+]
# Specifies an instance method to be called on the owner. The method must return true in order for the
# associated records to be deleted in a background job.
+ # [+:query_constraints+]
+ # Serves as a composite foreign key. Defines the list of columns to be used to query the associated object.
+      #     This option is optional. By default Rails will attempt to derive the value automatically.
+      #     When the value is set, the Array size must match the associated model's primary key or +query_constraints+ size.
#
# Option examples:
# belongs_to :firm, foreign_key: "client_of"
@@ -1758,6 +1882,7 @@ def has_one(name, scope = nil, **options)
# belongs_to :user, optional: true
# belongs_to :account, default: -> { company.account }
# belongs_to :account, strict_loading: true
+      #   belongs_to :note, query_constraints: [:organization_id, :note_id]
def belongs_to(name, scope = nil, **options)
reflection = Builder::BelongsTo.build(self, name, scope, options)
Reflection.add_reflection self, name, reflection
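The +:query_constraints+ documentation above states that, when the option is set, the Array size must match the associated model's primary key (or +query_constraints+) size. That rule can be sketched in plain Ruby; the method name and error message here are illustrative, not Rails API:

```ruby
# Hypothetical check mirroring the documented :query_constraints rule:
# when the option is set, its size must match the size of the
# associated model's primary key (or query_constraints).
def validate_query_constraints!(query_constraints, primary_key)
  expected = Array(primary_key).size
  unless query_constraints.size == expected
    raise ArgumentError,
      "expected #{expected} column(s) for query_constraints, got #{query_constraints.size}"
  end
  query_constraints
end

validate_query_constraints!([:organization_id, :note_id], [:organization_id, :id])
# passes; a size mismatch would raise ArgumentError
```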
@@ -1780,7 +1905,7 @@ def belongs_to(name, scope = nil, **options)
# The join table should not have a primary key or a model associated with it. You must manually generate the
# join table with a migration such as this:
#
- # class CreateDevelopersProjectsJoinTable < ActiveRecord::Migration[6.0]
+ # class CreateDevelopersProjectsJoinTable < ActiveRecord::Migration[7.1]
# def change
# create_join_table :developers, :projects
# end
@@ -1795,71 +1920,80 @@ def belongs_to(name, scope = nil, **options)
# +collection+ is a placeholder for the symbol passed as the +name+ argument, so
# <tt>has_and_belongs_to_many :categories</tt> would add among others <tt>categories.empty?</tt>.
#
- # [collection]
+ # [<tt>collection</tt>]
# Returns a Relation of all the associated objects.
# An empty Relation is returned if none are found.
- # [collection<<(object, ...)]
+ # [<tt>collection<<(object, ...)</tt>]
# Adds one or more objects to the collection by creating associations in the join table
# (<tt>collection.push</tt> and <tt>collection.concat</tt> are aliases to this method).
# Note that this operation instantly fires update SQL without waiting for the save or update call on the
# parent object, unless the parent object is a new record.
- # [collection.delete(object, ...)]
+ # [<tt>collection.delete(object, ...)</tt>]
# Removes one or more objects from the collection by removing their associations from the join table.
# This does not destroy the objects.
- # [collection.destroy(object, ...)]
+ # [<tt>collection.destroy(object, ...)</tt>]
# Removes one or more objects from the collection by running destroy on each association in the join table, overriding any dependent option.
# This does not destroy the objects.
- # [collection=objects]
+ # [<tt>collection=objects</tt>]
# Replaces the collection's content by deleting and adding objects as appropriate.
- # [collection_singular_ids]
+ # [<tt>collection_singular_ids</tt>]
# Returns an array of the associated objects' ids.
- # [collection_singular_ids=ids]
+ # [<tt>collection_singular_ids=ids</tt>]
# Replace the collection by the objects identified by the primary keys in +ids+.
- # [collection.clear]
+ # [<tt>collection.clear</tt>]
# Removes every object from the collection. This does not destroy the objects.
- # [collection.empty?]
+ # [<tt>collection.empty?</tt>]
# Returns +true+ if there are no associated objects.
- # [collection.size]
+ # [<tt>collection.size</tt>]
# Returns the number of associated objects.
- # [collection.find(id)]
+ # [<tt>collection.find(id)</tt>]
# Finds an associated object responding to the +id+ and that
# meets the condition that it has to be associated with this object.
# Uses the same rules as ActiveRecord::FinderMethods#find.
- # [collection.exists?(...)]
+ # [<tt>collection.exists?(...)</tt>]
# Checks whether an associated object with the given conditions exists.
# Uses the same rules as ActiveRecord::FinderMethods#exists?.
- # [collection.build(attributes = {})]
+ # [<tt>collection.build(attributes = {})</tt>]
# Returns a new object of the collection type that has been instantiated
# with +attributes+ and linked to this object through the join table, but has not yet been saved.
- # [collection.create(attributes = {})]
+ # [<tt>collection.create(attributes = {})</tt>]
# Returns a new object of the collection type that has been instantiated
# with +attributes+, linked to this object through the join table, and that has already been
# saved (if it passed the validation).
- # [collection.reload]
+ # [<tt>collection.reload</tt>]
# Returns a Relation of all of the associated objects, forcing a database read.
# An empty Relation is returned if none are found.
#
- # === Example
- #
- # A Developer class declares <tt>has_and_belongs_to_many :projects</tt>, which will add:
- # * <tt>Developer#projects</tt>
- # * <tt>Developer#projects<<</tt>
- # * <tt>Developer#projects.delete</tt>
- # * <tt>Developer#projects.destroy</tt>
- # * <tt>Developer#projects=</tt>
- # * <tt>Developer#project_ids</tt>
- # * <tt>Developer#project_ids=</tt>
- # * <tt>Developer#projects.clear</tt>
- # * <tt>Developer#projects.empty?</tt>
- # * <tt>Developer#projects.size</tt>
- # * <tt>Developer#projects.find(id)</tt>
- # * <tt>Developer#projects.exists?(...)</tt>
- # * <tt>Developer#projects.build</tt> (similar to <tt>Project.new(developer_id: id)</tt>)
- # * <tt>Developer#projects.create</tt> (similar to <tt>c = Project.new(developer_id: id); c.save; c</tt>)
- # * <tt>Developer#projects.reload</tt>
+ # ==== Example
+ #
+ # class Developer < ActiveRecord::Base
+ # has_and_belongs_to_many :projects
+ # end
+ #
+ # Declaring <tt>has_and_belongs_to_many :projects</tt> adds the following methods (and more):
+ #
+ # developer = Developer.find(11)
+ # project = Project.find(9)
+ #
+ # developer.projects
+ # developer.projects << project
+ # developer.projects.delete(project)
+ # developer.projects.destroy(project)
+ # developer.projects = [project]
+ # developer.project_ids
+ # developer.project_ids = [9]
+ # developer.projects.clear
+ # developer.projects.empty?
+ # developer.projects.size
+ # developer.projects.find(9)
+ # developer.projects.exists?(9)
+ # developer.projects.build # similar to Project.new(developer_id: 11)
+ # developer.projects.create # similar to Project.create(developer_id: 11)
+ # developer.projects.reload
+ #
# The declaration may include an +options+ hash to specialize the behavior of the association.
#
- # === Scopes
+ # ==== Scopes
#
# You can pass a second argument +scope+ as a callable (i.e. proc or
# lambda) to retrieve a specific set of records or customize the generated
@@ -1871,11 +2005,11 @@ def belongs_to(name, scope = nil, **options)
# where("default_category = ?", post.default_category)
# }
#
- # === Extensions
+ # ==== Extensions
#
# The +extension+ argument allows you to pass a block into a
# has_and_belongs_to_many association. This is useful for adding new
- # finders, creators and other factory-type methods to be used as part of
+ # finders, creators, and other factory-type methods to be used as part of
# the association.
#
# Extension examples:
@@ -1886,33 +2020,33 @@ def belongs_to(name, scope = nil, **options)
# end
# end
#
- # === Options
+ # ==== Options
#
- # [:class_name]
+ # [+:class_name+]
# Specify the class name of the association. Use it only if that name can't be inferred
# from the association name. So <tt>has_and_belongs_to_many :projects</tt> will by default be linked to the
# Project class, but if the real class name is SuperProject, you'll have to specify it with this option.
- # [:join_table]
+ # [+:join_table+]
# Specify the name of the join table if the default based on lexical order isn't what you want.
# <b>WARNING:</b> If you're overwriting the table name of either class, the +table_name+ method
# MUST be declared underneath any #has_and_belongs_to_many declaration in order to work.
- # [:foreign_key]
+ # [+:foreign_key+]
# Specify the foreign key used for the association. By default this is guessed to be the name
# of this class in lower-case and "_id" suffixed. So a Person class that makes
# a #has_and_belongs_to_many association to Project will use "person_id" as the
# default <tt>:foreign_key</tt>.
#
- # If you are going to modify the association (rather than just read from it), then it is
- # a good idea to set the <tt>:inverse_of</tt> option.
- # [:association_foreign_key]
+ # Setting the <tt>:foreign_key</tt> option prevents automatic detection of the association's
+ # inverse, so it is generally a good idea to set the <tt>:inverse_of</tt> option as well.
+ # [+:association_foreign_key+]
# Specify the foreign key used for the association on the receiving side of the association.
# By default this is guessed to be the name of the associated class in lower-case and "_id" suffixed.
# So if a Person class makes a #has_and_belongs_to_many association to Project,
# the association will use "project_id" as the default <tt>:association_foreign_key</tt>.
- # [:validate]
+ # [+:validate+]
# When set to +true+, validates new objects added to association when saving the parent object. +true+ by default.
# If you want to ensure associated objects are revalidated on every update, use +validates_associated+.
- # [:autosave]
+ # [+:autosave+]
# If true, always save the associated objects or destroy them if marked for destruction, when
# saving the parent object.
# If false, never save or destroy the associated objects.
@@ -1920,7 +2054,7 @@ def belongs_to(name, scope = nil, **options)
#
# Note that NestedAttributes::ClassMethods#accepts_nested_attributes_for sets
# <tt>:autosave</tt> to <tt>true</tt>.
- # [:strict_loading]
+ # [+:strict_loading+]
# Enforces strict loading every time an associated record is loaded through this association.
#
# Option examples:
diff --git a/activerecord/lib/active_record/associations/association.rb b/activerecord/lib/active_record/associations/association.rb
index 0b731e5d29..398278883c 100644
--- a/activerecord/lib/active_record/associations/association.rb
+++ b/activerecord/lib/active_record/associations/association.rb
@@ -19,7 +19,7 @@ module Associations
# Associations in Active Record are middlemen between the object that
# holds the association, known as the <tt>owner</tt>, and the associated
# result set, known as the <tt>target</tt>. Association metadata is available in
- # <tt>reflection</tt>, which is an instance of <tt>ActiveRecord::Reflection::AssociationReflection</tt>.
+ # <tt>reflection</tt>, which is an instance of +ActiveRecord::Reflection::AssociationReflection+.
#
# For example, given
#
@@ -32,8 +32,9 @@ module Associations
# The association of <tt>blog.posts</tt> has the object +blog+ as its
# <tt>owner</tt>, the collection of its posts as <tt>target</tt>, and
# the <tt>reflection</tt> object represents a <tt>:has_many</tt> macro.
- class Association #:nodoc:
- attr_reader :owner, :target, :reflection
+ class Association # :nodoc:
+ attr_accessor :owner
+ attr_reader :target, :reflection, :disable_joins
delegate :options, to: :reflection
@@ -41,9 +42,12 @@ def initialize(owner, reflection)
reflection.check_validity!
@owner, @reflection = owner, reflection
+ @disable_joins = @reflection.options[:disable_joins] || false
reset
reset_scope
+
+ @skip_strict_loading = nil
end
# Resets the \loaded flag to +false+ and sets the \target to +nil+.
@@ -51,7 +55,6 @@ def reset
@loaded = false
@target = nil
@stale_state = nil
- @inversed = false
end
def reset_negative_cache # :nodoc:
@@ -77,7 +80,6 @@ def loaded?
def loaded!
@loaded = true
@stale_state = stale_state
- @inversed = false
end
# The target is stale if the target no longer points to the record(s) that the
@@ -87,7 +89,7 @@ def loaded!
#
# Note that if the target has not been loaded, it is not considered stale.
def stale_target?
- !@inversed && loaded? && @stale_state != stale_state
+ loaded? && @stale_state != stale_state
end
# Sets the target of this association to <tt>\target</tt>, and the \loaded flag to +true+.
@@ -97,8 +99,12 @@ def target=(target)
end
def scope
- if (scope = klass.current_scope) && scope.try(:proxy_association) == self
+ if disable_joins
+ DisableJoinsAssociationScope.create.scope(self)
+ elsif (scope = klass.current_scope) && scope.try(:proxy_association) == self
scope.spawn
+ elsif scope = klass.global_current_scope
+ target_scope.merge!(association_scope).merge!(scope)
else
target_scope.merge!(association_scope)
end
@@ -132,15 +138,11 @@ def remove_inverse_instance(record)
def inversed_from(record)
self.target = record
- @inversed = !!record
end
def inversed_from_queries(record)
if inversable?(record)
self.target = record
- @inversed = true
- else
- @inversed = false
end
end
@@ -191,7 +193,7 @@ def marshal_load(data)
@reflection = @owner.class._reflect_on_association(reflection_name)
end
- def initialize_attributes(record, except_from_scope_attributes = nil) #:nodoc:
+ def initialize_attributes(record, except_from_scope_attributes = nil) # :nodoc:
except_from_scope_attributes ||= {}
skip_assign = [reflection.foreign_key, reflection.type].compact
assigned_keys = record.changed_attribute_names_to_save
@@ -210,8 +212,14 @@ def create!(attributes = nil, &block)
end
private
+ # Reader and writer methods call this so that consistent errors are presented
+ # when the association target class does not exist.
+ def ensure_klass_exists!
+ klass
+ end
+
def find_target
- if strict_loading? && owner.validation_context.nil?
+ if violates_strict_loading?
Base.strict_loading_violation!(owner: owner.class, reflection: reflection)
end
@@ -224,13 +232,32 @@ def find_target
end
binds = AssociationScope.get_bind_values(owner, reflection.chain)
- sc.execute(binds, klass.connection) { |record| set_inverse_instance(record) }
+ sc.execute(binds, klass.connection) do |record|
+ set_inverse_instance(record)
+ if owner.strict_loading_n_plus_one_only? && reflection.macro == :has_many
+ record.strict_loading!
+ else
+ record.strict_loading!(false, mode: owner.strict_loading_mode)
+ end
+ end
end
- def strict_loading?
+ def skip_strict_loading(&block)
+ skip_strict_loading_was = @skip_strict_loading
+ @skip_strict_loading = true
+ yield
+ ensure
+ @skip_strict_loading = skip_strict_loading_was
+ end
+
+ def violates_strict_loading?
+ return if @skip_strict_loading
+
+ return unless owner.validation_context.nil?
+
return reflection.strict_loading? if reflection.options.key?(:strict_loading)
- owner.strict_loading?
+ owner.strict_loading? && !owner.strict_loading_n_plus_one_only?
end
# The scope for this association.
@@ -241,7 +268,11 @@ def strict_loading?
# actually gets built.
def association_scope
if klass
- @association_scope ||= AssociationScope.scope(self)
+ @association_scope ||= if disable_joins
+ DisableJoinsAssociationScope.scope(self)
+ else
+ AssociationScope.scope(self)
+ end
end
end
@@ -273,7 +304,7 @@ def foreign_key_present?
# Raises ActiveRecord::AssociationTypeMismatch unless +record+ is of
# the kind of the class of the associated objects. Meant to be used as
- # a sanity check when you are about to assign an associated record.
+ # a safety check when you are about to assign an associated record.
def raise_on_type_mismatch!(record)
unless record.is_a?(reflection.klass)
fresh_class = reflection.class_name.safe_constantize
@@ -306,7 +337,8 @@ def invertible_for?(record)
# Returns true if record contains the foreign_key
def foreign_key_for?(record)
- record._has_attribute?(reflection.foreign_key)
+ foreign_key = Array(reflection.foreign_key)
+ foreign_key.all? { |key| record._has_attribute?(key) }
end
# This should be implemented to return the values of the relevant key(s) on the owner,
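The +foreign_key_for?+ change above generalizes the single-key check to composite keys by wrapping the key in +Array+ and requiring every column to be present. A standalone sketch of that pattern, with a plain hash standing in for a record's attributes:

```ruby
# Mirrors the composite-key check: Array() lets a single symbol and an
# array of symbols share one code path, and all? requires every column.
def foreign_key_for?(attributes, foreign_key)
  Array(foreign_key).all? { |key| attributes.key?(key) }
end

attrs = { author_id: 1, organization_id: 7 }
foreign_key_for?(attrs, :author_id)                      # => true
foreign_key_for?(attrs, [:organization_id, :author_id])  # => true
foreign_key_for?(attrs, [:organization_id, :missing])    # => false
```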
diff --git a/activerecord/lib/active_record/associations/association_scope.rb b/activerecord/lib/active_record/associations/association_scope.rb
index c5e394d7af..e16d2a74e5 100644
--- a/activerecord/lib/active_record/associations/association_scope.rb
+++ b/activerecord/lib/active_record/associations/association_scope.rb
@@ -2,7 +2,7 @@
module ActiveRecord
module Associations
- class AssociationScope #:nodoc:
+ class AssociationScope # :nodoc:
def self.scope(association)
INSTANCE.scope(association)
end
@@ -35,7 +35,7 @@ def self.get_bind_values(owner, chain)
binds = []
last_reflection = chain.last
- binds << last_reflection.join_id_for(owner)
+ binds.push(*last_reflection.join_id_for(owner))
if last_reflection.type
binds << owner.class.polymorphic_name
end
@@ -56,12 +56,15 @@ def join(table, constraint)
end
def last_chain_scope(scope, reflection, owner)
- primary_key = reflection.join_primary_key
- foreign_key = reflection.join_foreign_key
+ primary_key = Array(reflection.join_primary_key)
+ foreign_key = Array(reflection.join_foreign_key)
table = reflection.aliased_table
- value = transform_value(owner[foreign_key])
- scope = apply_scope(scope, table, primary_key, value)
+ primary_key_foreign_key_pairs = primary_key.zip(foreign_key)
+ primary_key_foreign_key_pairs.each do |join_key, foreign_key|
+ value = transform_value(owner._read_attribute(foreign_key))
+ scope = apply_scope(scope, table, join_key, value)
+ end
if reflection.type
polymorphic_type = transform_value(owner.class.polymorphic_name)
@@ -76,19 +79,23 @@ def transform_value(value)
end
def next_chain_scope(scope, reflection, next_reflection)
- primary_key = reflection.join_primary_key
- foreign_key = reflection.join_foreign_key
+ primary_key = Array(reflection.join_primary_key)
+ foreign_key = Array(reflection.join_foreign_key)
table = reflection.aliased_table
foreign_table = next_reflection.aliased_table
- constraint = table[primary_key].eq(foreign_table[foreign_key])
+
+ primary_key_foreign_key_pairs = primary_key.zip(foreign_key)
+ constraints = primary_key_foreign_key_pairs.map do |join_primary_key, foreign_key|
+ table[join_primary_key].eq(foreign_table[foreign_key])
+ end.inject(&:and)
if reflection.type
value = transform_value(next_reflection.klass.polymorphic_name)
scope = apply_scope(scope, table, reflection.type, value)
end
- scope.joins!(join(foreign_table, constraint))
+ scope.joins!(join(foreign_table, constraints))
end
class ReflectionProxy < SimpleDelegator # :nodoc:
@@ -123,8 +130,6 @@ def add_constraints(scope, owner, chain)
chain_head = chain.first
chain.reverse_each do |reflection|
- # Exclude the scope of the association itself, because that
- # was already merged in the #scope method.
reflection.constraints.each do |scope_chain_item|
item = eval_scope(reflection, scope_chain_item, owner)
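The +last_chain_scope+ and +next_chain_scope+ changes above pair primary and foreign keys positionally with +zip+ and then AND the per-pair constraints together with <tt>inject(&:and)</tt>. The core list manipulation, separated from Arel (the SQL strings are illustrative only):

```ruby
# Pair each join key with its foreign key; Array() makes single keys
# and composite keys uniform, and zip matches them positionally.
def key_pairs(primary_key, foreign_key)
  Array(primary_key).zip(Array(foreign_key))
end

# Fold the per-pair conditions into one conjunction, as the Arel
# version does with inject(&:and).
def join_condition(table, foreign_table, primary_key, foreign_key)
  key_pairs(primary_key, foreign_key)
    .map { |pk, fk| "#{table}.#{pk} = #{foreign_table}.#{fk}" }
    .inject { |a, b| "#{a} AND #{b}" }
end

key_pairs(:id, :post_id)
# => [[:id, :post_id]]
join_condition("comments", "posts", [:org_id, :post_id], [:org_id, :id])
# => "comments.org_id = posts.org_id AND comments.post_id = posts.id"
```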
diff --git a/activerecord/lib/active_record/associations/belongs_to_association.rb b/activerecord/lib/active_record/associations/belongs_to_association.rb
index 2ed6a700b0..cf87d308cf 100644
--- a/activerecord/lib/active_record/associations/belongs_to_association.rb
+++ b/activerecord/lib/active_record/associations/belongs_to_association.rb
@@ -3,7 +3,7 @@
module ActiveRecord
module Associations
# = Active Record Belongs To Association
- class BelongsToAssociation < SingularAssociation #:nodoc:
+ class BelongsToAssociation < SingularAssociation # :nodoc:
def handle_dependency
return unless load_target
@@ -11,8 +11,13 @@ def handle_dependency
when :destroy
raise ActiveRecord::Rollback unless target.destroy
when :destroy_async
- id = owner.public_send(reflection.foreign_key.to_sym)
- primary_key_column = reflection.active_record_primary_key.to_sym
+ if reflection.foreign_key.is_a?(Array)
+ primary_key_column = reflection.active_record_primary_key.map(&:to_sym)
+ id = reflection.foreign_key.map { |col| owner.public_send(col.to_sym) }
+ else
+ primary_key_column = reflection.active_record_primary_key.to_sym
+ id = owner.public_send(reflection.foreign_key.to_sym)
+ end
enqueue_destroy_association(
owner_model_name: owner.class.to_s,
@@ -55,7 +60,8 @@ def increment_counters
def decrement_counters_before_last_save
if reflection.polymorphic?
- model_was = owner.attribute_before_last_save(reflection.foreign_type)&.constantize
+ model_type_was = owner.attribute_before_last_save(reflection.foreign_type)
+ model_was = owner.class.polymorphic_class_for(model_type_was) if model_type_was
else
model_was = klass
end
@@ -68,6 +74,14 @@ def decrement_counters_before_last_save
end
def target_changed?
+ owner.attribute_changed?(reflection.foreign_key) || (!foreign_key_present? && target&.new_record?)
+ end
+
+ def target_previously_changed?
+ owner.attribute_previously_changed?(reflection.foreign_key)
+ end
+
+ def saved_change_to_target?
owner.saved_change_to_attribute?(reflection.foreign_key)
end
@@ -77,6 +91,8 @@ def replace(record)
raise_on_type_mismatch!(record)
set_inverse_instance(record)
@updated = true
+ elsif target
+ remove_inverse_instance(target)
end
replace_keys(record, force: true)
@@ -108,10 +124,13 @@ def require_counter_update?
end
def replace_keys(record, force: false)
- target_key = record ? record._read_attribute(primary_key(record.class)) : nil
+ target_key_values = record ? Array(primary_key(record.class)).map { |key| record._read_attribute(key) } : []
+ reflection_fk = Array(reflection.foreign_key)
- if force || owner[reflection.foreign_key] != target_key
- owner[reflection.foreign_key] = target_key
+ if force || reflection_fk.map { |fk| owner._read_attribute(fk) } != target_key_values
+ reflection_fk.zip(target_key_values).each do |key, value|
+ owner[key] = value
+ end
end
end
@@ -120,12 +139,12 @@ def primary_key(klass)
end
def foreign_key_present?
- owner._read_attribute(reflection.foreign_key)
+ Array(reflection.foreign_key).all? { |fk| owner._read_attribute(fk) }
end
def invertible_for?(record)
inverse = inverse_reflection_for(record)
- inverse && (inverse.has_one? || ActiveRecord::Base.has_many_inversing)
+ inverse && (inverse.has_one? || inverse.klass.has_many_inversing)
end
def stale_state
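The reworked +replace_keys+ above reads the target's (possibly composite) primary key values and writes them pairwise onto the owner's foreign-key columns. The assignment loop in isolation, using a plain hash as a stand-in owner:

```ruby
# Mirrors replace_keys: zip the foreign key column(s) with the target's
# primary key value(s) and assign each pair onto the owner.
def replace_keys(owner, foreign_key, target_key_values)
  Array(foreign_key).zip(target_key_values).each do |key, value|
    owner[key] = value
  end
  owner
end

owner = {}
replace_keys(owner, [:organization_id, :note_id], [7, 42])
# owner is now { organization_id: 7, note_id: 42 }
```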
diff --git a/activerecord/lib/active_record/associations/belongs_to_polymorphic_association.rb b/activerecord/lib/active_record/associations/belongs_to_polymorphic_association.rb
index a0d5522fa0..298fa1011e 100644
--- a/activerecord/lib/active_record/associations/belongs_to_polymorphic_association.rb
+++ b/activerecord/lib/active_record/associations/belongs_to_polymorphic_association.rb
@@ -3,13 +3,21 @@
module ActiveRecord
module Associations
# = Active Record Belongs To Polymorphic Association
- class BelongsToPolymorphicAssociation < BelongsToAssociation #:nodoc:
+ class BelongsToPolymorphicAssociation < BelongsToAssociation # :nodoc:
def klass
type = owner[reflection.foreign_type]
type.presence && owner.class.polymorphic_class_for(type)
end
def target_changed?
+ super || owner.attribute_changed?(reflection.foreign_type)
+ end
+
+ def target_previously_changed?
+ super || owner.attribute_previously_changed?(reflection.foreign_type)
+ end
+
+ def saved_change_to_target?
super || owner.saved_change_to_attribute?(reflection.foreign_type)
end
@@ -19,7 +27,7 @@ def replace_keys(record, force: false)
target_type = record ? record.class.polymorphic_name : nil
- if force || owner[reflection.foreign_type] != target_type
+ if force || owner._read_attribute(reflection.foreign_type) != target_type
owner[reflection.foreign_type] = target_type
end
end
diff --git a/activerecord/lib/active_record/associations/builder/association.rb b/activerecord/lib/active_record/associations/builder/association.rb
index b7f45ed5ee..b785772d8b 100644
--- a/activerecord/lib/active_record/associations/builder/association.rb
+++ b/activerecord/lib/active_record/associations/builder/association.rb
@@ -12,14 +12,14 @@
# - HasManyAssociation
module ActiveRecord::Associations::Builder # :nodoc:
- class Association #:nodoc:
+ class Association # :nodoc:
class << self
attr_accessor :extensions
end
self.extensions = []
VALID_OPTIONS = [
- :class_name, :anonymous_class, :primary_key, :foreign_key, :dependent, :validate, :inverse_of, :strict_loading
+ :class_name, :anonymous_class, :primary_key, :foreign_key, :dependent, :validate, :inverse_of, :strict_loading, :query_constraints
].freeze # :nodoc:
def self.build(model, name, scope, options, &block)
@@ -33,6 +33,7 @@ def self.build(model, name, scope, options, &block)
define_accessors model, reflection
define_callbacks model, reflection
define_validations model, reflection
+ define_change_tracking_methods model, reflection
reflection
end
@@ -117,14 +118,18 @@ def self.define_validations(model, reflection)
# noop
end
+ def self.define_change_tracking_methods(model, reflection)
+ # noop
+ end
+
def self.valid_dependent_options
raise NotImplementedError
end
def self.check_dependent_options(dependent, model)
if dependent == :destroy_async && !model.destroy_association_async_job
- err_message = "ActiveJob is required to use destroy_async on associations"
- raise ActiveRecord::ActiveJobRequiredError, err_message
+ err_message = "A valid destroy_association_async_job is required to use `dependent: :destroy_async` on associations"
+ raise ActiveRecord::ConfigurationError, err_message
end
unless valid_dependent_options.include? dependent
raise ArgumentError, "The :dependent option must be one of #{valid_dependent_options}, but is :#{dependent}"
@@ -158,6 +163,7 @@ def _after_commit_jobs
private_class_method :build_scope, :macro, :valid_options, :validate_options, :define_extensions,
:define_callbacks, :define_accessors, :define_readers, :define_writers, :define_validations,
- :valid_dependent_options, :check_dependent_options, :add_destroy_callbacks, :add_after_commit_jobs_callback
+ :define_change_tracking_methods, :valid_dependent_options, :check_dependent_options,
+ :add_destroy_callbacks, :add_after_commit_jobs_callback
end
end
diff --git a/activerecord/lib/active_record/associations/builder/belongs_to.rb b/activerecord/lib/active_record/associations/builder/belongs_to.rb
index 584af2c3f2..098bedd592 100644
--- a/activerecord/lib/active_record/associations/builder/belongs_to.rb
+++ b/activerecord/lib/active_record/associations/builder/belongs_to.rb
@@ -1,7 +1,7 @@
# frozen_string_literal: true
module ActiveRecord::Associations::Builder # :nodoc:
- class BelongsTo < SingularAssociation #:nodoc:
+ class BelongsTo < SingularAssociation # :nodoc:
def self.macro
:belongs_to
end
@@ -30,17 +30,17 @@ def self.add_counter_cache_callbacks(model, reflection)
model.after_update lambda { |record|
association = association(reflection.name)
- if association.target_changed?
+ if association.saved_change_to_target?
association.increment_counters
association.decrement_counters_before_last_save
end
}
klass = reflection.class_name.safe_constantize
- klass.attr_readonly cache_column if klass && klass.respond_to?(:attr_readonly)
+ klass._counter_cache_columns |= [cache_column] if klass && klass.respond_to?(:_counter_cache_columns)
end
- def self.touch_record(o, changes, foreign_key, name, touch, touch_method) # :nodoc:
+ def self.touch_record(o, changes, foreign_key, name, touch) # :nodoc:
old_foreign_id = changes[foreign_key] && changes[foreign_key].first
if old_foreign_id
@@ -49,7 +49,7 @@ def self.touch_record(o, changes, foreign_key, name, touch, touch_method) # :nod
if reflection.polymorphic?
foreign_type = reflection.foreign_type
klass = changes[foreign_type] && changes[foreign_type].first || o.public_send(foreign_type)
- klass = klass.constantize
+ klass = o.class.polymorphic_class_for(klass)
else
klass = association.klass
end
@@ -58,9 +58,9 @@ def self.touch_record(o, changes, foreign_key, name, touch, touch_method) # :nod
if old_record
if touch != true
- old_record.public_send(touch_method, touch)
+ old_record.touch_later(touch)
else
- old_record.public_send(touch_method)
+ old_record.touch_later
end
end
end
@@ -68,9 +68,9 @@ def self.touch_record(o, changes, foreign_key, name, touch, touch_method) # :nod
record = o.public_send name
if record && record.persisted?
if touch != true
- record.public_send(touch_method, touch)
+ record.touch_later(touch)
else
- record.public_send(touch_method)
+ record.touch_later
end
end
end
@@ -81,13 +81,13 @@ def self.add_touch_callbacks(model, reflection)
touch = reflection.options[:touch]
callback = lambda { |changes_method| lambda { |record|
- BelongsTo.touch_record(record, record.send(changes_method), foreign_key, name, touch, belongs_to_touch_method)
+ BelongsTo.touch_record(record, record.send(changes_method), foreign_key, name, touch)
}}
if reflection.counter_cache_column
touch_callback = callback.(:saved_changes)
update_callback = lambda { |record|
- instance_exec(record, &touch_callback) unless association(reflection.name).target_changed?
+ instance_exec(record, &touch_callback) unless association(reflection.name).saved_change_to_target?
}
model.after_update update_callback, if: :saved_changes?
else
@@ -123,11 +123,37 @@ def self.define_validations(model, reflection)
super
if required
- model.validates_presence_of reflection.name, message: :required
+ if ActiveRecord.belongs_to_required_validates_foreign_key
+ model.validates_presence_of reflection.name, message: :required
+ else
+ condition = lambda { |record|
+ foreign_key = reflection.foreign_key
+ foreign_type = reflection.foreign_type
+
+ record.read_attribute(foreign_key).nil? ||
+ record.attribute_changed?(foreign_key) ||
+ (reflection.polymorphic? && (record.read_attribute(foreign_type).nil? || record.attribute_changed?(foreign_type)))
+ }
+
+ model.validates_presence_of reflection.name, message: :required, if: condition
+ end
end
end
- private_class_method :macro, :valid_options, :valid_dependent_options, :define_callbacks, :define_validations,
- :add_counter_cache_callbacks, :add_touch_callbacks, :add_default_callbacks, :add_destroy_callbacks
+ def self.define_change_tracking_methods(model, reflection)
+ model.generated_association_methods.class_eval <<-CODE, __FILE__, __LINE__ + 1
+ def #{reflection.name}_changed?
+ association(:#{reflection.name}).target_changed?
+ end
+
+ def #{reflection.name}_previously_changed?
+ association(:#{reflection.name}).target_previously_changed?
+ end
+ CODE
+ end
+
+ private_class_method :macro, :valid_options, :valid_dependent_options, :define_callbacks,
+ :define_validations, :define_change_tracking_methods, :add_counter_cache_callbacks,
+ :add_touch_callbacks, :add_default_callbacks, :add_destroy_callbacks
end
end
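`define_change_tracking_methods` above metaprograms a per-association predicate (`author_changed?`, `author_previously_changed?`, and so on) with a string `class_eval`. A cut-down illustration of that generation technique outside Active Record, with made-up class names and a stubbed association object:

```ruby
# Minimal illustration of string-based class_eval code generation, as used
# by define_change_tracking_methods above. No Active Record involved.
class FakeAssociation
  def target_changed?
    true  # canned answer; the real object compares foreign keys
  end
end

class Post
  def association(_name)
    FakeAssociation.new  # stand-in for Active Record's association lookup
  end

  name = :author
  class_eval <<-CODE, __FILE__, __LINE__ + 1
    def #{name}_changed?
      association(:#{name}).target_changed?
    end
  CODE
end

Post.new.author_changed?  # => true
```

Passing `__FILE__, __LINE__ + 1` keeps backtraces pointing at the generating source, the same convention the patch follows.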
diff --git a/activerecord/lib/active_record/associations/builder/collection_association.rb b/activerecord/lib/active_record/associations/builder/collection_association.rb
index 0c0613d95f..391b2e4da3 100644
--- a/activerecord/lib/active_record/associations/builder/collection_association.rb
+++ b/activerecord/lib/active_record/associations/builder/collection_association.rb
@@ -3,7 +3,7 @@
require "active_record/associations"
module ActiveRecord::Associations::Builder # :nodoc:
- class CollectionAssociation < Association #:nodoc:
+ class CollectionAssociation < Association # :nodoc:
CALLBACKS = [:before_add, :after_add, :before_remove, :after_remove]
def self.valid_options(options)
@@ -30,11 +30,18 @@ def self.define_extensions(model, name, &block)
def self.define_callback(model, callback_name, name, options)
full_callback_name = "#{callback_name}_for_#{name}"
- unless model.method_defined?(full_callback_name)
+ callback_values = Array(options[callback_name.to_sym])
+ method_defined = model.respond_to?(full_callback_name)
+
+ # If there are no callbacks, we must also check if a superclass had
+ # previously defined this association
+ return if callback_values.empty? && !method_defined
+
+ unless method_defined
model.class_attribute(full_callback_name, instance_accessor: false, instance_predicate: false)
end
- callbacks = Array(options[callback_name.to_sym]).map do |callback|
+ callbacks = callback_values.map do |callback|
case callback
when Symbol
->(method, owner, record) { owner.send(callback, record) }
diff --git a/activerecord/lib/active_record/associations/builder/has_and_belongs_to_many.rb b/activerecord/lib/active_record/associations/builder/has_and_belongs_to_many.rb
index 170bdd7907..33fb8caf7e 100644
--- a/activerecord/lib/active_record/associations/builder/has_and_belongs_to_many.rb
+++ b/activerecord/lib/active_record/associations/builder/has_and_belongs_to_many.rb
@@ -20,6 +20,7 @@ class << self
attr_accessor :right_reflection
end
+ @table_name = nil
def self.table_name
# Table name needs to be resolved lazily
# because RHS class might not have been loaded
@@ -44,11 +45,6 @@ def self.add_right_association(name, options)
def self.retrieve_connection
left_model.retrieve_connection
end
-
- private
- def self.suppress_composite_primary_key(pk)
- pk unless pk.is_a?(Array)
- end
}
join_model.name = "HABTM_#{association_name.to_s.camelize}"
diff --git a/activerecord/lib/active_record/associations/builder/has_many.rb b/activerecord/lib/active_record/associations/builder/has_many.rb
index b21dd943aa..68e184fee7 100644
--- a/activerecord/lib/active_record/associations/builder/has_many.rb
+++ b/activerecord/lib/active_record/associations/builder/has_many.rb
@@ -1,16 +1,17 @@
# frozen_string_literal: true
module ActiveRecord::Associations::Builder # :nodoc:
- class HasMany < CollectionAssociation #:nodoc:
+ class HasMany < CollectionAssociation # :nodoc:
def self.macro
:has_many
end
def self.valid_options(options)
- valid = super + [:counter_cache, :join_table, :index_errors, :ensuring_owner_was]
+ valid = super + [:counter_cache, :join_table, :index_errors]
valid += [:as, :foreign_type] if options[:as]
valid += [:through, :source, :source_type] if options[:through]
valid += [:ensuring_owner_was] if options[:dependent] == :destroy_async
+ valid += [:disable_joins] if options[:disable_joins] && options[:through]
valid
end
diff --git a/activerecord/lib/active_record/associations/builder/has_one.rb b/activerecord/lib/active_record/associations/builder/has_one.rb
index 1773faa01b..97a7c7d0a5 100644
--- a/activerecord/lib/active_record/associations/builder/has_one.rb
+++ b/activerecord/lib/active_record/associations/builder/has_one.rb
@@ -1,7 +1,7 @@
# frozen_string_literal: true
module ActiveRecord::Associations::Builder # :nodoc:
- class HasOne < SingularAssociation #:nodoc:
+ class HasOne < SingularAssociation # :nodoc:
def self.macro
:has_one
end
@@ -11,6 +11,7 @@ def self.valid_options(options)
valid += [:as, :foreign_type] if options[:as]
valid += [:ensuring_owner_was] if options[:dependent] == :destroy_async
valid += [:through, :source, :source_type] if options[:through]
+ valid += [:disable_joins] if options[:disable_joins] && options[:through]
valid
end
diff --git a/activerecord/lib/active_record/associations/builder/singular_association.rb b/activerecord/lib/active_record/associations/builder/singular_association.rb
index 7537aa468f..b66ca141b2 100644
--- a/activerecord/lib/active_record/associations/builder/singular_association.rb
+++ b/activerecord/lib/active_record/associations/builder/singular_association.rb
@@ -3,7 +3,7 @@
# This class is inherited by the has_one and belongs_to association classes
module ActiveRecord::Associations::Builder # :nodoc:
- class SingularAssociation < Association #:nodoc:
+ class SingularAssociation < Association # :nodoc:
def self.valid_options(options)
super + [:required, :touch]
end
@@ -13,12 +13,16 @@ def self.define_accessors(model, reflection)
mixin = model.generated_association_methods
name = reflection.name
- define_constructors(mixin, name) if reflection.constructable?
+ define_constructors(mixin, name) unless reflection.polymorphic?
mixin.class_eval <<-CODE, __FILE__, __LINE__ + 1
def reload_#{name}
association(:#{name}).force_reload_reader
end
+
+ def reset_#{name}
+ association(:#{name}).reset
+ end
CODE
end
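The new generated `reset_#{name}` accessor simply clears the association's cached target so the next read reloads it, while `reload_#{name}` resets and reads in one step. The caching pattern, reduced to plain Ruby (the counter stands in for a database fetch; the class is illustrative, not Active Record's):

```ruby
# Plain-Ruby sketch of the cached reader / reload / reset trio generated
# for singular associations above. `load_count` stands in for DB queries.
class CachedAssociation
  attr_reader :load_count

  def initialize
    @load_count = 0
    @loaded = false
  end

  def reader
    unless @loaded
      @load_count += 1                  # pretend this hits the database
      @target = "record ##{@load_count}"
      @loaded = true
    end
    @target
  end

  def force_reload_reader               # what reload_<name> delegates to
    reset
    reader
  end

  def reset                             # what the new reset_<name> delegates to
    @loaded = false
    @target = nil
  end
end

assoc = CachedAssociation.new
assoc.reader  # loads once
assoc.reader  # served from cache, no second load
assoc.reset   # clears the cache without loading anything
assoc.reader  # loads again on next access
```

The distinction matters when you want to drop a stale cached record without paying for an immediate reload.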
diff --git a/activerecord/lib/active_record/associations/collection_association.rb b/activerecord/lib/active_record/associations/collection_association.rb
index 3e087695ed..825671d13c 100644
--- a/activerecord/lib/active_record/associations/collection_association.rb
+++ b/activerecord/lib/active_record/associations/collection_association.rb
@@ -1,5 +1,7 @@
# frozen_string_literal: true
+require "active_support/core_ext/enumerable"
+
module ActiveRecord
module Associations
# = Active Record Association Collection
@@ -14,7 +16,7 @@ module Associations
#
# The CollectionAssociation class provides common methods to the collections
# defined by +has_and_belongs_to_many+, +has_many+ or +has_many+ with
- # the +:through association+ option.
+ # the <tt>:through association</tt> option.
#
# You need to be careful with assumptions regarding the target: The proxy
# does not fetch records from the database until it needs them, but new
@@ -25,9 +27,11 @@ module Associations
#
# If you need to work on all current children, new and existing records,
# +load_target+ and the +loaded+ flag are your friends.
- class CollectionAssociation < Association #:nodoc:
+ class CollectionAssociation < Association # :nodoc:
# Implements the reader method, e.g. foo.items for Foo.has_many :items
def reader
+ ensure_klass_exists!
+
if stale_target?
reload
end
@@ -57,14 +61,20 @@ def ids_writer(ids)
primary_key = reflection.association_primary_key
pk_type = klass.type_for_attribute(primary_key)
ids = Array(ids).compact_blank
- ids.map! { |i| pk_type.cast(i) }
+ ids.map! { |id| pk_type.cast(id) }
- records = klass.where(primary_key => ids).index_by do |r|
- r.public_send(primary_key)
+ records = if klass.composite_primary_key?
+ klass.where(primary_key => ids).index_by do |record|
+ primary_key.map { |primary_key| record._read_attribute(primary_key) }
+ end
+ else
+ klass.where(primary_key => ids).index_by do |record|
+ record._read_attribute(primary_key)
+ end
end.values_at(*ids).compact
if records.size != ids.size
- found_ids = records.map { |record| record.public_send(primary_key) }
+ found_ids = records.map { |record| record._read_attribute(primary_key) }
not_found_ids = ids - found_ids
klass.all.raise_record_not_found_exception!(ids, records.size, ids.size, primary_key, not_found_ids)
else
@@ -75,7 +85,7 @@ def ids_writer(ids)
def reset
super
@target = []
- @replaced_or_added_targets = Set.new
+ @replaced_or_added_targets = Set.new.compare_by_identity
@association_ids = nil
end
@@ -115,28 +125,13 @@ def build(attributes = nil, &block)
def concat(*records)
records = records.flatten
if owner.new_record?
- load_target
+ skip_strict_loading { load_target }
concat_records(records)
else
transaction { concat_records(records) }
end
end
- # Starts a transaction in the association class's database connection.
- #
- # class Author < ActiveRecord::Base
- # has_many :books
- # end
- #
- # Author.first.books.transaction do
- # # same effect as calling Book.transaction
- # end
- def transaction(*args)
- reflection.klass.transaction(*args) do
- yield
- end
- end
-
# Removes all records from the association without calling callbacks
# on the associated records. It honors the +:dependent+ option. However
# if the +:dependent+ value is +:destroy+ then in that case the +:delete_all+
@@ -191,7 +186,7 @@ def delete(*records)
end
# Deletes the +records+ and removes them from this association calling
- # +before_remove+ , +after_remove+ , +before_destroy+ and +after_destroy+ callbacks.
+ # +before_remove+, +after_remove+, +before_destroy+ and +after_destroy+ callbacks.
#
# Note that this method removes records from the database ignoring the
# +:dependent+ option.
@@ -244,7 +239,7 @@ def empty?
# and delete/add only records that have changed.
def replace(other_array)
other_array.each { |val| raise_on_type_mismatch!(val) }
- original_target = load_target.dup
+ original_target = skip_strict_loading { load_target }.dup
if owner.new_record?
replace_records(other_array, original_target)
@@ -284,9 +279,11 @@ def add_to_target(record, skip_callbacks: false, replace: false, &block)
end
def target=(record)
- return super unless ActiveRecord::Base.has_many_inversing
+ return super unless reflection.klass.has_many_inversing
case record
+ when nil
+ # It's not possible to remove the record from the inverse association.
when Array
super
else
@@ -313,6 +310,10 @@ def find_from_target?
end
private
+ def transaction(&block)
+ reflection.klass.transaction(&block)
+ end
+
# We have some records loaded from the database (persisted) and some that are
# in-memory (memory). The same record may be represented in the persisted array
# and in the memory array.
@@ -325,13 +326,12 @@ def find_from_target?
# * Otherwise, attributes should have the value found in the database
def merge_target_lists(persisted, memory)
return persisted if memory.empty?
- return memory if persisted.empty?
persisted.map! do |record|
if mem_record = memory.delete(record)
- ((record.attribute_names & mem_record.attribute_names) - mem_record.changed_attribute_names_to_save).each do |name|
- mem_record[name] = record[name]
+ ((record.attribute_names & mem_record.attribute_names) - mem_record.changed_attribute_names_to_save - mem_record.class._attr_readonly).each do |name|
+ mem_record._write_attribute(name, record[name])
end
mem_record
@@ -345,7 +345,7 @@ def merge_target_lists(persisted, memory)
def _create_record(attributes, raise = false, &block)
unless owner.persisted?
- raise ActiveRecord::RecordNotSaved, "You cannot call create unless the parent is saved"
+ raise ActiveRecord::RecordNotSaved.new("You cannot call create unless the parent is saved", owner)
end
if attributes.is_a?(Array)
@@ -489,7 +489,11 @@ def callback(method, record)
def callbacks_for(callback_name)
full_callback_name = "#{callback_name}_for_#{reflection.name}"
- owner.class.send(full_callback_name)
+ if owner.class.respond_to?(full_callback_name)
+ owner.class.send(full_callback_name)
+ else
+ []
+ end
end
def include_in_memory?(record)
diff --git a/activerecord/lib/active_record/associations/collection_proxy.rb b/activerecord/lib/active_record/associations/collection_proxy.rb
index 323a95d8cd..bdaec12cfc 100644
--- a/activerecord/lib/active_record/associations/collection_proxy.rb
+++ b/activerecord/lib/active_record/associations/collection_proxy.rb
@@ -2,6 +2,8 @@
module ActiveRecord
module Associations
+ # = Active Record Collection Proxy
+ #
# Collection proxies in Active Record are middlemen between an
# <tt>association</tt>, and its <tt>target</tt> result set.
#
@@ -27,7 +29,7 @@ module Associations
# is computed directly through SQL and does not trigger by itself the
# instantiation of the actual post records.
class CollectionProxy < Relation
- def initialize(klass, association, **) #:nodoc:
+ def initialize(klass, association, **) # :nodoc:
@association = association
super klass
@@ -46,7 +48,7 @@ def load_target
# Returns +true+ if the association has been loaded, otherwise +false+.
#
# person.pets.loaded? # => false
- # person.pets
+ # person.pets.records
# person.pets.loaded? # => true
def loaded?
@association.loaded?
@@ -94,12 +96,12 @@ def loaded?
# receive:
#
# person.pets.select(:name).first.person_id
- # # => ActiveModel::MissingAttributeError: missing attribute: person_id
+ # # => ActiveModel::MissingAttributeError: missing attribute 'person_id' for Pet
#
- # *Second:* You can pass a block so it can be used just like Array#select.
+ # *Second:* You can pass a block so it can be used just like <tt>Array#select</tt>.
# This builds an array of objects from the database for the scope,
# converting them into an array and iterating through them using
- # Array#select.
+ # <tt>Array#select</tt>.
#
# person.pets.select { |pet| /oo/.match?(pet.name) }
# # => [
@@ -108,7 +110,7 @@ def loaded?
# # ]
# Finds an object in the collection responding to the +id+. Uses the same
- # rules as ActiveRecord::Base.find. Returns ActiveRecord::RecordNotFound
+ # rules as ActiveRecord::FinderMethods.find. Returns ActiveRecord::RecordNotFound
# error if the object cannot be found.
#
# class Person < ActiveRecord::Base
@@ -218,7 +220,7 @@ def find(*args)
# :call-seq:
# third_to_last()
#
- # Same as #first except returns only the third-to-last record.
+ # Same as #last except returns only the third-to-last record.
##
# :method: second_to_last
@@ -226,7 +228,7 @@ def find(*args)
# :call-seq:
# second_to_last()
#
- # Same as #first except returns only the second-to-last record.
+ # Same as #last except returns only the second-to-last record.
# Returns the last record, or the last +n+ records, from the collection.
# If the collection is empty, the first form returns +nil+, and the second
@@ -260,7 +262,7 @@ def last(limit = nil)
end
# Gives a record (or N records if a parameter is supplied) from the collection
- # using the same rules as <tt>ActiveRecord::Base.take</tt>.
+ # using the same rules as ActiveRecord::FinderMethods.take.
#
# class Person < ActiveRecord::Base
# has_many :pets
@@ -382,7 +384,7 @@ def create!(attributes = {}, &block)
# # => [#<Pet id: 2, name: "Puff", group: "celebrities", person_id: 1>]
#
# If the supplied array has an incorrect association type, it raises
- # an <tt>ActiveRecord::AssociationTypeMismatch</tt> error:
+ # an ActiveRecord::AssociationTypeMismatch error:
#
# person.pets.replace(["doo", "ggie", "gaga"])
# # => ActiveRecord::AssociationTypeMismatch: Pet expected, got String
@@ -475,7 +477,7 @@ def delete_all(dependent = nil)
# Deletes the records of the collection directly from the database
# ignoring the +:dependent+ option. Records are instantiated and it
- # invokes +before_remove+, +after_remove+ , +before_destroy+ and
+ # invokes +before_remove+, +after_remove+, +before_destroy+, and
# +after_destroy+ callbacks.
#
# class Person < ActiveRecord::Base
@@ -813,7 +815,7 @@ def size
# to <tt>collection.size.zero?</tt>. If the collection has not been loaded,
# it is equivalent to <tt>!collection.exists?</tt>. If the collection has
# not already been loaded and you are going to fetch the records anyway it
- # is better to check <tt>collection.length.zero?</tt>.
+ # is better to check <tt>collection.load.empty?</tt>.
#
# class Person < ActiveRecord::Base
# has_many :pets
@@ -849,6 +851,11 @@ def empty?
# person.pets.count # => 1
# person.pets.any? # => true
#
+ # Calling it without a block when the collection is not yet
+ # loaded is equivalent to <tt>collection.exists?</tt>.
+ # If you're going to load the collection anyway, it is better
+ # to call <tt>collection.load.any?</tt> to avoid an extra query.
+ #
# You can also pass a +block+ to define criteria. The behavior
# is the same, it returns true if the collection based on the
# criteria is not empty.
@@ -925,7 +932,7 @@ def proxy_association # :nodoc:
@association
end
- # Returns a <tt>Relation</tt> object for the records in this association
+ # Returns a Relation object for the records in this association
def scope
@scope ||= @association.scope
end
@@ -950,10 +957,13 @@ def scope
# person.pets == other
# # => true
#
+ #
+ # Note that unpersisted records can still be seen as equal:
+ #
# other = [Pet.new(id: 1), Pet.new(id: 2)]
#
# person.pets == other
- # # => false
+ # # => true
def ==(other)
load_target == other
end
@@ -1097,13 +1107,18 @@ def inspect # :nodoc:
super
end
+ def pretty_print(pp) # :nodoc:
+ load_target if find_from_target?
+ super
+ end
+
delegate_methods = [
QueryMethods,
SpawnMethods,
].flat_map { |klass|
klass.public_instance_methods(false)
} - self.public_instance_methods(false) - [:select] + [
- :scoping, :values, :insert, :insert_all, :insert!, :insert_all!, :upsert, :upsert_all
+ :scoping, :values, :insert, :insert_all, :insert!, :insert_all!, :upsert, :upsert_all, :load_async
]
delegate(*delegate_methods, to: :scope)
diff --git a/activerecord/lib/active_record/associations/disable_joins_association_scope.rb b/activerecord/lib/active_record/associations/disable_joins_association_scope.rb
new file mode 100644
index 0000000000..8ae89539a5
--- /dev/null
+++ b/activerecord/lib/active_record/associations/disable_joins_association_scope.rb
@@ -0,0 +1,59 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module Associations
+ class DisableJoinsAssociationScope < AssociationScope # :nodoc:
+ def scope(association)
+ source_reflection = association.reflection
+ owner = association.owner
+ unscoped = association.klass.unscoped
+ reverse_chain = get_chain(source_reflection, association, unscoped.alias_tracker).reverse
+
+ last_reflection, last_ordered, last_join_ids = last_scope_chain(reverse_chain, owner)
+
+ add_constraints(last_reflection, last_reflection.join_primary_key, last_join_ids, owner, last_ordered)
+ end
+
+ private
+ def last_scope_chain(reverse_chain, owner)
+ first_item = reverse_chain.shift
+ first_scope = [first_item, false, [owner._read_attribute(first_item.join_foreign_key)]]
+
+ reverse_chain.inject(first_scope) do |(reflection, ordered, join_ids), next_reflection|
+ key = reflection.join_primary_key
+ records = add_constraints(reflection, key, join_ids, owner, ordered)
+ foreign_key = next_reflection.join_foreign_key
+ record_ids = records.pluck(foreign_key)
+ records_ordered = records && records.order_values.any?
+
+ [next_reflection, records_ordered, record_ids]
+ end
+ end
+
+ def add_constraints(reflection, key, join_ids, owner, ordered)
+ scope = reflection.build_scope(reflection.aliased_table).where(key => join_ids)
+
+ relation = reflection.klass.scope_for_association
+ scope.merge!(
+ relation.except(:select, :create_with, :includes, :preload, :eager_load, :joins, :left_outer_joins)
+ )
+
+ scope = reflection.constraints.inject(scope) do |memo, scope_chain_item|
+ item = eval_scope(reflection, scope_chain_item, owner)
+ scope.unscope!(*item.unscope_values)
+ scope.where_clause += item.where_clause
+ scope.order_values = item.order_values | scope.order_values
+ scope
+ end
+
+ if scope.order_values.empty? && ordered
+ split_scope = DisableJoinsAssociationRelation.create(scope.klass, key, join_ids)
+ split_scope.where_clause += scope.where_clause
+ split_scope
+ else
+ scope
+ end
+ end
+ end
+ end
+end
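The new `DisableJoinsAssociationScope` above replaces one joined query with a chain of simple queries: each hop constrains on the ids produced by the previous hop, plucks the next foreign keys, and feeds them forward. The shape of that fold, with in-memory arrays of hashes standing in for tables (all table and column names are illustrative):

```ruby
# In-memory sketch of the query-per-hop strategy used above instead of a
# SQL join. `where`/`pluck` are simplified stand-ins for relation methods.
sections = [
  { id: 1, course_id: 10 },
  { id: 2, course_id: 10 },
]
lessons = [
  { id: 100, section_id: 1 },
  { id: 101, section_id: 2 },
  { id: 102, section_id: 99 },  # belongs to an unrelated section
]

where = ->(table, key, ids) { table.select { |row| ids.include?(row[key]) } }
pluck = ->(rows, key) { rows.map { |row| row[key] } }

# hop 1: course -> sections, seeded with the owner's own key
section_rows = where.(sections, :course_id, [10])
section_ids  = pluck.(section_rows, :id)

# hop 2: sections -> lessons, reusing the ids from the previous hop
lesson_rows = where.(lessons, :section_id, section_ids)

pluck.(lesson_rows, :id)  # => [100, 101]
```

Trading one join for N simple queries avoids cross-database joins entirely, which is what the `disable_joins: true` option on `:through` associations is for.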
diff --git a/activerecord/lib/active_record/associations/foreign_association.rb b/activerecord/lib/active_record/associations/foreign_association.rb
index 6c9d28cfed..37cd9d74c8 100644
--- a/activerecord/lib/active_record/associations/foreign_association.rb
+++ b/activerecord/lib/active_record/associations/foreign_association.rb
@@ -12,7 +12,7 @@ def foreign_key_present?
def nullified_owner_attributes
Hash.new.tap do |attrs|
- attrs[reflection.foreign_key] = nil
+ Array(reflection.foreign_key).each { |foreign_key| attrs[foreign_key] = nil }
attrs[reflection.type] = nil if reflection.type.present?
end
end
@@ -22,8 +22,15 @@ def nullified_owner_attributes
def set_owner_attributes(record)
return if options[:through]
- key = owner._read_attribute(reflection.join_foreign_key)
- record._write_attribute(reflection.join_primary_key, key)
+ primary_key_attribute_names = Array(reflection.join_primary_key)
+ foreign_key_attribute_names = Array(reflection.join_foreign_key)
+
+ primary_key_foreign_key_pairs = primary_key_attribute_names.zip(foreign_key_attribute_names)
+
+ primary_key_foreign_key_pairs.each do |primary_key, foreign_key|
+ value = owner._read_attribute(foreign_key)
+ record._write_attribute(primary_key, value)
+ end
if reflection.type
record._write_attribute(reflection.type, owner.class.polymorphic_name)
diff --git a/activerecord/lib/active_record/associations/has_many_association.rb b/activerecord/lib/active_record/associations/has_many_association.rb
index 2553fd0ef1..c9b0eca67c 100644
--- a/activerecord/lib/active_record/associations/has_many_association.rb
+++ b/activerecord/lib/active_record/associations/has_many_association.rb
@@ -3,11 +3,12 @@
module ActiveRecord
module Associations
# = Active Record Has Many Association
+ #
# This is the proxy that handles a has many association.
#
# If the association has a <tt>:through</tt> option further specialization
# is provided by its child HasManyThroughAssociation.
- class HasManyAssociation < CollectionAssociation #:nodoc:
+ class HasManyAssociation < CollectionAssociation # :nodoc:
include ForeignAssociation
def handle_dependency
@@ -33,20 +34,24 @@ def handle_dependency
unless target.empty?
association_class = target.first.class
- primary_key_column = association_class.primary_key.to_sym
-
- ids = target.collect do |assoc|
- assoc.public_send(primary_key_column)
+ if association_class.query_constraints_list
+ primary_key_column = association_class.query_constraints_list.map(&:to_sym)
+ ids = target.collect { |assoc| primary_key_column.map { |col| assoc.public_send(col) } }
+ else
+ primary_key_column = association_class.primary_key.to_sym
+ ids = target.collect { |assoc| assoc.public_send(primary_key_column) }
end
- enqueue_destroy_association(
- owner_model_name: owner.class.to_s,
- owner_id: owner.id,
- association_class: reflection.klass.to_s,
- association_ids: ids,
- association_primary_key_column: primary_key_column,
- ensuring_owner_was_method: options.fetch(:ensuring_owner_was, nil)
- )
+ ids.each_slice(owner.class.destroy_association_async_batch_size || ids.size) do |ids_batch|
+ enqueue_destroy_association(
+ owner_model_name: owner.class.to_s,
+ owner_id: owner.id,
+ association_class: reflection.klass.to_s,
+ association_ids: ids_batch,
+ association_primary_key_column: primary_key_column,
+ ensuring_owner_was_method: options.fetch(:ensuring_owner_was, nil)
+ )
+ end
end
else
delete_all
@@ -79,10 +84,13 @@ def count_records
scope.count(:all)
end
- # If there's nothing in the database and @target has no new records
- # we are certain the current target is an empty array. This is a
- # documented side-effect of the method that may avoid an extra SELECT.
- loaded! if count == 0
+ # If there's nothing in the database, @target should only contain new
+ # records or be an empty array. This is a documented side-effect of
+ # the method that may avoid an extra SELECT.
+ if count == 0
+ target.select!(&:new_record?)
+ loaded!
+ end
[association_scope.limit_value, count].compact.min
end
@@ -121,7 +129,9 @@ def delete_records(records, method)
records.each(&:destroy!)
update_counter(-records.length) unless reflection.inverse_updates_counter_cache?
else
- scope = self.scope.where(reflection.klass.primary_key => records)
+ query_constraints = reflection.klass.composite_query_constraints_list
+ values = records.map { |r| query_constraints.map { |col| r._read_attribute(col) } }
+ scope = self.scope.where(query_constraints => values)
update_counter(-delete_count(method, scope))
end
end
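The `handle_dependency` change above slices the collected ids into batches before enqueuing destroy jobs, falling back to a single batch when no batch size is configured. The slicing itself, in isolation (variable names mirror the hunk, but nothing here touches Active Job):

```ruby
# Sketch of the batching added to handle_dependency above. `enqueued`
# collects the id batches that would each become one async destroy job.
ids = (1..7).to_a
batch_size = 3  # stands in for destroy_association_async_batch_size

enqueued = []
ids.each_slice(batch_size || ids.size) do |ids_batch|
  enqueued << ids_batch  # one enqueue_destroy_association call per batch
end

enqueued  # => [[1, 2, 3], [4, 5, 6], [7]]

# With no configured batch size, everything lands in a single job:
batch_size = nil
single = ids.each_slice(batch_size || ids.size).to_a
single.length  # => 1
```

The `|| ids.size` fallback is what preserves the pre-7.1 one-job-per-association behavior when the new setting is left unset.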
diff --git a/activerecord/lib/active_record/associations/has_many_through_association.rb b/activerecord/lib/active_record/associations/has_many_through_association.rb
index 0b9fcb00d1..845e5e5642 100644
--- a/activerecord/lib/active_record/associations/has_many_through_association.rb
+++ b/activerecord/lib/active_record/associations/has_many_through_association.rb
@@ -3,7 +3,7 @@
module ActiveRecord
module Associations
# = Active Record Has Many Through Association
- class HasManyThroughAssociation < HasManyAssociation #:nodoc:
+ class HasManyThroughAssociation < HasManyAssociation # :nodoc:
include ThroughAssociation
def initialize(owner, reflection)
@@ -59,9 +59,10 @@ def build_through_record(record)
attributes = through_scope_attributes
attributes[source_reflection.name] = record
- attributes[source_reflection.foreign_type] = options[:source_type] if options[:source_type]
- through_association.build(attributes)
+ through_association.build(attributes).tap do |new_record|
+ new_record.send("#{source_reflection.foreign_type}=", options[:source_type]) if options[:source_type]
+ end
end
end
@@ -69,9 +70,12 @@ def build_through_record(record)
def through_scope_attributes
scope = through_scope || self.scope
- scope.where_values_hash(through_association.reflection.name.to_s).
- except!(through_association.reflection.foreign_key,
- through_association.reflection.klass.inheritance_column)
+ attributes = scope.where_values_hash(through_association.reflection.klass.table_name)
+ except_keys = [
+ *Array(through_association.reflection.foreign_key),
+ through_association.reflection.klass.inheritance_column
+ ]
+ attributes.except!(*except_keys)
end
def save_through_record(record)
@@ -109,7 +113,7 @@ def remove_records(existing_records, records, method)
end
def target_reflection_has_associated_record?
- !(through_reflection.belongs_to? && owner[through_reflection.foreign_key].blank?)
+ !(through_reflection.belongs_to? && Array(through_reflection.foreign_key).all? { |foreign_key_column| owner[foreign_key_column].blank? })
end
def update_through_counter?(method)
@@ -214,6 +218,7 @@ def delete_through_records(records)
def find_target
return [] unless target_reflection_has_associated_record?
+ return scope.to_a if disable_joins
super
end
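The reworked `target_reflection_has_associated_record?` treats a composite foreign key as absent only when every key column is blank. A minimal sketch, with a plain Hash as the owner and `blank?` approximated for `nil`/empty strings:

```ruby
# Sketch of the composite foreign-key presence check: with
# Array(foreign_key), the through record is considered missing only
# when *all* key columns are blank.
def foreign_key_all_blank?(owner, foreign_key)
  Array(foreign_key).all? do |column|
    value = owner[column]
    value.nil? || value == ""
  end
end
```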
diff --git a/activerecord/lib/active_record/associations/has_one_association.rb b/activerecord/lib/active_record/associations/has_one_association.rb
index d25f0fa55a..05322ec5a4 100644
--- a/activerecord/lib/active_record/associations/has_one_association.rb
+++ b/activerecord/lib/active_record/associations/has_one_association.rb
@@ -3,7 +3,7 @@
module ActiveRecord
module Associations
# = Active Record Has One Association
- class HasOneAssociation < SingularAssociation #:nodoc:
+ class HasOneAssociation < SingularAssociation # :nodoc:
include ForeignAssociation
def handle_dependency
@@ -33,8 +33,13 @@ def delete(method = options[:dependent])
target.destroy
throw(:abort) unless target.destroyed?
when :destroy_async
- primary_key_column = target.class.primary_key.to_sym
- id = target.public_send(primary_key_column)
+ if target.class.query_constraints_list
+ primary_key_column = target.class.query_constraints_list.map(&:to_sym)
+ id = primary_key_column.map { |col| target.public_send(col) }
+ else
+ primary_key_column = target.class.primary_key.to_sym
+ id = target.public_send(primary_key_column)
+ end
enqueue_destroy_association(
owner_model_name: owner.class.to_s,
@@ -70,7 +75,7 @@ def replace(record, save = true)
if save && !record.save
nullify_owner_attributes(record)
set_owner_attributes(target) if target
- raise RecordNotSaved, "Failed to save the new associated #{reflection.name}."
+ raise RecordNotSaved.new("Failed to save the new associated #{reflection.name}.", record)
end
end
end
@@ -102,19 +107,24 @@ def remove_target!(method)
if target.persisted? && owner.persisted? && !target.save
set_owner_attributes(target)
- raise RecordNotSaved, "Failed to remove the existing associated #{reflection.name}. " \
- "The record failed to save after its foreign key was set to nil."
+ raise RecordNotSaved.new(
+ "Failed to remove the existing associated #{reflection.name}. " \
+ "The record failed to save after its foreign key was set to nil.",
+ target
+ )
end
end
end
def nullify_owner_attributes(record)
- record[reflection.foreign_key] = nil
+ Array(reflection.foreign_key).each do |foreign_key_column|
+ record[foreign_key_column] = nil unless foreign_key_column.in?(Array(record.class.primary_key))
+ end
end
- def transaction_if(value)
+ def transaction_if(value, &block)
if value
- reflection.klass.transaction { yield }
+ reflection.klass.transaction(&block)
else
yield
end
@@ -122,7 +132,7 @@ def transaction_if(value)
def _create_record(attributes, raise_error = false, &block)
unless owner.persisted?
- raise ActiveRecord::RecordNotSaved, "You cannot call create unless the parent is saved"
+ raise ActiveRecord::RecordNotSaved.new("You cannot call create unless the parent is saved", owner)
end
super
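The `:destroy_async` branch above now derives the id to enqueue differently for composite models. A standalone sketch (`Row` and its columns are hypothetical):

```ruby
# Models with composite query constraints enqueue an array of column
# values; everything else enqueues a single primary-key value.
Row = Struct.new(:shop_id, :id)

def async_destroy_id(target, query_constraints_list, primary_key)
  if query_constraints_list
    query_constraints_list.map(&:to_sym).map { |col| target.public_send(col) }
  else
    target.public_send(primary_key.to_sym)
  end
end
```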
diff --git a/activerecord/lib/active_record/associations/has_one_through_association.rb b/activerecord/lib/active_record/associations/has_one_through_association.rb
index 10978b2d93..e0a760cc7a 100644
--- a/activerecord/lib/active_record/associations/has_one_through_association.rb
+++ b/activerecord/lib/active_record/associations/has_one_through_association.rb
@@ -3,7 +3,7 @@
module ActiveRecord
module Associations
# = Active Record Has One Through Association
- class HasOneThroughAssociation < HasOneAssociation #:nodoc:
+ class HasOneThroughAssociation < HasOneAssociation # :nodoc:
include ThroughAssociation
private
diff --git a/activerecord/lib/active_record/associations/join_dependency.rb b/activerecord/lib/active_record/associations/join_dependency.rb
index 325a803c98..68594350ac 100644
--- a/activerecord/lib/active_record/associations/join_dependency.rb
+++ b/activerecord/lib/active_record/associations/join_dependency.rb
@@ -3,8 +3,12 @@
module ActiveRecord
module Associations
class JoinDependency # :nodoc:
- autoload :JoinBase, "active_record/associations/join_dependency/join_base"
- autoload :JoinAssociation, "active_record/associations/join_dependency/join_association"
+ extend ActiveSupport::Autoload
+
+ eager_autoload do
+ autoload :JoinBase
+ autoload :JoinAssociation
+ end
class Aliases # :nodoc:
def initialize(tables)
@@ -248,35 +252,41 @@ def construct(ar_parent, parent, row, seen, model_cache, strict_loading_value)
next
end
- key = aliases.column_alias(node, node.primary_key)
- id = row[key]
- if id.nil?
+ if node.primary_key
+ keys = Array(node.primary_key).map { |column| aliases.column_alias(node, column) }
+ ids = keys.map { |key| row[key] }
+ else
+ keys = Array(node.reflection.join_primary_key).map { |column| aliases.column_alias(node, column.to_s) }
+ ids = keys.map { nil } # Avoid id-based model caching.
+ end
+
+ if keys.any? { |key| row[key].nil? }
nil_association = ar_parent.association(node.reflection.name)
nil_association.loaded!
next
end
- model = seen[ar_parent][node][id]
-
- if model
- construct(model, node, row, seen, model_cache, strict_loading_value)
- else
- model = construct_model(ar_parent, node, row, model_cache, id, strict_loading_value)
-
- seen[ar_parent][node][id] = model
- construct(model, node, row, seen, model_cache, strict_loading_value)
+ ids.each do |id|
+ unless model = seen[ar_parent][node][id]
+ model = construct_model(ar_parent, node, row, model_cache, id, strict_loading_value)
+ seen[ar_parent][node][id] = model if id
+ end
end
+
+ construct(model, node, row, seen, model_cache, strict_loading_value)
end
end
def construct_model(record, node, row, model_cache, id, strict_loading_value)
other = record.association(node.reflection.name)
- model = model_cache[node][id] ||=
- node.instantiate(row, aliases.column_aliases(node)) do |m|
+ unless model = model_cache[node][id]
+ model = node.instantiate(row, aliases.column_aliases(node)) do |m|
m.strict_loading! if strict_loading_value
other.set_inverse_instance(m)
end
+ model_cache[node][id] = model if id
+ end
if node.reflection.collection?
other.target.push(model)
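The `JoinDependency#construct` change guards against all-NULL key tuples. A toy version of that guard (row and alias names are hypothetical):

```ruby
# An outer-joined row with no associated record carries NULLs in every
# aliased key column, so the association should be marked as
# loaded-and-empty rather than instantiating a model from nil keys.
def association_row_present?(row, key_aliases)
  key_aliases.none? { |key| row[key].nil? }
end
```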
diff --git a/activerecord/lib/active_record/associations/preloader.rb b/activerecord/lib/active_record/associations/preloader.rb
index de7288983a..b35c8e8b57 100644
--- a/activerecord/lib/active_record/associations/preloader.rb
+++ b/activerecord/lib/active_record/associations/preloader.rb
@@ -4,6 +4,8 @@
module ActiveRecord
module Associations
+ # = Active Record \Preloader
+ #
# Implements the details of eager loading of Active Record associations.
#
# Suppose that you have the following two Active Record models:
@@ -22,8 +24,8 @@ module Associations
#
# Author.includes(:books).where(name: ['bell hooks', 'Homer']).to_a
#
- # => SELECT `authors`.* FROM `authors` WHERE `name` IN ('bell hooks', 'Homer')
- # => SELECT `books`.* FROM `books` WHERE `author_id` IN (2, 5)
+ # # SELECT `authors`.* FROM `authors` WHERE `name` IN ('bell hooks', 'Homer')
+ # # SELECT `books`.* FROM `books` WHERE `author_id` IN (2, 5)
#
# Active Record saves the ids of the records from the first query to use in
# the second. Depending on the number of associations involved there can be
@@ -33,23 +35,26 @@ module Associations
# Record will fall back to a slightly more resource-intensive single query:
#
# Author.includes(:books).where(books: {title: 'Illiad'}).to_a
- # => SELECT `authors`.`id` AS t0_r0, `authors`.`name` AS t0_r1, `authors`.`age` AS t0_r2,
- # `books`.`id` AS t1_r0, `books`.`title` AS t1_r1, `books`.`sales` AS t1_r2
- # FROM `authors`
- # LEFT OUTER JOIN `books` ON `authors`.`id` = `books`.`author_id`
- # WHERE `books`.`title` = 'Illiad'
+ # # SELECT `authors`.`id` AS t0_r0, `authors`.`name` AS t0_r1, `authors`.`age` AS t0_r2,
+ # # `books`.`id` AS t1_r0, `books`.`title` AS t1_r1, `books`.`sales` AS t1_r2
+ # # FROM `authors`
+ # # LEFT OUTER JOIN `books` ON `authors`.`id` = `books`.`author_id`
+ # # WHERE `books`.`title` = 'Illiad'
#
# This could result in many rows that contain redundant data and it performs poorly at scale
# and is therefore only used when necessary.
- #
- class Preloader #:nodoc:
+ class Preloader # :nodoc:
extend ActiveSupport::Autoload
eager_autoload do
autoload :Association, "active_record/associations/preloader/association"
+ autoload :Batch, "active_record/associations/preloader/batch"
+ autoload :Branch, "active_record/associations/preloader/branch"
autoload :ThroughAssociation, "active_record/associations/preloader/through_association"
end
+ attr_reader :records, :associations, :scope, :associate_by_default
+
# Eager loads the named associations for the given Active Record record(s).
#
# In this description, 'association name' shall refer to the name passed
@@ -70,137 +75,61 @@ class Preloader #:nodoc:
# for an Author.
# - an Array which specifies multiple association names. This array
# is processed recursively. For example, specifying <tt>[:avatar, :books]</tt>
- # allows this method to preload an author's avatar as well as all of his
+ # allows this method to preload an author's avatar as well as all of their
# books.
# - a Hash which specifies multiple association names, as well as
# association names for the to-be-preloaded association objects. For
# example, specifying <tt>{ author: :avatar }</tt> will preload a
# book's author, as well as that author's avatar.
#
- # +:associations+ has the same format as the +:include+ option for
- # <tt>ActiveRecord::Base.find</tt>. So +associations+ could look like this:
+ # +:associations+ has the same format as the arguments to
+ # ActiveRecord::QueryMethods#includes. So +associations+ could look like
+ # this:
#
# :books
# [ :books, :author ]
# { author: :avatar }
# [ :books, { author: :avatar } ]
- def preload(records, associations, preload_scope = nil)
- records = Array.wrap(records).compact
+ #
+ # +available_records+ is an array of ActiveRecord::Base. The Preloader
+ # will try to use the objects in this array to preload the requested
+ # associations before querying the database. This can save database
+ # queries by reusing in-memory objects. The optimization is only applied
+ # to single associations (i.e. :belongs_to, :has_one) with no scopes.
+ def initialize(records:, associations:, scope: nil, available_records: [], associate_by_default: true)
+ @records = records
+ @associations = associations
+ @scope = scope
+ @available_records = available_records || []
+ @associate_by_default = associate_by_default
- if records.empty?
- []
- else
- Array.wrap(associations).flat_map { |association|
- preloaders_on association, records, preload_scope
- }
- end
+ @tree = Branch.new(
+ parent: nil,
+ association: nil,
+ children: @associations,
+ associate_by_default: @associate_by_default,
+ scope: @scope
+ )
+ @tree.preloaded_records = @records
end
- def initialize(associate_by_default: true)
- @associate_by_default = associate_by_default
+ def empty?
+ associations.nil? || records.length == 0
end
- private
- # Loads all the given data into +records+ for the +association+.
- def preloaders_on(association, records, scope, polymorphic_parent = false)
- case association
- when Hash
- preloaders_for_hash(association, records, scope, polymorphic_parent)
- when Symbol, String
- preloaders_for_one(association, records, scope, polymorphic_parent)
- else
- raise ArgumentError, "#{association.inspect} was not recognized for preload"
- end
- end
-
- def preloaders_for_hash(association, records, scope, polymorphic_parent)
- association.flat_map { |parent, child|
- grouped_records(parent, records, polymorphic_parent).flat_map do |reflection, reflection_records|
- loaders = preloaders_for_reflection(reflection, reflection_records, scope)
- recs = loaders.flat_map(&:preloaded_records).uniq
- child_polymorphic_parent = reflection && reflection.options[:polymorphic]
- loaders.concat Array.wrap(child).flat_map { |assoc|
- preloaders_on assoc, recs, scope, child_polymorphic_parent
- }
- loaders
- end
- }
- end
-
- # Loads all the given data into +records+ for a singular +association+.
- #
- # Functions by instantiating a preloader class such as Preloader::Association and
- # call the +run+ method for each passed in class in the +records+ argument.
- #
- # Not all records have the same class, so group then preload group on the reflection
- # itself so that if various subclass share the same association then we do not split
- # them unnecessarily
- #
- # Additionally, polymorphic belongs_to associations can have multiple associated
- # classes, depending on the polymorphic_type field. So we group by the classes as
- # well.
- def preloaders_for_one(association, records, scope, polymorphic_parent)
- grouped_records(association, records, polymorphic_parent)
- .flat_map do |reflection, reflection_records|
- preloaders_for_reflection reflection, reflection_records, scope
- end
- end
-
- def preloaders_for_reflection(reflection, records, scope)
- records.group_by { |record| record.association(reflection.name).klass }.map do |rhs_klass, rs|
- preloader_for(reflection, rs).new(rhs_klass, rs, reflection, scope, @associate_by_default).run
- end
- end
+ def call
+ Batch.new([self], available_records: @available_records).call
- def grouped_records(association, records, polymorphic_parent)
- h = {}
- records.each do |record|
- reflection = record.class._reflect_on_association(association)
- next if polymorphic_parent && !reflection || !record.association(association).klass
- (h[reflection] ||= []) << record
- end
- h
- end
-
- class AlreadyLoaded # :nodoc:
- def initialize(klass, owners, reflection, preload_scope, associate_by_default = true)
- @owners = owners
- @reflection = reflection
- end
-
- def run
- self
- end
-
- def preloaded_records
- @preloaded_records ||= records_by_owner.flat_map(&:last)
- end
-
- def records_by_owner
- @records_by_owner ||= owners.index_with do |owner|
- Array(owner.association(reflection.name).target)
- end
- end
-
- private
- attr_reader :owners, :reflection
- end
+ loaders
+ end
- # Returns a class containing the logic needed to load preload the data
- # and attach it to a relation. The class returned implements a `run` method
- # that accepts a preloader.
- def preloader_for(reflection, owners)
- if owners.all? { |o| o.association(reflection.name).loaded? }
- return AlreadyLoaded
- end
- reflection.check_preloadable!
+ def branches
+ @tree.children
+ end
- if reflection.options[:through]
- ThroughAssociation
- else
- Association
- end
- end
+ def loaders
+ branches.flat_map(&:loaders)
+ end
end
end
end
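The rewritten `Preloader` hands its `:associations` argument to a tree of `Branch` objects. How the nested symbol/array/hash format unfolds into a tree can be sketched standalone (a simplified stand-in for `Branch#build_children`; the node shape here is hypothetical):

```ruby
# Normalizes :books, [:books, :author], { author: :avatar } and any
# nesting of those into a uniform tree of { name:, children: } nodes.
def build_tree(children)
  Array(children).flat_map do |association|
    Array(association).map do |name, nested|
      { name: name, children: nested ? build_tree(nested) : [] }
    end
  end
end
```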
diff --git a/activerecord/lib/active_record/associations/preloader/association.rb b/activerecord/lib/active_record/associations/preloader/association.rb
index f3bbd1c532..46b62f504a 100644
--- a/activerecord/lib/active_record/associations/preloader/association.rb
+++ b/activerecord/lib/active_record/associations/preloader/association.rb
@@ -1,19 +1,141 @@
# frozen_string_literal: true
+# :enddoc:
+
module ActiveRecord
module Associations
class Preloader
- class Association #:nodoc:
- def initialize(klass, owners, reflection, preload_scope, associate_by_default = true)
+ class Association # :nodoc:
+ class LoaderQuery
+ attr_reader :scope, :association_key_name
+
+ def initialize(scope, association_key_name)
+ @scope = scope
+ @association_key_name = association_key_name
+ end
+
+ def eql?(other)
+ association_key_name == other.association_key_name &&
+ scope.table_name == other.scope.table_name &&
+ scope.connection_specification_name == other.scope.connection_specification_name &&
+ scope.values_for_queries == other.scope.values_for_queries
+ end
+
+ def hash
+ [association_key_name, scope.table_name, scope.connection_specification_name, scope.values_for_queries].hash
+ end
+
+ def records_for(loaders)
+ LoaderRecords.new(loaders, self).records
+ end
+
+ def load_records_in_batch(loaders)
+ raw_records = records_for(loaders)
+
+ loaders.each do |loader|
+ loader.load_records(raw_records)
+ loader.run
+ end
+ end
+
+ def load_records_for_keys(keys, &block)
+ return [] if keys.empty?
+
+ if association_key_name.is_a?(Array)
+ query_constraints = Hash.new { |hsh, key| hsh[key] = Set.new }
+
+ keys.each_with_object(query_constraints) do |values_set, constraints|
+ association_key_name.zip(values_set).each do |key_name, value|
+ constraints[key_name] << value
+ end
+ end
+
+ scope.where(query_constraints)
+ else
+ scope.where(association_key_name => keys)
+ end.load(&block)
+ end
+ end
+
+ class LoaderRecords
+ def initialize(loaders, loader_query)
+ @loader_query = loader_query
+ @loaders = loaders
+ @keys_to_load = Set.new
+ @already_loaded_records_by_key = {}
+
+ populate_keys_to_load_and_already_loaded_records
+ end
+
+ def records
+ load_records + already_loaded_records
+ end
+
+ private
+ attr_reader :loader_query, :loaders, :keys_to_load, :already_loaded_records_by_key
+
+ def populate_keys_to_load_and_already_loaded_records
+ loaders.each do |loader|
+ loader.owners_by_key.each do |key, owners|
+ if loaded_owner = owners.find { |owner| loader.loaded?(owner) }
+ already_loaded_records_by_key[key] = loader.target_for(loaded_owner)
+ else
+ keys_to_load << key
+ end
+ end
+ end
+
+ @keys_to_load.subtract(already_loaded_records_by_key.keys)
+ end
+
+ def load_records
+ loader_query.load_records_for_keys(keys_to_load) do |record|
+ loaders.each { |l| l.set_inverse(record) }
+ end
+ end
+
+ def already_loaded_records
+ already_loaded_records_by_key.values.flatten
+ end
+ end
+
+ attr_reader :klass
+
+ def initialize(klass, owners, reflection, preload_scope, reflection_scope, associate_by_default)
@klass = klass
@owners = owners.uniq(&:__id__)
@reflection = reflection
@preload_scope = preload_scope
+ @reflection_scope = reflection_scope
@associate = associate_by_default || !preload_scope || preload_scope.empty_scope?
@model = owners.first && owners.first.class
+ @run = false
+ end
+
+ def table_name
+ @klass.table_name
+ end
+
+ def future_classes
+ if run?
+ []
+ else
+ [@klass]
+ end
+ end
+
+ def runnable_loaders
+ [self]
+ end
+
+ def run?
+ @run
end
def run
+ return self if run?
+ @run = true
+
records = records_by_owner
owners.each do |owner|
@@ -35,35 +157,85 @@ def preloaded_records
@preloaded_records
end
- private
- attr_reader :owners, :reflection, :preload_scope, :model, :klass
+ # The name of the key on the associated records
+ def association_key_name
+ reflection.join_primary_key(klass)
+ end
- def load_records
- # owners can be duplicated when a relation has a collection association join
- # #compare_by_identity makes such owners different hash keys
- @records_by_owner = {}.compare_by_identity
- raw_records = owner_keys.empty? ? [] : records_for(owner_keys)
+ def loader_query
+ LoaderQuery.new(scope, association_key_name)
+ end
- @preloaded_records = raw_records.select do |record|
- assignments = false
+ def owners_by_key
+ @owners_by_key ||= owners.each_with_object({}) do |owner, result|
+ key = derive_key(owner, owner_key_name)
+ (result[key] ||= []) << owner if key
+ end
+ end
- owners_by_key[convert_key(record[association_key_name])].each do |owner|
- entries = (@records_by_owner[owner] ||= [])
+ def loaded?(owner)
+ owner.association(reflection.name).loaded?
+ end
- if reflection.collection? || entries.empty?
- entries << record
- assignments = true
- end
- end
+ def target_for(owner)
+ Array.wrap(owner.association(reflection.name).target)
+ end
- assignments
+ def scope
+ @scope ||= build_scope
+ end
+
+ def set_inverse(record)
+ if owners = owners_by_key[derive_key(record, association_key_name)]
+ # Processing only the first owner
+ # because the record is modified but not an owner
+ association = owners.first.association(reflection.name)
+ association.set_inverse_instance(record)
+ end
+ end
+
+ def load_records(raw_records = nil)
+ # owners can be duplicated when a relation has a collection association join
+ # #compare_by_identity makes such owners different hash keys
+ @records_by_owner = {}.compare_by_identity
+ raw_records ||= loader_query.records_for([self])
+ @preloaded_records = raw_records.select do |record|
+ assignments = false
+
+ owners_by_key[derive_key(record, association_key_name)]&.each do |owner|
+ entries = (@records_by_owner[owner] ||= [])
+
+ if reflection.collection? || entries.empty?
+ entries << record
+ assignments = true
+ end
end
+
+ assignments
end
+ end
+
+ def associate_records_from_unscoped(unscoped_records)
+ return if unscoped_records.nil? || unscoped_records.empty?
+ return if !reflection_scope.empty_scope?
+ return if preload_scope && !preload_scope.empty_scope?
+ return if reflection.collection?
+
+ unscoped_records.select { |r| r[association_key_name].present? }.each do |record|
+ owners = owners_by_key[derive_key(record, association_key_name)]
+ owners&.each_with_index do |owner, i|
+ association = owner.association(reflection.name)
+ association.target = record
- # The name of the key on the associated records
- def association_key_name
- reflection.join_primary_key(klass)
+ if i == 0 # Set inverse on first owner
+ association.set_inverse_instance(record)
+ end
+ end
end
+ end
+
+ private
+ attr_reader :owners, :reflection, :preload_scope, :model
# The name of the key on the model which declares the association
def owner_key_name
@@ -71,7 +243,10 @@ def owner_key_name
end
def associate_records_to_owner(owner, records)
+ return if loaded?(owner)
+
association = owner.association(reflection.name)
+
if reflection.collection?
association.target = records
else
@@ -79,17 +254,6 @@ def associate_records_to_owner(owner, records)
end
end
- def owner_keys
- @owner_keys ||= owners_by_key.keys
- end
-
- def owners_by_key
- @owners_by_key ||= owners.each_with_object({}) do |owner, result|
- key = convert_key(owner[owner_key_name])
- (result[key] ||= []) << owner if key
- end
- end
-
def key_conversion_required?
unless defined?(@key_conversion_required)
@key_conversion_required = (association_key_type != owner_key_type)
@@ -98,6 +262,14 @@ def key_conversion_required?
@key_conversion_required
end
+ def derive_key(owner, key)
+ if key.is_a?(Array)
+ key.map { |k| convert_key(owner._read_attribute(k)) }
+ else
+ convert_key(owner._read_attribute(key))
+ end
+ end
+
def convert_key(key)
if key_conversion_required?
key.to_s
@@ -114,20 +286,6 @@ def owner_key_type
@model.type_for_attribute(owner_key_name).type
end
- def records_for(ids)
- scope.where(association_key_name => ids).load do |record|
- # Processing only the first owner
- # because the record is modified but not an owner
- owner = owners_by_key[convert_key(record[association_key_name])].first
- association = owner.association(reflection.name)
- association.set_inverse_instance(record)
- end
- end
-
- def scope
- @scope ||= build_scope
- end
-
def reflection_scope
@reflection_scope ||= reflection.join_scopes(klass.arel_table, klass.predicate_builder, klass).inject(klass.unscoped, &:merge!)
end
@@ -145,11 +303,11 @@ def build_scope
scope.merge!(preload_scope)
end
- if preload_scope && preload_scope.strict_loading_value
- scope.strict_loading
- else
- scope
- end
+ cascade_strict_loading(scope)
+ end
+
+ def cascade_strict_loading(scope)
+ preload_scope&.strict_loading_value ? scope.strict_loading : scope
end
end
end
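The composite-key branch of `LoaderQuery#load_records_for_keys` collects, per key column, the set of values it must match, then filters with `scope.where(query_constraints)`. A standalone sketch (column names are hypothetical):

```ruby
require "set"

# Each column of the composite association key accumulates the values
# seen across all key tuples; tuples are matched exactly per owner
# later, in records_by_owner.
association_key_name = [:shop_id, :author_id]
keys = [[1, 10], [1, 11], [2, 10]]

query_constraints = Hash.new { |hsh, key| hsh[key] = Set.new }
keys.each do |values_set|
  association_key_name.zip(values_set).each do |key_name, value|
    query_constraints[key_name] << value
  end
end
# Result: shop_id IN (1, 2) AND author_id IN (10, 11).
```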
diff --git a/activerecord/lib/active_record/associations/preloader/batch.rb b/activerecord/lib/active_record/associations/preloader/batch.rb
new file mode 100644
index 0000000000..a0048d0f6a
--- /dev/null
+++ b/activerecord/lib/active_record/associations/preloader/batch.rb
@@ -0,0 +1,48 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module Associations
+ class Preloader
+ class Batch # :nodoc:
+ def initialize(preloaders, available_records:)
+ @preloaders = preloaders.reject(&:empty?)
+ @available_records = available_records.flatten.group_by { |r| r.class.base_class }
+ end
+
+ def call
+ branches = @preloaders.flat_map(&:branches)
+ until branches.empty?
+ loaders = branches.flat_map(&:runnable_loaders)
+
+ loaders.each { |loader| loader.associate_records_from_unscoped(@available_records[loader.klass.base_class]) }
+
+ if loaders.any?
+ future_tables = branches.flat_map do |branch|
+ branch.future_classes - branch.runnable_loaders.map(&:klass)
+ end.map(&:table_name).uniq
+
+ target_loaders = loaders.reject { |l| future_tables.include?(l.table_name) }
+ target_loaders = loaders if target_loaders.empty?
+
+ group_and_load_similar(target_loaders)
+ target_loaders.each(&:run)
+ end
+
+ finished, in_progress = branches.partition(&:done?)
+
+ branches = in_progress + finished.flat_map(&:children)
+ end
+ end
+
+ private
+ attr_reader :loaders
+
+ def group_and_load_similar(loaders)
+ loaders.grep_v(ThroughAssociation).group_by(&:loader_query).each_pair do |query, similar_loaders|
+ query.load_records_in_batch(similar_loaders)
+ end
+ end
+ end
+ end
+ end
+end
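The loop in `Batch#call` can be modeled in miniature: run every currently runnable branch, then replace finished branches with their children until the tree is exhausted. `Branch` below is a toy stand-in, not the real class:

```ruby
# Simplified model of the Batch traversal order: parents run before
# their children become eligible.
Branch = Struct.new(:name, :children, :done) do
  def run!
    self.done = true
  end
end

leaf = Branch.new(:avatar, [], false)
root = Branch.new(:author, [leaf], false)

order = []
branches = [root]
until branches.empty?
  branches.reject(&:done).each { |b| order << b.name; b.run! }
  finished, in_progress = branches.partition(&:done)
  branches = in_progress + finished.flat_map(&:children)
end
```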
diff --git a/activerecord/lib/active_record/associations/preloader/branch.rb b/activerecord/lib/active_record/associations/preloader/branch.rb
new file mode 100644
index 0000000000..4cd9bde50f
--- /dev/null
+++ b/activerecord/lib/active_record/associations/preloader/branch.rb
@@ -0,0 +1,147 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module Associations
+ class Preloader
+ class Branch # :nodoc:
+ attr_reader :association, :children, :parent
+ attr_reader :scope, :associate_by_default
+ attr_writer :preloaded_records
+
+ def initialize(association:, children:, parent:, associate_by_default:, scope:)
+ @association = association
+ @parent = parent
+ @scope = scope
+ @associate_by_default = associate_by_default
+
+ @children = build_children(children)
+ @loaders = nil
+ end
+
+ def future_classes
+ (immediate_future_classes + children.flat_map(&:future_classes)).uniq
+ end
+
+ def immediate_future_classes
+ if parent.done?
+ loaders.flat_map(&:future_classes).uniq
+ else
+ likely_reflections.reject(&:polymorphic?).flat_map do |reflection|
+ reflection.
+ chain.
+ map(&:klass)
+ end.uniq
+ end
+ end
+
+ def target_classes
+ if done?
+ preloaded_records.map(&:klass).uniq
+ elsif parent.done?
+ loaders.map(&:klass).uniq
+ else
+ likely_reflections.reject(&:polymorphic?).map(&:klass).uniq
+ end
+ end
+
+ def likely_reflections
+ parent_classes = parent.target_classes
+ parent_classes.filter_map do |parent_klass|
+ parent_klass._reflect_on_association(@association)
+ end
+ end
+
+ def root?
+ parent.nil?
+ end
+
+ def source_records
+ @parent.preloaded_records
+ end
+
+ def preloaded_records
+ @preloaded_records ||= loaders.flat_map(&:preloaded_records)
+ end
+
+ def done?
+ root? || (@loaders && @loaders.all?(&:run?))
+ end
+
+ def runnable_loaders
+ loaders.flat_map(&:runnable_loaders).reject(&:run?)
+ end
+
+ def grouped_records
+ h = {}
+ polymorphic_parent = !root? && parent.polymorphic?
+ source_records.each do |record|
+ reflection = record.class._reflect_on_association(association)
+ next if polymorphic_parent && !reflection || !record.association(association).klass
+ (h[reflection] ||= []) << record
+ end
+ h
+ end
+
+ def preloaders_for_reflection(reflection, reflection_records)
+ reflection_records.group_by do |record|
+ klass = record.association(association).klass
+
+ if reflection.scope && reflection.scope.arity != 0
+ # For instance dependent scopes, the scope is potentially
+ # different for each record. To allow this we'll group each
+ # object separately into its own preloader
+ reflection_scope = reflection.join_scopes(klass.arel_table, klass.predicate_builder, klass, record).inject(&:merge!)
+ end
+
+ [klass, reflection_scope]
+ end.map do |(rhs_klass, reflection_scope), rs|
+ preloader_for(reflection).new(rhs_klass, rs, reflection, scope, reflection_scope, associate_by_default)
+ end
+ end
+
+ def polymorphic?
+ return false if root?
+ return @polymorphic if defined?(@polymorphic)
+
+ @polymorphic = source_records.any? do |record|
+ reflection = record.class._reflect_on_association(association)
+ reflection && reflection.options[:polymorphic]
+ end
+ end
+
+ def loaders
+ @loaders ||=
+ grouped_records.flat_map do |reflection, reflection_records|
+ preloaders_for_reflection(reflection, reflection_records)
+ end
+ end
+
+ private
+ def build_children(children)
+ Array.wrap(children).flat_map { |association|
+ Array(association).flat_map { |parent, child|
+ Branch.new(
+ parent: self,
+ association: parent,
+ children: child,
+ associate_by_default: associate_by_default,
+ scope: scope
+ )
+ }
+ }
+ end
+
+ # Returns a class containing the logic needed to preload the data
+ # and attach it to a relation. The class returned implements a `run` method
+ # that accepts a preloader.
+ def preloader_for(reflection)
+ if reflection.options[:through]
+ ThroughAssociation
+ else
+ Association
+ end
+ end
+ end
+ end
+ end
+end
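`Branch#grouped_records` groups source records by the reflection their class declares for the association name, skipping records under a polymorphic parent that lack the association. A toy version (`REFLECTIONS` and the class names are hypothetical):

```ruby
# Records of classes sharing a reflection are preloaded together;
# under a polymorphic parent, classes without the association are
# silently skipped instead of raising.
REFLECTIONS = {
  "Post"  => { author: :post_author_reflection },
  "Image" => {} # Image declares no :author association
}

Rec = Struct.new(:type)

def grouped_records(association, records, polymorphic_parent:)
  h = {}
  records.each do |record|
    reflection = REFLECTIONS.fetch(record.type)[association]
    next if polymorphic_parent && !reflection
    (h[reflection] ||= []) << record
  end
  h
end
```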
diff --git a/activerecord/lib/active_record/associations/preloader/through_association.rb b/activerecord/lib/active_record/associations/preloader/through_association.rb
index b39a235a19..3ee1b4bb21 100644
--- a/activerecord/lib/active_record/associations/preloader/through_association.rb
+++ b/activerecord/lib/active_record/associations/preloader/through_association.rb
@@ -4,26 +4,22 @@ module ActiveRecord
module Associations
class Preloader
class ThroughAssociation < Association # :nodoc:
- PRELOADER = ActiveRecord::Associations::Preloader.new(associate_by_default: false)
-
- def initialize(*)
- super
- @already_loaded = owners.first.association(through_reflection.name).loaded?
- end
-
def preloaded_records
@preloaded_records ||= source_preloaders.flat_map(&:preloaded_records)
end
def records_by_owner
return @records_by_owner if defined?(@records_by_owner)
- source_records_by_owner = source_preloaders.map(&:records_by_owner).reduce(:merge)
- through_records_by_owner = through_preloaders.map(&:records_by_owner).reduce(:merge)
@records_by_owner = owners.each_with_object({}) do |owner, result|
+ if loaded?(owner)
+ result[owner] = target_for(owner)
+ next
+ end
+
through_records = through_records_by_owner[owner] || []
- if @already_loaded
+ if owners.first.association(through_reflection.name).loaded?
if source_type = reflection.options[:source_type]
through_records = through_records.select do |record|
record[reflection.foreign_type] == source_type
@@ -42,17 +38,47 @@ def records_by_owner
end
end
+ def runnable_loaders
+ if data_available?
+ [self]
+ elsif through_preloaders.all?(&:run?)
+ source_preloaders.flat_map(&:runnable_loaders)
+ else
+ through_preloaders.flat_map(&:runnable_loaders)
+ end
+ end
+
+ def future_classes
+ if run?
+ []
+ elsif through_preloaders.all?(&:run?)
+ source_preloaders.flat_map(&:future_classes).uniq
+ else
+ through_classes = through_preloaders.flat_map(&:future_classes)
+ source_classes = source_reflection.
+ chain.
+ reject { |reflection| reflection.respond_to?(:polymorphic?) && reflection.polymorphic? }.
+ map(&:klass)
+ (through_classes + source_classes).uniq
+ end
+ end
+
private
+ def data_available?
+ owners.all? { |owner| loaded?(owner) } ||
+ through_preloaders.all?(&:run?) && source_preloaders.all?(&:run?)
+ end
+
def source_preloaders
- @source_preloaders ||= PRELOADER.preload(middle_records, source_reflection.name, scope)
+ @source_preloaders ||= ActiveRecord::Associations::Preloader.new(records: middle_records, associations: source_reflection.name, scope: scope, associate_by_default: false).loaders
end
def middle_records
- through_preloaders.flat_map(&:preloaded_records)
+ through_records_by_owner.values.flatten
end
def through_preloaders
- @through_preloaders ||= PRELOADER.preload(owners, through_reflection.name, through_scope)
+ @through_preloaders ||= ActiveRecord::Associations::Preloader.new(records: owners, associations: through_reflection.name, scope: through_scope, associate_by_default: false).loaders
end
def through_reflection
@@ -63,6 +89,14 @@ def source_reflection
reflection.source_reflection
end
+ def source_records_by_owner
+ @source_records_by_owner ||= source_preloaders.map(&:records_by_owner).reduce(:merge)
+ end
+
+ def through_records_by_owner
+ @through_records_by_owner ||= through_preloaders.map(&:records_by_owner).reduce(:merge)
+ end
+
def preload_index
@preload_index ||= preloaded_records.each_with_object({}).with_index do |(record, result), index|
result[record] = index
@@ -73,6 +107,8 @@ def through_scope
scope = through_reflection.klass.unscoped
options = reflection.options
+ return scope if options[:disable_joins]
+
values = reflection_scope.values
if annotations = values[:annotate]
scope.annotate!(*annotations)
@@ -108,7 +144,7 @@ def through_scope
end
end
- scope
+ cascade_strict_loading(scope)
end
end
end
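The `runnable_loaders` branching added above stages through-association preloading: source loaders are only eligible to run once every through loader has run. A minimal standalone sketch of that scheduling idea (simplified stand-in objects, not the real `Preloader` API):

```ruby
# A leaf loader knows whether it has run; a through loader exposes only the
# sub-loaders that are currently runnable, so a scheduler can drain them in
# stages: first the "through" side, then the "source" side.
Loader = Struct.new(:name) do
  def run?
    @ran
  end

  def run
    @ran = true
  end

  def runnable_loaders
    [self]
  end
end

class ThroughLoader
  def initialize(through_loaders, source_loaders)
    @through = through_loaders
    @source  = source_loaders
  end

  # Source loaders only become runnable once every through loader has run.
  def runnable_loaders
    if @through.all?(&:run?)
      @source.flat_map(&:runnable_loaders)
    else
      @through.flat_map(&:runnable_loaders)
    end
  end
end

through = [Loader.new("authors")]
source  = [Loader.new("posts")]
loader  = ThroughLoader.new(through, source)
p loader.runnable_loaders.map(&:name) # => ["authors"]
through.each(&:run)
p loader.runnable_loaders.map(&:name) # => ["posts"]
```

The real implementation adds a third case, `data_available?`, which short-circuits the staging entirely when every owner's association is already loaded.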
diff --git a/activerecord/lib/active_record/associations/singular_association.rb b/activerecord/lib/active_record/associations/singular_association.rb
index b7e5987c4b..551db0dc9a 100644
--- a/activerecord/lib/active_record/associations/singular_association.rb
+++ b/activerecord/lib/active_record/associations/singular_association.rb
@@ -2,9 +2,11 @@
module ActiveRecord
module Associations
- class SingularAssociation < Association #:nodoc:
+ class SingularAssociation < Association # :nodoc:
# Implements the reader method, e.g. foo.bar for Foo.has_one :bar
def reader
+ ensure_klass_exists!
+
if !loaded? || stale_target?
reload
end
@@ -32,11 +34,15 @@ def force_reload_reader
private
def scope_for_create
- super.except!(klass.primary_key)
+ super.except!(*Array(klass.primary_key))
end
def find_target
- super.first
+ if disable_joins
+ scope.first
+ else
+ super.first
+ end
end
def replace(record)
diff --git a/activerecord/lib/active_record/associations/through_association.rb b/activerecord/lib/active_record/associations/through_association.rb
index 3f5e9066eb..e3680bac93 100644
--- a/activerecord/lib/active_record/associations/through_association.rb
+++ b/activerecord/lib/active_record/associations/through_association.rb
@@ -3,10 +3,14 @@
module ActiveRecord
module Associations
# = Active Record Through Association
- module ThroughAssociation #:nodoc:
+ module ThroughAssociation # :nodoc:
delegate :source_reflection, to: :reflection
private
+ def transaction(&block)
+ through_reflection.klass.transaction(&block)
+ end
+
def through_reflection
@through_reflection ||= begin
refl = reflection.through_reflection
@@ -55,12 +59,11 @@ def construct_join_attributes(*records)
association_primary_key = source_reflection.association_primary_key(reflection.klass)
- if association_primary_key == reflection.klass.primary_key && !options[:source_type]
+ if Array(association_primary_key) == reflection.klass.composite_query_constraints_list && !options[:source_type]
join_attributes = { source_reflection.name => records }
else
- join_attributes = {
- source_reflection.foreign_key => records.map(&association_primary_key.to_sym)
- }
+ assoc_pk_values = records.map { |record| record._read_attribute(association_primary_key) }
+ join_attributes = { source_reflection.foreign_key => assoc_pk_values }
end
if options[:source_type]
@@ -74,16 +77,20 @@ def construct_join_attributes(*records)
end
end
- # Note: this does not capture all cases, for example it would be crazy to try to
- # properly support stale-checking for nested associations.
+ # Note: this does not capture all cases, for example it would be impractical
+ # to try to properly support stale-checking for nested associations.
def stale_state
if through_reflection.belongs_to?
- owner[through_reflection.foreign_key] && owner[through_reflection.foreign_key].to_s
+ Array(through_reflection.foreign_key).filter_map do |foreign_key_column|
+ owner[foreign_key_column] && owner[foreign_key_column].to_s
+ end.presence
end
end
def foreign_key_present?
- through_reflection.belongs_to? && !owner[through_reflection.foreign_key].nil?
+ through_reflection.belongs_to? && Array(through_reflection.foreign_key).all? do |foreign_key_column|
+ !owner[foreign_key_column].nil?
+ end
end
def ensure_mutable
@@ -107,11 +114,15 @@ def ensure_not_nested
end
def build_record(attributes)
- inverse = source_reflection.inverse_of
- target = through_association.target
-
- if inverse && target && !target.is_a?(Array)
- attributes[inverse.foreign_key] = target.id
+ if source_reflection.collection?
+ inverse = source_reflection.inverse_of
+ target = through_association.target
+
+ if inverse && target && !target.is_a?(Array)
+ Array(target.id).zip(Array(inverse.foreign_key)).map do |primary_key_value, foreign_key_column|
+ attributes[foreign_key_column] = primary_key_value
+ end
+ end
end
super
diff --git a/activerecord/lib/active_record/asynchronous_queries_tracker.rb b/activerecord/lib/active_record/asynchronous_queries_tracker.rb
new file mode 100644
index 0000000000..d9cd4117e8
--- /dev/null
+++ b/activerecord/lib/active_record/asynchronous_queries_tracker.rb
@@ -0,0 +1,60 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ class AsynchronousQueriesTracker # :nodoc:
+ module NullSession # :nodoc:
+ class << self
+ def active?
+ true
+ end
+
+ def finalize
+ end
+ end
+ end
+
+ class Session # :nodoc:
+ def initialize
+ @active = true
+ end
+
+ def active?
+ @active
+ end
+
+ def finalize
+ @active = false
+ end
+ end
+
+ class << self
+ def install_executor_hooks(executor = ActiveSupport::Executor)
+ executor.register_hook(self)
+ end
+
+ def run
+ ActiveRecord::Base.asynchronous_queries_tracker.start_session
+ end
+
+ def complete(asynchronous_queries_tracker)
+ asynchronous_queries_tracker.finalize_session
+ end
+ end
+
+ attr_reader :current_session
+
+ def initialize
+ @current_session = NullSession
+ end
+
+ def start_session
+ @current_session = Session.new
+ self
+ end
+
+ def finalize_session
+ @current_session.finalize
+ @current_session = NullSession
+ end
+ end
+end
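The new `AsynchronousQueriesTracker` above follows a session-lifecycle pattern: each request gets a session object that async queries capture, and finalizing it at request completion lets late-arriving results discard themselves. A standalone sketch of the same idea, with the Rails executor and connection plumbing removed:

```ruby
# Outside any request the tracker holds an always-active null session;
# start_session swaps in a real session, finalize_session deactivates it
# and swaps the null session back in. Async work that captured the old
# session sees active? == false and bails out.
module NullSession
  def self.active?
    true
  end

  def self.finalize
  end
end

class Session
  def initialize
    @active = true
  end

  def active?
    @active
  end

  def finalize
    @active = false
  end
end

class Tracker
  attr_reader :current_session

  def initialize
    @current_session = NullSession
  end

  def start_session
    @current_session = Session.new
    self
  end

  def finalize_session
    @current_session.finalize
    @current_session = NullSession
  end
end

tracker = Tracker.new
session = tracker.start_session.current_session
puts session.active?                  # true: a request is in flight
tracker.finalize_session
puts session.active?                  # false: stale async results get discarded
puts tracker.current_session.active?  # true again: back to the null session
```

In the real class, `install_executor_hooks` wires `start_session`/`finalize_session` into `ActiveSupport::Executor`'s `run`/`complete` callbacks so this happens once per request.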
diff --git a/activerecord/lib/active_record/attribute_assignment.rb b/activerecord/lib/active_record/attribute_assignment.rb
index 553484883c..b075153f87 100644
--- a/activerecord/lib/active_record/attribute_assignment.rb
+++ b/activerecord/lib/active_record/attribute_assignment.rb
@@ -1,7 +1,5 @@
# frozen_string_literal: true
-require "active_model/forbidden_attributes_protection"
-
module ActiveRecord
module AttributeAssignment
include ActiveModel::AttributeAssignment
@@ -46,7 +44,7 @@ def assign_multiparameter_attributes(pairs)
def execute_callstack_for_multiparameter_attributes(callstack)
errors = []
callstack.each do |name, values_with_empty_parameters|
- if values_with_empty_parameters.each_value.all?(&:nil?)
+ if values_with_empty_parameters.each_value.all?(NilClass)
values = nil
else
values = values_with_empty_parameters
diff --git a/activerecord/lib/active_record/attribute_methods.rb b/activerecord/lib/active_record/attribute_methods.rb
index 3b58472184..31a8d340e2 100644
--- a/activerecord/lib/active_record/attribute_methods.rb
+++ b/activerecord/lib/active_record/attribute_methods.rb
@@ -23,7 +23,7 @@ module AttributeMethods
RESTRICTED_CLASS_METHODS = %w(private public protected allocate new name parent superclass)
- class GeneratedAttributeMethods < Module #:nodoc:
+ class GeneratedAttributeMethods < Module # :nodoc:
include Mutex_m
end
@@ -33,26 +33,94 @@ def dangerous_attribute_methods # :nodoc:
Base.instance_methods +
Base.private_instance_methods -
Base.superclass.instance_methods -
- Base.superclass.private_instance_methods
+ Base.superclass.private_instance_methods +
+ %i[__id__ dup freeze frozen? hash class clone]
).map { |m| -m.to_s }.to_set.freeze
end
end
module ClassMethods
- def inherited(child_class) #:nodoc:
- child_class.initialize_generated_modules
- super
- end
-
def initialize_generated_modules # :nodoc:
@generated_attribute_methods = const_set(:GeneratedAttributeMethods, GeneratedAttributeMethods.new)
private_constant :GeneratedAttributeMethods
@attribute_methods_generated = false
+ @alias_attributes_mass_generated = false
include @generated_attribute_methods
super
end
+ def alias_attribute(new_name, old_name)
+ super
+
+ if @alias_attributes_mass_generated
+ ActiveSupport::CodeGenerator.batch(generated_attribute_methods, __FILE__, __LINE__) do |code_generator|
+ generate_alias_attribute_methods(code_generator, new_name, old_name)
+ end
+ end
+ end
+
+ def eagerly_generate_alias_attribute_methods(_new_name, _old_name) # :nodoc:
+ # alias attributes in Active Record are lazily generated
+ end
+
+ def generate_alias_attributes # :nodoc:
+ superclass.generate_alias_attributes unless superclass == Base
+ return if @alias_attributes_mass_generated
+
+ generated_attribute_methods.synchronize do
+ return if @alias_attributes_mass_generated
+ ActiveSupport::CodeGenerator.batch(generated_attribute_methods, __FILE__, __LINE__) do |code_generator|
+ aliases_by_attribute_name.each do |old_name, new_names|
+ new_names.each do |new_name|
+ generate_alias_attribute_methods(code_generator, new_name, old_name)
+ end
+ end
+ end
+
+ @alias_attributes_mass_generated = true
+ end
+ end
+
+ def alias_attribute_method_definition(code_generator, pattern, new_name, old_name)
+ method_name = pattern.method_name(new_name).to_s
+ target_name = pattern.method_name(old_name).to_s
+ parameters = pattern.parameters
+ old_name = old_name.to_s
+
+ method_defined = method_defined?(target_name) || private_method_defined?(target_name)
+ manually_defined = method_defined &&
+ !self.instance_method(target_name).owner.is_a?(GeneratedAttributeMethods)
+ reserved_method_name = ::ActiveRecord::AttributeMethods.dangerous_attribute_methods.include?(target_name)
+
+ if !abstract_class? && !has_attribute?(old_name)
+ # We only need to issue this deprecation warning once, so we issue it when defining the original reader method.
+ should_warn = target_name == old_name
+ if should_warn
+ ActiveRecord.deprecator.warn(
+ "#{self} model aliases `#{old_name}`, but `#{old_name}` is not an attribute. " \
+ "Starting in Rails 7.2, alias_attribute with non-attribute targets will raise. " \
+ "Use `alias_method :#{new_name}, :#{old_name}` or define the method manually."
+ )
+ end
+ super
+ elsif manually_defined && !reserved_method_name
+ aliased_method_redefined_as_well = method_defined_within?(method_name, self)
+ return if aliased_method_redefined_as_well
+
+ ActiveRecord.deprecator.warn(
+ "#{self} model aliases `#{old_name}` and has a method called `#{target_name}` defined. " \
+ "Starting in Rails 7.2 `#{method_name}` will not be calling `#{target_name}` anymore. " \
+ "You may want to additionally define `#{method_name}` to preserve the current behavior."
+ )
+ super
+ else
+ define_proxy_call(code_generator, method_name, pattern.proxy_target, parameters, old_name,
+ namespace: :proxy_alias_attribute
+ )
+ end
+ end
+
# Generates all the attribute related methods for columns in the database
# accessors, mutators and query methods.
def define_attribute_methods # :nodoc:
@@ -71,6 +139,7 @@ def undefine_attribute_methods # :nodoc:
generated_attribute_methods.synchronize do
super if defined?(@attribute_methods_generated) && @attribute_methods_generated
@attribute_methods_generated = false
+ @alias_attributes_mass_generated = false
end
end
@@ -97,7 +166,7 @@ def instance_method_already_implemented?(method_name)
super
else
# If ThisClass < ... < SomeSuperClass < ... < Base and SomeSuperClass
- # defines its own attribute method, then we don't want to overwrite that.
+ # defines its own attribute method, then we don't want to override that.
defined = method_defined_within?(method_name, superclass, Base) &&
! superclass.instance_method(method_name).owner.is_a?(GeneratedAttributeMethods)
defined || super
@@ -186,6 +255,16 @@ def has_attribute?(attr_name)
def _has_attribute?(attr_name) # :nodoc:
attribute_types.key?(attr_name)
end
+
+ private
+ def inherited(child_class)
+ super
+ child_class.initialize_generated_modules
+ child_class.class_eval do
+ @alias_attributes_mass_generated = false
+ @attribute_names = nil
+ end
+ end
end
# A Person object with a name attribute can ask <tt>person.respond_to?(:name)</tt>,
@@ -267,9 +346,8 @@ def attributes
# Returns an <tt>#inspect</tt>-like string for the value of the
# attribute +attr_name+. String attributes are truncated up to 50
- # characters, Date and Time attributes are returned in the
- # <tt>:db</tt> format. Other attributes return the value of
- # <tt>#inspect</tt> without modification.
+ # characters. Other attributes return the value of <tt>#inspect</tt>
+ # without modification.
#
# person = Person.create!(name: 'David Heinemeier Hansson ' * 3)
#
@@ -277,7 +355,7 @@ def attributes
# # => "\"David Heinemeier Hansson David Heinemeier Hansson ...\""
#
# person.attribute_for_inspect(:created_at)
- # # => "\"2012-10-22 00:15:07\""
+ # # => "\"2012-10-22 00:15:07.000000000 +0000\""
#
# person.attribute_for_inspect(:tag_ids)
# # => "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]"
@@ -310,37 +388,40 @@ def attribute_present?(attr_name)
!value.nil? && !(value.respond_to?(:empty?) && value.empty?)
end
- # Returns the value of the attribute identified by <tt>attr_name</tt> after it has been typecast (for example,
- # "2004-12-12" in a date column is cast to a date object, like Date.new(2004, 12, 12)). It raises
- # <tt>ActiveModel::MissingAttributeError</tt> if the identified attribute is missing.
- #
- # Note: +:id+ is always present.
+ # Returns the value of the attribute identified by +attr_name+ after it has
+ # been type cast. (For information about specific type casting behavior, see
+ # the types under ActiveModel::Type.)
#
# class Person < ActiveRecord::Base
# belongs_to :organization
# end
#
- # person = Person.new(name: 'Francesco', age: '22')
- # person[:name] # => "Francesco"
- # person[:age] # => 22
+ # person = Person.new(name: "Francesco", date_of_birth: "2004-12-12")
+ # person[:name] # => "Francesco"
+ # person[:date_of_birth] # => Date.new(2004, 12, 12)
+ # person[:organization_id] # => nil
#
- # person = Person.select('id').first
- # person[:name] # => ActiveModel::MissingAttributeError: missing attribute: name
- # person[:organization_id] # => ActiveModel::MissingAttributeError: missing attribute: organization_id
+ # Raises ActiveModel::MissingAttributeError if the attribute is missing.
+ # Note, however, that the +id+ attribute will never be considered missing.
+ #
+ # person = Person.select(:name).first
+ # person[:name] # => "Francesco"
+ # person[:date_of_birth] # => ActiveModel::MissingAttributeError: missing attribute 'date_of_birth' for Person
+ # person[:organization_id] # => ActiveModel::MissingAttributeError: missing attribute 'organization_id' for Person
+ # person[:id] # => nil
def [](attr_name)
read_attribute(attr_name) { |n| missing_attribute(n, caller) }
end
- # Updates the attribute identified by <tt>attr_name</tt> with the specified +value+.
- # (Alias for the protected #write_attribute method).
+ # Updates the attribute identified by +attr_name+ using the specified
+ # +value+. The attribute value will be type cast upon being read.
#
# class Person < ActiveRecord::Base
# end
#
# person = Person.new
- # person[:age] = '22'
- # person[:age] # => 22
- # person[:age].class # => Integer
+ # person[:date_of_birth] = "2004-12-12"
+ # person[:date_of_birth] # => Date.new(2004, 12, 12)
def []=(attr_name, value)
write_attribute(attr_name, value)
end
@@ -361,10 +442,9 @@ def []=(attr_name, value)
# end
#
# private
- #
- # def print_accessed_fields
- # p @posts.first.accessed_fields
- # end
+ # def print_accessed_fields
+ # p @posts.first.accessed_fields
+ # end
# end
#
# Which allows you to quickly change your code to:
@@ -385,25 +465,26 @@ def attribute_method?(attr_name)
end
def attributes_with_values(attribute_names)
- attribute_names.index_with do |name|
- _read_attribute(name)
- end
+ attribute_names.index_with { |name| @attributes[name] }
end
- # Filters the primary keys and readonly attributes from the attribute names.
+ # Filters the primary keys, readonly attributes and virtual columns from the attribute names.
def attributes_for_update(attribute_names)
attribute_names &= self.class.column_names
attribute_names.delete_if do |name|
- self.class.readonly_attribute?(name)
+ self.class.readonly_attribute?(name) ||
+ self.class.counter_cache_column?(name) ||
+ column_for_attribute(name).virtual?
end
end
- # Filters out the primary keys, from the attribute names, when the primary
+ # Filters out the virtual columns and also primary keys, from the attribute names, when the primary
# key is to be generated (e.g. the id attribute has no value).
def attributes_for_create(attribute_names)
attribute_names &= self.class.column_names
attribute_names.delete_if do |name|
- pk_attribute?(name) && id.nil?
+ (pk_attribute?(name) && id.nil?) ||
+ column_for_attribute(name).virtual?
end
end
@@ -414,7 +495,7 @@ def format_for_inspect(name, value)
inspected_value = if value.is_a?(String) && value.length > 50
"#{value[0, 50]}...".inspect
elsif value.is_a?(Date) || value.is_a?(Time)
- %("#{value.to_s(:inspect)}")
+ %("#{value.to_fs(:inspect)}")
else
value.inspect
end
diff --git a/activerecord/lib/active_record/attribute_methods/before_type_cast.rb b/activerecord/lib/active_record/attribute_methods/before_type_cast.rb
index 33ca3d38b6..3284149350 100644
--- a/activerecord/lib/active_record/attribute_methods/before_type_cast.rb
+++ b/activerecord/lib/active_record/attribute_methods/before_type_cast.rb
@@ -29,8 +29,8 @@ module BeforeTypeCast
extend ActiveSupport::Concern
included do
- attribute_method_suffix "_before_type_cast", "_for_database"
- attribute_method_suffix "_came_from_user?"
+ attribute_method_suffix "_before_type_cast", "_for_database", parameters: false
+ attribute_method_suffix "_came_from_user?", parameters: false
end
# Returns the value of the attribute identified by +attr_name+ before
@@ -52,6 +52,23 @@ def read_attribute_before_type_cast(attr_name)
attribute_before_type_cast(name)
end
+ # Returns the value of the attribute identified by +attr_name+ after
+ # serialization.
+ #
+ # class Book < ActiveRecord::Base
+ # enum :status, { draft: 1, published: 2 }
+ # end
+ #
+ # book = Book.new(status: "published")
+ # book.read_attribute(:status) # => "published"
+ # book.read_attribute_for_database(:status) # => 2
+ def read_attribute_for_database(attr_name)
+ name = attr_name.to_s
+ name = self.class.attribute_aliases[name] || name
+
+ attribute_for_database(name)
+ end
+
# Returns a hash of attributes before typecasting and deserialization.
#
# class Task < ActiveRecord::Base
@@ -66,6 +83,11 @@ def attributes_before_type_cast
@attributes.values_before_type_cast
end
+ # Returns a hash of attributes for assignment to the database.
+ def attributes_for_database
+ @attributes.values_for_database
+ end
+
private
# Dispatch target for <tt>*_before_type_cast</tt> attribute methods.
def attribute_before_type_cast(attr_name)
diff --git a/activerecord/lib/active_record/attribute_methods/dirty.rb b/activerecord/lib/active_record/attribute_methods/dirty.rb
index bf5cc82f7a..40b58606eb 100644
--- a/activerecord/lib/active_record/attribute_methods/dirty.rb
+++ b/activerecord/lib/active_record/attribute_methods/dirty.rb
@@ -4,6 +4,38 @@
module ActiveRecord
module AttributeMethods
+ # = Active Record Attribute Methods \Dirty
+ #
+ # Provides a way to track changes in your Active Record models. It adds all
+ # methods from ActiveModel::Dirty and adds database-specific methods.
+ #
+ # A newly created +Person+ object is unchanged:
+ #
+ # class Person < ActiveRecord::Base
+ # end
+ #
+ # person = Person.create(name: "Allison")
+ # person.changed? # => false
+ #
+ # Change the name:
+ #
+ # person.name = 'Alice'
+ # person.name_in_database # => "Allison"
+ # person.will_save_change_to_name? # => true
+ # person.name_change_to_be_saved # => ["Allison", "Alice"]
+ # person.changes_to_save # => {"name"=>["Allison", "Alice"]}
+ #
+ # Save the changes:
+ #
+ # person.save
+ # person.name_in_database # => "Alice"
+ # person.saved_change_to_name? # => true
+ # person.saved_change_to_name # => ["Allison", "Alice"]
+ # person.name_before_last_change # => "Allison"
+ #
+ # Similar to ActiveModel::Dirty, methods can be invoked as
+ # +saved_change_to_name?+ or by passing an argument to the generic method
+ # <tt>saved_change_to_attribute?("name")</tt>.
module Dirty
extend ActiveSupport::Concern
@@ -14,16 +46,17 @@ module Dirty
raise "You cannot include Dirty after Timestamp"
end
- class_attribute :partial_writes, instance_writer: false, default: true
+ class_attribute :partial_updates, instance_writer: false, default: true
+ class_attribute :partial_inserts, instance_writer: false, default: true
# Attribute methods for "changed in last call to save?"
- attribute_method_affix(prefix: "saved_change_to_", suffix: "?")
- attribute_method_prefix("saved_change_to_")
- attribute_method_suffix("_before_last_save")
+ attribute_method_affix(prefix: "saved_change_to_", suffix: "?", parameters: "**options")
+ attribute_method_prefix("saved_change_to_", parameters: false)
+ attribute_method_suffix("_before_last_save", parameters: false)
# Attribute methods for "will change if I call save?"
- attribute_method_affix(prefix: "will_save_change_to_", suffix: "?")
- attribute_method_suffix("_change_to_be_saved", "_in_database")
+ attribute_method_affix(prefix: "will_save_change_to_", suffix: "?", parameters: "**options")
+ attribute_method_suffix("_change_to_be_saved", "_in_database", parameters: false)
end
# <tt>reload</tt> the record and clears changed attributes.
@@ -43,11 +76,13 @@ def reload(*)
#
# ==== Options
#
- # +from+ When passed, this method will return false unless the original
- # value is equal to the given option
+ # [+from+]
+ # When specified, this method will return false unless the original
+ # value is equal to the given value.
#
- # +to+ When passed, this method will return false unless the value was
- # changed to the given value
+ # [+to+]
+ # When specified, this method will return false unless the value will be
+ # changed to the given value.
def saved_change_to_attribute?(attr_name, **options)
mutations_before_last_save.changed?(attr_name.to_s, **options)
end
@@ -93,11 +128,13 @@ def saved_changes
#
# ==== Options
#
- # +from+ When passed, this method will return false unless the original
- # value is equal to the given option
+ # [+from+]
+ # When specified, this method will return false unless the original
+ # value is equal to the given value.
#
- # +to+ When passed, this method will return false unless the value will be
- # changed to the given value
+ # [+to+]
+ # When specified, this method will return false unless the value will be
+ # changed to the given value.
def will_save_change_to_attribute?(attr_name, **options)
mutations_from_database.changed?(attr_name.to_s, **options)
end
@@ -156,10 +193,12 @@ def attributes_in_database
end
private
- def write_attribute_without_type_cast(attr_name, value)
- result = super
- clear_attribute_change(attr_name)
- result
+ def init_internals
+ super
+ @mutations_before_last_save = nil
+ @mutations_from_database = nil
+ @_touch_attr_names = nil
+ @_skip_dirty_tracking = nil
end
def _touch_row(attribute_names, time)
@@ -191,20 +230,32 @@ def _touch_row(attribute_names, time)
@_touch_attr_names, @_skip_dirty_tracking = nil, nil
end
- def _update_record(attribute_names = attribute_names_for_partial_writes)
+ def _update_record(attribute_names = attribute_names_for_partial_updates)
affected_rows = super
changes_applied
affected_rows
end
- def _create_record(attribute_names = attribute_names_for_partial_writes)
+ def _create_record(attribute_names = attribute_names_for_partial_inserts)
id = super
changes_applied
id
end
- def attribute_names_for_partial_writes
- partial_writes? ? changed_attribute_names_to_save : attribute_names
+ def attribute_names_for_partial_updates
+ partial_updates? ? changed_attribute_names_to_save : attribute_names
+ end
+
+ def attribute_names_for_partial_inserts
+ if partial_inserts?
+ changed_attribute_names_to_save
+ else
+ attribute_names.reject do |attr_name|
+ if column_for_attribute(attr_name).default_function
+ !attribute_changed?(attr_name)
+ end
+ end
+ end
end
end
end
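The hunk above splits the old `partial_writes` flag into `partial_updates` and `partial_inserts`, each of which narrows the attribute list to columns that actually changed. A conceptual sketch of that filtering in plain Ruby (`DirtySketch` and `attribute_names_for_write` are illustrative names, not Active Record API):

```ruby
# Track the original attribute values alongside the current ones; with
# partial writes enabled, only the changed attribute names are handed to
# the UPDATE/INSERT, otherwise every attribute is written.
class DirtySketch
  def initialize(attrs)
    @original = attrs.dup
    @current  = attrs.dup
  end

  def []=(name, value)
    @current[name] = value
  end

  def changed_attribute_names
    @current.keys.select { |k| @current[k] != @original[k] }
  end

  def attribute_names_for_write(partial: true)
    partial ? changed_attribute_names : @current.keys
  end
end

row = DirtySketch.new("name" => "Allison", "age" => 30)
row["name"] = "Alice"
p row.attribute_names_for_write                 # => ["name"]
p row.attribute_names_for_write(partial: false) # => ["name", "age"]
```

The real `attribute_names_for_partial_inserts` adds one refinement visible in the diff: even with partial inserts disabled, unchanged columns backed by a database default function are excluded so the database can supply the default.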
diff --git a/activerecord/lib/active_record/attribute_methods/primary_key.rb b/activerecord/lib/active_record/attribute_methods/primary_key.rb
index 977409ff4c..4adee074e9 100644
--- a/activerecord/lib/active_record/attribute_methods/primary_key.rb
+++ b/activerecord/lib/active_record/attribute_methods/primary_key.rb
@@ -4,6 +4,7 @@
module ActiveRecord
module AttributeMethods
+ # = Active Record Attribute Methods Primary Key
module PrimaryKey
extend ActiveSupport::Concern
@@ -11,41 +12,80 @@ module PrimaryKey
# available.
def to_key
key = id
- [key] if key
+ Array(key) if key
end
- # Returns the primary key column's value.
+ # Returns the primary key column's value. If the primary key is composite,
+ # returns an array of the primary key column values.
def id
- _read_attribute(@primary_key)
+ return _read_attribute(@primary_key) unless @primary_key.is_a?(Array)
+
+ @primary_key.map { |pk| _read_attribute(pk) }
+ end
+
+ def primary_key_values_present? # :nodoc:
+ return id.all? if self.class.composite_primary_key?
+
+ !!id
end
- # Sets the primary key column's value.
+ # Sets the primary key column's value. If the primary key is composite,
+ # raises TypeError when the value being set is not enumerable.
def id=(value)

- _write_attribute(@primary_key, value)
+ if self.class.composite_primary_key?
+ raise TypeError, "Expected value matching #{self.class.primary_key.inspect}, got #{value.inspect}." unless value.is_a?(Enumerable)
+ @primary_key.zip(value) { |attr, value| _write_attribute(attr, value) }
+ else
+ _write_attribute(@primary_key, value)
+ end
end
- # Queries the primary key column's value.
+ # Queries the primary key column's value. If the primary key is composite,
+ # all primary key column values must be queryable.
def id?
- query_attribute(@primary_key)
+ if self.class.composite_primary_key?
+ @primary_key.all? { |col| _query_attribute(col) }
+ else
+ _query_attribute(@primary_key)
+ end
end
- # Returns the primary key column's value before type cast.
+ # Returns the primary key column's value before type cast. If the primary key is composite,
+ # returns an array of primary key column values before type cast.
def id_before_type_cast
- attribute_before_type_cast(@primary_key)
+ if self.class.composite_primary_key?
+ @primary_key.map { |col| attribute_before_type_cast(col) }
+ else
+ attribute_before_type_cast(@primary_key)
+ end
end
- # Returns the primary key column's previous value.
+ # Returns the primary key column's previous value. If the primary key is composite,
+ # returns an array of the primary key columns' previous values.
def id_was
- attribute_was(@primary_key)
+ if self.class.composite_primary_key?
+ @primary_key.map { |col| attribute_was(col) }
+ else
+ attribute_was(@primary_key)
+ end
end
- # Returns the primary key column's value from the database.
+ # Returns the primary key column's value from the database. If the primary key is composite,
+ # returns an array of primary key column values from the database.
def id_in_database
- attribute_in_database(@primary_key)
+ if self.class.composite_primary_key?
+ @primary_key.map { |col| attribute_in_database(col) }
+ else
+ attribute_in_database(@primary_key)
+ end
end
def id_for_database # :nodoc:
- @attributes[@primary_key].value_for_database
+ if self.class.composite_primary_key?
+ @primary_key.map { |col| @attributes[col].value_for_database }
+ else
+ @attributes[@primary_key].value_for_database
+ end
end
private
@@ -55,6 +95,7 @@ def attribute_method?(attr_name)
module ClassMethods
ID_ATTRIBUTE_METHODS = %w(id id= id? id_before_type_cast id_was id_in_database id_for_database).to_set
+ PRIMARY_KEY_NOT_SET = BasicObject.new
def instance_method_already_implemented?(method_name)
super || primary_key && ID_ATTRIBUTE_METHODS.include?(method_name)
@@ -68,17 +109,23 @@ def dangerous_attribute_method?(method_name)
# Overwriting will negate any effect of the +primary_key_prefix_type+
# setting, though.
def primary_key
- @primary_key = reset_primary_key unless defined? @primary_key
+ if PRIMARY_KEY_NOT_SET.equal?(@primary_key)
+ @primary_key = reset_primary_key
+ end
@primary_key
end
+ def composite_primary_key? # :nodoc:
+ primary_key.is_a?(Array)
+ end
+
# Returns a quoted version of the primary key name, used to construct
# SQL statements.
def quoted_primary_key
@quoted_primary_key ||= connection.quote_column_name(primary_key)
end
- def reset_primary_key #:nodoc:
+ def reset_primary_key # :nodoc:
if base_class?
self.primary_key = get_primary_key(base_class.name)
else
@@ -86,15 +133,14 @@ def reset_primary_key #:nodoc:
end
end
- def get_primary_key(base_name) #:nodoc:
+ def get_primary_key(base_name) # :nodoc:
if base_name && primary_key_prefix_type == :table_name
base_name.foreign_key(false)
elsif base_name && primary_key_prefix_type == :table_name_with_underscore
base_name.foreign_key
else
if ActiveRecord::Base != self && table_exists?
- pk = connection.schema_cache.primary_keys(table_name)
- suppress_composite_primary_key(pk)
+ connection.schema_cache.primary_keys(table_name)
else
"id"
end
@@ -117,20 +163,26 @@ def get_primary_key(base_name) #:nodoc:
#
# Project.primary_key # => "foo_id"
def primary_key=(value)
- @primary_key = value && -value.to_s
+ @primary_key = derive_primary_key(value)
@quoted_primary_key = nil
@attributes_builder = nil
end
private
- def suppress_composite_primary_key(pk)
- return pk unless pk.is_a?(Array)
+ def derive_primary_key(value)
+ return unless value
+
+ return -value.to_s unless value.is_a?(Array)
- warn <<~WARNING
- WARNING: Active Record does not support composite primary key.
+ value.map { |v| -v.to_s }.freeze
+ end
- #{table_name} has composite primary key. Composite primary key is ignored.
- WARNING
+ def inherited(base)
+ super
+ base.class_eval do
+ @primary_key = PRIMARY_KEY_NOT_SET
+ @quoted_primary_key = nil
+ end
end
end
end
diff --git a/activerecord/lib/active_record/attribute_methods/query.rb b/activerecord/lib/active_record/attribute_methods/query.rb
index d17e8d8513..aa371ab4b2 100644
--- a/activerecord/lib/active_record/attribute_methods/query.rb
+++ b/activerecord/lib/active_record/attribute_methods/query.rb
@@ -2,37 +2,49 @@
module ActiveRecord
module AttributeMethods
+ # = Active Record Attribute Methods \Query
module Query
extend ActiveSupport::Concern
included do
- attribute_method_suffix "?"
+ attribute_method_suffix "?", parameters: false
end
def query_attribute(attr_name)
- value = self[attr_name]
-
- case value
- when true then true
- when false, nil then false
- else
- if !type_for_attribute(attr_name) { false }
- if Numeric === value || !value.match?(/[^0-9]/)
- !value.to_i.zero?
+ value = self.public_send(attr_name)
+
+ query_cast_attribute(attr_name, value)
+ end
+
+ def _query_attribute(attr_name) # :nodoc:
+ value = self._read_attribute(attr_name.to_s)
+
+ query_cast_attribute(attr_name, value)
+ end
+
+ alias :attribute? :query_attribute
+ private :attribute?
+
+ private
+ def query_cast_attribute(attr_name, value)
+ case value
+ when true then true
+ when false, nil then false
+ else
+ if !type_for_attribute(attr_name) { false }
+ if Numeric === value || !value.match?(/[^0-9]/)
+ !value.to_i.zero?
+ else
+ return false if ActiveModel::Type::Boolean::FALSE_VALUES.include?(value)
+ !value.blank?
+ end
+ elsif value.respond_to?(:zero?)
+ !value.zero?
else
- return false if ActiveModel::Type::Boolean::FALSE_VALUES.include?(value)
!value.blank?
end
- elsif value.respond_to?(:zero?)
- !value.zero?
- else
- !value.blank?
end
end
- end
-
- alias :attribute? :query_attribute
- private :attribute?
end
end
end
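The boolean-casting rules that the extracted `query_cast_attribute` applies to untyped values can be approximated in standalone Ruby. Note the `FALSE_VALUES` set and the `strip.empty?` blank check below are simplified stand-ins for `ActiveModel::Type::Boolean::FALSE_VALUES` and Active Support's `blank?`, so this is a sketch of the rules, not the Rails implementation:

```ruby
require "set"

# Abbreviated stand-in for ActiveModel::Type::Boolean::FALSE_VALUES.
FALSE_VALUES = Set["false", "f", "off", "no"].freeze

# Casts an attribute value the way a `?` query method does when the
# attribute has no declared type: booleans pass through, numeric-looking
# values are truthy unless zero, known false-strings are false, and
# anything else is truthy unless blank.
def query_cast(value)
  case value
  when true then true
  when false, nil then false
  else
    if Numeric === value || !value.to_s.match?(/[^0-9]/)
      !value.to_i.zero?
    elsif FALSE_VALUES.include?(value)
      false
    else
      !value.to_s.strip.empty?
    end
  end
end
```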
diff --git a/activerecord/lib/active_record/attribute_methods/read.rb b/activerecord/lib/active_record/attribute_methods/read.rb
index 12cbfc867f..0eb327db64 100644
--- a/activerecord/lib/active_record/attribute_methods/read.rb
+++ b/activerecord/lib/active_record/attribute_methods/read.rb
@@ -2,6 +2,7 @@
module ActiveRecord
module AttributeMethods
+ # = Active Record Attribute Methods \Read
module Read
extend ActiveSupport::Concern
@@ -11,28 +12,42 @@ def define_method_attribute(name, owner:)
ActiveModel::AttributeMethods::AttrNames.define_attribute_accessor_method(
owner, name
) do |temp_method_name, attr_name_expr|
- owner <<
- "def #{temp_method_name}" <<
- " _read_attribute(#{attr_name_expr}) { |n| missing_attribute(n, caller) }" <<
- "end"
+ owner.define_cached_method(name, as: temp_method_name, namespace: :active_record) do |batch|
+ batch <<
+ "def #{temp_method_name}" <<
+ " _read_attribute(#{attr_name_expr}) { |n| missing_attribute(n, caller) }" <<
+ "end"
+ end
end
end
end
- # Returns the value of the attribute identified by <tt>attr_name</tt> after
- # it has been typecast (for example, "2004-12-12" in a date column is cast
- # to a date object, like Date.new(2004, 12, 12)).
+ # Returns the value of the attribute identified by +attr_name+ after it
+ # has been type cast. For example, a date attribute will cast "2004-12-12"
+ # to <tt>Date.new(2004, 12, 12)</tt>. (For information about specific type
+ # casting behavior, see the types under ActiveModel::Type.)
def read_attribute(attr_name, &block)
name = attr_name.to_s
name = self.class.attribute_aliases[name] || name
- name = @primary_key if name == "id" && @primary_key
- @attributes.fetch_value(name, &block)
+ return @attributes.fetch_value(name, &block) unless name == "id" && @primary_key
+
+ if self.class.composite_primary_key?
+ @attributes.fetch_value("id", &block)
+ else
+ if @primary_key != "id"
+ ActiveRecord.deprecator.warn(<<-MSG.squish)
+ Using read_attribute(:id) to read the primary key value is deprecated.
+ Use #id instead.
+ MSG
+ end
+ @attributes.fetch_value(@primary_key, &block)
+ end
end
# This method exists to avoid the expensive primary_key check internally, without
# breaking compatibility with the read_attribute API
- def _read_attribute(attr_name, &block) # :nodoc
+ def _read_attribute(attr_name, &block) # :nodoc:
@attributes.fetch_value(attr_name, &block)
end
diff --git a/activerecord/lib/active_record/attribute_methods/serialization.rb b/activerecord/lib/active_record/attribute_methods/serialization.rb
index 624a70b425..44639308d4 100644
--- a/activerecord/lib/active_record/attribute_methods/serialization.rb
+++ b/activerecord/lib/active_record/attribute_methods/serialization.rb
@@ -2,6 +2,7 @@
module ActiveRecord
module AttributeMethods
+ # = Active Record Attribute Methods \Serialization
module Serialization
extend ActiveSupport::Concern
@@ -15,16 +16,18 @@ def initialize(name, type)
end
end
+ included do
+ class_attribute :default_column_serializer, instance_accessor: false, default: Coders::YAMLColumn
+ end
+
module ClassMethods
- # If you have an attribute that needs to be saved to the database as an
- # object, and retrieved as the same object, then specify the name of that
- # attribute using this method and it will be handled automatically. The
- # serialization is done through YAML. If +class_name+ is specified, the
- # serialized object must be of that class on assignment and retrieval.
- # Otherwise SerializationTypeMismatch will be raised.
+ # If you have an attribute that needs to be saved to the database as a
+ # serialized object, and retrieved by deserializing into the same object,
+ # then specify the name of that attribute using this method and serialization
+ # will be handled automatically.
#
- # Empty objects as <tt>{}</tt>, in the case of +Hash+, or <tt>[]</tt>, in the case of
- # +Array+, will always be persisted as null.
+ # The serialization format may be YAML, JSON, or any custom format using a
+ # custom coder class.
#
# Keep in mind that database adapters handle certain serialization tasks
# for you. For instance: +json+ and +jsonb+ types in PostgreSQL will be
@@ -37,57 +40,211 @@ module ClassMethods
#
# ==== Parameters
#
- # * +attr_name+ - The field name that should be serialized.
- # * +class_name_or_coder+ - Optional, a coder object, which responds to +.load+ and +.dump+
- # or a class name that the object type should be equal to.
+ # * +attr_name+ - The name of the attribute to serialize.
+ # * +coder+ - The serializer implementation to use, e.g. +JSON+.
+ # * The attribute value will be serialized
+ # using the coder's <tt>dump(value)</tt> method, and will be
+ # deserialized using the coder's <tt>load(string)</tt> method. The
+ # +dump+ method may return +nil+ to serialize the value as +NULL+.
+ # * +type+ - Optional. What the type of the serialized object should be.
+ # * Attempting to serialize another type will raise an
+ # ActiveRecord::SerializationTypeMismatch error.
+ # * If the column is +NULL+ or starting from a new record, the default value
+ # will be set to +type.new+
+ # * +yaml+ - Optional. Yaml specific options. The allowed config is:
+ # * +:permitted_classes+ - +Array+ with the permitted classes.
+ # * +:unsafe_load+ - Unsafely load YAML blobs, allow YAML to load any class.
#
# ==== Options
#
- # +default+ The default value to use when no value is provided. If this option
- # is not passed, the previous default value (if any) will be used.
- # Otherwise, the default will be +nil+.
+ # * +:default+ - The default value to use when no value is provided. If
+ # this option is not passed, the previous default value (if any) will
+ # be used. Otherwise, the default will be +nil+.
+ #
+ # ==== Choosing a serializer
+ #
+ # While any serialization format can be used, it is recommended to carefully
+ # evaluate the properties of a serializer before using it, as migrating to
+ # another format later on can be difficult.
+ #
+ # ===== Avoid accepting arbitrary types
+ #
+ # When serializing data in a column, it is heavily recommended to make sure
+ # only expected types will be serialized. For instance, some serializers like
+ # +Marshal+ or +YAML+ are capable of serializing almost any Ruby object.
+ #
+ # This can lead to unexpected types being serialized, and it is important
+ # that type serialization remains backward and forward compatible as long
+ # as some database records still contain these serialized types.
#
- # ==== Example
+ # class Address
+ # def initialize(line, city, country)
+ # @line, @city, @country = line, city, country
+ # end
+ # end
+ #
+ # In the above example, if any of the +Address+ attributes is renamed,
+ # instances that were persisted before the change will be loaded with the
+ # old attributes. This problem is even worse when the serialized type comes
+ # from a dependency which doesn't expect to be serialized this way and may
+ # change its internal representation without notice.
+ #
+ # As such, it is heavily recommended to instead convert these objects into
+ # primitives of the serialization format, for example:
+ #
+ # class Address
+ # attr_reader :line, :city, :country
+ #
+ # def self.load(payload)
+ # data = YAML.safe_load(payload)
+ # new(data["line"], data["city"], data["country"])
+ # end
+ #
+ # def self.dump(address)
+ # YAML.safe_dump(
+ # "line" => address.line,
+ # "city" => address.city,
+ # "country" => address.country,
+ # )
+ # end
+ #
+ # def initialize(line, city, country)
+ # @line, @city, @country = line, city, country
+ # end
+ # end
#
- # # Serialize a preferences attribute.
# class User < ActiveRecord::Base
- # serialize :preferences
+ # serialize :address, coder: Address
# end
#
- # # Serialize preferences using JSON as coder.
+ # This pattern allows you to be more deliberate about what is serialized, and
+ # to evolve the format in a backward compatible way.
+ #
+ # ===== Ensure serialization stability
+ #
+ # Some serialization methods may accept some types they don't support by
+ # silently casting them to other types. This can cause bugs when the
+ # data is deserialized.
+ #
+ # For instance the +JSON+ serializer provided in the standard library will
+ # silently cast unsupported types to +String+:
+ #
+ # >> JSON.parse(JSON.dump(Struct.new(:foo)))
+ # => "#<Class:0x000000013090b4c0>"
+ #
+ # ==== Examples
+ #
+ # ===== Serialize the +preferences+ attribute using YAML
+ #
+ # class User < ActiveRecord::Base
+ # serialize :preferences, coder: YAML
+ # end
+ #
+ # ===== Serialize the +preferences+ attribute using JSON
+ #
# class User < ActiveRecord::Base
- # serialize :preferences, JSON
+ # serialize :preferences, coder: JSON
# end
#
- # # Serialize preferences as Hash using YAML coder.
+ # ===== Serialize the +preferences+ +Hash+ using YAML
+ #
# class User < ActiveRecord::Base
- # serialize :preferences, Hash
+ # serialize :preferences, type: Hash, coder: YAML
# end
- def serialize(attr_name, class_name_or_coder = Object, **options)
- # When ::JSON is used, force it to go through the Active Support JSON encoder
- # to ensure special objects (e.g. Active Record models) are dumped correctly
- # using the #as_json hook.
- coder = if class_name_or_coder == ::JSON
- Coders::JSON
- elsif [:load, :dump].all? { |x| class_name_or_coder.respond_to?(x) }
- class_name_or_coder
- else
- Coders::YAMLColumn.new(attr_name, class_name_or_coder)
+ #
+ # ===== Serializes +preferences+ to YAML, permitting select classes
+ #
+ # class User < ActiveRecord::Base
+ # serialize :preferences, coder: YAML, yaml: { permitted_classes: [Symbol, Time] }
+ # end
+ #
+ # ===== Serialize the +preferences+ attribute using a custom coder
+ #
+ # class Rot13JSON
+ # def self.rot13(string)
+ # string.tr("a-zA-Z", "n-za-mN-ZA-M")
+ # end
+ #
+ # # Serializes an attribute value to a string that will be stored in the database.
+ # def self.dump(value)
+ # rot13(ActiveSupport::JSON.dump(value))
+ # end
+ #
+ # # Deserializes a string from the database to an attribute value.
+ # def self.load(string)
+ # ActiveSupport::JSON.load(rot13(string))
+ # end
+ # end
+ #
+ # class User < ActiveRecord::Base
+ # serialize :preferences, coder: Rot13JSON
+ # end
+ #
+ def serialize(attr_name, class_name_or_coder = nil, coder: nil, type: Object, yaml: {}, **options)
+ unless class_name_or_coder.nil?
+ if class_name_or_coder == ::JSON || [:load, :dump].all? { |x| class_name_or_coder.respond_to?(x) }
+ ActiveRecord.deprecator.warn(<<~MSG)
+ Passing the coder as positional argument is deprecated and will be removed in Rails 7.2.
+
+ Please pass the coder as a keyword argument:
+
+ serialize #{attr_name.inspect}, coder: #{class_name_or_coder}
+ MSG
+ coder = class_name_or_coder
+ else
+ ActiveRecord.deprecator.warn(<<~MSG)
+ Passing the class as positional argument is deprecated and will be removed in Rails 7.2.
+
+ Please pass the class as a keyword argument:
+
+ serialize #{attr_name.inspect}, type: #{class_name_or_coder.name}
+ MSG
+ type = class_name_or_coder
+ end
end
- decorate_attribute_type(attr_name.to_s, **options) do |cast_type|
- if type_incompatible_with_serialize?(cast_type, class_name_or_coder)
+ coder ||= default_column_serializer
+ unless coder
+ raise ArgumentError, <<~MSG.squish
+ missing keyword: :coder
+
+ If no default coder is configured, a coder must be provided to `serialize`.
+ MSG
+ end
+
+ column_serializer = build_column_serializer(attr_name, coder, type, yaml)
+
+ attribute(attr_name, **options) do |cast_type|
+ if type_incompatible_with_serialize?(cast_type, coder, type)
raise ColumnNotSerializableError.new(attr_name, cast_type)
end
- Type::Serialized.new(cast_type, coder)
+ cast_type = cast_type.subtype if Type::Serialized === cast_type
+ Type::Serialized.new(cast_type, column_serializer)
end
end
private
- def type_incompatible_with_serialize?(type, class_name)
- type.is_a?(ActiveRecord::Type::Json) && class_name == ::JSON ||
- type.respond_to?(:type_cast_array, true) && class_name == ::Array
+ def build_column_serializer(attr_name, coder, type, yaml = nil)
+ # When ::JSON is used, force it to go through the Active Support JSON encoder
+ # to ensure special objects (e.g. Active Record models) are dumped correctly
+ # using the #as_json hook.
+ coder = Coders::JSON if coder == ::JSON
+
+ if coder == ::YAML || coder == Coders::YAMLColumn
+ Coders::YAMLColumn.new(attr_name, type, **(yaml || {}))
+ elsif coder.respond_to?(:new) && !coder.respond_to?(:load)
+ coder.new(attr_name, type)
+ elsif type && type != Object
+ Coders::ColumnSerializer.new(attr_name, coder, type)
+ else
+ coder
+ end
+ end
+
+ def type_incompatible_with_serialize?(cast_type, coder, type)
+ cast_type.is_a?(ActiveRecord::Type::Json) && coder == ::JSON ||
+ cast_type.respond_to?(:type_cast_array, true) && type == ::Array
end
end
end
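The `Rot13JSON` coder shown in the new `serialize` documentation is easy to try outside Rails; this adaptation swaps `ActiveSupport::JSON` for the stdlib `JSON` module, which changes the encoding hooks but keeps the dump/load contract a custom coder must satisfy:

```ruby
require "json"

# A custom coder: any object responding to .dump and .load can be
# passed to `serialize :attr, coder: ...`.
class Rot13JSON
  def self.rot13(string)
    string.tr("a-zA-Z", "n-za-mN-ZA-M")
  end

  # Serializes a value to the string stored in the database column.
  def self.dump(value)
    rot13(JSON.dump(value))
  end

  # Deserializes the stored string back into a value.
  def self.load(string)
    JSON.parse(rot13(string))
  end
end
```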
diff --git a/activerecord/lib/active_record/attribute_methods/time_zone_conversion.rb b/activerecord/lib/active_record/attribute_methods/time_zone_conversion.rb
index f5eb10c079..30ca312f0c 100644
--- a/activerecord/lib/active_record/attribute_methods/time_zone_conversion.rb
+++ b/activerecord/lib/active_record/attribute_methods/time_zone_conversion.rb
@@ -25,6 +25,8 @@ def cast(value)
rescue ArgumentError
nil
end
+ elsif value.respond_to?(:infinite?) && value.infinite?
+ value
else
map_avoiding_infinite_recursion(super) { |v| cast(v) }
end
@@ -36,7 +38,7 @@ def convert_time_to_time_zone(value)
if value.acts_like?(:time)
value.in_time_zone
- elsif value.is_a?(::Float)
+ elsif value.respond_to?(:infinite?) && value.infinite?
value
else
map_avoiding_infinite_recursion(value) { |v| convert_time_to_time_zone(v) }
@@ -61,8 +63,7 @@ def map_avoiding_infinite_recursion(value)
extend ActiveSupport::Concern
included do
- mattr_accessor :time_zone_aware_attributes, instance_writer: false, default: false
-
+ class_attribute :time_zone_aware_attributes, instance_writer: false, default: false
class_attribute :skip_time_zone_conversion_for_attributes, instance_writer: false, default: []
class_attribute :time_zone_aware_types, instance_writer: false, default: [ :datetime, :time ]
end
diff --git a/activerecord/lib/active_record/attribute_methods/write.rb b/activerecord/lib/active_record/attribute_methods/write.rb
index b0eb37c080..304bbce676 100644
--- a/activerecord/lib/active_record/attribute_methods/write.rb
+++ b/activerecord/lib/active_record/attribute_methods/write.rb
@@ -2,11 +2,12 @@
module ActiveRecord
module AttributeMethods
+ # = Active Record Attribute Methods \Write
module Write
extend ActiveSupport::Concern
included do
- attribute_method_suffix "="
+ attribute_method_suffix "=", parameters: "value"
end
module ClassMethods # :nodoc:
@@ -15,17 +16,18 @@ def define_method_attribute=(name, owner:)
ActiveModel::AttributeMethods::AttrNames.define_attribute_accessor_method(
owner, name, writer: true,
) do |temp_method_name, attr_name_expr|
- owner <<
- "def #{temp_method_name}(value)" <<
- " _write_attribute(#{attr_name_expr}, value)" <<
- "end"
+ owner.define_cached_method("#{name}=", as: temp_method_name, namespace: :active_record) do |batch|
+ batch <<
+ "def #{temp_method_name}(value)" <<
+ " _write_attribute(#{attr_name_expr}, value)" <<
+ "end"
+ end
end
end
end
- # Updates the attribute identified by <tt>attr_name</tt> with the
- # specified +value+. Empty strings for Integer and Float columns are
- # turned into +nil+.
+ # Updates the attribute identified by +attr_name+ using the specified
+ # +value+. The attribute value will be type cast upon being read.
def write_attribute(attr_name, value)
name = attr_name.to_s
name = self.class.attribute_aliases[name] || name
@@ -42,11 +44,6 @@ def _write_attribute(attr_name, value) # :nodoc:
alias :attribute= :_write_attribute
private :attribute=
-
- private
- def write_attribute_without_type_cast(attr_name, value)
- @attributes.write_cast_value(attr_name, value)
- end
end
end
end
diff --git a/activerecord/lib/active_record/attributes.rb b/activerecord/lib/active_record/attributes.rb
index 5f1514d878..6606ae9e77 100644
--- a/activerecord/lib/active_record/attributes.rb
+++ b/activerecord/lib/active_record/attributes.rb
@@ -10,11 +10,8 @@ module Attributes
included do
class_attribute :attributes_to_define_after_schema_loads, instance_accessor: false, default: {} # :internal:
end
-
+ # = Active Record \Attributes
module ClassMethods
- ##
- # :call-seq: attribute(name, cast_type = nil, **options)
- #
# Defines an attribute with a type on this model. It will override the
# type of existing attributes if needed. This allows control over how
# values are converted to and from SQL when assigned to a model. It also
@@ -197,10 +194,10 @@ module ClassMethods
# end
#
# Product.where(price_in_bitcoins: Money.new(5, "USD"))
- # # => SELECT * FROM products WHERE price_in_bitcoins = 0.02230
+ # # SELECT * FROM products WHERE price_in_bitcoins = 0.02230
#
# Product.where(price_in_bitcoins: Money.new(5, "GBP"))
- # # => SELECT * FROM products WHERE price_in_bitcoins = 0.03412
+ # # SELECT * FROM products WHERE price_in_bitcoins = 0.03412
#
# ==== Dirty Tracking
#
@@ -208,14 +205,31 @@ module ClassMethods
# tracking is performed. The methods +changed?+ and +changed_in_place?+
# will be called from ActiveModel::Dirty. See the documentation for those
# methods in ActiveModel::Type::Value for more details.
- def attribute(name, cast_type = nil, **options, &block)
+ def attribute(name, cast_type = nil, default: NO_DEFAULT_PROVIDED, **options)
name = name.to_s
+ name = attribute_aliases[name] || name
+
reload_schema_from_cache
+ case cast_type
+ when Symbol
+ cast_type = Type.lookup(cast_type, **options, adapter: Type.adapter_name_from(self))
+ when nil
+ if (prev_cast_type, prev_default = attributes_to_define_after_schema_loads[name])
+ default = prev_default if default == NO_DEFAULT_PROVIDED
+ else
+ prev_cast_type = -> subtype { subtype }
+ end
+
+ cast_type = if block_given?
+ -> subtype { yield Proc === prev_cast_type ? prev_cast_type[subtype] : prev_cast_type }
+ else
+ prev_cast_type
+ end
+ end
+
self.attributes_to_define_after_schema_loads =
- attributes_to_define_after_schema_loads.merge(
- name => [cast_type || block, options]
- )
+ attributes_to_define_after_schema_loads.merge(name => [cast_type, default])
end
# This is the low level API which sits beneath +attribute+. It only
@@ -248,8 +262,9 @@ def define_attribute(
def load_schema! # :nodoc:
super
- attributes_to_define_after_schema_loads.each do |name, (type, options)|
- define_attribute(name, _lookup_cast_type(name, type, options), **options.slice(:default))
+ attributes_to_define_after_schema_loads.each do |name, (cast_type, default)|
+ cast_type = cast_type[type_for_attribute(name)] if Proc === cast_type
+ define_attribute(name, cast_type, default: default)
end
end
@@ -272,32 +287,6 @@ def define_default_attribute(name, value, type, from_user:)
end
_default_attributes[name] = default_attribute
end
-
- def decorate_attribute_type(attr_name, **default)
- type, options = attributes_to_define_after_schema_loads[attr_name]
-
- default.with_defaults!(default: options[:default]) if options&.key?(:default)
-
- attribute(attr_name, **default) do |cast_type|
- if type && !type.is_a?(Proc)
- cast_type = _lookup_cast_type(attr_name, type, options)
- end
-
- yield cast_type
- end
- end
-
- def _lookup_cast_type(name, type, options)
- case type
- when Symbol
- adapter_name = ActiveRecord::Type.adapter_name_from(self)
- ActiveRecord::Type.lookup(type, **options.except(:default), adapter: adapter_name)
- when Proc
- type[type_for_attribute(name)]
- else
- type || type_for_attribute(name)
- end
- end
end
end
end
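The rewritten `attribute` above memoizes a `[cast_type, default]` pair per attribute name and lets a later call omit the cast type while reusing the previously registered one. A hypothetical standalone sketch of that merge behavior (the names here are illustrative, not the Rails internals):

```ruby
# Sentinel meaning "no default was passed" (mirrors NO_DEFAULT_PROVIDED).
NO_DEFAULT = Object.new

# Returns a new registry of name => [cast_type, default]. A later call
# without a cast type reuses the previously registered type, and keeps
# the previous default unless a new one is given.
def register_attribute(registry, name, cast_type = nil, default: NO_DEFAULT)
  if cast_type.nil?
    prev_cast_type, prev_default = registry[name]
    cast_type = prev_cast_type
    default = prev_default if default.equal?(NO_DEFAULT)
  end
  registry.merge(name => [cast_type, default])
end
```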
diff --git a/activerecord/lib/active_record/autosave_association.rb b/activerecord/lib/active_record/autosave_association.rb
index 7b550dae5f..50e6d48981 100644
--- a/activerecord/lib/active_record/autosave_association.rb
+++ b/activerecord/lib/active_record/autosave_association.rb
@@ -26,7 +26,7 @@ module ActiveRecord
#
# Child records are validated unless <tt>:validate</tt> is +false+.
#
- # == Callbacks
+ # == \Callbacks
#
# Association with autosave option defines several callbacks on your
# model (around_save, before_save, after_create, after_update). Please note that
@@ -138,7 +138,7 @@ module ActiveRecord
module AutosaveAssociation
extend ActiveSupport::Concern
- module AssociationBuilderExtension #:nodoc:
+ module AssociationBuilderExtension # :nodoc:
def self.build(model, reflection)
model.send(:add_autosave_association_callbacks, reflection)
end
@@ -150,25 +150,10 @@ def self.valid_options
included do
Associations::Builder::Association.extensions << AssociationBuilderExtension
- mattr_accessor :index_nested_attribute_errors, instance_writer: false, default: false
end
module ClassMethods # :nodoc:
private
- if Module.method(:method_defined?).arity == 1 # MRI 2.5 and older
- using Module.new {
- refine Module do
- def method_defined?(method, inherit = true)
- if inherit
- super(method)
- else
- instance_methods(false).include?(method.to_sym)
- end
- end
- end
- }
- end
-
def define_non_cyclic_method(name, &block)
return if method_defined?(name, false)
@@ -210,7 +195,7 @@ def add_autosave_association_callbacks(reflection)
after_create save_method
after_update save_method
elsif reflection.has_one?
- define_method(save_method) { save_has_one_association(reflection) } unless method_defined?(save_method)
+ define_non_cyclic_method(save_method) { save_has_one_association(reflection) }
# Configures two callbacks instead of a single after_save so that
# the model may rely on their execution order relative to its
# own callbacks.
@@ -288,6 +273,11 @@ def changed_for_autosave?
end
private
+ def init_internals
+ super
+ @_already_called = nil
+ end
+
# Returns the record for an association collection that should be validated
# or saved. If +autosave+ is +false+ only new records will be returned,
# unless the parent is/was a new record itself.
@@ -349,7 +339,7 @@ def association_valid?(reflection, record, index = nil)
unless valid = record.valid?(context)
if reflection.options[:autosave]
- indexed_attribute = !index.nil? && (reflection.options[:index_errors] || ActiveRecord::Base.index_nested_attribute_errors)
+ indexed_attribute = !index.nil? && (reflection.options[:index_errors] || ActiveRecord.index_nested_attribute_errors)
record.errors.group_by_attribute.each { |attribute, errors|
attribute = normalize_reflection_attribute(indexed_attribute, reflection, index, attribute)
@@ -419,6 +409,8 @@ def save_collection_association(reflection)
saved = true
if autosave != false && (new_record_before_save || record.new_record?)
+ association.set_inverse_instance(record)
+
if autosave
saved = association.insert_record(record, false)
elsif !reflection.nested?
@@ -457,14 +449,18 @@ def save_has_one_association(reflection)
if autosave && record.marked_for_destruction?
record.destroy
elsif autosave != false
- key = reflection.options[:primary_key] ? public_send(reflection.options[:primary_key]) : id
+ primary_key = Array(compute_primary_key(reflection, self)).map(&:to_s)
+ primary_key_value = primary_key.map { |key| _read_attribute(key) }
- if (autosave && record.changed_for_autosave?) || record_changed?(reflection, record, key)
+ if (autosave && record.changed_for_autosave?) || _record_changed?(reflection, record, primary_key_value)
unless reflection.through_reflection
- record[reflection.foreign_key] = key
- if inverse_reflection = reflection.inverse_of
- record.association(inverse_reflection.name).inversed_from(self)
+ foreign_key = Array(reflection.foreign_key)
+ primary_key_foreign_key_pairs = primary_key.zip(foreign_key)
+
+ primary_key_foreign_key_pairs.each do |primary_key, foreign_key|
+ record[foreign_key] = _read_attribute(primary_key)
end
+ association.set_inverse_instance(record)
end
saved = record.save(validate: !autosave)
@@ -476,16 +472,28 @@ def save_has_one_association(reflection)
end
# If the record is new or it has changed, returns true.
- def record_changed?(reflection, record, key)
+ def _record_changed?(reflection, record, key)
record.new_record? ||
- association_foreign_key_changed?(reflection, record, key) ||
+ (association_foreign_key_changed?(reflection, record, key) ||
+ inverse_polymorphic_association_changed?(reflection, record)) ||
record.will_save_change_to_attribute?(reflection.foreign_key)
end
def association_foreign_key_changed?(reflection, record, key)
return false if reflection.through_reflection?
- record._has_attribute?(reflection.foreign_key) && record._read_attribute(reflection.foreign_key) != key
+ foreign_key = Array(reflection.foreign_key)
+ return false unless foreign_key.all? { |key| record._has_attribute?(key) }
+
+ foreign_key.map { |key| record._read_attribute(key) } != Array(key)
+ end
+
+ def inverse_polymorphic_association_changed?(reflection, record)
+ return false unless reflection.inverse_of&.polymorphic?
+
+ class_name = record._read_attribute(reflection.inverse_of.foreign_type)
+
+ reflection.active_record != record.class.polymorphic_class_for(class_name)
end
# Saves the associated record if it's new or <tt>:autosave</tt> is enabled.
@@ -500,14 +508,21 @@ def save_belongs_to_association(reflection)
autosave = reflection.options[:autosave]
if autosave && record.marked_for_destruction?
- self[reflection.foreign_key] = nil
+ foreign_key = Array(reflection.foreign_key)
+ foreign_key.each { |key| self[key] = nil }
record.destroy
elsif autosave != false
saved = record.save(validate: !autosave) if record.new_record? || (autosave && record.changed_for_autosave?)
if association.updated?
- association_id = record.public_send(reflection.options[:primary_key] || :id)
- self[reflection.foreign_key] = association_id
+ primary_key = Array(compute_primary_key(reflection, record)).map(&:to_s)
+ foreign_key = Array(reflection.foreign_key)
+
+ primary_key_foreign_key_pairs = primary_key.zip(foreign_key)
+ primary_key_foreign_key_pairs.each do |primary_key, foreign_key|
+ association_id = record._read_attribute(primary_key)
+ self[foreign_key] = association_id unless self[foreign_key] == association_id
+ end
association.loaded!
end
@@ -516,6 +531,22 @@ def save_belongs_to_association(reflection)
end
end
+ def compute_primary_key(reflection, record)
+ if primary_key_options = reflection.options[:primary_key]
+ primary_key_options
+ elsif reflection.options[:query_constraints] && (query_constraints = record.class.query_constraints_list)
+ query_constraints
+ elsif record.class.has_query_constraints? && !reflection.options[:foreign_key]
+ record.class.query_constraints_list
+ elsif record.class.composite_primary_key?
+ # If record has composite primary key of shape [:<tenant_key>, :id], infer primary_key as :id
+ primary_key = record.class.primary_key
+ primary_key.include?("id") ? "id" : primary_key
+ else
+ record.class.primary_key
+ end
+ end
+
def custom_validation_context?
validation_context && [:create, :update].exclude?(validation_context)
end
diff --git a/activerecord/lib/active_record/base.rb b/activerecord/lib/active_record/base.rb
index a79ce54fbf..80d83443a4 100644
--- a/activerecord/lib/active_record/base.rb
+++ b/activerecord/lib/active_record/base.rb
@@ -12,7 +12,7 @@
require "active_record/type_caster"
require "active_record/database_configurations"
-module ActiveRecord #:nodoc:
+module ActiveRecord # :nodoc:
# = Active Record
#
# Active Record objects don't specify their attributes directly, but rather infer them from
@@ -137,6 +137,23 @@ module ActiveRecord #:nodoc:
# anonymous = User.new(name: "")
# anonymous.name? # => false
#
+ # Query methods will also respect any overrides of default accessors:
+ #
+ # class User
+ # # Has admin boolean column
+ # def admin
+ # false
+ # end
+ # end
+ #
+ # user.update(admin: true)
+ #
+ # user.read_attribute(:admin) # => true, gets the column value
+ # user[:admin] # => true, also gets the column value
+ #
+ # user.admin # => false, due to the getter override
+ # user.admin? # => false, due to the getter override
+ #
# == Accessing attributes before they have been typecasted
#
# Sometimes you want to be able to read the raw attribute data without having the column-determined
@@ -294,11 +311,12 @@ class Base
include Attributes
include Locking::Optimistic
include Locking::Pessimistic
+ include Encryption::EncryptableRecord
include AttributeMethods
include Callbacks
include Timestamp
include Associations
- include ActiveModel::SecurePassword
+ include SecurePassword
include AutosaveAssociation
include NestedAttributes
include Transactions
@@ -308,8 +326,13 @@ class Base
include Serialization
include Store
include SecureToken
+ include TokenFor
include SignedId
include Suppressor
+ include Normalization
+ include Marshalling::Methods
+
+ self.param_delimiter = "_"
end
ActiveSupport.run_load_hooks(:active_record, Base)
diff --git a/activerecord/lib/active_record/callbacks.rb b/activerecord/lib/active_record/callbacks.rb
index ee61d063bb..29c72d1024 100644
--- a/activerecord/lib/active_record/callbacks.rb
+++ b/activerecord/lib/active_record/callbacks.rb
@@ -84,7 +84,7 @@ module ActiveRecord
# == Types of callbacks
#
# There are three types of callbacks accepted by the callback macros: method references (symbol), callback objects,
- # inline methods (using a proc). Method references and callback objects are the recommended approaches,
+ # inline methods (using a proc). \Method references and callback objects are the recommended approaches,
# inline methods using a proc are sometimes appropriate (such as for creating mix-ins).
#
# The method reference callbacks work by specifying a protected or private method available in the object, like this:
@@ -173,7 +173,7 @@ module ActiveRecord
#
# If a <tt>before_*</tt> callback throws +:abort+, all the later callbacks and
# the associated action are cancelled.
- # Callbacks are generally run in the order they are defined, with the exception of callbacks defined as
+ # \Callbacks are generally run in the order they are defined, with the exception of callbacks defined as
# methods on the model, which are called last.
#
# == Ordering callbacks
@@ -224,42 +224,26 @@ module ActiveRecord
# after_save :do_something_else
#
# private
+ # def log_children
+ # # Child processing
+ # end
#
- # def log_children
- # # Child processing
- # end
- #
- # def do_something_else
- # # Something else
- # end
+ # def do_something_else
+ # # Something else
+ # end
# end
#
# In this case the +log_children+ is executed before +do_something_else+.
- # The same applies to all non-transactional callbacks.
+ # This applies to all non-transactional callbacks, and to +before_commit+.
#
- # As seen below, in case there are multiple transactional callbacks the order
- # is reversed.
+ # For transactional +after_+ callbacks (+after_commit+, +after_rollback+, etc), the order
+ # can be set via configuration.
#
- # For example:
- #
- # class Topic < ActiveRecord::Base
- # has_many :children
- #
- # after_commit :log_children
- # after_commit :do_something_else
- #
- # private
- #
- # def log_children
- # # Child processing
- # end
- #
- # def do_something_else
- # # Something else
- # end
- # end
+ # config.active_record.run_after_transaction_callbacks_in_order_defined = false
#
- # In this case the +do_something_else+ is executed before +log_children+.
+ # When set to +true+ (the default from \Rails 7.1), callbacks are executed in the order they
+ # are defined, just like the example above. When set to +false+, the order is reversed, so
+ # +do_something_else+ is executed before +log_children+.
#
# == \Transactions
#
@@ -432,7 +416,7 @@ module ClassMethods
define_model_callbacks :save, :create, :update, :destroy
end
- def destroy #:nodoc:
+ def destroy # :nodoc:
@_destroy_callback_already_called ||= false
return if @_destroy_callback_already_called
@_destroy_callback_already_called = true
@@ -444,7 +428,7 @@ def destroy #:nodoc:
@_destroy_callback_already_called = false
end
- def touch(*, **) #:nodoc:
+ def touch(*, **) # :nodoc:
_run_touch_callbacks { super }
end
@@ -462,7 +446,7 @@ def _create_record
end
def _update_record
- _run_update_callbacks { super }
+ _run_update_callbacks { record_update_timestamps { super } }
end
end
end
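The callbacks hunk above replaces the old reversed-order `after_commit` example with the `config.active_record.run_after_transaction_callbacks_in_order_defined` setting. A minimal plain-Ruby sketch of the two orderings that setting toggles (no ActiveRecord involved; `FakeModel` and the `run_in_order_defined:` keyword are invented stand-ins for the real config):

```ruby
# Sketch of after_commit callback ordering. `run_in_order_defined: true`
# mimics the Rails 7.1 default; `false` mimics the legacy reversed order.
class FakeModel
  def self.after_commit_callbacks
    @after_commit_callbacks ||= []
  end

  def self.after_commit(name)
    after_commit_callbacks << name
  end

  after_commit :log_children
  after_commit :do_something_else

  # Returns the names of the callbacks in the order they would run.
  def commit(run_in_order_defined: true)
    order = self.class.after_commit_callbacks
    order = order.reverse unless run_in_order_defined
    order.map(&:to_s)
  end
end

p FakeModel.new.commit(run_in_order_defined: true)   # => ["log_children", "do_something_else"]
p FakeModel.new.commit(run_in_order_defined: false)  # => ["do_something_else", "log_children"]
```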
diff --git a/activerecord/lib/active_record/coders/column_serializer.rb b/activerecord/lib/active_record/coders/column_serializer.rb
new file mode 100644
index 0000000000..e8ee695013
--- /dev/null
+++ b/activerecord/lib/active_record/coders/column_serializer.rb
@@ -0,0 +1,61 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module Coders # :nodoc:
+ class ColumnSerializer # :nodoc:
+ attr_reader :object_class
+ attr_reader :coder
+
+ def initialize(attr_name, coder, object_class = Object)
+ @attr_name = attr_name
+ @object_class = object_class
+ @coder = coder
+ check_arity_of_constructor
+ end
+
+ def init_with(coder) # :nodoc:
+ @attr_name = coder["attr_name"]
+ @object_class = coder["object_class"]
+ @coder = coder["coder"]
+ end
+
+ def dump(object)
+ return if object.nil?
+
+ assert_valid_value(object, action: "dump")
+ coder.dump(object)
+ end
+
+ def load(payload)
+ if payload.nil?
+ if @object_class != ::Object
+ return @object_class.new
+ end
+ return nil
+ end
+
+ object = coder.load(payload)
+
+ assert_valid_value(object, action: "load")
+ object ||= object_class.new if object_class != Object
+
+ object
+ end
+
+ # Public because it's called by Type::Serialized
+ def assert_valid_value(object, action:)
+ unless object.nil? || object_class === object
+ raise SerializationTypeMismatch,
+ "can't #{action} `#{@attr_name}`: was supposed to be a #{object_class}, but was a #{object.class}. -- #{object.inspect}"
+ end
+ end
+
+ private
+ def check_arity_of_constructor
+ load(nil)
+ rescue ArgumentError
+ raise ArgumentError, "Cannot serialize #{object_class}. Classes passed to `serialize` must have a 0 argument constructor."
+ end
+ end
+ end
+end
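The new `ColumnSerializer` above factors the dump/load/type-check contract out of `YAMLColumn` so any coder responding to `dump`/`load` can be plugged in. A self-contained sketch of that contract using the stdlib JSON module as the coder (`SketchColumnSerializer` is a hypothetical stand-in, not the Rails class):

```ruby
require "json"

# Sketch of the ColumnSerializer contract: delegate dump/load to a coder,
# enforce object_class on both directions, and return a fresh instance of
# object_class when loading a NULL column (unless object_class is Object).
class SketchColumnSerializer
  def initialize(attr_name, coder, object_class = Object)
    @attr_name = attr_name
    @coder = coder
    @object_class = object_class
  end

  def dump(object)
    return if object.nil?
    assert_valid_value(object, action: "dump")
    @coder.dump(object)
  end

  def load(payload)
    return (@object_class == Object ? nil : @object_class.new) if payload.nil?
    object = @coder.load(payload)
    assert_valid_value(object, action: "load")
    object
  end

  private
    def assert_valid_value(object, action:)
      unless object.nil? || @object_class === object
        raise TypeError, "can't #{action} `#{@attr_name}`: expected #{@object_class}, got #{object.class}"
      end
    end
end

serializer = SketchColumnSerializer.new(:settings, JSON, Hash)
payload = serializer.dump({ "theme" => "dark" })
p serializer.load(payload)   # round-trips the hash
p serializer.load(nil)       # NULL column loads as an empty Hash
begin
  serializer.dump([1, 2])    # wrong type is rejected on dump
rescue TypeError => e
  puts e.message
end
```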
diff --git a/activerecord/lib/active_record/coders/json.rb b/activerecord/lib/active_record/coders/json.rb
index a69b38487e..9fa7d1f9d9 100644
--- a/activerecord/lib/active_record/coders/json.rb
+++ b/activerecord/lib/active_record/coders/json.rb
@@ -2,7 +2,7 @@
module ActiveRecord
module Coders # :nodoc:
- class JSON # :nodoc:
+ module JSON # :nodoc:
def self.dump(obj)
ActiveSupport::JSON.encode(obj)
end
diff --git a/activerecord/lib/active_record/coders/yaml_column.rb b/activerecord/lib/active_record/coders/yaml_column.rb
index b700b8c86f..a1fcc292c5 100644
--- a/activerecord/lib/active_record/coders/yaml_column.rb
+++ b/activerecord/lib/active_record/coders/yaml_column.rb
@@ -4,37 +4,83 @@
module ActiveRecord
module Coders # :nodoc:
- class YAMLColumn # :nodoc:
- attr_accessor :object_class
-
- def initialize(attr_name, object_class = Object)
- @attr_name = attr_name
- @object_class = object_class
- check_arity_of_constructor
- end
+ class YAMLColumn < ColumnSerializer # :nodoc:
+ class SafeCoder
+ def initialize(permitted_classes: [], unsafe_load: nil)
+ @permitted_classes = permitted_classes
+ @unsafe_load = unsafe_load
+ end
- def dump(obj)
- return if obj.nil?
+ if Gem::Version.new(Psych::VERSION) >= Gem::Version.new("5.1")
+ def dump(object)
+ if @unsafe_load.nil? ? ActiveRecord.use_yaml_unsafe_load : @unsafe_load
+ ::YAML.dump(object)
+ else
+ ::YAML.safe_dump(
+ object,
+ permitted_classes: @permitted_classes + ActiveRecord.yaml_column_permitted_classes,
+ aliases: true,
+ )
+ end
+ end
+ else
+ def dump(object)
+ YAML.dump(object)
+ end
+ end
- assert_valid_value(obj, action: "dump")
- YAML.dump obj
+ if YAML.respond_to?(:unsafe_load)
+ def load(payload)
+ if @unsafe_load.nil? ? ActiveRecord.use_yaml_unsafe_load : @unsafe_load
+ YAML.unsafe_load(payload)
+ else
+ YAML.safe_load(
+ payload,
+ permitted_classes: @permitted_classes + ActiveRecord.yaml_column_permitted_classes,
+ aliases: true,
+ )
+ end
+ end
+ else
+ def load(payload)
+ if @unsafe_load.nil? ? ActiveRecord.use_yaml_unsafe_load : @unsafe_load
+ YAML.load(payload)
+ else
+ YAML.safe_load(
+ payload,
+ permitted_classes: @permitted_classes + ActiveRecord.yaml_column_permitted_classes,
+ aliases: true,
+ )
+ end
+ end
+ end
end
- def load(yaml)
- return object_class.new if object_class != Object && yaml.nil?
- return yaml unless yaml.is_a?(String) && yaml.start_with?("---")
- obj = yaml_load(yaml)
-
- assert_valid_value(obj, action: "load")
- obj ||= object_class.new if object_class != Object
+ def initialize(attr_name, object_class = Object, permitted_classes: [], unsafe_load: nil)
+ super(
+ attr_name,
+ SafeCoder.new(permitted_classes: permitted_classes || [], unsafe_load: unsafe_load),
+ object_class,
+ )
+ check_arity_of_constructor
+ end
- obj
+ def init_with(coder) # :nodoc:
+ unless coder["coder"]
+ permitted_classes = coder["permitted_classes"] || []
+ unsafe_load = coder["unsafe_load"] || false
+ coder["coder"] = SafeCoder.new(permitted_classes: permitted_classes, unsafe_load: unsafe_load)
+ end
+ super(coder)
end
- def assert_valid_value(obj, action:)
- unless obj.nil? || obj.is_a?(object_class)
- raise SerializationTypeMismatch,
- "can't #{action} `#{@attr_name}`: was supposed to be a #{object_class}, but was a #{obj.class}. -- #{obj.inspect}"
+ def coder
+ # This is to retain forward compatibility when loading records serialized with Marshal
+ # from a previous version of Rails.
+ @coder ||= begin
+ permitted_classes = defined?(@permitted_classes) ? @permitted_classes : []
+ unsafe_load = defined?(@unsafe_load) && @unsafe_load.nil?
+ SafeCoder.new(permitted_classes: permitted_classes, unsafe_load: unsafe_load)
end
end
@@ -44,28 +90,6 @@ def check_arity_of_constructor
rescue ArgumentError
raise ArgumentError, "Cannot serialize #{object_class}. Classes passed to `serialize` must have a 0 argument constructor."
end
-
- if YAML.respond_to?(:unsafe_load)
- def yaml_load(payload)
- if ActiveRecord::Base.use_yaml_unsafe_load
- YAML.unsafe_load(payload)
- elsif YAML.method(:safe_load).parameters.include?([:key, :permitted_classes])
- YAML.safe_load(payload, permitted_classes: ActiveRecord::Base.yaml_column_permitted_classes, aliases: true)
- else
- YAML.safe_load(payload, ActiveRecord::Base.yaml_column_permitted_classes, [], true)
- end
- end
- else
- def yaml_load(payload)
- if ActiveRecord::Base.use_yaml_unsafe_load
- YAML.load(payload)
- elsif YAML.method(:safe_load).parameters.include?([:key, :permitted_classes])
- YAML.safe_load(payload, permitted_classes: ActiveRecord::Base.yaml_column_permitted_classes, aliases: true)
- else
- YAML.safe_load(payload, ActiveRecord::Base.yaml_column_permitted_classes, [], true)
- end
- end
- end
end
end
end
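The `SafeCoder` introduced above chooses between `YAML.unsafe_load` and `YAML.safe_load` with a permitted-class list. That distinction can be seen with plain Psych, outside ActiveRecord (the `use_yaml_unsafe_load` and `yaml_column_permitted_classes` settings themselves are not involved in this sketch):

```ruby
require "yaml"
require "date"

payload = YAML.dump(Date.new(2024, 1, 16))

# Without permitting Date, safe_load refuses the payload.
begin
  YAML.safe_load(payload)
rescue Psych::DisallowedClass => e
  puts e.message
end

# Permitting the class, as yaml_column_permitted_classes does for
# serialized columns, allows the load to succeed.
p YAML.safe_load(payload, permitted_classes: [Date], aliases: true) # a Date instance
```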
diff --git a/activerecord/lib/active_record/connection_adapters.rb b/activerecord/lib/active_record/connection_adapters.rb
index da07d5d295..0c1ce39982 100644
--- a/activerecord/lib/active_record/connection_adapters.rb
+++ b/activerecord/lib/active_record/connection_adapters.rb
@@ -11,14 +11,16 @@ module ConnectionAdapters
autoload :Column
autoload :PoolConfig
autoload :PoolManager
- autoload :LegacyPoolManager
autoload :SchemaCache
+ autoload :BoundSchemaReflection, "active_record/connection_adapters/schema_cache"
+ autoload :SchemaReflection, "active_record/connection_adapters/schema_cache"
autoload :Deduplicable
autoload_at "active_record/connection_adapters/abstract/schema_definitions" do
autoload :IndexDefinition
autoload :ColumnDefinition
autoload :ChangeColumnDefinition
+ autoload :ChangeColumnDefaultDefinition
autoload :ForeignKeyDefinition
autoload :CheckConstraintDefinition
autoload :TableDefinition
@@ -27,20 +29,21 @@ module ConnectionAdapters
autoload :ReferenceDefinition
end
- autoload_at "active_record/connection_adapters/abstract/connection_pool" do
- autoload :ConnectionHandler
- end
-
autoload_under "abstract" do
autoload :SchemaStatements
autoload :DatabaseStatements
autoload :DatabaseLimits
autoload :Quoting
- autoload :ConnectionPool
+ autoload :ConnectionHandler
autoload :QueryCache
autoload :Savepoints
end
+ autoload_at "active_record/connection_adapters/abstract/connection_pool" do
+ autoload :ConnectionPool
+ autoload :NullPool
+ end
+
autoload_at "active_record/connection_adapters/abstract/transaction" do
autoload :TransactionManager
autoload :NullTransaction
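The `autoload_at`/`autoload_under` calls being rearranged above are ActiveSupport conveniences over Ruby's built-in `Module#autoload`, which defers the `require` until a constant is first referenced. A plain-Ruby sketch of one such mapping (`SketchAdapters` is invented; the file is written to a temp dir only to keep the example self-contained):

```ruby
require "tmpdir"

dir = Dir.mktmpdir
path = File.join(dir, "connection_pool.rb")
File.write(path, <<~RUBY)
  module SketchAdapters
    class ConnectionPool
    end
  end
RUBY

module SketchAdapters
end

# Register the constant -> file mapping; nothing is required yet.
SketchAdapters.autoload :ConnectionPool, path
puts SketchAdapters.autoload?(:ConnectionPool) # pending: prints the registered path

# The first reference triggers the require.
SketchAdapters::ConnectionPool
puts SketchAdapters.autoload?(:ConnectionPool).inspect # => nil (already loaded)
```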
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/connection_handler.rb b/activerecord/lib/active_record/connection_adapters/abstract/connection_handler.rb
new file mode 100644
index 0000000000..192fed6549
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/abstract/connection_handler.rb
@@ -0,0 +1,367 @@
+# frozen_string_literal: true
+
+require "thread"
+require "concurrent/map"
+
+module ActiveRecord
+ module ConnectionAdapters
+ # = Active Record Connection Handler
+ #
+ # ConnectionHandler is a collection of ConnectionPool objects. It is used
+ # for keeping separate connection pools that connect to different databases.
+ #
+ # For example, suppose that you have 5 models, with the following hierarchy:
+ #
+ # class Author < ActiveRecord::Base
+ # end
+ #
+ # class BankAccount < ActiveRecord::Base
+ # end
+ #
+ # class Book < ActiveRecord::Base
+ # establish_connection :library_db
+ # end
+ #
+ # class ScaryBook < Book
+ # end
+ #
+ # class GoodBook < Book
+ # end
+ #
+ # And a database.yml that looked like this:
+ #
+ # development:
+ # database: my_application
+ # host: localhost
+ #
+ # library_db:
+ # database: library
+ # host: some.library.org
+ #
+ # Your primary database in the development environment is "my_application"
+ # but the Book model connects to a separate database called "library_db"
+ # (this can even be a database on a different machine).
+ #
+ # Book, ScaryBook, and GoodBook will all use the same connection pool to
+ # "library_db" while Author, BankAccount, and any other models you create
+ # will use the default connection pool to "my_application".
+ #
+ # The various connection pools are managed by a single instance of
+ # ConnectionHandler accessible via ActiveRecord::Base.connection_handler.
+ # All Active Record models use this handler to determine the connection pool that they
+ # should use.
+ #
+ # The ConnectionHandler class is not coupled with the Active models, as it has no knowledge
+ # about the model. The model needs to pass a connection specification name to the handler,
+ # in order to look up the correct connection pool.
+ class ConnectionHandler
+ FINALIZER = lambda { |_| ActiveSupport::ForkTracker.check! }
+ private_constant :FINALIZER
+
+ class StringConnectionName # :nodoc:
+ attr_reader :name
+
+ def initialize(name)
+ @name = name
+ end
+
+ def primary_class?
+ false
+ end
+
+ def current_preventing_writes
+ false
+ end
+ end
+
+ def initialize
+ # These caches are keyed by pool_config.connection_name (PoolConfig#connection_name).
+ @connection_name_to_pool_manager = Concurrent::Map.new(initial_capacity: 2)
+
+ # Backup finalizer: if the forked child skipped Kernel#fork the early discard has not occurred
+ ObjectSpace.define_finalizer self, FINALIZER
+ end
+
+ def prevent_writes # :nodoc:
+ ActiveSupport::IsolatedExecutionState[:active_record_prevent_writes]
+ end
+
+ def prevent_writes=(prevent_writes) # :nodoc:
+ ActiveSupport::IsolatedExecutionState[:active_record_prevent_writes] = prevent_writes
+ end
+
+ def connection_pool_names # :nodoc:
+ connection_name_to_pool_manager.keys
+ end
+
+ def all_connection_pools
+ ActiveRecord.deprecator.warn(<<-MSG.squish)
+ The `all_connection_pools` method is deprecated in favor of `connection_pool_list`.
+ Call `connection_pool_list(:all)` to get the same behavior as `all_connection_pools`.
+ MSG
+ connection_name_to_pool_manager.values.flat_map { |m| m.pool_configs.map(&:pool) }
+ end
+
+ # Returns the pools for a connection handler and given role. If +:all+ is passed,
+ # all pools belonging to the connection handler will be returned.
+ def connection_pool_list(role = nil)
+ if role.nil?
+ deprecation_for_pool_handling(__method__)
+ role = ActiveRecord::Base.current_role
+ connection_name_to_pool_manager.values.flat_map { |m| m.pool_configs(role).map(&:pool) }
+ elsif role == :all
+ connection_name_to_pool_manager.values.flat_map { |m| m.pool_configs.map(&:pool) }
+ else
+ connection_name_to_pool_manager.values.flat_map { |m| m.pool_configs(role).map(&:pool) }
+ end
+ end
+ alias :connection_pools :connection_pool_list
+
+ def each_connection_pool(role = nil, &block) # :nodoc:
+ role = nil if role == :all
+ return enum_for(__method__, role) unless block_given?
+
+ connection_name_to_pool_manager.each_value do |manager|
+ manager.each_pool_config(role) do |pool_config|
+ yield pool_config.pool
+ end
+ end
+ end
+
+ def establish_connection(config, owner_name: Base, role: Base.current_role, shard: Base.current_shard, clobber: false)
+ owner_name = determine_owner_name(owner_name, config)
+
+ pool_config = resolve_pool_config(config, owner_name, role, shard)
+ db_config = pool_config.db_config
+
+ pool_manager = set_pool_manager(pool_config.connection_name)
+
+ # If there is an existing pool with the same values as the pool_config
+ # don't remove the connection. Connections should only be removed if we are
+ # establishing a connection on a class that is already connected to a different
+ # configuration.
+ existing_pool_config = pool_manager.get_pool_config(role, shard)
+
+ if !clobber && existing_pool_config && existing_pool_config.db_config == db_config
+ # Update the pool_config's connection class if it differs. This is used
+ # for ensuring that ActiveRecord::Base and the primary_abstract_class use
+ # the same pool. Without this granular swapping will not work correctly.
+ if owner_name.primary_class? && (existing_pool_config.connection_class != owner_name)
+ existing_pool_config.connection_class = owner_name
+ end
+
+ existing_pool_config.pool
+ else
+ disconnect_pool_from_pool_manager(pool_manager, role, shard)
+ pool_manager.set_pool_config(role, shard, pool_config)
+
+ payload = {
+ connection_name: pool_config.connection_name,
+ role: role,
+ shard: shard,
+ config: db_config.configuration_hash
+ }
+
+ ActiveSupport::Notifications.instrumenter.instrument("!connection.active_record", payload) do
+ pool_config.pool
+ end
+ end
+ end
+
+ # Returns true if there are any active connections among the connection
+ # pools that the ConnectionHandler is managing.
+ def active_connections?(role = nil)
+ if role.nil?
+ deprecation_for_pool_handling(__method__)
+ role = ActiveRecord::Base.current_role
+ end
+
+ each_connection_pool(role).any?(&:active_connection?)
+ end
+
+ # Returns any connections in use by the current thread back to the pool,
+ # and also returns connections to the pool cached by threads that are no
+ # longer alive.
+ def clear_active_connections!(role = nil)
+ if role.nil?
+ deprecation_for_pool_handling(__method__)
+ role = ActiveRecord::Base.current_role
+ end
+
+ each_connection_pool(role).each(&:release_connection)
+ end
+
+ # Clears the cache which maps classes.
+ #
+ # See ConnectionPool#clear_reloadable_connections! for details.
+ def clear_reloadable_connections!(role = nil)
+ if role.nil?
+ deprecation_for_pool_handling(__method__)
+ role = ActiveRecord::Base.current_role
+ end
+
+ each_connection_pool(role).each(&:clear_reloadable_connections!)
+ end
+
+ def clear_all_connections!(role = nil)
+ if role.nil?
+ deprecation_for_pool_handling(__method__)
+ role = ActiveRecord::Base.current_role
+ end
+
+ each_connection_pool(role).each(&:disconnect!)
+ end
+
+ # Disconnects all currently idle connections.
+ #
+ # See ConnectionPool#flush! for details.
+ def flush_idle_connections!(role = nil)
+ if role.nil?
+ deprecation_for_pool_handling(__method__)
+ role = ActiveRecord::Base.current_role
+ end
+
+ each_connection_pool(role).each(&:flush!)
+ end
+
+ # Locate the connection of the nearest super class. This can be an
+ # active or defined connection: if it is the latter, it will be
+ # opened and set as the active connection for the class it was defined
+ # for (not necessarily the current class).
+ def retrieve_connection(connection_name, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard) # :nodoc:
+ pool = retrieve_connection_pool(connection_name, role: role, shard: shard)
+
+ unless pool
+ if shard != ActiveRecord::Base.default_shard
+ message = "No connection pool for '#{connection_name}' found for the '#{shard}' shard."
+ elsif role != ActiveRecord::Base.default_role
+ message = "No connection pool for '#{connection_name}' found for the '#{role}' role."
+ else
+ message = "No connection pool for '#{connection_name}' found."
+ end
+
+ raise ConnectionNotEstablished, message
+ end
+
+ pool.connection
+ end
+
+ # Returns true if a connection that's accessible to this class has
+ # already been opened.
+ def connected?(connection_name, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
+ pool = retrieve_connection_pool(connection_name, role: role, shard: shard)
+ pool && pool.connected?
+ end
+
+ def remove_connection_pool(connection_name, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
+ if pool_manager = get_pool_manager(connection_name)
+ disconnect_pool_from_pool_manager(pool_manager, role, shard)
+ end
+ end
+
+ # Retrieving the connection pool happens a lot, so we cache it in @connection_name_to_pool_manager.
+ # This makes retrieving the connection pool O(1) once the process is warm.
+ # When a connection is established or removed, we invalidate the cache.
+ def retrieve_connection_pool(connection_name, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
+ pool_config = get_pool_manager(connection_name)&.get_pool_config(role, shard)
+ pool_config&.pool
+ end
+
+ private
+ attr_reader :connection_name_to_pool_manager
+
+ # Returns the pool manager for a connection name / identifier.
+ def get_pool_manager(connection_name)
+ connection_name_to_pool_manager[connection_name]
+ end
+
+ # Get the existing pool manager or initialize and assign a new one.
+ def set_pool_manager(connection_name)
+ connection_name_to_pool_manager[connection_name] ||= PoolManager.new
+ end
+
+ def pool_managers
+ connection_name_to_pool_manager.values
+ end
+
+ def deprecation_for_pool_handling(method)
+ roles = []
+ pool_managers.each do |pool_manager|
+ roles << pool_manager.role_names
+ end
+
+ if roles.flatten.uniq.count > 1
+ ActiveRecord.deprecator.warn(<<-MSG.squish)
+ `#{method}` currently only applies to connection pools in the current
+ role (`#{ActiveRecord::Base.current_role}`). In Rails 7.2, this method
+ will apply to all known pools, regardless of role. To affect only those
+ connections belonging to a specific role, pass the role name as an
+ argument. To switch to the new behavior, pass `:all` as the role name.
+ MSG
+ end
+ end
+
+ def disconnect_pool_from_pool_manager(pool_manager, role, shard)
+ pool_config = pool_manager.remove_pool_config(role, shard)
+
+ if pool_config
+ pool_config.disconnect!
+ pool_config.db_config
+ end
+ end
+
+ # Returns an instance of PoolConfig for a given adapter.
+ # Accepts a hash one layer deep that contains all connection information.
+ #
+ # == Example
+ #
+ # config = { "production" => { "host" => "localhost", "database" => "foo", "adapter" => "sqlite3" } }
+ # pool_config = Base.configurations.resolve_pool_config(:production)
+ # pool_config.db_config.configuration_hash
+ # # => { host: "localhost", database: "foo", adapter: "sqlite3" }
+ #
+ def resolve_pool_config(config, connection_name, role, shard)
+ db_config = Base.configurations.resolve(config)
+
+ raise(AdapterNotSpecified, "database configuration does not specify adapter") unless db_config.adapter
+
+ # Require the adapter itself and give useful feedback about
+ # 1. Missing adapter gems and
+ # 2. Adapter gems' missing dependencies.
+ path_to_adapter = "active_record/connection_adapters/#{db_config.adapter}_adapter"
+ begin
+ require path_to_adapter
+ rescue LoadError => e
+ # We couldn't require the adapter itself. Raise an exception that
+ # points out config typos and missing gems.
+ if e.path == path_to_adapter
+ # We can assume that a non-builtin adapter was specified, so it's
+ # either misspelled or missing from Gemfile.
+ raise LoadError, "Could not load the '#{db_config.adapter}' Active Record adapter. Ensure that the adapter is spelled correctly in config/database.yml and that you've added the necessary adapter gem to your Gemfile.", e.backtrace
+
+ # Bubbled up from the adapter require. Prefix the exception message
+ # with some guidance about how to address it and reraise.
+ else
+ raise LoadError, "Error loading the '#{db_config.adapter}' Active Record adapter. Missing a gem it depends on? #{e.message}", e.backtrace
+ end
+ end
+
+ unless ActiveRecord::Base.respond_to?(db_config.adapter_method)
+ raise AdapterNotFound, "database configuration specifies nonexistent #{db_config.adapter} adapter"
+ end
+
+ ConnectionAdapters::PoolConfig.new(connection_name, db_config, role, shard)
+ end
+
+ def determine_owner_name(owner_name, config)
+ if owner_name.is_a?(String) || owner_name.is_a?(Symbol)
+ StringConnectionName.new(owner_name.to_s)
+ elsif config.is_a?(Symbol)
+ StringConnectionName.new(config.to_s)
+ else
+ owner_name
+ end
+ end
+ end
+ end
+end
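The new `ConnectionHandler` above resolves a pool by `connection_name` first, then by `(role, shard)` inside that name's `PoolManager`. A toy sketch of that two-level lookup using plain Hashes (all `Sketch*` names are invented; the real classes additionally cache in a `Concurrent::Map` and manage live connections):

```ruby
# Lookup path: connection_name -> pool manager -> (role, shard) -> pool config.
class SketchPoolManager
  def initialize
    @pool_configs = {}
  end

  def set_pool_config(role, shard, config)
    @pool_configs[[role, shard]] = config
  end

  def get_pool_config(role, shard)
    @pool_configs[[role, shard]]
  end
end

class SketchConnectionHandler
  def initialize
    @managers = {}
  end

  def establish(connection_name, role:, shard:, config:)
    manager = (@managers[connection_name] ||= SketchPoolManager.new)
    manager.set_pool_config(role, shard, config)
  end

  def retrieve(connection_name, role:, shard:)
    @managers[connection_name]&.get_pool_config(role, shard) or
      raise "No connection pool for '#{connection_name}' found."
  end
end

handler = SketchConnectionHandler.new
handler.establish("ActiveRecord::Base", role: :writing, shard: :default, config: { database: "my_application" })
handler.establish("Book", role: :reading, shard: :default, config: { database: "library" })

p handler.retrieve("Book", role: :reading, shard: :default) # the config registered for Book's reading role
```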
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb b/activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb
index 7e0981e09c..13c745f7fd 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb
@@ -3,32 +3,40 @@
require "thread"
require "concurrent/map"
require "monitor"
-require "weakref"
+
+require "active_record/connection_adapters/abstract/connection_pool/queue"
+require "active_record/connection_adapters/abstract/connection_pool/reaper"
module ActiveRecord
module ConnectionAdapters
module AbstractPool # :nodoc:
- def get_schema_cache(connection)
- self.schema_cache ||= SchemaCache.new(connection)
- schema_cache.connection = connection
- schema_cache
- end
-
- def set_schema_cache(cache)
- self.schema_cache = cache
- end
end
class NullPool # :nodoc:
include ConnectionAdapters::AbstractPool
- attr_accessor :schema_cache
+ class NullConfig # :nodoc:
+ def method_missing(*)
+ nil
+ end
+ end
+ NULL_CONFIG = NullConfig.new # :nodoc:
+
+ def schema_reflection
+ SchemaReflection.new(nil)
+ end
- def connection_klass
- nil
+ def connection_class; end
+ def checkin(_); end
+ def remove(_); end
+ def async_executor; end
+ def db_config
+ NULL_CONFIG
end
end
+ # = Active Record Connection Pool
+ #
# Connection pool base class for managing Active Record database
# connections.
#
@@ -43,19 +51,17 @@ def connection_klass
# handle cases in which there are more threads than connections: if all
# connections have been checked out, and a thread tries to checkout a
# connection anyway, then ConnectionPool will wait until some other thread
- # has checked in a connection.
+ # has checked in a connection, or the +checkout_timeout+ has expired.
#
# == Obtaining (checking out) a connection
#
# Connections can be obtained and used from a connection pool in several
# ways:
#
- # 1. Simply use {ActiveRecord::Base.connection}[rdoc-ref:ConnectionHandling.connection]
- # as with Active Record 2.1 and
- # earlier (pre-connection-pooling). Eventually, when you're done with
- # the connection(s) and wish it to be returned to the pool, you call
- # {ActiveRecord::Base.clear_active_connections!}[rdoc-ref:ConnectionAdapters::ConnectionHandler#clear_active_connections!].
- # This will be the default behavior for Active Record when used in conjunction with
+ # 1. Simply use {ActiveRecord::Base.connection}[rdoc-ref:ConnectionHandling.connection].
+ # When you're done with the connection(s) and wish it to be returned to the pool, you call
+ # {ActiveRecord::Base.connection_handler.clear_active_connections!}[rdoc-ref:ConnectionAdapters::ConnectionHandler#clear_active_connections!].
+ # This is the default behavior for Active Record when used in conjunction with
# Action Pack's request handling cycle.
# 2. Manually check out a connection from the pool with
# {ActiveRecord::Base.connection_pool.checkout}[rdoc-ref:#checkout]. You are responsible for
@@ -68,6 +74,12 @@ def connection_klass
# Connections in the pool are actually AbstractAdapter objects (or objects
# compatible with AbstractAdapter's interface).
#
+ # While a thread has a connection checked out from the pool using one of the
+ # above three methods, that connection will automatically be the one used
+ # by ActiveRecord queries executing on that thread. It is not required to
+ # explicitly pass the checked out connection to \Rails models or queries, for
+ # example.
+ #
# == Options
#
# There are several connection-pooling-related options that you can add to
@@ -90,279 +102,14 @@ def connection_klass
# * private methods that require being called in a +synchronize+ blocks
# are now explicitly documented
class ConnectionPool
- # Threadsafe, fair, LIFO queue. Meant to be used by ConnectionPool
- # with which it shares a Monitor.
- class Queue
- def initialize(lock = Monitor.new)
- @lock = lock
- @cond = @lock.new_cond
- @num_waiting = 0
- @queue = []
- end
-
- # Test if any threads are currently waiting on the queue.
- def any_waiting?
- synchronize do
- @num_waiting > 0
- end
- end
-
- # Returns the number of threads currently waiting on this
- # queue.
- def num_waiting
- synchronize do
- @num_waiting
- end
- end
-
- # Add +element+ to the queue. Never blocks.
- def add(element)
- synchronize do
- @queue.push element
- @cond.signal
- end
- end
-
- # If +element+ is in the queue, remove and return it, or +nil+.
- def delete(element)
- synchronize do
- @queue.delete(element)
- end
- end
-
- # Remove all elements from the queue.
- def clear
- synchronize do
- @queue.clear
- end
- end
-
- # Remove the head of the queue.
- #
- # If +timeout+ is not given, remove and return the head of the
- # queue if the number of available elements is strictly
- # greater than the number of threads currently waiting (that
- # is, don't jump ahead in line). Otherwise, return +nil+.
- #
- # If +timeout+ is given, block if there is no element
- # available, waiting up to +timeout+ seconds for an element to
- # become available.
- #
- # Raises:
- # - ActiveRecord::ConnectionTimeoutError if +timeout+ is given and no element
- # becomes available within +timeout+ seconds,
- def poll(timeout = nil)
- synchronize { internal_poll(timeout) }
- end
-
- private
- def internal_poll(timeout)
- no_wait_poll || (timeout && wait_poll(timeout))
- end
-
- def synchronize(&block)
- @lock.synchronize(&block)
- end
-
- # Test if the queue currently contains any elements.
- def any?
- !@queue.empty?
- end
-
- # A thread can remove an element from the queue without
- # waiting if and only if the number of currently available
- # connections is strictly greater than the number of waiting
- # threads.
- def can_remove_no_wait?
- @queue.size > @num_waiting
- end
-
- # Removes and returns the head of the queue if possible, or +nil+.
- def remove
- @queue.pop
- end
-
- # Remove and return the head of the queue if the number of
- # available elements is strictly greater than the number of
- # threads currently waiting. Otherwise, return +nil+.
- def no_wait_poll
- remove if can_remove_no_wait?
- end
-
- # Waits on the queue up to +timeout+ seconds, then removes and
- # returns the head of the queue.
- def wait_poll(timeout)
- @num_waiting += 1
-
- t0 = Concurrent.monotonic_time
- elapsed = 0
- loop do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @cond.wait(timeout - elapsed)
- end
-
- return remove if any?
-
- elapsed = Concurrent.monotonic_time - t0
- if elapsed >= timeout
- msg = "could not obtain a connection from the pool within %0.3f seconds (waited %0.3f seconds); all pooled connections were in use" %
- [timeout, elapsed]
- raise ConnectionTimeoutError, msg
- end
- end
- ensure
- @num_waiting -= 1
- end
- end
-
- # Adds the ability to turn a basic fair FIFO queue into one
- # biased to some thread.
- module BiasableQueue # :nodoc:
- class BiasedConditionVariable # :nodoc:
- # semantics of condition variables guarantee that +broadcast+, +broadcast_on_biased+,
- # +signal+ and +wait+ methods are only called while holding a lock
- def initialize(lock, other_cond, preferred_thread)
- @real_cond = lock.new_cond
- @other_cond = other_cond
- @preferred_thread = preferred_thread
- @num_waiting_on_real_cond = 0
- end
-
- def broadcast
- broadcast_on_biased
- @other_cond.broadcast
- end
-
- def broadcast_on_biased
- @num_waiting_on_real_cond = 0
- @real_cond.broadcast
- end
-
- def signal
- if @num_waiting_on_real_cond > 0
- @num_waiting_on_real_cond -= 1
- @real_cond
- else
- @other_cond
- end.signal
- end
-
- def wait(timeout)
- if Thread.current == @preferred_thread
- @num_waiting_on_real_cond += 1
- @real_cond
- else
- @other_cond
- end.wait(timeout)
- end
- end
-
- def with_a_bias_for(thread)
- previous_cond = nil
- new_cond = nil
- synchronize do
- previous_cond = @cond
- @cond = new_cond = BiasedConditionVariable.new(@lock, @cond, thread)
- end
- yield
- ensure
- synchronize do
- @cond = previous_cond if previous_cond
- new_cond.broadcast_on_biased if new_cond # wake up any remaining sleepers
- end
- end
- end
-
- # Connections must be leased while holding the main pool mutex. This is
- # an internal subclass that also +.leases+ returned connections while
- # still in queue's critical section (queue synchronizes with the same
- # <tt>@lock</tt> as the main pool) so that a returned connection is already
- # leased and there is no need to re-enter synchronized block.
- class ConnectionLeasingQueue < Queue # :nodoc:
- include BiasableQueue
-
- private
- def internal_poll(timeout)
- conn = super
- conn.lease if conn
- conn
- end
- end
-
- # Every +frequency+ seconds, the reaper will call +reap+ and +flush+ on
- # +pool+. A reaper instantiated with a zero frequency will never reap
- # the connection pool.
- #
- # Configure the frequency by setting +reaping_frequency+ in your database
- # yaml file (default 60 seconds).
- class Reaper
- attr_reader :pool, :frequency
-
- def initialize(pool, frequency)
- @pool = pool
- @frequency = frequency
- end
-
- @mutex = Mutex.new
- @pools = {}
- @threads = {}
-
- class << self
- def register_pool(pool, frequency) # :nodoc:
- @mutex.synchronize do
- unless @threads[frequency]&.alive?
- @threads[frequency] = spawn_thread(frequency)
- end
- @pools[frequency] ||= []
- @pools[frequency] << WeakRef.new(pool)
- end
- end
-
- private
- def spawn_thread(frequency)
- Thread.new(frequency) do |t|
- # Advise multi-threaded app servers to ignore this thread for
- # the purposes of fork safety warnings
- Thread.current.thread_variable_set(:fork_safe, true)
- running = true
- while running
- sleep t
- @mutex.synchronize do
- @pools[frequency].select! do |pool|
- pool.weakref_alive? && !pool.discarded?
- end
-
- @pools[frequency].each do |p|
- p.reap
- p.flush
- rescue WeakRef::RefError
- end
-
- if @pools[frequency].empty?
- @pools.delete(frequency)
- @threads.delete(frequency)
- running = false
- end
- end
- end
- end
- end
- end
-
- def run
- return unless frequency && frequency > 0
- self.class.register_pool(pool, frequency)
- end
- end
-
include MonitorMixin
include QueryCache::ConnectionPoolConfiguration
include ConnectionAdapters::AbstractPool
attr_accessor :automatic_reconnect, :checkout_timeout
- attr_reader :db_config, :size, :reaper, :pool_config, :connection_klass
+ attr_reader :db_config, :size, :reaper, :pool_config, :async_executor, :role, :shard
- delegate :schema_cache, :schema_cache=, to: :pool_config
+ delegate :schema_reflection, :schema_reflection=, to: :pool_config
# Creates a new ConnectionPool object. +pool_config+ is a PoolConfig
# object which describes database connection information (e.g. adapter,
@@ -375,7 +122,8 @@ def initialize(pool_config)
@pool_config = pool_config
@db_config = pool_config.db_config
- @connection_klass = pool_config.connection_klass
+ @role = pool_config.role
+ @shard = pool_config.shard
@checkout_timeout = db_config.checkout_timeout
@idle_timeout = db_config.idle_timeout
@@ -407,16 +155,22 @@ def initialize(pool_config)
@lock_thread = false
+ @async_executor = build_async_executor
+
@reaper = Reaper.new(self, db_config.reaping_frequency)
@reaper.run
end
def lock_thread=(lock_thread)
if lock_thread
- @lock_thread = Thread.current
+ @lock_thread = ActiveSupport::IsolatedExecutionState.context
else
@lock_thread = nil
end
+
+ if (active_connection = @thread_cached_conns[connection_cache_key(current_thread)])
+ active_connection.lock_thread = @lock_thread
+ end
end
# Retrieve the connection associated with the current thread, or call
@@ -428,6 +182,12 @@ def connection
@thread_cached_conns[connection_cache_key(current_thread)] ||= checkout
end
+ def connection_class # :nodoc:
+ pool_config.connection_class
+ end
+ alias :connection_klass :connection_class
+ deprecate :connection_klass, deprecator: ActiveRecord.deprecator
+
# Returns true if there is an open connection being used for the current thread.
#
# This method only works for connections that have been obtained through
@@ -444,18 +204,23 @@ def active_connection?
# This method only works for connections that have been obtained through
# #connection or #with_connection methods, connections obtained through
# #checkout will not be automatically released.
- def release_connection(owner_thread = Thread.current)
+ def release_connection(owner_thread = ActiveSupport::IsolatedExecutionState.context)
if conn = @thread_cached_conns.delete(connection_cache_key(owner_thread))
checkin conn
end
end
- # If a connection obtained through #connection or #with_connection methods
- # already exists yield it to the block. If no such connection
- # exists checkout a connection, yield it to the block, and checkin the
- # connection when finished.
+ # Yields a connection from the connection pool to the block. If no connection
+ # is already checked out by the current thread, a connection will be checked
+ # out from the pool, yielded to the block, and then returned to the pool when
+ # the block is finished. If a connection has already been checked out on the
+ # current thread, such as via #connection or #with_connection, that existing
+ # connection will be the one yielded and it will not be returned to the pool
+ # automatically at the end of the block; it is expected that such an existing
+ # connection will be properly returned to the pool by the code that checked
+ # it out.
def with_connection
- unless conn = @thread_cached_conns[connection_cache_key(Thread.current)]
+ unless conn = @thread_cached_conns[connection_cache_key(ActiveSupport::IsolatedExecutionState.context)]
conn = connection
fresh_connection = true
end
@@ -585,7 +350,9 @@ def clear_reloadable_connections!
# Raises:
# - ActiveRecord::ConnectionTimeoutError no connection can be obtained from the pool.
def checkout(checkout_timeout = @checkout_timeout)
- checkout_and_verify(acquire_connection(checkout_timeout))
+ connection = checkout_and_verify(acquire_connection(checkout_timeout))
+ connection.lock_thread = @lock_thread
+ connection
end
# Check-in a database connection back into the pool, indicating that you
@@ -602,6 +369,7 @@ def checkin(conn)
conn.expire
end
+ conn.lock_thread = nil
@available.add conn
end
end
@@ -695,8 +463,7 @@ def num_waiting_in_queue # :nodoc:
@available.num_waiting
end
- # Return connection pool's usage statistic
- # Example:
+ # Returns the connection pool's usage statistic.
#
# ActiveRecord::Base.connection_pool.stat # => { size: 15, connections: 1, busy: 1, dead: 0, idle: 0, waiting: 0, checkout_timeout: 5 }
def stat
@@ -713,7 +480,28 @@ def stat
end
end
+ def schedule_query(future_result) # :nodoc:
+ @async_executor.post { future_result.execute_or_skip }
+ Thread.pass
+ end
+
private
+ def build_async_executor
+ case ActiveRecord.async_query_executor
+ when :multi_thread_pool
+ if @db_config.max_threads > 0
+ Concurrent::ThreadPoolExecutor.new(
+ min_threads: @db_config.min_threads,
+ max_threads: @db_config.max_threads,
+ max_queue: @db_config.max_queue,
+ fallback_policy: :caller_runs
+ )
+ end
+ when :global_thread_pool
+ ActiveRecord.global_thread_pool_async_query_executor
+ end
+ end
+
#--
# this is unfortunately not concurrent
def bulk_make_new_connections(num_new_conns_needed)
@@ -736,7 +524,7 @@ def connection_cache_key(thread)
end
def current_thread
- @lock_thread || Thread.current
+ @lock_thread || ActiveSupport::IsolatedExecutionState.context
end
# Take control of all existing connections so a "group" action such as
@@ -753,17 +541,17 @@ def with_exclusively_acquired_all_connections(raise_on_acquisition_timeout = tru
def attempt_to_checkout_all_existing_connections(raise_on_acquisition_timeout = true)
collected_conns = synchronize do
# account for our own connections
- @connections.select { |conn| conn.owner == Thread.current }
+ @connections.select { |conn| conn.owner == ActiveSupport::IsolatedExecutionState.context }
end
newly_checked_out = []
- timeout_time = Concurrent.monotonic_time + (@checkout_timeout * 2)
+ timeout_time = Process.clock_gettime(Process::CLOCK_MONOTONIC) + (@checkout_timeout * 2)
- @available.with_a_bias_for(Thread.current) do
+ @available.with_a_bias_for(ActiveSupport::IsolatedExecutionState.context) do
loop do
synchronize do
return if collected_conns.size == @connections.size && @now_connecting == 0
- remaining_timeout = timeout_time - Concurrent.monotonic_time
+ remaining_timeout = timeout_time - Process.clock_gettime(Process::CLOCK_MONOTONIC)
remaining_timeout = 0 if remaining_timeout < 0
conn = checkout_for_exclusive_access(remaining_timeout)
collected_conns << conn
@@ -806,7 +594,7 @@ def checkout_for_exclusive_access(checkout_timeout)
thread_report = []
@connections.each do |conn|
- unless conn.owner == Thread.current
+ unless conn.owner == ActiveSupport::IsolatedExecutionState.context
thread_report << "#{conn} is owned by #{conn.owner}"
end
end
@@ -867,7 +655,13 @@ def acquire_connection(checkout_timeout)
conn
else
reap
- @available.poll(checkout_timeout)
+ # Retry after reaping, which may return an available connection,
+ # remove an inactive connection, or both
+ if conn = @available.poll || try_to_checkout_new_connection
+ conn
+ else
+ @available.poll(checkout_timeout)
+ end
end
end
@@ -879,9 +673,12 @@ def remove_connection_from_thread_cache(conn, owner_thread = conn.owner)
alias_method :release, :remove_connection_from_thread_cache
def new_connection
- Base.public_send(db_config.adapter_method, db_config.configuration_hash).tap do |conn|
- conn.check_version
- end
+ connection = Base.public_send(db_config.adapter_method, db_config.configuration_hash)
+ connection.pool = self
+ connection.check_version
+ connection
+ rescue ConnectionNotEstablished => ex
+ raise ex.set_pool(self)
end
# If the pool is not at a <tt>@size</tt> limit, establish new connection. Connecting
@@ -928,302 +725,14 @@ def checkout_new_connection
def checkout_and_verify(c)
c._run_checkout_callbacks do
- c.verify!
+ c.clean!
end
c
- rescue
+ rescue Exception
remove c
c.disconnect!
raise
end
end
-
- # ConnectionHandler is a collection of ConnectionPool objects. It is used
- # for keeping separate connection pools that connect to different databases.
- #
- # For example, suppose that you have 5 models, with the following hierarchy:
- #
- # class Author < ActiveRecord::Base
- # end
- #
- # class BankAccount < ActiveRecord::Base
- # end
- #
- # class Book < ActiveRecord::Base
- # establish_connection :library_db
- # end
- #
- # class ScaryBook < Book
- # end
- #
- # class GoodBook < Book
- # end
- #
- # And a database.yml that looked like this:
- #
- # development:
- # database: my_application
- # host: localhost
- #
- # library_db:
- # database: library
- # host: some.library.org
- #
- # Your primary database in the development environment is "my_application"
- # but the Book model connects to a separate database called "library_db"
- # (this can even be a database on a different machine).
- #
- # Book, ScaryBook and GoodBook will all use the same connection pool to
- # "library_db" while Author, BankAccount, and any other models you create
- # will use the default connection pool to "my_application".
- #
- # The various connection pools are managed by a single instance of
- # ConnectionHandler accessible via ActiveRecord::Base.connection_handler.
- # All Active Record models use this handler to determine the connection pool that they
- # should use.
- #
- # The ConnectionHandler class is not coupled with the Active models, as it has no knowledge
- # about the model. The model needs to pass a connection specification name to the handler,
- # in order to look up the correct connection pool.
- class ConnectionHandler
- FINALIZER = lambda { |_| ActiveSupport::ForkTracker.check! }
- private_constant :FINALIZER
-
- def initialize
- # These caches are keyed by pool_config.connection_specification_name (PoolConfig#connection_specification_name).
- @owner_to_pool_manager = Concurrent::Map.new(initial_capacity: 2)
-
- # Backup finalizer: if the forked child skipped Kernel#fork the early discard has not occurred
- ObjectSpace.define_finalizer self, FINALIZER
- end
-
- def prevent_writes # :nodoc:
- Thread.current[:prevent_writes]
- end
-
- def prevent_writes=(prevent_writes) # :nodoc:
- Thread.current[:prevent_writes] = prevent_writes
- end
-
- # Prevent writing to the database regardless of role.
- #
- # In some cases you may want to prevent writes to the database
- # even if you are on a database that can write. `while_preventing_writes`
- # will prevent writes to the database for the duration of the block.
- #
- # This method does not provide the same protection as a readonly
- # user and is meant to be a safeguard against accidental writes.
- #
- # See `READ_QUERY` for the queries that are blocked by this
- # method.
- def while_preventing_writes(enabled = true)
- unless ActiveRecord::Base.legacy_connection_handling
- raise NotImplementedError, "`while_preventing_writes` is only available on the connection_handler with legacy_connection_handling"
- end
-
- original, self.prevent_writes = self.prevent_writes, enabled
- yield
- ensure
- self.prevent_writes = original
- end
-
- def connection_pool_names # :nodoc:
- owner_to_pool_manager.keys
- end
-
- def all_connection_pools
- owner_to_pool_manager.values.flat_map { |m| m.pool_configs.map(&:pool) }
- end
-
- def connection_pool_list(role = ActiveRecord::Base.current_role)
- owner_to_pool_manager.values.flat_map { |m| m.pool_configs(role).map(&:pool) }
- end
- alias :connection_pools :connection_pool_list
-
- def establish_connection(config, owner_name: Base, role: ActiveRecord::Base.current_role, shard: Base.current_shard)
- owner_name = config.to_s if config.is_a?(Symbol)
-
- pool_config = resolve_pool_config(config, owner_name)
- db_config = pool_config.db_config
-
- # Protects the connection named `ActiveRecord::Base` from being removed
- # if the user calls `establish_connection :primary`.
- if owner_to_pool_manager.key?(pool_config.connection_specification_name)
- remove_connection_pool(pool_config.connection_specification_name, role: role, shard: shard)
- end
-
- message_bus = ActiveSupport::Notifications.instrumenter
- payload = {}
- if pool_config
- payload[:spec_name] = pool_config.connection_specification_name
- payload[:shard] = shard
- payload[:config] = db_config.configuration_hash
- end
-
- if ActiveRecord::Base.legacy_connection_handling
- owner_to_pool_manager[pool_config.connection_specification_name] ||= LegacyPoolManager.new
- else
- owner_to_pool_manager[pool_config.connection_specification_name] ||= PoolManager.new
- end
- pool_manager = get_pool_manager(pool_config.connection_specification_name)
- pool_manager.set_pool_config(role, shard, pool_config)
-
- message_bus.instrument("!connection.active_record", payload) do
- pool_config.pool
- end
- end
-
- # Returns true if there are any active connections among the connection
- # pools that the ConnectionHandler is managing.
- def active_connections?(role = ActiveRecord::Base.current_role)
- connection_pool_list(role).any?(&:active_connection?)
- end
-
- # Returns any connections in use by the current thread back to the pool,
- # and also returns connections to the pool cached by threads that are no
- # longer alive.
- def clear_active_connections!(role = ActiveRecord::Base.current_role)
- connection_pool_list(role).each(&:release_connection)
- end
-
- # Clears the cache which maps classes.
- #
- # See ConnectionPool#clear_reloadable_connections! for details.
- def clear_reloadable_connections!(role = ActiveRecord::Base.current_role)
- connection_pool_list(role).each(&:clear_reloadable_connections!)
- end
-
- def clear_all_connections!(role = ActiveRecord::Base.current_role)
- connection_pool_list(role).each(&:disconnect!)
- end
-
- # Disconnects all currently idle connections.
- #
- # See ConnectionPool#flush! for details.
- def flush_idle_connections!(role = ActiveRecord::Base.current_role)
- connection_pool_list(role).each(&:flush!)
- end
-
- # Locate the connection of the nearest super class. This can be an
- # active or defined connection: if it is the latter, it will be
- # opened and set as the active connection for the class it was defined
- # for (not necessarily the current class).
- def retrieve_connection(spec_name, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard) # :nodoc:
- pool = retrieve_connection_pool(spec_name, role: role, shard: shard)
-
- unless pool
- if shard != ActiveRecord::Base.default_shard
- message = "No connection pool for '#{spec_name}' found for the '#{shard}' shard."
- elsif ActiveRecord::Base.connection_handler != ActiveRecord::Base.default_connection_handler
- message = "No connection pool for '#{spec_name}' found for the '#{ActiveRecord::Base.current_role}' role."
- elsif role != ActiveRecord::Base.default_role
- message = "No connection pool for '#{spec_name}' found for the '#{role}' role."
- else
- message = "No connection pool for '#{spec_name}' found."
- end
-
- raise ConnectionNotEstablished, message
- end
-
- pool.connection
- end
-
- # Returns true if a connection that's accessible to this class has
- # already been opened.
- def connected?(spec_name, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
- pool = retrieve_connection_pool(spec_name, role: role, shard: shard)
- pool && pool.connected?
- end
-
- # Remove the connection for this class. This will close the active
- # connection and the defined connection (if they exist). The result
- # can be used as an argument for #establish_connection, for easily
- # re-establishing the connection.
- def remove_connection(owner, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
- remove_connection_pool(owner, role: role, shard: shard)&.configuration_hash
- end
- deprecate remove_connection: "Use #remove_connection_pool, which now returns a DatabaseConfig object instead of a Hash"
-
- def remove_connection_pool(owner, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
- if pool_manager = get_pool_manager(owner)
- pool_config = pool_manager.remove_pool_config(role, shard)
-
- if pool_config
- pool_config.disconnect!
- pool_config.db_config
- end
- end
- end
-
- # Retrieving the connection pool happens a lot, so we cache it in @owner_to_pool_manager.
- # This makes retrieving the connection pool O(1) once the process is warm.
- # When a connection is established or removed, we invalidate the cache.
- def retrieve_connection_pool(owner, role: ActiveRecord::Base.current_role, shard: ActiveRecord::Base.current_shard)
- pool_config = get_pool_manager(owner)&.get_pool_config(role, shard)
- pool_config&.pool
- end
-
- private
- attr_reader :owner_to_pool_manager
-
- # Returns the pool manager for an owner.
- #
- # Using `"primary"` to look up the pool manager for `ActiveRecord::Base` is
- # deprecated in favor of looking it up by `"ActiveRecord::Base"`.
- #
- # During the deprecation period, if `"primary"` is passed, the pool manager
- # for `ActiveRecord::Base` will still be returned.
- def get_pool_manager(owner)
- return owner_to_pool_manager[owner] if owner_to_pool_manager.key?(owner)
-
- if owner == "primary"
- ActiveSupport::Deprecation.warn("Using `\"primary\"` as a `connection_specification_name` is deprecated and will be removed in Rails 7.0.0. Please use `ActiveRecord::Base`.")
- owner_to_pool_manager[Base.name]
- end
- end
-
- # Returns an instance of PoolConfig for a given adapter.
- # Accepts a hash one layer deep that contains all connection information.
- #
- # == Example
- #
- # config = { "production" => { "host" => "localhost", "database" => "foo", "adapter" => "sqlite3" } }
- # pool_config = Base.configurations.resolve_pool_config(:production)
- # pool_config.db_config.configuration_hash
- # # => { host: "localhost", database: "foo", adapter: "sqlite3" }
- #
- def resolve_pool_config(config, owner_name)
- db_config = Base.configurations.resolve(config)
-
- raise(AdapterNotSpecified, "database configuration does not specify adapter") unless db_config.adapter
-
- # Require the adapter itself and give useful feedback about
- # 1. Missing adapter gems and
- # 2. Adapter gems' missing dependencies.
- path_to_adapter = "active_record/connection_adapters/#{db_config.adapter}_adapter"
- begin
- require path_to_adapter
- rescue LoadError => e
- # We couldn't require the adapter itself. Raise an exception that
- # points out config typos and missing gems.
- if e.path == path_to_adapter
- # We can assume that a non-builtin adapter was specified, so it's
- # either misspelled or missing from Gemfile.
- raise LoadError, "Could not load the '#{db_config.adapter}' Active Record adapter. Ensure that the adapter is spelled correctly in config/database.yml and that you've added the necessary adapter gem to your Gemfile.", e.backtrace
-
- # Bubbled up from the adapter require. Prefix the exception message
- # with some guidance about how to address it and reraise.
- else
- raise LoadError, "Error loading the '#{db_config.adapter}' Active Record adapter. Missing a gem it depends on? #{e.message}", e.backtrace
- end
- end
-
- unless ActiveRecord::Base.respond_to?(db_config.adapter_method)
- raise AdapterNotFound, "database configuration specifies nonexistent #{db_config.adapter} adapter"
- end
-
- ConnectionAdapters::PoolConfig.new(owner_name, db_config)
- end
- end
end
end
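The async-query plumbing added above (`schedule_query` posting `future_result.execute_or_skip` onto a thread pool) follows a recognizable pattern: a future that either gets executed by a background thread or, if the caller needs the value first, is executed inline, with whichever side arrives second skipping the work. A stdlib-only sketch of that pattern, assuming nothing beyond the method names visible in the diff (`MiniFutureResult` and this toy `schedule_query` are hypothetical stand-ins, not Rails code):

```ruby
require "thread"

# Minimal model of the execute-or-skip future used by the async executor:
# the block runs exactly once, either on the background thread or inline
# in the caller, whichever claims it first.
class MiniFutureResult
  def initialize(&block)
    @block = block
    @mutex = Mutex.new
    @executed = false
    @value = nil
  end

  # Entry point for the background executor and for inline execution.
  # The mutex ensures only one caller runs the block; everyone else skips.
  def execute_or_skip
    @mutex.synchronize do
      next if @executed # already ran (or is running on this lock); skip
      @executed = true
      @value = @block.call
    end
  end

  # Caller-side accessor: guarantee the work has run, then return it.
  def result
    execute_or_skip
    @value
  end
end

# Toy stand-in for ConnectionPool#schedule_query: hand the future to a
# background thread instead of a Concurrent::ThreadPoolExecutor.
def schedule_query(future_result)
  Thread.new { future_result.execute_or_skip }
end

future = MiniFutureResult.new { [1, 2, 3] }
schedule_query(future).join
rows = future.result # same rows whether or not the background thread won
```

The real implementation routes through `Concurrent::ThreadPoolExecutor` (or a process-global pool, per `ActiveRecord.async_query_executor`) and a richer `FutureResult`, but the claim-once race is the core idea.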
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/connection_pool/queue.rb b/activerecord/lib/active_record/connection_adapters/abstract/connection_pool/queue.rb
new file mode 100644
index 0000000000..263b2a82be
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/abstract/connection_pool/queue.rb
@@ -0,0 +1,211 @@
+# frozen_string_literal: true
+
+require "thread"
+require "monitor"
+
+module ActiveRecord
+ module ConnectionAdapters
+ class ConnectionPool
+ # = Active Record Connection Pool \Queue
+ #
+ # Threadsafe, fair, LIFO queue. Meant to be used by ConnectionPool
+ # with which it shares a Monitor.
+ class Queue
+ def initialize(lock = Monitor.new)
+ @lock = lock
+ @cond = @lock.new_cond
+ @num_waiting = 0
+ @queue = []
+ end
+
+ # Test if any threads are currently waiting on the queue.
+ def any_waiting?
+ synchronize do
+ @num_waiting > 0
+ end
+ end
+
+ # Returns the number of threads currently waiting on this
+ # queue.
+ def num_waiting
+ synchronize do
+ @num_waiting
+ end
+ end
+
+ # Add +element+ to the queue. Never blocks.
+ def add(element)
+ synchronize do
+ @queue.push element
+ @cond.signal
+ end
+ end
+
+ # If +element+ is in the queue, remove and return it, or +nil+.
+ def delete(element)
+ synchronize do
+ @queue.delete(element)
+ end
+ end
+
+ # Remove all elements from the queue.
+ def clear
+ synchronize do
+ @queue.clear
+ end
+ end
+
+ # Remove the head of the queue.
+ #
+ # If +timeout+ is not given, remove and return the head of the
+ # queue if the number of available elements is strictly
+ # greater than the number of threads currently waiting (that
+ # is, don't jump ahead in line). Otherwise, return +nil+.
+ #
+ # If +timeout+ is given, block if there is no element
+ # available, waiting up to +timeout+ seconds for an element to
+ # become available.
+ #
+ # Raises:
+ # - ActiveRecord::ConnectionTimeoutError if +timeout+ is given and no element
+ # becomes available within +timeout+ seconds,
+ def poll(timeout = nil)
+ synchronize { internal_poll(timeout) }
+ end
+
+ private
+ def internal_poll(timeout)
+ no_wait_poll || (timeout && wait_poll(timeout))
+ end
+
+ def synchronize(&block)
+ @lock.synchronize(&block)
+ end
+
+ # Test if the queue currently contains any elements.
+ def any?
+ !@queue.empty?
+ end
+
+ # A thread can remove an element from the queue without
+ # waiting if and only if the number of currently available
+ # connections is strictly greater than the number of waiting
+ # threads.
+ def can_remove_no_wait?
+ @queue.size > @num_waiting
+ end
+
+ # Removes and returns the head of the queue if possible, or +nil+.
+ def remove
+ @queue.pop
+ end
+
+ # Remove and return the head of the queue if the number of
+ # available elements is strictly greater than the number of
+ # threads currently waiting. Otherwise, return +nil+.
+ def no_wait_poll
+ remove if can_remove_no_wait?
+ end
+
+ # Waits on the queue up to +timeout+ seconds, then removes and
+ # returns the head of the queue.
+ def wait_poll(timeout)
+ @num_waiting += 1
+
+ t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
+ elapsed = 0
+ loop do
+ ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
+ @cond.wait(timeout - elapsed)
+ end
+
+ return remove if any?
+
+ elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
+ if elapsed >= timeout
+ msg = "could not obtain a connection from the pool within %0.3f seconds (waited %0.3f seconds); all pooled connections were in use" %
+ [timeout, elapsed]
+ raise ConnectionTimeoutError, msg
+ end
+ end
+ ensure
+ @num_waiting -= 1
+ end
+ end
+
+ # Adds the ability to turn a basic fair FIFO queue into one
+ # biased to some thread.
+ module BiasableQueue # :nodoc:
+ class BiasedConditionVariable # :nodoc:
+ # semantics of condition variables guarantee that +broadcast+, +broadcast_on_biased+,
+ # +signal+ and +wait+ methods are only called while holding a lock
+ def initialize(lock, other_cond, preferred_thread)
+ @real_cond = lock.new_cond
+ @other_cond = other_cond
+ @preferred_thread = preferred_thread
+ @num_waiting_on_real_cond = 0
+ end
+
+ def broadcast
+ broadcast_on_biased
+ @other_cond.broadcast
+ end
+
+ def broadcast_on_biased
+ @num_waiting_on_real_cond = 0
+ @real_cond.broadcast
+ end
+
+ def signal
+ if @num_waiting_on_real_cond > 0
+ @num_waiting_on_real_cond -= 1
+ @real_cond
+ else
+ @other_cond
+ end.signal
+ end
+
+ def wait(timeout)
+ if Thread.current == @preferred_thread
+ @num_waiting_on_real_cond += 1
+ @real_cond
+ else
+ @other_cond
+ end.wait(timeout)
+ end
+ end
+
+ def with_a_bias_for(thread)
+ previous_cond = nil
+ new_cond = nil
+ synchronize do
+ previous_cond = @cond
+ @cond = new_cond = BiasedConditionVariable.new(@lock, @cond, thread)
+ end
+ yield
+ ensure
+ synchronize do
+ @cond = previous_cond if previous_cond
+ new_cond.broadcast_on_biased if new_cond # wake up any remaining sleepers
+ end
+ end
+ end
+
+ # Connections must be leased while holding the main pool mutex. This is
+ # an internal subclass that also +.leases+ returned connections while
+ # still in queue's critical section (queue synchronizes with the same
+ # <tt>@lock</tt> as the main pool) so that a returned connection is already
+ # leased and there is no need to re-enter synchronized block.
+ class ConnectionLeasingQueue < Queue # :nodoc:
+ include BiasableQueue
+
+ private
+ def internal_poll(timeout)
+ conn = super
+ conn.lease if conn
+ conn
+ end
+ end
+ end
+ end
+end
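The fairness rule in `Queue#no_wait_poll` above is compact and easy to misread: a caller polling without a timeout may take an element only when strictly more elements are available than threads are already waiting, so newcomers cannot jump the line. A single-threaded sketch of just that predicate (`MiniFairQueue` and `pretend_waiters` are illustrative names, not Rails API):

```ruby
# Stripped-down model of the no-wait path in the pool queue: `poll`
# mirrors `no_wait_poll`, handing out an element only if taking it still
# leaves one for every thread already parked in `wait_poll`.
class MiniFairQueue
  def initialize
    @queue = []
    @num_waiting = 0
  end

  def add(element)
    @queue.push(element)
  end

  # Simulate n threads blocked in wait_poll without real concurrency.
  def pretend_waiters(n)
    @num_waiting = n
  end

  # Mirrors `@queue.size > @num_waiting` followed by `@queue.pop`.
  def poll
    @queue.pop if @queue.size > @num_waiting
  end
end

q = MiniFairQueue.new
q.add(:conn_a)
q.pretend_waiters(1)
q.poll          # => nil: one element, one waiter; don't jump ahead
q.add(:conn_b)
q.poll          # => :conn_b: two elements, one waiter; LIFO pop is allowed
```

In the real queue the same comparison runs inside the shared pool `Monitor`, and threads that cannot skip the line fall through to `wait_poll` on the condition variable.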
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/connection_pool/reaper.rb b/activerecord/lib/active_record/connection_adapters/abstract/connection_pool/reaper.rb
new file mode 100644
index 0000000000..e5b80ba2a6
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/abstract/connection_pool/reaper.rb
@@ -0,0 +1,78 @@
+# frozen_string_literal: true
+
+require "thread"
+require "weakref"
+
+module ActiveRecord
+ module ConnectionAdapters
+ class ConnectionPool
+ # = Active Record Connection Pool \Reaper
+ #
+ # Every +frequency+ seconds, the reaper will call +reap+ and +flush+ on
+ # +pool+. A reaper instantiated with a zero frequency will never reap
+ # the connection pool.
+ #
+ # Configure the frequency by setting +reaping_frequency+ in your database
+ # YAML file (default 60 seconds).
+ class Reaper
+ attr_reader :pool, :frequency
+
+ def initialize(pool, frequency)
+ @pool = pool
+ @frequency = frequency
+ end
+
+ @mutex = Mutex.new
+ @pools = {}
+ @threads = {}
+
+ class << self
+ def register_pool(pool, frequency) # :nodoc:
+ @mutex.synchronize do
+ unless @threads[frequency]&.alive?
+ @threads[frequency] = spawn_thread(frequency)
+ end
+ @pools[frequency] ||= []
+ @pools[frequency] << WeakRef.new(pool)
+ end
+ end
+
+ private
+ def spawn_thread(frequency)
+ Thread.new(frequency) do |t|
+ # Advise multi-threaded app servers to ignore this thread for
+ # the purposes of fork safety warnings
+ Thread.current.thread_variable_set(:fork_safe, true)
+ running = true
+ while running
+ sleep t
+ @mutex.synchronize do
+ @pools[frequency].select! do |pool|
+ pool.weakref_alive? && !pool.discarded?
+ end
+
+ @pools[frequency].each do |p|
+ p.reap
+ p.flush
+ rescue WeakRef::RefError
+ end
+
+ if @pools[frequency].empty?
+ @pools.delete(frequency)
+ @threads.delete(frequency)
+ running = false
+ end
+ end
+ end
+ end
+ end
+ end
+
+ def run
+ return unless frequency && frequency > 0
+ self.class.register_pool(pool, frequency)
+ end
+ end
+ end
+ end
+end
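The Reaper's class-level registry above holds only `WeakRef`s to pools, so an abandoned pool can be garbage-collected and the shared per-frequency thread eventually prunes it and exits. The pruning step can be sketched with stdlib `WeakRef` alone; `FakePool` and `prune` are hypothetical stand-ins for the pool object and the `select!` inside `spawn_thread`:

```ruby
require "weakref"

# Minimal stand-in for a connection pool as seen by the reaper: it only
# needs to answer `discarded?`.
class FakePool
  def initialize(discarded: false)
    @discarded = discarded
  end

  def discarded?
    @discarded
  end
end

# Mirrors the select! in Reaper.spawn_thread: keep only references whose
# target is still alive and whose pool has not been discarded. The `&&`
# matters: `discarded?` is only delegated when the weakref is alive, so a
# collected pool never raises WeakRef::RefError here.
def prune(pool_refs)
  pool_refs.select! do |ref|
    ref.weakref_alive? && !ref.discarded?
  end
  pool_refs
end

live = FakePool.new
dead = FakePool.new(discarded: true)
refs = [WeakRef.new(live), WeakRef.new(dead)]
prune(refs) # only the live, non-discarded pool survives
```

The remaining `rescue WeakRef::RefError` in the diff covers the window where a pool is collected between the prune and the subsequent `reap`/`flush` calls.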
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/database_limits.rb b/activerecord/lib/active_record/connection_adapters/abstract/database_limits.rb
index b1ff8eec74..be58e98c51 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/database_limits.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/database_limits.rb
@@ -7,33 +7,21 @@ def max_identifier_length # :nodoc:
64
end
- # Returns the maximum length of a table alias.
- def table_alias_length
+ # Returns the maximum length of a table name.
+ def table_name_length
max_identifier_length
end
- # Returns the maximum allowed length for an index name. This
- # limit is enforced by \Rails and is less than or equal to
- # #index_name_length. The gap between
- # #index_name_length is to allow internal \Rails
- # operations to use prefixes in temporary operations.
- def allowed_index_name_length
- index_name_length
+ # Returns the maximum length of a table alias.
+ def table_alias_length
+ max_identifier_length
end
- deprecate :allowed_index_name_length
# Returns the maximum length of an index name.
def index_name_length
max_identifier_length
end
- # Returns the maximum number of elements in an IN (x,y,z) clause.
- # +nil+ means no limit.
- def in_clause_length
- nil
- end
- deprecate :in_clause_length
-
private
def bind_params_length
65535
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb b/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb
index f1db1513ad..521fa21a3f 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb
@@ -15,7 +15,12 @@ def to_sql(arel_or_sql_string, binds = [])
end
def to_sql_and_binds(arel_or_sql_string, binds = [], preparable = nil) # :nodoc:
+ # Arel::TreeManager -> Arel::Node
if arel_or_sql_string.respond_to?(:ast)
+ arel_or_sql_string = arel_or_sql_string.ast
+ end
+
+ if Arel.arel_node?(arel_or_sql_string) && !(String === arel_or_sql_string)
unless binds.empty?
raise "Passing bind parameters with an arel AST is forbidden. " \
"The values must be stored on the AST directly"
@@ -25,7 +30,7 @@ def to_sql_and_binds(arel_or_sql_string, binds = [], preparable = nil) # :nodoc:
if prepared_statements
collector.preparable = true
- sql, binds = visitor.compile(arel_or_sql_string.ast, collector)
+ sql, binds = visitor.compile(arel_or_sql_string, collector)
if binds.length > bind_params_length
unprepared_statement do
@@ -34,7 +39,7 @@ def to_sql_and_binds(arel_or_sql_string, binds = [], preparable = nil) # :nodoc:
end
preparable = collector.preparable
else
- sql = visitor.compile(arel_or_sql_string.ast, collector)
+ sql = visitor.compile(arel_or_sql_string, collector)
end
[sql.freeze, binds, preparable]
else
@@ -59,28 +64,24 @@ def cacheable_query(klass, arel) # :nodoc:
end
# Returns an ActiveRecord::Result instance.
- def select_all(arel, name = nil, binds = [], preparable: nil)
+ def select_all(arel, name = nil, binds = [], preparable: nil, async: false)
arel = arel_from_relation(arel)
sql, binds, preparable = to_sql_and_binds(arel, binds, preparable)
- if prepared_statements && preparable
- select_prepared(sql, name, binds)
- else
- select(sql, name, binds)
- end
+ select(sql, name, binds, prepare: prepared_statements && preparable, async: async && FutureResult::SelectAll)
rescue ::RangeError
- ActiveRecord::Result.new([], [])
+ ActiveRecord::Result.empty(async: async)
end
# Returns a record hash with the column names as keys and column values
# as values.
- def select_one(arel, name = nil, binds = [])
- select_all(arel, name, binds).first
+ def select_one(arel, name = nil, binds = [], async: false)
+ select_all(arel, name, binds, async: async).then(&:first)
end
# Returns a single value from a record
- def select_value(arel, name = nil, binds = [])
- single_value_from_rows(select_rows(arel, name, binds))
+ def select_value(arel, name = nil, binds = [], async: false)
+ select_rows(arel, name, binds, async: async).then { |rows| single_value_from_rows(rows) }
end
# Returns an array of the values of the first column in a select:
@@ -91,8 +92,8 @@ def select_values(arel, name = nil, binds = [])
# Returns an array of arrays containing the field values.
# Order is the same as that returned by +columns+.
- def select_rows(arel, name = nil, binds = [])
- select_all(arel, name, binds).rows
+ def select_rows(arel, name = nil, binds = [], async: false)
+ select_all(arel, name, binds, async: async).then(&:rows)
end
def query_value(sql, name = nil) # :nodoc:
@@ -104,7 +105,7 @@ def query_values(sql, name = nil) # :nodoc:
end
def query(sql, name = nil) # :nodoc:
- exec_query(sql, name).rows
+ internal_exec_query(sql, name).rows
end
# Determines whether the SQL statement is a write query.
@@ -114,47 +115,63 @@ def write_query?(sql)
# Executes the SQL statement in the context of this connection and returns
# the raw result from the connection adapter.
+ #
+ # Setting +allow_retry+ to true causes the db to reconnect and retry
+ # executing the SQL statement in case of a connection-related exception.
+ # This option should only be enabled for known idempotent queries.
+ #
+ # Note: the query is assumed to have side effects and the query cache
+ # will be cleared. If the query is read-only, consider using #select_all
+ # instead.
+ #
# Note: depending on your database connector, the result returned by this
- # method may be manually memory managed. Consider using the exec_query
+ # method may be manually memory managed. Consider using #exec_query
# wrapper instead.
- def execute(sql, name = nil)
- raise NotImplementedError
+ def execute(sql, name = nil, allow_retry: false)
+ internal_execute(sql, name, allow_retry: allow_retry)
end
# Executes +sql+ statement in the context of this connection using
# +binds+ as the bind substitutes. +name+ is logged along with
# the executed +sql+ statement.
+ #
+ # Note: the query is assumed to have side effects and the query cache
+ # will be cleared. If the query is read-only, consider using #select_all
+ # instead.
def exec_query(sql, name = "SQL", binds = [], prepare: false)
- raise NotImplementedError
+ internal_exec_query(sql, name, binds, prepare: prepare)
end
# Executes insert +sql+ statement in the context of this connection using
# +binds+ as the bind substitutes. +name+ is logged along with
# the executed +sql+ statement.
- def exec_insert(sql, name = nil, binds = [], pk = nil, sequence_name = nil)
- sql, binds = sql_for_insert(sql, pk, binds)
- exec_query(sql, name, binds)
+ # Some adapters support the `returning` keyword argument which allows controlling the result of the query:
+ # `nil` is the default value and maintains default behavior. If an array of column names is passed,
+ # the result will contain values of the specified columns from the inserted row.
+ def exec_insert(sql, name = nil, binds = [], pk = nil, sequence_name = nil, returning: nil)
+ sql, binds = sql_for_insert(sql, pk, binds, returning)
+ internal_exec_query(sql, name, binds)
end
# Executes delete +sql+ statement in the context of this connection using
# +binds+ as the bind substitutes. +name+ is logged along with
# the executed +sql+ statement.
def exec_delete(sql, name = nil, binds = [])
- exec_query(sql, name, binds)
+ internal_exec_query(sql, name, binds)
end
# Executes update +sql+ statement in the context of this connection using
# +binds+ as the bind substitutes. +name+ is logged along with
# the executed +sql+ statement.
def exec_update(sql, name = nil, binds = [])
- exec_query(sql, name, binds)
+ internal_exec_query(sql, name, binds)
end
def exec_insert_all(sql, name) # :nodoc:
- exec_query(sql, name)
+ internal_exec_query(sql, name)
end
- def explain(arel, binds = []) # :nodoc:
+ def explain(arel, binds = [], options = []) # :nodoc:
raise NotImplementedError
end
@@ -166,9 +183,15 @@ def explain(arel, binds = []) # :nodoc:
#
# If the next id was calculated in advance (as in Oracle), it should be
# passed in as +id_value+.
- def insert(arel, name = nil, pk = nil, id_value = nil, sequence_name = nil, binds = [])
+ # Some adapters support the `returning` keyword argument which allows defining the return value of the method:
+ # `nil` is the default value and maintains default behavior. If an array of column names is passed,
+ # an array is returned from the method representing values of the specified columns from the inserted row.
+ def insert(arel, name = nil, pk = nil, id_value = nil, sequence_name = nil, binds = [], returning: nil)
sql, binds = to_sql_and_binds(arel, binds)
- value = exec_insert(sql, name, binds, pk, sequence_name)
+ value = exec_insert(sql, name, binds, pk, sequence_name, returning: returning)
+
+ return returning_column_values(value) unless returning.nil?
+
id_value || last_inserted_id(value)
end
alias create insert
@@ -191,7 +214,7 @@ def truncate(table_name, name = nil)
end
def truncate_tables(*table_names) # :nodoc:
- table_names -= [schema_migration.table_name, InternalMetadata.table_name]
+ table_names -= [schema_migration.table_name, internal_metadata.table_name]
return if table_names.empty?
@@ -308,26 +331,28 @@ def truncate_tables(*table_names) # :nodoc:
# * You are joining an existing open transaction
# * You are creating a nested (savepoint) transaction
#
- # The mysql2 and postgresql adapters support setting the transaction
+ # The mysql2, trilogy, and postgresql adapters support setting the transaction
# isolation level.
- def transaction(requires_new: nil, isolation: nil, joinable: true)
+ # :args: (requires_new: nil, isolation: nil, &block)
+ def transaction(requires_new: nil, isolation: nil, joinable: true, &block)
if !requires_new && current_transaction.joinable?
if isolation
raise ActiveRecord::TransactionIsolationError, "cannot set isolation when joining a transaction"
end
yield
else
- transaction_manager.within_new_transaction(isolation: isolation, joinable: joinable) { yield }
+ transaction_manager.within_new_transaction(isolation: isolation, joinable: joinable, &block)
end
rescue ActiveRecord::Rollback
# rollbacks are silently swallowed
end
- attr_reader :transaction_manager #:nodoc:
+ attr_reader :transaction_manager # :nodoc:
delegate :within_new_transaction, :open_transactions, :current_transaction, :begin_transaction,
:commit_transaction, :rollback_transaction, :materialize_transactions,
- :disable_lazy_transactions!, :enable_lazy_transactions!, to: :transaction_manager
+ :disable_lazy_transactions!, :enable_lazy_transactions!, :dirty_current_transaction,
+ to: :transaction_manager
def mark_transaction_written_if_write(sql) # :nodoc:
transaction = current_transaction
@@ -340,8 +365,24 @@ def transaction_open?
current_transaction.open?
end
- def reset_transaction #:nodoc:
+ def reset_transaction(restore: false) # :nodoc:
+ # Store the existing transaction state to the side
+ old_state = @transaction_manager if restore && @transaction_manager&.restorable?
+
@transaction_manager = ConnectionAdapters::TransactionManager.new(self)
+
+ if block_given?
+ # Reconfigure the connection without any transaction state in the way
+ result = yield
+
+ # Now the connection's fully established, we can swap back
+ if old_state
+ @transaction_manager = old_state
+ @transaction_manager.restore_transactions
+ end
+
+ result
+ end
end
# Register a record with the current transaction so that its after_commit and after_rollback callbacks
@@ -376,9 +417,17 @@ def commit_db_transaction() end
# done if the transaction block raises an exception or returns false.
def rollback_db_transaction
exec_rollback_db_transaction
+ rescue ActiveRecord::ConnectionNotEstablished, ActiveRecord::ConnectionFailed
+ # Connection's gone; that counts as a rollback
end
- def exec_rollback_db_transaction() end #:nodoc:
+ def exec_rollback_db_transaction() end # :nodoc:
+
+ def restart_db_transaction
+ exec_restart_db_transaction
+ end
+
+ def exec_restart_db_transaction() end # :nodoc:
def rollback_to_savepoint(name = nil)
exec_rollback_to_savepoint(name)
@@ -397,7 +446,7 @@ def reset_sequence!(table, column, sequence = nil)
# something beyond a simple insert (e.g. Oracle).
# Most adapters should implement +insert_fixtures_set+ that leverages bulk SQL insert.
# We keep this method to provide a fallback
- # for databases like sqlite that do not support bulk inserts.
+ # for databases like SQLite that do not support bulk inserts.
def insert_fixture(fixture, table_name)
execute(build_fixture_sql(Array.wrap(fixture), table_name), "Fixture Insert")
end
@@ -445,13 +494,43 @@ def with_yaml_fallback(value) # :nodoc:
end
end
+ # This is a safe default, even if not high precision on all databases
+ HIGH_PRECISION_CURRENT_TIMESTAMP = Arel.sql("CURRENT_TIMESTAMP").freeze # :nodoc:
+ private_constant :HIGH_PRECISION_CURRENT_TIMESTAMP
+
+ # Returns an Arel SQL literal for the CURRENT_TIMESTAMP for usage with
+ # arbitrary precision date/time columns.
+ #
+ # Adapters supporting datetime with precision should override this to
+ # provide as much precision as is available.
+ def high_precision_current_timestamp
+ HIGH_PRECISION_CURRENT_TIMESTAMP
+ end
+
+ def internal_exec_query(sql, name = "SQL", binds = [], prepare: false, async: false) # :nodoc:
+ raise NotImplementedError
+ end
+
private
+ def internal_execute(sql, name = "SCHEMA", allow_retry: false, materialize_transactions: true)
+ sql = transform_query(sql)
+ check_if_write_query(sql)
+
+ mark_transaction_written_if_write(sql)
+
+ raw_execute(sql, name, allow_retry: allow_retry, materialize_transactions: materialize_transactions)
+ end
+
def execute_batch(statements, name = nil)
statements.each do |statement|
- execute(statement, name)
+ internal_execute(statement, name)
end
end
+ def raw_execute(sql, name, async: false, allow_retry: false, materialize_transactions: true)
+ raise NotImplementedError
+ end
+
DEFAULT_INSERT_VALUE = Arel.sql("DEFAULT").freeze
private_constant :DEFAULT_INSERT_VALUE
@@ -460,7 +539,7 @@ def default_insert_value(column)
end
def build_fixture_sql(fixtures, table_name)
- columns = schema_cache.columns_hash(table_name)
+ columns = schema_cache.columns_hash(table_name).reject { |_, column| supports_virtual_columns? && column.virtual? }
values_list = fixtures.map do |fixture|
fixture = fixture.stringify_keys
@@ -481,8 +560,7 @@ def build_fixture_sql(fixtures, table_name)
end
table = Arel::Table.new(table_name)
- manager = Arel::InsertManager.new
- manager.into(table)
+ manager = Arel::InsertManager.new(table)
if values_list.size == 1
values = values_list.shift
@@ -503,10 +581,10 @@ def build_fixture_sql(fixtures, table_name)
end
def build_fixture_statements(fixture_set)
- fixture_set.map do |table_name, fixtures|
+ fixture_set.filter_map do |table_name, fixtures|
next if fixtures.empty?
build_fixture_sql(fixtures, table_name)
- end.compact
+ end
end
def build_truncate_statement(table_name)
@@ -528,15 +606,49 @@ def combine_multi_statements(total_sql)
end
# Returns an ActiveRecord::Result instance.
- def select(sql, name = nil, binds = [])
- exec_query(sql, name, binds, prepare: false)
- end
+ def select(sql, name = nil, binds = [], prepare: false, async: false)
+ if async && async_enabled?
+ if current_transaction.joinable?
+ raise AsynchronousQueryInsideTransactionError, "Asynchronous queries are not allowed inside transactions"
+ end
+
+ future_result = async.new(
+ pool,
+ sql,
+ name,
+ binds,
+ prepare: prepare,
+ )
+ if supports_concurrent_connections? && current_transaction.closed?
+ future_result.schedule!(ActiveRecord::Base.asynchronous_queries_session)
+ else
+ future_result.execute!(self)
+ end
+ return future_result
+ end
- def select_prepared(sql, name = nil, binds = [])
- exec_query(sql, name, binds, prepare: true)
+ result = internal_exec_query(sql, name, binds, prepare: prepare)
+ if async
+ FutureResult::Complete.new(result)
+ else
+ result
+ end
end
- def sql_for_insert(sql, pk, binds)
+ def sql_for_insert(sql, pk, binds, returning) # :nodoc:
+ if supports_insert_returning?
+ if pk.nil?
+ # Extract the table from the insert sql. Yuck.
+ table_ref = extract_table_ref_from_insert_sql(sql)
+ pk = primary_key(table_ref) if table_ref
+ end
+
+ returning_columns = returning || Array(pk)
+
+ returning_columns_statement = returning_columns.map { |c| quote_column_name(c) }.join(", ")
+ sql = "#{sql} RETURNING #{returning_columns_statement}" if returning_columns.any?
+ end
+
[sql, binds]
end
@@ -544,6 +656,10 @@ def last_inserted_id(result)
single_value_from_rows(result.rows)
end
+ def returning_column_values(result)
+ [last_inserted_id(result)]
+ end
+
def single_value_from_rows(rows)
row = rows.first
row && row.first
@@ -556,6 +672,12 @@ def arel_from_relation(relation)
relation
end
end
+
+ def extract_table_ref_from_insert_sql(sql)
+ if sql =~ /into\s("[A-Za-z0-9_."\[\]\s]+"|[A-Za-z0-9_."\[\]]+)\s*/im
+ $1.delete('"').strip
+ end
+ end
end
end
end
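The new `sql_for_insert` appends a `RETURNING` clause, and when no primary key is supplied it recovers the table name from the raw INSERT statement via the `extract_table_ref_from_insert_sql` regexp shown in the hunk above. That helper is plain Ruby and can be exercised standalone; the sample SQL strings below are illustrative, not from the patch:

```ruby
# Standalone copy of the table-name extraction used by sql_for_insert when
# it must look up a primary key for the RETURNING clause. The regexp is the
# one from the diff above: it grabs the (possibly quoted, possibly
# schema-qualified) identifier following INTO and strips the quotes.
def extract_table_ref_from_insert_sql(sql)
  if sql =~ /into\s("[A-Za-z0-9_."\[\]\s]+"|[A-Za-z0-9_."\[\]]+)\s*/im
    $1.delete('"').strip
  end
end

puts extract_table_ref_from_insert_sql(%(INSERT INTO "users" (name) VALUES ('a')))
# => users
puts extract_table_ref_from_insert_sql("INSERT INTO schema.orders VALUES (1)")
# => schema.orders
```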
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb b/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb
index 6223e37698..6b15541a16 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb
@@ -5,10 +5,13 @@
module ActiveRecord
module ConnectionAdapters # :nodoc:
module QueryCache
+ DEFAULT_SIZE = 100 # :nodoc:
+
class << self
- def included(base) #:nodoc:
- dirties_query_cache base, :create, :insert, :update, :delete, :truncate, :truncate_tables,
- :rollback_to_savepoint, :rollback_db_transaction, :exec_insert_all
+ def included(base) # :nodoc:
+ dirties_query_cache base, :exec_query, :execute, :create, :insert, :update, :delete, :truncate,
+ :truncate_tables, :rollback_to_savepoint, :rollback_db_transaction, :restart_db_transaction,
+ :exec_insert_all
base.set_callback :checkout, :after, :configure_query_cache!
base.set_callback :checkin, :after, :disable_query_cache!
@@ -17,7 +20,7 @@ def included(base) #:nodoc:
def dirties_query_cache(base, *method_names)
method_names.each do |method_name|
base.class_eval <<-end_code, __FILE__, __LINE__ + 1
- def #{method_name}(*)
+ def #{method_name}(...)
ActiveRecord::Base.clear_query_caches_for_current_thread
super
end
@@ -51,8 +54,9 @@ def query_cache_enabled
def initialize(*)
super
- @query_cache = Hash.new { |h, sql| h[sql] = {} }
+ @query_cache = {}
@query_cache_enabled = false
+ @query_cache_max_size = nil
end
# Enable the query cache within the block.
@@ -93,32 +97,73 @@ def clear_query_cache
end
end
- def select_all(arel, name = nil, binds = [], preparable: nil)
- if @query_cache_enabled && !locked?(arel)
- arel = arel_from_relation(arel)
+ def select_all(arel, name = nil, binds = [], preparable: nil, async: false) # :nodoc:
+ arel = arel_from_relation(arel)
+
+ # If arel is locked this is a SELECT ... FOR UPDATE or somesuch.
+ # Such queries should not be cached.
+ if @query_cache_enabled && !(arel.respond_to?(:locked) && arel.locked)
sql, binds, preparable = to_sql_and_binds(arel, binds, preparable)
- cache_sql(sql, name, binds) { super(sql, name, binds, preparable: preparable) }
+ if async
+ result = lookup_sql_cache(sql, name, binds) || super(sql, name, binds, preparable: preparable, async: async)
+ FutureResult::Complete.new(result)
+ else
+ cache_sql(sql, name, binds) { super(sql, name, binds, preparable: preparable, async: async) }
+ end
else
super
end
end
private
+ def lookup_sql_cache(sql, name, binds)
+ key = binds.empty? ? sql : [sql, binds]
+ hit = false
+ result = nil
+
+ @lock.synchronize do
+ if (result = @query_cache.delete(key))
+ hit = true
+ @query_cache[key] = result
+ end
+ end
+
+ if hit
+ ActiveSupport::Notifications.instrument(
+ "sql.active_record",
+ cache_notification_info(sql, name, binds)
+ )
+
+ result
+ end
+ end
+
def cache_sql(sql, name, binds)
+ key = binds.empty? ? sql : [sql, binds]
+ result = nil
+ hit = false
+
@lock.synchronize do
- result =
- if @query_cache[sql].key?(binds)
- ActiveSupport::Notifications.instrument(
- "sql.active_record",
- cache_notification_info(sql, name, binds)
- )
- @query_cache[sql][binds]
- else
- @query_cache[sql][binds] = yield
+ if (result = @query_cache.delete(key))
+ hit = true
+ @query_cache[key] = result
+ else
+ result = @query_cache[key] = yield
+ if @query_cache_max_size && @query_cache.size > @query_cache_max_size
+ @query_cache.shift
end
- result.dup
+ end
end
+
+ if hit
+ ActiveSupport::Notifications.instrument(
+ "sql.active_record",
+ cache_notification_info(sql, name, binds)
+ )
+ end
+
+ result.dup
end
# Database adapters can override this method to
@@ -134,15 +179,21 @@ def cache_notification_info(sql, name, binds)
}
end
- # If arel is locked this is a SELECT ... FOR UPDATE or somesuch. Such
- # queries should not be cached.
- def locked?(arel)
- arel = arel.arel if arel.is_a?(Relation)
- arel.respond_to?(:locked) && arel.locked
- end
-
def configure_query_cache!
- enable_query_cache! if pool.query_cache_enabled
+ case query_cache = pool.db_config.query_cache
+ when 0, false
+ return
+ when Integer
+ @query_cache_max_size = query_cache
+ when nil
+ @query_cache_max_size = DEFAULT_SIZE
+ else
+ @query_cache_max_size = nil # no limit
+ end
+
+ if pool.query_cache_enabled
+ enable_query_cache!
+ end
end
end
end
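The query cache rewrite above replaces the unbounded nested hash with a flat, size-limited hash and gets LRU behavior from Ruby's insertion-ordered `Hash`: on a hit the entry is `delete`d and reinserted (moving it to the back), and on overflow `shift` evicts the oldest entry. A minimal standalone sketch of that pattern (the `LruCache` class name is ours, not from the patch, and like the patch it assumes cached values are truthy):

```ruby
# Minimal LRU built on Ruby's insertion-ordered Hash, mirroring the
# delete-and-reinsert / shift pattern used by cache_sql above.
class LruCache
  def initialize(max_size)
    @max_size = max_size
    @store = {}
  end

  def fetch(key)
    if (value = @store.delete(key))   # hit: remove the entry...
      @store[key] = value             # ...and reinsert it at the back (most recent)
    else
      value = @store[key] = yield             # miss: compute and insert
      @store.shift if @store.size > @max_size # evict the oldest entry
    end
    value
  end

  def keys
    @store.keys
  end
end

cache = LruCache.new(2)
cache.fetch(:a) { 1 }
cache.fetch(:b) { 2 }
cache.fetch(:a) { raise "block not called on a hit" } # refreshes :a
cache.fetch(:c) { 3 }                                 # evicts :b, the least recently used
p cache.keys # => [:a, :c]
```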
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/quoting.rb b/activerecord/lib/active_record/connection_adapters/abstract/quoting.rb
index aac5bfe0a0..80b04475cd 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/quoting.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/quoting.rb
@@ -5,42 +5,69 @@
module ActiveRecord
module ConnectionAdapters # :nodoc:
+ # = Active Record Connection Adapters \Quoting
module Quoting
# Quotes the column value to help prevent
# {SQL injection attacks}[https://en.wikipedia.org/wiki/SQL_injection].
def quote(value)
- if value.is_a?(Base)
- ActiveSupport::Deprecation.warn(<<~MSG)
- Passing an Active Record object to `quote` directly is deprecated
- and will be no longer quoted as id value in Rails 7.0.
- MSG
- value = value.id_for_database
+ case value
+ when String, Symbol, ActiveSupport::Multibyte::Chars
+ "'#{quote_string(value.to_s)}'"
+ when true then quoted_true
+ when false then quoted_false
+ when nil then "NULL"
+ # BigDecimals need to be put in a non-normalized form and quoted.
+ when BigDecimal then value.to_s("F")
+ when Numeric then value.to_s
+ when Type::Binary::Data then quoted_binary(value)
+ when Type::Time::Value then "'#{quoted_time(value)}'"
+ when Date, Time then "'#{quoted_date(value)}'"
+ when Class then "'#{value}'"
+ when ActiveSupport::Duration
+ warn_quote_duration_deprecated
+ value.to_s
+ else raise TypeError, "can't quote #{value.class.name}"
end
-
- _quote(value)
end
# Cast a +value+ to a type that the database understands. For example,
# SQLite does not understand dates, so this method will convert a Date
# to a String.
- def type_cast(value, column = nil)
- if value.is_a?(Base)
- ActiveSupport::Deprecation.warn(<<~MSG)
- Passing an Active Record object to `type_cast` directly is deprecated
- and will be no longer type casted as id value in Rails 7.0.
- MSG
- value = value.id_for_database
+ def type_cast(value)
+ case value
+ when Symbol, ActiveSupport::Multibyte::Chars, Type::Binary::Data
+ value.to_s
+ when true then unquoted_true
+ when false then unquoted_false
+ # BigDecimals need to be put in a non-normalized form and quoted.
+ when BigDecimal then value.to_s("F")
+ when nil, Numeric, String then value
+ when Type::Time::Value then quoted_time(value)
+ when Date, Time then quoted_date(value)
+ else raise TypeError, "can't cast #{value.class.name}"
end
+ end
- if column
- ActiveSupport::Deprecation.warn(<<~MSG)
- Passing a column to `type_cast` is deprecated and will be removed in Rails 7.0.
- MSG
- type = lookup_cast_type_from_column(column)
- value = type.serialize(value)
- end
+ # Quote a value to be used as a bound parameter of unknown type. For example,
+ # MySQL might perform dangerous castings when comparing a string to a number,
+ # so this method will cast numbers to string.
+ #
+ # Deprecated: Consider `Arel.sql("... ? ...", value)` or
+ # +sanitize_sql+ instead.
+ def quote_bound_value(value)
+ ActiveRecord.deprecator.warn(<<~MSG.squish)
+ #quote_bound_value is deprecated and will be removed in Rails 7.2.
+ Consider Arel.sql(".. ? ..", value) or #sanitize_sql instead.
+ MSG
- _type_cast(value)
+ quote(cast_bound_value(value))
+ end
+
+ # Cast a value to be used as a bound parameter of unknown type. For example,
+ # MySQL might perform dangerous castings when comparing a string to a number,
+ # so this method will cast numbers to string.
+ def cast_bound_value(value) # :nodoc:
+ value
end
# If you are having to call this function, you are likely doing something
@@ -59,7 +86,7 @@ def lookup_cast_type_from_column(column) # :nodoc:
# Quotes a string, escaping any ' (single quote) and \ (backslash)
# characters.
def quote_string(s)
- s.gsub('\\', '\&\&').gsub("'", "''") # ' (for ruby-mode)
+ s.gsub("\\", '\&\&').gsub("'", "''") # ' (for ruby-mode)
end
# Quotes the column name. Defaults to no quoting.
@@ -75,7 +102,7 @@ def quote_table_name(table_name)
# Override to return the quoted table name for assignment. Defaults to
# table quoting.
#
- # This works for mysql2 where table.column can be used to
+ # This works for MySQL where table.column can be used to
# resolve ambiguity.
#
# We override this in the sqlite3 and postgresql adapters to use only
@@ -113,14 +140,14 @@ def unquoted_false
# if the value is a Time responding to usec.
def quoted_date(value)
if value.acts_like?(:time)
- if ActiveRecord::Base.default_timezone == :utc
- value = value.getutc if value.respond_to?(:getutc) && !value.utc?
+ if default_timezone == :utc
+ value = value.getutc if !value.utc?
else
- value = value.getlocal if value.respond_to?(:getlocal)
+ value = value.getlocal
end
end
- result = value.to_s(:db)
+ result = value.to_fs(:db)
if value.respond_to?(:usec) && value.usec > 0
result << "." << sprintf("%06d", value.usec)
else
@@ -168,7 +195,7 @@ def column_name_with_order_matcher # :nodoc:
(
(?:
# table_name.column_name | function(one or no argument)
- ((?:\w+\.)?\w+) | \w+\((?:|\g<2>)\)
+ ((?:\w+\.)?\w+ | \w+\((?:|\g<2>)\))
)
(?:(?:\s+AS)?\s+\w+)?
)
@@ -192,7 +219,7 @@ def column_name_with_order_matcher # :nodoc:
(
(?:
# table_name.column_name | function(one or no argument)
- ((?:\w+\.)?\w+) | \w+\((?:|\g<2>)\)
+ ((?:\w+\.)?\w+ | \w+\((?:|\g<2>)\))
)
(?:\s+ASC|\s+DESC)?
(?:\s+NULLS\s+(?:FIRST|LAST))?
@@ -205,16 +232,11 @@ def column_name_with_order_matcher # :nodoc:
private
def type_casted_binds(binds)
- case binds.first
- when Array
- binds.map { |column, value| type_cast(value, column) }
- else
- binds.map do |value|
- if ActiveModel::Attribute === value
- type_cast(value.value_for_database)
- else
- type_cast(value)
- end
+ binds.map do |value|
+ if ActiveModel::Attribute === value
+ type_cast(value.value_for_database)
+ else
+ type_cast(value)
end
end
end
@@ -223,37 +245,20 @@ def lookup_cast_type(sql_type)
type_map.lookup(sql_type)
end
- def _quote(value)
- case value
- when String, Symbol, ActiveSupport::Multibyte::Chars
- "'#{quote_string(value.to_s)}'"
- when true then quoted_true
- when false then quoted_false
- when nil then "NULL"
- # BigDecimals need to be put in a non-normalized form and quoted.
- when BigDecimal then value.to_s("F")
- when Numeric, ActiveSupport::Duration then value.to_s
- when Type::Binary::Data then quoted_binary(value)
- when Type::Time::Value then "'#{quoted_time(value)}'"
- when Date, Time then "'#{quoted_date(value)}'"
- when Class then "'#{value}'"
- else raise TypeError, "can't quote #{value.class.name}"
- end
- end
-
- def _type_cast(value)
- case value
- when Symbol, ActiveSupport::Multibyte::Chars, Type::Binary::Data
- value.to_s
- when true then unquoted_true
- when false then unquoted_false
- # BigDecimals need to be put in a non-normalized form and quoted.
- when BigDecimal then value.to_s("F")
- when nil, Numeric, String then value
- when Type::Time::Value then quoted_time(value)
- when Date, Time then quoted_date(value)
- else raise TypeError, "can't cast #{value.class.name}"
- end
+ def warn_quote_duration_deprecated
+ ActiveRecord.deprecator.warn(<<~MSG)
+ Using ActiveSupport::Duration as an interpolated bind parameter in a SQL
+ string template is deprecated. To avoid this warning, you should explicitly
+ convert the duration to a more specific database type. For example, if you
+ want to use a duration as an integer number of seconds:
+ ```
+ Record.where("duration = ?", 1.hour.to_i)
+ ```
+ If you want to use a duration as an ISO 8601 string:
+ ```
+ Record.where("duration = ?", 1.hour.iso8601)
+ ```
+ MSG
end
end
end
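The `quote_string` hunk above only changes the quoting style of the pattern literal (`'\\'` to `"\\"`, both a single backslash); the behavior is unchanged: backslashes and single quotes are doubled. Since the method is pure string manipulation it can be tried in isolation:

```ruby
# Standalone copy of quote_string from the hunk above. '\&\&' repeats the
# whole match, so each backslash becomes two; each ' becomes ''.
def quote_string(s)
  s.gsub("\\", '\&\&').gsub("'", "''")
end

puts quote_string("O'Brien")   # => O''Brien
puts quote_string('C:\\temp')  # backslash is doubled: C:\\temp
```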
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/savepoints.rb b/activerecord/lib/active_record/connection_adapters/abstract/savepoints.rb
index d6dbef3fc8..fbfe923f58 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/savepoints.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/savepoints.rb
@@ -2,21 +2,22 @@
module ActiveRecord
module ConnectionAdapters
+ # = Active Record Connection Adapters \Savepoints
module Savepoints
def current_savepoint_name
current_transaction.savepoint_name
end
def create_savepoint(name = current_savepoint_name)
- execute("SAVEPOINT #{name}", "TRANSACTION")
+ internal_execute("SAVEPOINT #{name}", "TRANSACTION")
end
def exec_rollback_to_savepoint(name = current_savepoint_name)
- execute("ROLLBACK TO SAVEPOINT #{name}", "TRANSACTION")
+ internal_execute("ROLLBACK TO SAVEPOINT #{name}", "TRANSACTION")
end
def release_savepoint(name = current_savepoint_name)
- execute("RELEASE SAVEPOINT #{name}", "TRANSACTION")
+ internal_execute("RELEASE SAVEPOINT #{name}", "TRANSACTION")
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/schema_creation.rb b/activerecord/lib/active_record/connection_adapters/abstract/schema_creation.rb
index 55cce5e501..86b86f190a 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/schema_creation.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/schema_creation.rb
@@ -14,8 +14,10 @@ def accept(o)
end
delegate :quote_column_name, :quote_table_name, :quote_default_expression, :type_to_sql,
- :options_include_default?, :supports_indexes_in_create?, :supports_foreign_keys?, :foreign_key_options,
- :quoted_columns_for_index, :supports_partial_index?, :supports_check_constraints?, :check_constraint_options,
+ :options_include_default?, :supports_indexes_in_create?, :use_foreign_keys?,
+ :quoted_columns_for_index, :supports_partial_index?, :supports_check_constraints?,
+ :supports_index_include?, :supports_exclusion_constraints?, :supports_unique_constraints?,
+ :supports_nulls_not_distinct?,
to: :@conn, private: true
private
@@ -51,12 +53,20 @@ def visit_TableDefinition(o)
statements.concat(o.indexes.map { |column_name, options| index_in_create(o.name, column_name, options) })
end
- if supports_foreign_keys?
- statements.concat(o.foreign_keys.map { |to_table, options| foreign_key_in_create(o.name, to_table, options) })
+ if use_foreign_keys?
+ statements.concat(o.foreign_keys.map { |fk| accept fk })
end
if supports_check_constraints?
- statements.concat(o.check_constraints.map { |expression, options| check_constraint_in_create(o.name, expression, options) })
+ statements.concat(o.check_constraints.map { |chk| accept chk })
+ end
+
+ if supports_exclusion_constraints?
+ statements.concat(o.exclusion_constraints.map { |exc| accept exc })
+ end
+
+ if supports_unique_constraints?
+ statements.concat(o.unique_constraints.map { |exc| accept exc })
end
create_sql << "(#{statements.join(', ')})" if statements.present?
@@ -70,10 +80,12 @@ def visit_PrimaryKeyDefinition(o)
end
def visit_ForeignKeyDefinition(o)
+ quoted_columns = Array(o.column).map { |c| quote_column_name(c) }
+ quoted_primary_keys = Array(o.primary_key).map { |c| quote_column_name(c) }
sql = +<<~SQL
CONSTRAINT #{quote_column_name(o.name)}
- FOREIGN KEY (#{quote_column_name(o.column)})
- REFERENCES #{quote_table_name(o.to_table)} (#{quote_column_name(o.primary_key)})
+ FOREIGN KEY (#{quoted_columns.join(", ")})
+ REFERENCES #{quote_table_name(o.to_table)} (#{quoted_primary_keys.join(", ")})
SQL
sql << " #{action_sql('DELETE', o.on_delete)}" if o.on_delete
sql << " #{action_sql('UPDATE', o.on_update)}" if o.on_update
@@ -100,6 +112,8 @@ def visit_CreateIndexDefinition(o)
sql << "#{quote_column_name(index.name)} ON #{quote_table_name(index.table)}"
sql << "USING #{index.using}" if supports_index_using? && index.using
sql << "(#{quoted_columns(index)})"
+ sql << "INCLUDE (#{quoted_include_columns(index.include)})" if supports_index_include? && index.include
+ sql << "NULLS NOT DISTINCT" if supports_nulls_not_distinct? && index.nulls_not_distinct
sql << "WHERE #{index.where}" if supports_partial_index? && index.where
sql.join(" ")
@@ -159,19 +173,6 @@ def table_modifier_in_create(o)
" TEMPORARY" if o.temporary
end
- def foreign_key_in_create(from_table, to_table, options)
- prefix = ActiveRecord::Base.table_name_prefix
- suffix = ActiveRecord::Base.table_name_suffix
- to_table = "#{prefix}#{to_table}#{suffix}"
- options = foreign_key_options(from_table, to_table, options)
- accept ForeignKeyDefinition.new(from_table, to_table, options)
- end
-
- def check_constraint_in_create(table_name, expression, options)
- options = check_constraint_options(table_name, expression, options)
- accept CheckConstraintDefinition.new(table_name, expression, options)
- end
-
def action_sql(action, dependency)
case dependency
when :nullify then "ON #{action} SET NULL"
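The `visit_ForeignKeyDefinition` change above wraps `o.column` and `o.primary_key` in `Array(...)` before quoting, so a composite foreign key emits a comma-separated quoted list while a single column still works unchanged. A toy sketch of just that step; `fk_columns_sql` and the simplistic `quote_column_name` here are illustrative stand-ins, not the adapter's real methods:

```ruby
# Stand-in for the adapter's identifier quoting; real adapters also
# escape embedded quote characters.
def quote_column_name(name)
  %("#{name}")
end

# Array() lets a single column name and an array of names flow through
# the same quoting/joining code, as in visit_ForeignKeyDefinition above.
def fk_columns_sql(column)
  Array(column).map { |c| quote_column_name(c) }.join(", ")
end

puts fk_columns_sql(:author_id)              # => "author_id"
puts fk_columns_sql([:blog_id, :author_id])  # => "blog_id", "author_id"
```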
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/schema_definitions.rb b/activerecord/lib/active_record/connection_adapters/abstract/schema_definitions.rb
index c2d53d556b..1eb1ca23c3 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/schema_definitions.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/schema_definitions.rb
@@ -1,12 +1,13 @@
# frozen_string_literal: true
+
module ActiveRecord
- module ConnectionAdapters #:nodoc:
+ module ConnectionAdapters # :nodoc:
# Abstract representation of an index definition on a table. Instances of
# this type are typically created and returned by methods in database
# adapters. e.g. ActiveRecord::ConnectionAdapters::MySQL::SchemaStatements#indexes
class IndexDefinition # :nodoc:
- attr_reader :table, :name, :unique, :columns, :lengths, :orders, :opclasses, :where, :type, :using, :comment
+ attr_reader :table, :name, :unique, :columns, :lengths, :orders, :opclasses, :where, :type, :using, :include, :nulls_not_distinct, :comment, :valid
def initialize(
table, name,
@@ -18,7 +19,10 @@ def initialize(
where: nil,
type: nil,
using: nil,
- comment: nil
+ include: nil,
+ nulls_not_distinct: nil,
+ comment: nil,
+ valid: true
)
@table = table
@name = name
@@ -30,7 +34,14 @@ def initialize(
@where = where
@type = type
@using = using
+ @include = include
+ @nulls_not_distinct = nulls_not_distinct
@comment = comment
+ @valid = valid
+ end
+
+ def valid?
+ @valid
end
def column_options
@@ -41,6 +52,16 @@ def column_options
}
end
+ def defined_for?(columns = nil, name: nil, unique: nil, valid: nil, include: nil, nulls_not_distinct: nil, **options)
+ columns = options[:column] if columns.blank?
+ (columns.nil? || Array(self.columns) == Array(columns).map(&:to_s)) &&
+ (name.nil? || self.name == name.to_s) &&
+ (unique.nil? || self.unique == unique) &&
+ (valid.nil? || self.valid == valid) &&
+ (include.nil? || Array(self.include) == Array(include).map(&:to_s)) &&
+ (nulls_not_distinct.nil? || self.nulls_not_distinct == nulls_not_distinct)
+ end
+
private
def concise_options(options)
if columns.size == options.size && options.values.uniq.size == 1
@@ -56,11 +77,24 @@ def concise_options(options)
# +columns+ attribute of said TableDefinition object, in order to be used
# for generating a number of table creation or table changing SQL statements.
ColumnDefinition = Struct.new(:name, :type, :options, :sql_type) do # :nodoc:
+ self::OPTION_NAMES = [
+ :limit,
+ :precision,
+ :scale,
+ :default,
+ :null,
+ :collation,
+ :comment,
+ :primary_key,
+ :if_exists,
+ :if_not_exists
+ ]
+
def primary_key?
options[:primary_key]
end
- [:limit, :precision, :scale, :default, :null, :collation, :comment].each do |option_name|
+ (self::OPTION_NAMES - [:primary_key]).each do |option_name|
module_eval <<-CODE, __FILE__, __LINE__ + 1
def #{option_name}
options[:#{option_name}]
@@ -79,13 +113,15 @@ def aliased_types(name, fallback)
AddColumnDefinition = Struct.new(:column) # :nodoc:
- ChangeColumnDefinition = Struct.new(:column, :name) #:nodoc:
+ ChangeColumnDefinition = Struct.new(:column, :name) # :nodoc:
+
+ ChangeColumnDefaultDefinition = Struct.new(:column, :default) # :nodoc:
CreateIndexDefinition = Struct.new(:index, :algorithm, :if_not_exists) # :nodoc:
PrimaryKeyDefinition = Struct.new(:name) # :nodoc:
- ForeignKeyDefinition = Struct.new(:from_table, :to_table, :options) do #:nodoc:
+ ForeignKeyDefinition = Struct.new(:from_table, :to_table, :options) do # :nodoc:
def name
options[:name]
end
@@ -106,6 +142,10 @@ def on_update
options[:on_update]
end
+ def deferrable
+ options[:deferrable]
+ end
+
def custom_primary_key?
options[:primary_key] != default_primary_key
end
@@ -121,8 +161,8 @@ def export_name_on_schema_dump?
def defined_for?(to_table: nil, validate: nil, **options)
(to_table.nil? || to_table.to_s == self.to_table) &&
- (validate.nil? || validate == options.fetch(:validate, validate)) &&
- options.all? { |k, v| self.options[k].to_s == v.to_s }
+ (validate.nil? || validate == self.options.fetch(:validate, validate)) &&
+ options.all? { |k, v| Array(self.options[k]).map(&:to_s) == Array(v).map(&:to_s) }
end
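
The `Array(...)` wrapping added to `ForeignKeyDefinition#defined_for?` above matters for composite keys. A hedged sketch of the before/after comparison (the hashes here are illustrative, not real stored definitions):

```ruby
# Why both sides are now wrapped in Array(): a composite-key option such as
# primary_key: [:shop_id, :user_id] must match a stored ["shop_id", "user_id"],
# which the old plain to_s comparison could not do.
stored  = { primary_key: ["shop_id", "user_id"] }
queried = { primary_key: [:shop_id, :user_id] }

old_match = queried.all? { |k, v| stored[k].to_s == v.to_s }
new_match = queried.all? { |k, v| Array(stored[k]).map(&:to_s) == Array(v).map(&:to_s) }

puts old_match # => false ('["shop_id", "user_id"]' != '[:shop_id, :user_id]')
puts new_match # => true
```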
private
@@ -144,6 +184,12 @@ def validate?
def export_name_on_schema_dump?
!ActiveRecord::SchemaDumper.chk_ignore_pattern.match?(name) if name
end
+
+ def defined_for?(name:, expression: nil, validate: nil, **options)
+ self.name == name.to_s &&
+ (validate.nil? || validate == self.options.fetch(:validate, validate)) &&
+ options.all? { |k, v| self.options[k].to_s == v.to_s }
+ end
end
class ReferenceDefinition # :nodoc:
@@ -167,6 +213,20 @@ def initialize(
end
end
+ def add(table_name, connection)
+ columns.each do |name, type, options|
+ connection.add_column(table_name, name, type, **options)
+ end
+
+ if index
+ connection.add_index(table_name, column_names, **index_options(table_name))
+ end
+
+ if foreign_key
+ connection.add_foreign_key(table_name, foreign_table_name, **foreign_key_options)
+ end
+ end
+
def add_to(table)
columns.each do |name, type, options|
table.column(name, type, **options)
@@ -188,8 +248,12 @@ def as_options(value)
value.is_a?(Hash) ? value : {}
end
+ def conditional_options
+ options.slice(:if_exists, :if_not_exists)
+ end
+
def polymorphic_options
- as_options(polymorphic).merge(options.slice(:null, :first, :after))
+ as_options(polymorphic).merge(conditional_options).merge(options.slice(:null, :first, :after))
end
def polymorphic_index_name(table_name)
@@ -197,7 +261,7 @@ def polymorphic_index_name(table_name)
end
def index_options(table_name)
- index_options = as_options(index)
+ index_options = as_options(index).merge(conditional_options)
# legacy reference index names are used on versions 6.0 and earlier
return index_options if options[:_uses_legacy_reference_index_name]
@@ -207,7 +271,7 @@ def index_options(table_name)
end
def foreign_key_options
- as_options(foreign_key).merge(column: column_name)
+ as_options(foreign_key).merge(column: column_name, **conditional_options)
end
def columns
@@ -257,6 +321,7 @@ def primary_key(name, type = :primary_key, **options)
define_column_methods :bigint, :binary, :boolean, :date, :datetime, :decimal,
:float, :integer, :json, :string, :text, :time, :timestamp, :virtual
+ alias :blob :binary
alias :numeric :decimal
end
@@ -275,13 +340,15 @@ def #{column_type}(*names, **options)
end
end
+ # = Active Record Connection Adapters \Table \Definition
+ #
# Represents the schema of an SQL table in an abstract way. This class
# provides methods for manipulating the schema representation.
#
# Inside migration files, the +t+ object in {create_table}[rdoc-ref:SchemaStatements#create_table]
# is actually of this type:
#
- # class SomeMigration < ActiveRecord::Migration[6.0]
+ # class SomeMigration < ActiveRecord::Migration[7.1]
# def up
# create_table :foo do |t|
# puts t.class # => "ActiveRecord::ConnectionAdapters::TableDefinition"
@@ -322,6 +389,23 @@ def initialize(
@comment = comment
end
+ def set_primary_key(table_name, id, primary_key, **options)
+ if id && !as
+ pk = primary_key || Base.get_primary_key(table_name.to_s.singularize)
+
+ if id.is_a?(Hash)
+ options.merge!(id.except(:type))
+ id = id.fetch(:type, :primary_key)
+ end
+
+ if pk.is_a?(Array)
+ primary_keys(pk)
+ else
+ primary_key(pk, id, **options)
+ end
+ end
+ end
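
The Hash branch of `set_primary_key` above can be shown on its own. This is a local re-creation under the assumption that only the option-splitting is of interest (the method name `split_id_option` is invented for the sketch): `:type` selects the primary key column type and the remaining keys become column options.

```ruby
# Sketch of the Hash-id handling in set_primary_key: id: { type: :uuid, ... }
# splits into a column type plus extra column options. Hash#except needs Ruby 3.0+.
def split_id_option(id, options = {})
  if id.is_a?(Hash)
    options = options.merge(id.except(:type))
    id = id.fetch(:type, :primary_key)
  end
  [id, options]
end

type, opts = split_id_option({ type: :uuid, default: "gen_random_uuid()" })
puts type            # => uuid
puts opts[:default]  # => gen_random_uuid()

type, _opts = split_id_option(:bigint)  # non-Hash ids pass through unchanged
puts type            # => bigint
```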
+
def primary_keys(name = nil) # :nodoc:
@primary_keys = PrimaryKeyDefinition.new(name) if name
@primary_keys
@@ -406,14 +490,7 @@ def column(name, type, index: nil, **options)
name = name.to_s
type = type.to_sym if type
- if @columns_hash[name]
- if @columns_hash[name].primary_key?
- raise ArgumentError, "you can't redefine the primary key column '#{name}'. To define a custom primary key, pass { id: false } to create_table."
- else
- raise ArgumentError, "you can't define an already defined column '#{name}'."
- end
- end
-
+ raise_on_duplicate_column(name)
@columns_hash[name] = new_column_definition(name, type, **options)
if index
@@ -438,12 +515,12 @@ def index(column_name, **options)
indexes << [column_name, options]
end
- def foreign_key(table_name, **options) # :nodoc:
- foreign_keys << [table_name, options]
+ def foreign_key(to_table, **options)
+ foreign_keys << new_foreign_key_definition(to_table, options)
end
def check_constraint(expression, **options)
- check_constraints << [expression, options]
+ check_constraints << new_check_constraint_definition(expression, options)
end
# Appends <tt>:datetime</tt> columns <tt>:created_at</tt> and
@@ -480,13 +557,41 @@ def new_column_definition(name, type, **options) # :nodoc:
type = integer_like_primary_key_type(type, options)
end
type = aliased_types(type.to_s, type)
+
+ if @conn.supports_datetime_with_precision?
+ if type == :datetime && !options.key?(:precision)
+ options[:precision] = 6
+ end
+ end
+
options[:primary_key] ||= type == :primary_key
options[:null] = false if options[:primary_key]
create_column_definition(name, type, options)
end
+ def new_foreign_key_definition(to_table, options) # :nodoc:
+ prefix = ActiveRecord::Base.table_name_prefix
+ suffix = ActiveRecord::Base.table_name_suffix
+ to_table = "#{prefix}#{to_table}#{suffix}"
+ options = @conn.foreign_key_options(name, to_table, options)
+ ForeignKeyDefinition.new(name, to_table, options)
+ end
+
+ def new_check_constraint_definition(expression, options) # :nodoc:
+ options = @conn.check_constraint_options(name, expression, options)
+ CheckConstraintDefinition.new(name, expression, options)
+ end
+
private
+ def valid_column_definition_options
+ @conn.valid_column_definition_options
+ end
+
def create_column_definition(name, type, options)
+ unless options[:_skip_validate_options]
+ options.except(:_uses_legacy_reference_index_name, :_skip_validate_options).assert_valid_keys(valid_column_definition_options)
+ end
+
ColumnDefinition.new(name, type, options)
end
@@ -501,6 +606,16 @@ def integer_like_primary_key?(type, options)
def integer_like_primary_key_type(type, options)
type
end
+
+ def raise_on_duplicate_column(name)
+ if @columns_hash[name]
+ if @columns_hash[name].primary_key?
+ raise ArgumentError, "you can't redefine the primary key column '#{name}' on '#{@name}'. To define a custom primary key, pass { id: false } to create_table."
+ else
+ raise ArgumentError, "you can't define an already defined column '#{name}' on '#{@name}'."
+ end
+ end
+ end
end
class AlterTable # :nodoc:
@@ -520,7 +635,7 @@ def initialize(td)
def name; @td.name; end
def add_foreign_key(to_table, options)
- @foreign_key_adds << ForeignKeyDefinition.new(name, to_table, options)
+ @foreign_key_adds << @td.new_foreign_key_definition(to_table, options)
end
def drop_foreign_key(name)
@@ -528,7 +643,7 @@ def drop_foreign_key(name)
end
def add_check_constraint(expression, options)
- @check_constraint_adds << CheckConstraintDefinition.new(name, expression, options)
+ @check_constraint_adds << @td.new_check_constraint_definition(expression, options)
end
def drop_check_constraint(constraint_name)
@@ -542,6 +657,8 @@ def add_column(name, type, **options)
end
end
+ # = Active Record Connection Adapters \Table
+ #
# Represents an SQL table in an abstract way for updating a table.
# Also see TableDefinition and {connection.create_table}[rdoc-ref:SchemaStatements#create_table]
#
@@ -572,6 +689,7 @@ def add_column(name, type, **options)
# t.time
# t.date
# t.binary
+ # t.blob
# t.boolean
# t.foreign_key
# t.json
@@ -601,6 +719,7 @@ def initialize(table_name, base)
#
# See TableDefinition#column for details of the options you can use.
def column(column_name, type, index: nil, **options)
+ raise_on_if_exist_options(options)
@base.add_column(name, column_name, type, **options)
if index
index_options = index.is_a?(Hash) ? index : {}
@@ -626,6 +745,7 @@ def column_exists?(column_name, type = nil, **options)
#
# See {connection.add_index}[rdoc-ref:SchemaStatements#add_index] for details of the options you can use.
def index(column_name, **options)
+ raise_on_if_exist_options(options)
@base.add_index(name, column_name, **options)
end
@@ -636,8 +756,8 @@ def index(column_name, **options)
# end
#
# See {connection.index_exists?}[rdoc-ref:SchemaStatements#index_exists?]
- def index_exists?(column_name, options = {})
- @base.index_exists?(name, column_name, options)
+ def index_exists?(column_name, **options)
+ @base.index_exists?(name, column_name, **options)
end
# Renames the given index on the table.
@@ -655,6 +775,7 @@ def rename_index(index_name, new_index_name)
#
# See {connection.add_timestamps}[rdoc-ref:SchemaStatements#add_timestamps]
def timestamps(**options)
+ raise_on_if_exist_options(options)
@base.add_timestamps(name, **options)
end
@@ -665,6 +786,7 @@ def timestamps(**options)
#
# See TableDefinition#column for details of the options you can use.
def change(column_name, type, **options)
+ raise_on_if_exist_options(options)
@base.change_column(name, column_name, type, **options)
end
@@ -696,6 +818,7 @@ def change_null(column_name, null, default = nil)
#
# See {connection.remove_columns}[rdoc-ref:SchemaStatements#remove_columns]
def remove(*column_names, **options)
+ raise_on_if_exist_options(options)
@base.remove_columns(name, *column_names, **options)
end
@@ -708,6 +831,7 @@ def remove(*column_names, **options)
#
# See {connection.remove_index}[rdoc-ref:SchemaStatements#remove_index]
def remove_index(column_name = nil, **options)
+ raise_on_if_exist_options(options)
@base.remove_index(name, column_name, **options)
end
@@ -736,6 +860,7 @@ def rename(column_name, new_column_name)
#
# See {connection.add_reference}[rdoc-ref:SchemaStatements#add_reference] for details of the options you can use.
def references(*args, **options)
+ raise_on_if_exist_options(options)
args.each do |ref_name|
@base.add_reference(name, ref_name, **options)
end
@@ -749,6 +874,7 @@ def references(*args, **options)
#
# See {connection.remove_reference}[rdoc-ref:SchemaStatements#remove_reference]
def remove_references(*args, **options)
+ raise_on_if_exist_options(options)
args.each do |ref_name|
@base.remove_reference(name, ref_name, **options)
end
@@ -762,6 +888,7 @@ def remove_references(*args, **options)
#
# See {connection.add_foreign_key}[rdoc-ref:SchemaStatements#add_foreign_key]
def foreign_key(*args, **options)
+ raise_on_if_exist_options(options)
@base.add_foreign_key(name, *args, **options)
end
@@ -772,6 +899,7 @@ def foreign_key(*args, **options)
#
# See {connection.remove_foreign_key}[rdoc-ref:SchemaStatements#remove_foreign_key]
def remove_foreign_key(*args, **options)
+ raise_on_if_exist_options(options)
@base.remove_foreign_key(name, *args, **options)
end
@@ -789,8 +917,8 @@ def foreign_key_exists?(*args, **options)
# t.check_constraint("price > 0", name: "price_check")
#
# See {connection.add_check_constraint}[rdoc-ref:SchemaStatements#add_check_constraint]
- def check_constraint(*args)
- @base.add_check_constraint(name, *args)
+ def check_constraint(*args, **options)
+ @base.add_check_constraint(name, *args, **options)
end
# Removes the given check constraint from the table.
@@ -798,9 +926,36 @@ def check_constraint(*args)
# t.remove_check_constraint(name: "price_check")
#
# See {connection.remove_check_constraint}[rdoc-ref:SchemaStatements#remove_check_constraint]
- def remove_check_constraint(*args)
- @base.remove_check_constraint(name, *args)
+ def remove_check_constraint(*args, **options)
+ @base.remove_check_constraint(name, *args, **options)
+ end
+
+ # Checks if a check_constraint exists on a table.
+ #
+ # unless t.check_constraint_exists?(name: "price_check")
+ # t.check_constraint("price > 0", name: "price_check")
+ # end
+ #
+ # See {connection.check_constraint_exists?}[rdoc-ref:SchemaStatements#check_constraint_exists?]
+ def check_constraint_exists?(*args, **options)
+ @base.check_constraint_exists?(name, *args, **options)
end
+
+ private
+ def raise_on_if_exist_options(options)
+ unrecognized_option = options.keys.find do |key|
+ key == :if_exists || key == :if_not_exists
+ end
+ if unrecognized_option
+ conditional = unrecognized_option == :if_exists ? "if" : "unless"
+ message = <<~TXT
+ Option #{unrecognized_option} will be ignored. If you are calling an expression like
+ `t.column(.., #{unrecognized_option}: true)` from inside a change_table block, try a
+ conditional clause instead, as in `t.column(..) #{conditional} t.column_exists?(..)`
+ TXT
+ raise ArgumentError.new(message)
+ end
+ end
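
The guard above can be exercised standalone. A re-creation with a shortened message (the real helper lives inside `Table`, so this free-standing version is only a sketch): `:if_exists` / `:if_not_exists` have no effect inside a `change_table` block, so the first such key found now raises with a hint to use a conditional clause instead.

```ruby
# Sketch of raise_on_if_exist_options: reject :if_exists / :if_not_exists
# inside change_table helpers and suggest an if/unless clause instead.
def raise_on_if_exist_options(options)
  unrecognized = options.keys.find { |k| k == :if_exists || k == :if_not_exists }
  return unless unrecognized

  conditional = unrecognized == :if_exists ? "if" : "unless"
  raise ArgumentError, "Option #{unrecognized} will be ignored; use `#{conditional} column_exists?(..)` instead"
end

begin
  raise_on_if_exist_options(if_not_exists: true, null: false)
rescue ArgumentError => e
  puts e.message # mentions :if_not_exists and suggests `unless`
end
```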
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/schema_dumper.rb b/activerecord/lib/active_record/connection_adapters/abstract/schema_dumper.rb
index 3817c8829e..193e7079bc 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/schema_dumper.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/schema_dumper.rb
@@ -3,6 +3,8 @@
module ActiveRecord
module ConnectionAdapters # :nodoc:
class SchemaDumper < SchemaDumper # :nodoc:
+ DEFAULT_DATETIME_PRECISION = 6 # :nodoc:
+
def self.create(connection, options)
new(connection, options)
end
@@ -63,7 +65,18 @@ def schema_limit(column)
end
def schema_precision(column)
- column.precision.inspect if column.precision
+ if column.type == :datetime
+ case column.precision
+ when nil
+ "nil"
+ when DEFAULT_DATETIME_PRECISION
+ nil
+ else
+ column.precision.inspect
+ end
+ elsif column.precision
+ column.precision.inspect
+ end
end
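
The three-way datetime rule in `schema_precision` above is easy to misread, so here is a stand-alone sketch (the method and constant are local copies, not the real dumper): the default precision 6 is omitted from `schema.rb`, an explicitly `nil` precision is dumped as the literal string `"nil"`, and any other precision is dumped verbatim.

```ruby
# Sketch of the new datetime branch in schema_precision: 6 is the implicit
# default and is not dumped; nil must be dumped explicitly; other values
# (and non-datetime precisions) are dumped as before.
DEFAULT_DATETIME_PRECISION = 6

def schema_precision(type, precision)
  if type == :datetime
    case precision
    when nil then "nil"
    when DEFAULT_DATETIME_PRECISION then nil
    else precision.inspect
    end
  elsif precision
    precision.inspect
  end
end

p schema_precision(:datetime, 6)   # => nil   (default, omitted from the dump)
p schema_precision(:datetime, nil) # => "nil" (dumped explicitly)
p schema_precision(:datetime, 4)   # => "4"
p schema_precision(:decimal, 10)   # => "10"
```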
def schema_scale(column)
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb b/activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb
index 0c5cdf62e4..13b35fedf7 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb
@@ -1,7 +1,7 @@
# frozen_string_literal: true
require "active_support/core_ext/string/access"
-require "digest/sha2"
+require "openssl"
module ActiveRecord
module ConnectionAdapters # :nodoc:
@@ -29,7 +29,7 @@ def table_alias_for(table_name)
table_name[0...table_alias_length].tr(".", "_")
end
- # Returns the relation names useable to back Active Record models.
+ # Returns the relation names usable to back Active Record models.
# For most adapters this means all #tables and #views.
def data_sources
query_values(data_source_sql, "SCHEMA")
@@ -96,25 +96,19 @@ def indexes(table_name)
# # Check an index with a custom name exists
# index_exists?(:suppliers, :company_id, name: "idx_company_id")
#
+ # # Check a valid index exists (PostgreSQL only)
+ # index_exists?(:suppliers, :company_id, valid: true)
+ #
def index_exists?(table_name, column_name, **options)
- checks = []
-
- if column_name.present?
- column_names = Array(column_name).map(&:to_s)
- checks << lambda { |i| Array(i.columns) == column_names }
- end
-
- checks << lambda { |i| i.unique } if options[:unique]
- checks << lambda { |i| i.name == options[:name].to_s } if options[:name]
-
- indexes(table_name).any? { |i| checks.all? { |check| check[i] } }
+ indexes(table_name).any? { |i| i.defined_for?(column_name, **options) }
end
# Returns an array of +Column+ objects for the table specified by +table_name+.
def columns(table_name)
table_name = table_name.to_s
- column_definitions(table_name).map do |field|
- new_column_from_field(table_name, field)
+ definitions = column_definitions(table_name)
+ definitions.map do |field|
+ new_column_from_field(table_name, field, definitions)
end
end
@@ -124,6 +118,9 @@ def columns(table_name)
# column_exists?(:suppliers, :name)
#
# # Check a column exists of a particular type
+ # #
+ # # This works for standard non-casted types (eg. string) but is unreliable
+ # # for types that may get cast to something else (eg. char, bigint).
# column_exists?(:suppliers, :name, :string)
#
# # Check a column exists with a specific definition
@@ -260,7 +257,7 @@ def primary_key(table_name)
#
# generates:
#
- # CREATE TABLE order (
+ # CREATE TABLE orders (
# product_id bigint NOT NULL,
# client_id bigint NOT NULL
# );
@@ -293,25 +290,10 @@ def primary_key(table_name)
# SELECT * FROM orders INNER JOIN line_items ON order_id=orders.id
#
# See also TableDefinition#column for details on how to create columns.
- def create_table(table_name, id: :primary_key, primary_key: nil, force: nil, **options)
- td = create_table_definition(table_name, **extract_table_options!(options))
-
- if id && !td.as
- pk = primary_key || Base.get_primary_key(table_name.to_s.singularize)
-
- if id.is_a?(Hash)
- options.merge!(id.except(:type))
- id = id.fetch(:type, :primary_key)
- end
-
- if pk.is_a?(Array)
- td.primary_keys pk
- else
- td.primary_key pk, id, **options
- end
- end
-
- yield td if block_given?
+ def create_table(table_name, id: :primary_key, primary_key: nil, force: nil, **options, &block)
+ validate_create_table_options!(options)
+ validate_table_length!(table_name) unless options[:_uses_legacy_table_name]
+ td = build_create_table_definition(table_name, id: id, primary_key: primary_key, force: force, **options, &block)
if force
drop_table(table_name, force: force, if_exists: true)
@@ -319,7 +301,7 @@ def create_table(table_name, id: :primary_key, primary_key: nil, force: nil, **o
schema_cache.clear_data_source_cache!(table_name.to_s)
end
- result = execute schema_creation.accept td
+ result = execute schema_creation.accept(td)
unless supports_indexes_in_create?
td.indexes.each do |column_name, index_options|
@@ -340,6 +322,18 @@ def create_table(table_name, id: :primary_key, primary_key: nil, force: nil, **o
result
end
+ # Returns a TableDefinition object containing information about the table that would be created
+ # if the same arguments were passed to #create_table. See #create_table for information about
+ # passing a +table_name+, and other additional options that can be passed.
+ def build_create_table_definition(table_name, id: :primary_key, primary_key: nil, force: nil, **options)
+ table_definition = create_table_definition(table_name, **options.extract!(*valid_table_definition_options, :_skip_validate_options))
+ table_definition.set_primary_key(table_name, id, primary_key, **options.extract!(*valid_primary_key_options, :_skip_validate_options))
+
+ yield table_definition if block_given?
+
+ table_definition
+ end
+
# Creates a new join table with the name created using the lexical order of the first two
# arguments. These arguments can be a String or a Symbol.
#
@@ -383,7 +377,7 @@ def create_join_table(table_1, table_2, column_options: {}, **options)
column_options.reverse_merge!(null: false, index: false)
- t1_ref, t2_ref = [table_1, table_2].map { |t| t.to_s.singularize }
+ t1_ref, t2_ref = [table_1, table_2].map { |t| reference_name_for_table(t) }
create_table(join_table_name, **options.merge!(id: false)) do |td|
td.references t1_ref, **column_options
@@ -392,15 +386,33 @@ def create_join_table(table_1, table_2, column_options: {}, **options)
end
end
+ # Builds a TableDefinition object for a join table.
+ #
+ # This definition object contains information about the table that would be created
+ # if the same arguments were passed to #create_join_table. See #create_join_table for
+ # information about what arguments should be passed.
+ def build_create_join_table_definition(table_1, table_2, column_options: {}, **options) # :nodoc:
+ join_table_name = find_join_table_name(table_1, table_2, options)
+ column_options.reverse_merge!(null: false, index: false)
+
+ t1_ref, t2_ref = [table_1, table_2].map { |t| reference_name_for_table(t) }
+
+ build_create_table_definition(join_table_name, **options.merge!(id: false)) do |td|
+ td.references t1_ref, **column_options
+ td.references t2_ref, **column_options
+ yield td if block_given?
+ end
+ end
+
# Drops the join table specified by the given arguments.
- # See #create_join_table for details.
+ # See #create_join_table and #drop_table for details.
#
# Although this command ignores the block if one is given, it can be helpful
# to provide one in a migration's +change+ method so it can be reverted.
# In that case, the block will be used by #create_join_table.
def drop_join_table(table_1, table_2, **options)
join_table_name = find_join_table_name(table_1, table_2, options)
- drop_table(join_table_name)
+ drop_table(join_table_name, **options)
end
# A block for changing columns in +table+.
@@ -481,13 +493,13 @@ def drop_join_table(table_1, table_2, **options)
# end
#
# See also Table for details on all of the various column transformations.
- def change_table(table_name, **options)
+ def change_table(table_name, base = self, **options)
if supports_bulk_alter? && options[:bulk]
recorder = ActiveRecord::Migration::CommandRecorder.new(self)
yield update_table_definition(table_name, recorder)
bulk_change_table(table_name, recorder.commands)
else
- yield update_table_definition(table_name, self)
+ yield update_table_definition(table_name, base)
end
end
@@ -495,7 +507,7 @@ def change_table(table_name, **options)
#
# rename_table('octopuses', 'octopi')
#
- def rename_table(table_name, new_name)
+ def rename_table(table_name, new_name, **)
raise NotImplementedError, "rename_table is not implemented"
end
@@ -518,24 +530,31 @@ def drop_table(table_name, **options)
# Add a new +type+ column named +column_name+ to +table_name+.
#
+ # See {ActiveRecord::ConnectionAdapters::TableDefinition.column}[rdoc-ref:ActiveRecord::ConnectionAdapters::TableDefinition#column].
+ #
# The +type+ parameter is normally one of the migrations native types,
# which is one of the following:
# <tt>:primary_key</tt>, <tt>:string</tt>, <tt>:text</tt>,
# <tt>:integer</tt>, <tt>:bigint</tt>, <tt>:float</tt>, <tt>:decimal</tt>, <tt>:numeric</tt>,
# <tt>:datetime</tt>, <tt>:time</tt>, <tt>:date</tt>,
- # <tt>:binary</tt>, <tt>:boolean</tt>.
+ # <tt>:binary</tt>, <tt>:blob</tt>, <tt>:boolean</tt>.
#
# You may use a type not in this list as long as it is supported by your
# database (for example, "polygon" in MySQL), but this will not be database
# agnostic and should usually be avoided.
#
# Available options are (none of these exists by default):
+ # * <tt>:comment</tt> -
+ # Specifies the comment for the column. This option is ignored by some backends.
+ # * <tt>:collation</tt> -
+ # Specifies the collation for a <tt>:string</tt> or <tt>:text</tt> column.
+ # If not specified, the column will have the same collation as the table.
+ # * <tt>:default</tt> -
+ # The column's default value. Use +nil+ for +NULL+.
# * <tt>:limit</tt> -
# Requests a maximum column length. This is the number of characters for a <tt>:string</tt> column
- # and number of bytes for <tt>:text</tt>, <tt>:binary</tt>, and <tt>:integer</tt> columns.
+ # and number of bytes for <tt>:text</tt>, <tt>:binary</tt>, <tt>:blob</tt>, and <tt>:integer</tt> columns.
# This option is ignored by some backends.
- # * <tt>:default</tt> -
- # The column's default value. Use +nil+ for +NULL+.
# * <tt>:null</tt> -
# Allows or disallows +NULL+ values in the column.
# * <tt>:precision</tt> -
@@ -543,11 +562,6 @@ def drop_table(table_name, **options)
# <tt>:datetime</tt>, and <tt>:time</tt> columns.
# * <tt>:scale</tt> -
# Specifies the scale for the <tt>:decimal</tt> and <tt>:numeric</tt> columns.
- # * <tt>:collation</tt> -
- # Specifies the collation for a <tt>:string</tt> or <tt>:text</tt> column. If not specified, the
- # column will have the same collation as the table.
- # * <tt>:comment</tt> -
- # Specifies the comment for the column. This option is ignored by some backends.
# * <tt>:if_not_exists</tt> -
# Specifies if the column already exists to not try to re-add it. This will avoid
# duplicate column errors.
@@ -563,7 +577,7 @@ def drop_table(table_name, **options)
# * The SQL standard says the default scale should be 0, <tt>:scale</tt> <=
# <tt>:precision</tt>, and makes no comments about the requirements of
# <tt>:precision</tt>.
- # * MySQL: <tt>:precision</tt> [1..63], <tt>:scale</tt> [0..30].
+ # * MySQL: <tt>:precision</tt> [1..65], <tt>:scale</tt> [0..30].
# Default is (10,0).
# * PostgreSQL: <tt>:precision</tt> [1..infinity],
# <tt>:scale</tt> [0..infinity]. No default.
@@ -604,11 +618,10 @@ def drop_table(table_name, **options)
# # Ignores the method call if the column exists
# add_column(:shapes, :triangle, 'polygon', if_not_exists: true)
def add_column(table_name, column_name, type, **options)
- return if options[:if_not_exists] == true && column_exists?(table_name, column_name, type)
+ add_column_def = build_add_column_definition(table_name, column_name, type, **options)
+ return unless add_column_def
- at = create_alter_table table_name
- at.add_column(column_name, type, **options)
- execute schema_creation.accept at
+ execute schema_creation.accept(add_column_def)
end
def add_columns(table_name, *column_names, type:, **options) # :nodoc:
@@ -617,6 +630,25 @@ def add_columns(table_name, *column_names, type:, **options) # :nodoc:
end
end
+ # Builds an AlterTable object for adding a column to a table.
+ #
+ # This definition object contains information about the column that would be created
+ # if the same arguments were passed to #add_column. See #add_column for information about
+ # passing a +table_name+, +column_name+, +type+ and other options that can be passed.
+ def build_add_column_definition(table_name, column_name, type, **options) # :nodoc:
+ return if options[:if_not_exists] == true && column_exists?(table_name, column_name)
+
+ if supports_datetime_with_precision?
+ if type == :datetime && !options.key?(:precision)
+ options[:precision] = 6
+ end
+ end
+
+ alter_table = create_alter_table(table_name)
+ alter_table.add_column(column_name, type, **options)
+ alter_table
+ end
+
# Removes the given columns from the table definition.
#
# remove_columns(:suppliers, :qualification, :experience)
@@ -629,9 +661,8 @@ def remove_columns(table_name, *column_names, type: nil, **options)
raise ArgumentError.new("You must specify at least one column name. Example: remove_columns(:people, :first_name)")
end
- column_names.each do |column_name|
- remove_column(table_name, column_name, type, **options)
- end
+ remove_column_fragments = remove_columns_for_alter(table_name, *column_names, type: type, **options)
+ execute "ALTER TABLE #{quote_table_name(table_name)} #{remove_column_fragments.join(', ')}"
end
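
The `remove_columns` change above switches from one `ALTER TABLE` per column to a single combined statement. A rough sketch of the joined SQL (the text is illustrative; the real adapter quotes identifiers and builds each fragment via `remove_columns_for_alter`):

```ruby
# Sketch of the combined statement: per-column DROP fragments joined into
# one ALTER TABLE instead of one round trip per column.
fragments = [:qualification, :experience].map { |c| "DROP COLUMN #{c}" }
sql = "ALTER TABLE suppliers #{fragments.join(', ')}"
puts sql
# => ALTER TABLE suppliers DROP COLUMN qualification, DROP COLUMN experience
```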
# Removes the column from the table definition.
@@ -641,7 +672,8 @@ def remove_columns(table_name, *column_names, type: nil, **options)
# The +type+ and +options+ parameters will be ignored if present. It can be helpful
# to provide these in a migration's +change+ method so it can be reverted.
# In that case, +type+ and +options+ will be used by #add_column.
- # Indexes on the column are automatically removed.
+ # Depending on the database you're using, indexes using this column may be
+ # automatically removed or modified to remove this column from the index.
#
# If the options provided include an +if_exists+ key, it will be used to check if the
# column does not exist. This will silently ignore the migration rather than raising
@@ -682,6 +714,15 @@ def change_column_default(table_name, column_name, default_or_changes)
raise NotImplementedError, "change_column_default is not implemented"
end
+ # Builds a ChangeColumnDefaultDefinition object.
+ #
+ # This definition object contains information about the column change that would occur
+ # if the same arguments were passed to #change_column_default. See #change_column_default for
+ # information about passing a +table_name+, +column_name+, +type+ and other options that can be passed.
+ def build_change_column_default_definition(table_name, column_name, default_or_changes) # :nodoc:
+ raise NotImplementedError, "build_change_column_default_definition is not implemented"
+ end
+
# Sets or removes a <tt>NOT NULL</tt> constraint on a column. The +null+ flag
# indicates whether the value can be +NULL+. For example
#
@@ -766,7 +807,7 @@ def rename_column(table_name, column_name, new_column_name)
#
# CREATE INDEX by_name_surname ON accounts(name(10), surname(15))
#
- # Note: SQLite doesn't support index length.
+ # Note: only supported by MySQL
#
# ====== Creating an index with a sort order (desc or asc, asc is the default)
#
@@ -788,6 +829,16 @@ def rename_column(table_name, column_name, new_column_name)
#
# Note: Partial indexes are only supported for PostgreSQL and SQLite.
#
+ # ====== Creating an index that includes additional columns
+ #
+ # add_index(:accounts, :branch_id, include: :party_id)
+ #
+ # generates:
+ #
+ # CREATE INDEX index_accounts_on_branch_id ON accounts USING btree(branch_id) INCLUDE (party_id)
+ #
+ # Note: only supported by PostgreSQL.
+ #
# ====== Creating an index with a specific method
#
# add_index(:developers, :name, using: 'btree')
@@ -833,12 +884,20 @@ def rename_column(table_name, column_name, new_column_name)
#
# For more information see the {"Transactional Migrations" section}[rdoc-ref:Migration].
def add_index(table_name, column_name, **options)
- index, algorithm, if_not_exists = add_index_options(table_name, column_name, **options)
-
- create_index = CreateIndexDefinition.new(index, algorithm, if_not_exists)
+ create_index = build_create_index_definition(table_name, column_name, **options)
execute schema_creation.accept(create_index)
end
+ # Builds a CreateIndexDefinition object.
+ #
+ # This definition object contains information about the index that would be created
+ # if the same arguments were passed to #add_index. See #add_index for information about
+ # passing a +table_name+, +column_name+, and other additional options that can be passed.
+ def build_create_index_definition(table_name, column_name, **options) # :nodoc:
+ index, algorithm, if_not_exists = add_index_options(table_name, column_name, **options)
+ CreateIndexDefinition.new(index, algorithm, if_not_exists)
+ end
+
# Removes the given index from the table.
#
# Removes the index on +branch_id+ in the +accounts+ table if exactly one such index exists.
@@ -901,10 +960,10 @@ def rename_index(table_name, old_name, new_name)
remove_index(table_name, name: old_name)
end
- def index_name(table_name, options) #:nodoc:
+ def index_name(table_name, options) # :nodoc:
if Hash === options
if options[:column]
- "index_#{table_name}_on_#{Array(options[:column]) * '_and_'}"
+ generate_index_name(table_name, options[:column])
elsif options[:name]
options[:name]
else
@@ -924,7 +983,6 @@ def index_name_exists?(table_name, index_name)
# Adds a reference. The reference column is a bigint by default,
# the <tt>:type</tt> option can be used to specify a different type.
# Optionally adds a +_type+ column, if <tt>:polymorphic</tt> option is provided.
- # #add_reference and #add_belongs_to are acceptable.
#
# The +options+ hash can include the following keys:
# [<tt>:type</tt>]
@@ -970,12 +1028,11 @@ def index_name_exists?(table_name, index_name)
# add_reference(:products, :supplier, foreign_key: { to_table: :firms })
#
def add_reference(table_name, ref_name, **options)
- ReferenceDefinition.new(ref_name, **options).add_to(update_table_definition(table_name, self))
+ ReferenceDefinition.new(ref_name, **options).add(table_name, self)
end
alias :add_belongs_to :add_reference
# Removes the reference(s). Also removes a +type+ column if one exists.
- # #remove_reference and #remove_belongs_to are acceptable.
#
# ====== Remove the reference
#
@@ -990,19 +1047,21 @@ def add_reference(table_name, ref_name, **options)
# remove_reference(:products, :user, foreign_key: true)
#
def remove_reference(table_name, ref_name, foreign_key: false, polymorphic: false, **options)
+ conditional_options = options.slice(:if_exists, :if_not_exists)
+
if foreign_key
reference_name = Base.pluralize_table_names ? ref_name.to_s.pluralize : ref_name
if foreign_key.is_a?(Hash)
- foreign_key_options = foreign_key
+ foreign_key_options = foreign_key.merge(conditional_options)
else
- foreign_key_options = { to_table: reference_name }
+ foreign_key_options = { to_table: reference_name, **conditional_options }
end
foreign_key_options[:column] ||= "#{ref_name}_id"
remove_foreign_key(table_name, **foreign_key_options)
end
- remove_column(table_name, "#{ref_name}_id")
- remove_column(table_name, "#{ref_name}_type") if polymorphic
+ remove_column(table_name, "#{ref_name}_id", **conditional_options)
+ remove_column(table_name, "#{ref_name}_type", **conditional_options) if polymorphic
end
alias :remove_belongs_to :remove_reference
@@ -1027,6 +1086,10 @@ def foreign_keys(table_name)
#
# ALTER TABLE "articles" ADD CONSTRAINT fk_rails_e74ce85cbc FOREIGN KEY ("author_id") REFERENCES "authors" ("id")
#
+ # ====== Creating a foreign key, ignoring method call if the foreign key exists
+ #
+ # add_foreign_key(:articles, :authors, if_not_exists: true)
+ #
# ====== Creating a foreign key on a specific column
#
# add_foreign_key :articles, :users, column: :author_id, primary_key: "lng_id"
@@ -1035,6 +1098,16 @@ def foreign_keys(table_name)
#
# ALTER TABLE "articles" ADD CONSTRAINT fk_rails_58ca3d3a82 FOREIGN KEY ("author_id") REFERENCES "users" ("lng_id")
#
+ # ====== Creating a composite foreign key
+ #
+ # Assuming "carts" table has "(shop_id, user_id)" as a primary key.
+ #
+ # add_foreign_key :orders, :carts, primary_key: [:shop_id, :user_id]
+ #
+ # generates:
+ #
+ # ALTER TABLE "orders" ADD CONSTRAINT fk_rails_6f5e4cb3a4 FOREIGN KEY ("cart_shop_id", "cart_user_id") REFERENCES "carts" ("shop_id", "user_id")
+ #
# ====== Creating a cascading foreign key
#
# add_foreign_key :articles, :authors, on_delete: :cascade
@@ -1045,19 +1118,28 @@ def foreign_keys(table_name)
#
# The +options+ hash can include the following keys:
# [<tt>:column</tt>]
- # The foreign key column name on +from_table+. Defaults to <tt>to_table.singularize + "_id"</tt>
+ # The foreign key column name on +from_table+. Defaults to <tt>to_table.singularize + "_id"</tt>.
+ # Pass an array to create a composite foreign key.
# [<tt>:primary_key</tt>]
# The primary key column name on +to_table+. Defaults to +id+.
+ # Pass an array to create a composite foreign key.
# [<tt>:name</tt>]
# The constraint name. Defaults to <tt>fk_rails_<identifier></tt>.
# [<tt>:on_delete</tt>]
- # Action that happens <tt>ON DELETE</tt>. Valid values are +:nullify+, +:cascade+ and +:restrict+
+ # Action that happens <tt>ON DELETE</tt>. Valid values are +:nullify+, +:cascade+, and +:restrict+
# [<tt>:on_update</tt>]
- # Action that happens <tt>ON UPDATE</tt>. Valid values are +:nullify+, +:cascade+ and +:restrict+
+ # Action that happens <tt>ON UPDATE</tt>. Valid values are +:nullify+, +:cascade+, and +:restrict+
+ # [<tt>:if_not_exists</tt>]
+ # Specifies that the foreign key should not be re-added if it already exists.
+ # This avoids duplicate column errors.
# [<tt>:validate</tt>]
# (PostgreSQL only) Specify whether or not the constraint should be validated. Defaults to +true+.
+ # [<tt>:deferrable</tt>]
+ # (PostgreSQL only) Specify whether or not the foreign key should be deferrable. Valid values are booleans or
+ # +:deferred+ or +:immediate+ to specify the default behavior. Defaults to +false+.
def add_foreign_key(from_table, to_table, **options)
- return unless supports_foreign_keys?
+ return unless use_foreign_keys?
+ return if options[:if_not_exists] == true && foreign_key_exists?(from_table, to_table, **options.slice(:column))
options = foreign_key_options(from_table, to_table, options)
at = create_alter_table from_table
@@ -1087,12 +1169,18 @@ def add_foreign_key(from_table, to_table, **options)
#
# remove_foreign_key :accounts, name: :special_fk_name
#
+ # Checks if the foreign key exists before trying to remove it. Will silently ignore foreign
+ # keys that don't exist.
+ #
+ # remove_foreign_key :accounts, :branches, if_exists: true
+ #
# The +options+ hash accepts the same keys as SchemaStatements#add_foreign_key
# with an addition of
# [<tt>:to_table</tt>]
# The name of the table that contains the referenced primary key.
def remove_foreign_key(from_table, to_table = nil, **options)
- return unless supports_foreign_keys?
+ return unless use_foreign_keys?
+ return if options.delete(:if_exists) == true && !foreign_key_exists?(from_table, to_table)
fk_name_to_delete = foreign_key_for!(from_table, to_table: to_table, **options).name
@@ -1117,15 +1205,33 @@ def foreign_key_exists?(from_table, to_table = nil, **options)
foreign_key_for(from_table, to_table: to_table, **options).present?
end
- def foreign_key_column_for(table_name) # :nodoc:
+ def foreign_key_column_for(table_name, column_name) # :nodoc:
name = strip_table_name_prefix_and_suffix(table_name)
- "#{name.singularize}_id"
+ "#{name.singularize}_#{column_name}"
end
def foreign_key_options(from_table, to_table, options) # :nodoc:
options = options.dup
- options[:column] ||= foreign_key_column_for(to_table)
+
+ if options[:primary_key].is_a?(Array)
+ options[:column] ||= options[:primary_key].map do |pk_column|
+ foreign_key_column_for(to_table, pk_column)
+ end
+ else
+ options[:column] ||= foreign_key_column_for(to_table, "id")
+ end
+
options[:name] ||= foreign_key_name(from_table, options)
+
+ if options[:column].is_a?(Array) || options[:primary_key].is_a?(Array)
+ if Array(options[:primary_key]).size != Array(options[:column]).size
+ raise ArgumentError, <<~MSG.squish
+ For composite primary keys, specify :column and :primary_key, where
+ :column must reference all the :primary_key columns from #{to_table.inspect}
+ MSG
+ end
+ end
+
options
end
@@ -1147,12 +1253,16 @@ def check_constraints(table_name)
# The +options+ hash can include the following keys:
# [<tt>:name</tt>]
# The constraint name. Defaults to <tt>chk_rails_<identifier></tt>.
+ # [<tt>:if_not_exists</tt>]
+ # Silently ignore if the constraint already exists, rather than raise an error.
# [<tt>:validate</tt>]
# (PostgreSQL only) Specify whether or not the constraint should be validated. Defaults to +true+.
- def add_check_constraint(table_name, expression, **options)
+ def add_check_constraint(table_name, expression, if_not_exists: false, **options)
return unless supports_check_constraints?
options = check_constraint_options(table_name, expression, options)
+ return if if_not_exists && check_constraint_exists?(table_name, **options)
+
at = create_alter_table(table_name)
at.add_check_constraint(expression, options)
@@ -1165,16 +1275,24 @@ def check_constraint_options(table_name, expression, options) # :nodoc:
options
end
- # Removes the given check constraint from the table.
+ # Removes the given check constraint from the table. Removing a check constraint
+ # that does not exist will raise an error.
#
# remove_check_constraint :products, name: "price_check"
#
+ # To silently ignore a non-existent check constraint rather than raise an error,
+ # use the +if_exists+ option.
+ #
+ # remove_check_constraint :products, name: "price_check", if_exists: true
+ #
# The +expression+ parameter will be ignored if present. It can be helpful
# to provide this in a migration's +change+ method so it can be reverted.
# In that case, +expression+ will be used by #add_check_constraint.
- def remove_check_constraint(table_name, expression = nil, **options)
+ def remove_check_constraint(table_name, expression = nil, if_exists: false, **options)
return unless supports_check_constraints?
+ return if if_exists && !check_constraint_exists?(table_name, **options)
+
chk_name_to_delete = check_constraint_for!(table_name, expression: expression, **options).name
at = create_alter_table(table_name)
@@ -1183,8 +1301,20 @@ def remove_check_constraint(table_name, expression = nil, **options)
execute schema_creation.accept(at)
end
+
+ # Checks to see if a check constraint exists on a table for a given check constraint definition.
+ #
+ # check_constraint_exists?(:products, name: "price_check")
+ #
+ def check_constraint_exists?(table_name, **options)
+ if !options.key?(:name) && !options.key?(:expression)
+ raise ArgumentError, "At least one of :name or :expression must be supplied"
+ end
+ check_constraint_for(table_name, **options).present?
+ end
+
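The lookup behind the new `check_constraint_exists?` can be sketched standalone: a constraint matches either an explicit `:name`, or the default `chk_rails_` name derived from the table and expression via a truncated SHA256 digest, mirroring `check_constraint_name` further down in this file. The `Constraint` struct and the in-memory constraints list are stand-ins for the adapter's real schema introspection.

```ruby
require "openssl"

# Stand-in for the adapter's check constraint metadata.
Constraint = Struct.new(:name)

def check_constraint_name(table_name, **options)
  options.fetch(:name) do
    # Default name: "chk_rails_" + first 10 hex chars of a SHA256 digest.
    identifier = "#{table_name}_#{options.fetch(:expression)}_chk"
    "chk_rails_#{OpenSSL::Digest::SHA256.hexdigest(identifier)[0, 10]}"
  end
end

def check_constraint_exists?(constraints, table_name, **options)
  if !options.key?(:name) && !options.key?(:expression)
    raise ArgumentError, "At least one of :name or :expression must be supplied"
  end
  wanted = check_constraint_name(table_name, **options)
  constraints.any? { |chk| chk.name == wanted }
end

constraints = [Constraint.new("price_check")]
check_constraint_exists?(constraints, :products, name: "price_check") # => true
```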
def dump_schema_information # :nodoc:
- versions = schema_migration.all_versions
+ versions = schema_migration.versions
insert_versions_sql(versions) if versions.any?
end
@@ -1256,20 +1386,39 @@ def columns_for_distinct(columns, orders) # :nodoc:
columns
end
+ def distinct_relation_for_primary_key(relation) # :nodoc:
+ primary_key_columns = Array(relation.primary_key).map do |column|
+ visitor.compile(relation.table[column])
+ end
+
+ values = columns_for_distinct(
+ primary_key_columns,
+ relation.order_values
+ )
+
+ limited = relation.reselect(values).distinct!
+ limited_ids = select_rows(limited.arel, "SQL").map do |results|
+ results.last(Array(relation.primary_key).length) # ignores order values for MySQL and PostgreSQL
+ end
+
+ if limited_ids.empty?
+ relation.none!
+ else
+ relation.where!(**Array(relation.primary_key).zip(limited_ids.transpose).to_h)
+ end
+
+ relation.limit_value = relation.offset_value = nil
+ relation
+ end
+
# Adds timestamps (+created_at+ and +updated_at+) columns to +table_name+.
# Additional options (like +:null+) are forwarded to #add_column.
#
# add_timestamps(:suppliers, null: true)
#
def add_timestamps(table_name, **options)
- options[:null] = false if options[:null].nil?
-
- if !options.key?(:precision) && supports_datetime_with_precision?
- options[:precision] = 6
- end
-
- add_column table_name, :created_at, :datetime, **options
- add_column table_name, :updated_at, :datetime, **options
+ fragments = add_timestamps_for_alter(table_name, **options)
+ execute "ALTER TABLE #{quote_table_name(table_name)} #{fragments.join(', ')}"
end
# Removes the timestamp columns (+created_at+ and +updated_at+) from the table definition.
@@ -1277,16 +1426,15 @@ def add_timestamps(table_name, **options)
# remove_timestamps(:suppliers)
#
def remove_timestamps(table_name, **options)
- remove_column table_name, :updated_at
- remove_column table_name, :created_at
+ remove_columns table_name, :updated_at, :created_at
end
- def update_table_definition(table_name, base) #:nodoc:
+ def update_table_definition(table_name, base) # :nodoc:
Table.new(table_name, base)
end
def add_index_options(table_name, column_name, name: nil, if_not_exists: false, internal: false, **options) # :nodoc:
- options.assert_valid_keys(:unique, :length, :order, :opclass, :where, :type, :using, :comment, :algorithm)
+ options.assert_valid_keys(:unique, :length, :order, :opclass, :where, :type, :using, :comment, :algorithm, :include, :nulls_not_distinct)
column_names = index_column_names(column_name)
@@ -1305,6 +1453,8 @@ def add_index_options(table_name, column_name, name: nil, if_not_exists: false,
where: options[:where],
type: options[:type],
using: options[:using],
+ include: options[:include],
+ nulls_not_distinct: options[:nulls_not_distinct],
comment: options[:comment]
)
@@ -1352,7 +1502,79 @@ def create_schema_dumper(options) # :nodoc:
SchemaDumper.create(self, options)
end
+ def use_foreign_keys?
+ supports_foreign_keys? && foreign_keys_enabled?
+ end
+
+ # Returns an instance of SchemaCreation, which can be used to visit a schema definition
+ # object and return DDL.
+ def schema_creation # :nodoc:
+ SchemaCreation.new(self)
+ end
+
+ def bulk_change_table(table_name, operations) # :nodoc:
+ sql_fragments = []
+ non_combinable_operations = []
+
+ operations.each do |command, args|
+ table, arguments = args.shift, args
+ method = :"#{command}_for_alter"
+
+ if respond_to?(method, true)
+ sqls, procs = Array(send(method, table, *arguments)).partition { |v| v.is_a?(String) }
+ sql_fragments.concat(sqls)
+ non_combinable_operations.concat(procs)
+ else
+ execute "ALTER TABLE #{quote_table_name(table_name)} #{sql_fragments.join(", ")}" unless sql_fragments.empty?
+ non_combinable_operations.each(&:call)
+ sql_fragments = []
+ non_combinable_operations = []
+ send(command, table, *arguments)
+ end
+ end
+
+ execute "ALTER TABLE #{quote_table_name(table_name)} #{sql_fragments.join(", ")}" unless sql_fragments.empty?
+ non_combinable_operations.each(&:call)
+ end
+
+ def valid_table_definition_options # :nodoc:
+ [:temporary, :if_not_exists, :options, :as, :comment, :charset, :collation]
+ end
+
+ def valid_column_definition_options # :nodoc:
+ ColumnDefinition::OPTION_NAMES
+ end
+
+ def valid_primary_key_options # :nodoc:
+ [:limit, :default, :precision]
+ end
+
+ # Returns the maximum length of an index name in bytes.
+ def max_index_name_size
+ 62
+ end
+
private
+ def generate_index_name(table_name, column)
+ name = "index_#{table_name}_on_#{Array(column) * '_and_'}"
+ return name if name.bytesize <= max_index_name_size
+
+ # Fallback to short version, add hash to ensure uniqueness
+ hashed_identifier = "_" + OpenSSL::Digest::SHA256.hexdigest(name).first(10)
+ name = "idx_on_#{Array(column) * '_'}"
+
+ short_limit = max_index_name_size - hashed_identifier.bytesize
+ short_name = name.mb_chars.limit(short_limit).to_s
+
+ "#{short_name}#{hashed_identifier}"
+ end
+
+ def validate_change_column_null_argument!(value)
+ unless value == true || value == false
+ raise ArgumentError, "change_column_null expects a boolean value (true for NULL, false for NOT NULL). Got: #{value.inspect}"
+ end
+ end
+
def column_options_keys
[:limit, :precision, :scale, :default, :null, :collation, :comment]
end
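The long-name fallback in `generate_index_name` above can be exercised without ActiveSupport. One simplification to note: the original truncates with `String#mb_chars.limit` to stay multibyte-safe, while this sketch uses plain byte slicing.

```ruby
require "openssl"

MAX_INDEX_NAME_SIZE = 62

def generate_index_name(table_name, column)
  name = "index_#{table_name}_on_#{Array(column) * '_and_'}"
  return name if name.bytesize <= MAX_INDEX_NAME_SIZE

  # Too long: fall back to a short "idx_on_" prefix plus a 10-character
  # SHA256 fragment so the shortened name stays unique per original name.
  hashed_identifier = "_" + OpenSSL::Digest::SHA256.hexdigest(name)[0, 10]
  short_name = "idx_on_#{Array(column) * '_'}"[0, MAX_INDEX_NAME_SIZE - hashed_identifier.bytesize]
  "#{short_name}#{hashed_identifier}"
end

generate_index_name(:users, :email) # => "index_users_on_email"
```

With many columns the descriptive name overflows and the result is capped at exactly 62 bytes while remaining deterministic for schema dumps.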
@@ -1387,7 +1609,7 @@ def index_name_for_remove(table_name, column_name, options)
checks = []
- if !options.key?(:name) && column_name.is_a?(String) && /\W/.match?(column_name)
+ if !options.key?(:name) && expression_column_name?(column_name)
options[:name] = index_name(table_name, column_name)
column_names = []
else
@@ -1396,7 +1618,7 @@ def index_name_for_remove(table_name, column_name, options)
checks << lambda { |i| i.name == options[:name].to_s } if options.key?(:name)
- if column_names.present?
+ if column_names.present? && !(options.key?(:name) && expression_column_name?(column_names))
checks << lambda { |i| index_name(table_name, i.columns) == index_name(table_name, column_names) }
end
@@ -1406,7 +1628,7 @@ def index_name_for_remove(table_name, column_name, options)
if matching_indexes.count > 1
raise ArgumentError, "Multiple indexes found on #{table_name} columns #{column_names}. " \
- "Specify an index name from #{matching_indexes.map(&:name).join(', ')}"
+ "Specify an index name from #{matching_indexes.map(&:name).join(', ')}"
elsif matching_indexes.none?
raise ArgumentError, "No indexes found on #{table_name} with the options provided."
else
@@ -1436,10 +1658,6 @@ def rename_column_indexes(table_name, column_name, new_column_name)
end
end
- def schema_creation
- SchemaCreation.new(self)
- end
-
def create_table_definition(name, **options)
TableDefinition.new(self, name, **options)
end
@@ -1448,8 +1666,12 @@ def create_alter_table(name)
AlterTable.new create_table_definition(name)
end
- def extract_table_options!(options)
- options.extract!(:temporary, :if_not_exists, :options, :as, :comment, :charset, :collation)
+ def validate_create_table_options!(options)
+ unless options[:_skip_validate_options]
+ options
+ .except(:_uses_legacy_table_name, :_skip_validate_options)
+ .assert_valid_keys(valid_table_definition_options, valid_primary_key_options)
+ end
end
def fetch_type_metadata(sql_type)
@@ -1464,7 +1686,7 @@ def fetch_type_metadata(sql_type)
end
def index_column_names(column_names)
- if column_names.is_a?(String) && /\W/.match?(column_names)
+ if expression_column_name?(column_names)
column_names
else
Array(column_names)
@@ -1472,13 +1694,18 @@ def index_column_names(column_names)
end
def index_name_options(column_names)
- if column_names.is_a?(String) && /\W/.match?(column_names)
+ if expression_column_name?(column_names)
column_names = column_names.scan(/\w+/).join("_")
end
{ column: column_names }
end
+ # Try to identify whether the given column name is an expression
+ def expression_column_name?(column_name)
+ column_name.is_a?(String) && /\W/.match?(column_name)
+ end
+
def strip_table_name_prefix_and_suffix(table_name)
prefix = Base.table_name_prefix
suffix = Base.table_name_suffix
@@ -1487,15 +1714,16 @@ def strip_table_name_prefix_and_suffix(table_name)
def foreign_key_name(table_name, options)
options.fetch(:name) do
- identifier = "#{table_name}_#{options.fetch(:column)}_fk"
- hashed_identifier = Digest::SHA256.hexdigest(identifier).first(10)
+ columns = Array(options.fetch(:column)).map(&:to_s)
+ identifier = "#{table_name}_#{columns * '_and_'}_fk"
+ hashed_identifier = OpenSSL::Digest::SHA256.hexdigest(identifier).first(10)
"fk_rails_#{hashed_identifier}"
end
end
def foreign_key_for(from_table, **options)
- return unless supports_foreign_keys?
+ return unless use_foreign_keys?
foreign_keys(from_table).detect { |fk| fk.defined_for?(**options) }
end
@@ -1512,11 +1740,15 @@ def extract_foreign_key_action(specifier)
end
end
+ def foreign_keys_enabled?
+ @config.fetch(:foreign_keys, true)
+ end
+
def check_constraint_name(table_name, **options)
options.fetch(:name) do
expression = options.fetch(:expression)
identifier = "#{table_name}_#{expression}_chk"
- hashed_identifier = Digest::SHA256.hexdigest(identifier).first(10)
+ hashed_identifier = OpenSSL::Digest::SHA256.hexdigest(identifier).first(10)
"chk_rails_#{hashed_identifier}"
end
@@ -1525,7 +1757,7 @@ def check_constraint_name(table_name, **options)
def check_constraint_for(table_name, **options)
return unless supports_check_constraints?
chk_name = check_constraint_name(table_name, **options)
- check_constraints(table_name).detect { |chk| chk.name == chk_name }
+ check_constraints(table_name).detect { |chk| chk.defined_for?(name: chk_name, **options) }
end
def check_constraint_for!(table_name, expression: nil, **options)
@@ -1539,6 +1771,12 @@ def validate_index_length!(table_name, new_name, internal = false)
end
end
+ def validate_table_length!(table_name)
+ if table_name.length > table_name_length
+ raise ArgumentError, "Table name '#{table_name}' is too long; the limit is #{table_name_length} characters"
+ end
+ end
+
def extract_new_default_value(default_or_changes)
if default_or_changes.is_a?(Hash) && default_or_changes.has_key?(:from) && default_or_changes.has_key?(:to)
default_or_changes[:to]
@@ -1552,29 +1790,8 @@ def can_remove_index_by_name?(column_name, options)
column_name.nil? && options.key?(:name) && options.except(:name, :algorithm).empty?
end
- def bulk_change_table(table_name, operations)
- sql_fragments = []
- non_combinable_operations = []
-
- operations.each do |command, args|
- table, arguments = args.shift, args
- method = :"#{command}_for_alter"
-
- if respond_to?(method, true)
- sqls, procs = Array(send(method, table, *arguments)).partition { |v| v.is_a?(String) }
- sql_fragments << sqls
- non_combinable_operations.concat(procs)
- else
- execute "ALTER TABLE #{quote_table_name(table_name)} #{sql_fragments.join(", ")}" unless sql_fragments.empty?
- non_combinable_operations.each(&:call)
- sql_fragments = []
- non_combinable_operations = []
- send(command, table, *arguments)
- end
- end
-
- execute "ALTER TABLE #{quote_table_name(table_name)} #{sql_fragments.join(", ")}" unless sql_fragments.empty?
- non_combinable_operations.each(&:call)
+ def reference_name_for_table(table_name)
+ table_name.to_s.singularize
end
def add_column_for_alter(table_name, column_name, type, **options)
@@ -1583,6 +1800,11 @@ def add_column_for_alter(table_name, column_name, type, **options)
schema_creation.accept(AddColumnDefinition.new(cd))
end
+ def change_column_default_for_alter(table_name, column_name, default_or_changes)
+ cd = build_change_column_default_definition(table_name, column_name, default_or_changes)
+ schema_creation.accept(cd)
+ end
+
def rename_column_sql(table_name, column_name, new_column_name)
"RENAME COLUMN #{quote_column_name(column_name)} TO #{quote_column_name(new_column_name)}"
end
@@ -1617,8 +1839,8 @@ def insert_versions_sql(versions)
if versions.is_a?(Array)
sql = +"INSERT INTO #{sm_table} (version) VALUES\n"
- sql << versions.map { |v| "(#{quote(v)})" }.join(",\n")
- sql << ";\n\n"
+ sql << versions.reverse.map { |v| "(#{quote(v)})" }.join(",\n")
+ sql << ";"
sql
else
"INSERT INTO #{sm_table} (version) VALUES (#{quote(versions)});"
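The `insert_versions_sql` change in the hunk above (versions emitted newest-first, trailing `";"` instead of `";\n\n"`) can be sketched standalone. The single-quote quoting here is a naive stand-in for the adapter's real `quote`.

```ruby
# Sketch of the 7.1 insert_versions_sql behavior (not the Rails method
# itself; quoting is naively simplified to single quotes).
def insert_versions_sql(versions, sm_table = "schema_migrations")
  if versions.is_a?(Array)
    sql = +"INSERT INTO #{sm_table} (version) VALUES\n"
    sql << versions.reverse.map { |v| "('#{v}')" }.join(",\n")
    sql << ";"
  else
    "INSERT INTO #{sm_table} (version) VALUES ('#{versions}');"
  end
end

puts insert_versions_sql(["20240101000000", "20240102000000"])
```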
diff --git a/activerecord/lib/active_record/connection_adapters/abstract/transaction.rb b/activerecord/lib/active_record/connection_adapters/abstract/transaction.rb
index d26e869d08..2e6122fb66 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract/transaction.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract/transaction.rb
@@ -2,6 +2,7 @@
module ActiveRecord
module ConnectionAdapters
+ # = Active Record Connection Adapters Transaction State
class TransactionState
def initialize(state = nil)
@state = state
@@ -73,19 +74,55 @@ def nullify!
end
end
- class NullTransaction #:nodoc:
+ class TransactionInstrumenter
+ def initialize(payload = {})
+ @handle = nil
+ @started = false
+ @payload = nil
+ @base_payload = payload
+ end
+
+ class InstrumentationNotStartedError < ActiveRecordError; end
+ class InstrumentationAlreadyStartedError < ActiveRecordError; end
+
+ def start
+ raise InstrumentationAlreadyStartedError.new("Called start on an already started transaction") if @started
+ @started = true
+
+ @payload = @base_payload.dup
+ @handle = ActiveSupport::Notifications.instrumenter.build_handle("transaction.active_record", @payload)
+ @handle.start
+ end
+
+ def finish(outcome)
+ raise InstrumentationNotStartedError.new("Called finish on a transaction that hasn't started") unless @started
+ @started = false
+
+ @payload[:outcome] = outcome
+ @handle.finish
+ end
+ end
+
+ class NullTransaction # :nodoc:
def initialize; end
def state; end
def closed?; true; end
def open?; false; end
def joinable?; false; end
def add_record(record, _ = true); end
+ def restartable?; false; end
+ def dirty?; false; end
+ def dirty!; end
+ def invalidated?; false; end
+ def invalidate!; end
end
- class Transaction #:nodoc:
+ class Transaction # :nodoc:
attr_reader :connection, :state, :savepoint_name, :isolation_level
attr_accessor :written
+ delegate :invalidate!, :invalidated?, to: :@state
+
def initialize(connection, isolation: nil, joinable: true, run_commit_callbacks: false)
@connection = connection
@state = TransactionState.new
@@ -95,6 +132,16 @@ def initialize(connection, isolation: nil, joinable: true, run_commit_callbacks:
@joinable = joinable
@run_commit_callbacks = run_commit_callbacks
@lazy_enrollment_records = nil
+ @dirty = false
+ @instrumenter = TransactionInstrumenter.new(connection: connection)
+ end
+
+ def dirty!
+ @dirty = true
+ end
+
+ def dirty?
+ @dirty
end
def add_record(record, ensure_finalize = true)
@@ -115,22 +162,41 @@ def records
@records
end
+ # Can this transaction's current state be recreated by
+ # rollback+begin ?
+ def restartable?
+ joinable? && !dirty?
+ end
+
+ def incomplete!
+ @instrumenter.finish(:incomplete) if materialized?
+ end
+
def materialize!
@materialized = true
+ @instrumenter.start
end
def materialized?
@materialized
end
+ def restore!
+ if materialized?
+ incomplete!
+ @materialized = false
+ materialize!
+ end
+ end
+
def rollback_records
return unless records
- ite = records.uniq(&:__id__)
- already_run_callbacks = {}
- while record = ite.shift
- trigger_callbacks = record.trigger_transactional_callbacks?
- should_run_callbacks = !already_run_callbacks[record] && trigger_callbacks
- already_run_callbacks[record] ||= trigger_callbacks
+
+ ite = unique_records
+
+ instances_to_run_callbacks_on = prepare_instances_to_run_callbacks_on(ite)
+
+ run_action_on_records(ite, instances_to_run_callbacks_on) do |record, should_run_callbacks|
record.rolledback!(force_restore_state: full_rollback?, should_run_callbacks: should_run_callbacks)
end
ensure
@@ -140,20 +206,38 @@ def rollback_records
end
def before_commit_records
- records.uniq.each(&:before_committed!) if records && @run_commit_callbacks
+ return unless records
+
+ if @run_commit_callbacks
+ if ActiveRecord.before_committed_on_all_records
+ ite = unique_records
+
+ instances_to_run_callbacks_on = records.each_with_object({}) do |record, candidates|
+ candidates[record] = record
+ end
+
+ run_action_on_records(ite, instances_to_run_callbacks_on) do |record, should_run_callbacks|
+ record.before_committed! if should_run_callbacks
+ end
+ else
+ records.uniq.each(&:before_committed!)
+ end
+ end
end
def commit_records
return unless records
- ite = records.uniq(&:__id__)
- already_run_callbacks = {}
- while record = ite.shift
- if @run_commit_callbacks
- trigger_callbacks = record.trigger_transactional_callbacks?
- should_run_callbacks = !already_run_callbacks[record] && trigger_callbacks
- already_run_callbacks[record] ||= trigger_callbacks
+
+ ite = unique_records
+
+ if @run_commit_callbacks
+ instances_to_run_callbacks_on = prepare_instances_to_run_callbacks_on(ite)
+
+ run_action_on_records(ite, instances_to_run_callbacks_on) do |record, should_run_callbacks|
record.committed!(should_run_callbacks: should_run_callbacks)
- else
+ end
+ else
+ while record = ite.shift
# if not running callbacks, only adds the record to the parent transaction
connection.add_transaction_record(record)
end
@@ -166,8 +250,76 @@ def full_rollback?; true; end
def joinable?; @joinable; end
def closed?; false; end
def open?; !closed?; end
+
+ private
+ def unique_records
+ records.uniq(&:__id__)
+ end
+
+ def run_action_on_records(records, instances_to_run_callbacks_on)
+ while record = records.shift
+ should_run_callbacks = record.__id__ == instances_to_run_callbacks_on[record].__id__
+
+ yield record, should_run_callbacks
+ end
+ end
+
+ def prepare_instances_to_run_callbacks_on(records)
+ records.each_with_object({}) do |record, candidates|
+ next unless record.trigger_transactional_callbacks?
+
+ earlier_saved_candidate = candidates[record]
+
+ next if earlier_saved_candidate && record.class.run_commit_callbacks_on_first_saved_instances_in_transaction
+
+ # If the candidate instance destroyed itself in the database, then
+ # instances which were added to the transaction afterwards, and which
+ # think they updated themselves, are wrong. They should not replace
+ # our candidate as an instance to run callbacks on
+ next if earlier_saved_candidate&.destroyed? && !record.destroyed?
+
+ # If the candidate instance was created inside of this transaction,
+ # then instances which were subsequently loaded from the database
+ # and updated need that state transferred to them so that
+ # the after_create_commit callbacks are run
+ record._new_record_before_last_commit = true if earlier_saved_candidate&._new_record_before_last_commit
+
+ # The last instance to save itself is likeliest to have internal
+ # state that matches what's committed to the database
+ candidates[record] = record
+ end
+ end
+ end
+
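The candidate-selection rule in `prepare_instances_to_run_callbacks_on` above can be reduced to a plain-Ruby sketch: among several in-memory instances of the same row, the last one to save wins, except that an instance which destroyed the row is never displaced by one that merely thinks it updated it. The sketch omits the `run_commit_callbacks_on_first_saved_instances_in_transaction` setting and the `_new_record_before_last_commit` transfer for brevity, and uses a `Struct` that, like Active Record models, hashes by id so instances of the same row collide as Hash keys.

```ruby
# Stand-in for an AR record: instances of the same row compare equal.
Record = Struct.new(:id, :destroyed) do
  def destroyed?; destroyed; end
  def hash; id.hash; end
  def eql?(other); id == other.id; end
end

def instances_to_run_callbacks_on(records)
  records.each_with_object({}) do |record, candidates|
    earlier = candidates[record]
    # A destroy wins over a later in-memory "update" of the same row.
    next if earlier&.destroyed? && !record.destroyed?
    # Otherwise the last instance to save itself becomes the candidate.
    candidates[record] = record
  end
end

destroyed = Record.new(1, true)
updated   = Record.new(1, false)
instances_to_run_callbacks_on([destroyed, updated])[updated].destroyed? # => true
```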
+ # = Active Record Restart Parent \Transaction
+ class RestartParentTransaction < Transaction
+ def initialize(connection, parent_transaction, **options)
+ super(connection, **options)
+
+ @parent = parent_transaction
+
+ if isolation_level
+ raise ActiveRecord::TransactionIsolationError, "cannot set transaction isolation in a nested transaction"
+ end
+
+ @parent.state.add_child(@state)
+ end
+
+ delegate :materialize!, :materialized?, :restart, to: :@parent
+
+ def rollback
+ @state.rollback!
+ @parent.restart
+ end
+
+ def commit
+ @state.commit!
+ end
+
+ def full_rollback?; false; end
end
+ # = Active Record Savepoint \Transaction
class SavepointTransaction < Transaction
def initialize(connection, savepoint_name, parent_transaction, **options)
super(connection, **options)
@@ -186,19 +338,33 @@ def materialize!
super
end
+ def restart
+ return unless materialized?
+
+ @instrumenter.finish(:restart)
+ @instrumenter.start
+
+ connection.rollback_to_savepoint(savepoint_name)
+ end
+
def rollback
- connection.rollback_to_savepoint(savepoint_name) if materialized?
+ unless @state.invalidated?
+ connection.rollback_to_savepoint(savepoint_name) if materialized?
+ end
@state.rollback!
+ @instrumenter.finish(:rollback) if materialized?
end
def commit
connection.release_savepoint(savepoint_name) if materialized?
@state.commit!
+ @instrumenter.finish(:commit) if materialized?
end
def full_rollback?; false; end
end
+ # = Active Record Real \Transaction
class RealTransaction < Transaction
def materialize!
if isolation_level
@@ -210,18 +376,34 @@ def materialize!
super
end
+ def restart
+ return unless materialized?
+
+ @instrumenter.finish(:restart)
+
+ if connection.supports_restart_db_transaction?
+ @instrumenter.start
+ connection.restart_db_transaction
+ else
+ connection.rollback_db_transaction
+ materialize!
+ end
+ end
+
def rollback
connection.rollback_db_transaction if materialized?
@state.full_rollback!
+ @instrumenter.finish(:rollback) if materialized?
end
def commit
connection.commit_db_transaction if materialized?
@state.full_commit!
+ @instrumenter.finish(:commit) if materialized?
end
end
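The `RealTransaction#restart` fast path above has two branches: adapters that support restarting keep the open transaction, while others roll back and re-materialize from scratch. A minimal sketch with a fake connection object (the `Connection` struct and its `log` are assumptions for illustration, not adapter API):

```ruby
# Fake connection recording which DB calls a restart triggers.
Connection = Struct.new(:supports_restart, :log) do
  def supports_restart_db_transaction?; supports_restart; end
  def restart_db_transaction;  log << :restart;  end
  def rollback_db_transaction; log << :rollback; end
  def begin_db_transaction;    log << :begin;    end
end

def restart(connection)
  if connection.supports_restart_db_transaction?
    connection.restart_db_transaction
  else
    connection.rollback_db_transaction
    connection.begin_db_transaction # stands in for materialize! re-opening it
  end
end

with_support    = Connection.new(true,  [])
without_support = Connection.new(false, [])
restart(with_support);    with_support.log    # => [:restart]
restart(without_support); without_support.log # => [:rollback, :begin]
```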
- class TransactionManager #:nodoc:
+ class TransactionManager # :nodoc:
def initialize(connection)
@stack = []
@connection = connection
@@ -241,21 +423,31 @@ def begin_transaction(isolation: nil, joinable: true, _lazy: true)
joinable: joinable,
run_commit_callbacks: run_commit_callbacks
)
+ elsif current_transaction.restartable?
+ RestartParentTransaction.new(
+ @connection,
+ current_transaction,
+ isolation: isolation,
+ joinable: joinable,
+ run_commit_callbacks: run_commit_callbacks
+ )
else
SavepointTransaction.new(
@connection,
"active_record_#{@stack.size}",
- @stack.last,
+ current_transaction,
isolation: isolation,
joinable: joinable,
run_commit_callbacks: run_commit_callbacks
)
end
- if @connection.supports_lazy_transactions? && lazy_transactions_enabled? && _lazy
- @has_unmaterialized_transactions = true
- else
- transaction.materialize!
+ unless transaction.materialized?
+ if @connection.supports_lazy_transactions? && lazy_transactions_enabled? && _lazy
+ @has_unmaterialized_transactions = true
+ else
+ transaction.materialize!
+ end
end
@stack.push(transaction)
transaction
@@ -275,18 +467,35 @@ def lazy_transactions_enabled?
@lazy_transactions_enabled
end
+ def dirty_current_transaction
+ current_transaction.dirty!
+ end
+
+ def restore_transactions
+ return false unless restorable?
+
+ @stack.each(&:restore!)
+
+ true
+ end
+
+ def restorable?
+ @stack.none?(&:dirty?)
+ end
+
def materialize_transactions
return if @materializing_transactions
- return unless @has_unmaterialized_transactions
- @connection.lock.synchronize do
- begin
- @materializing_transactions = true
- @stack.each { |t| t.materialize! unless t.materialized? }
- ensure
- @materializing_transactions = false
+ if @has_unmaterialized_transactions
+ @connection.lock.synchronize do
+ begin
+ @materializing_transactions = true
+ @stack.each { |t| t.materialize! unless t.materialized? }
+ ensure
+ @materializing_transactions = false
+ end
+ @has_unmaterialized_transactions = false
end
- @has_unmaterialized_transactions = false
end
end
@@ -300,6 +509,8 @@ def commit_transaction
@stack.pop
end
+ dirty_current_transaction if transaction.dirty?
+
transaction.commit
transaction.commit_records
end
@@ -307,8 +518,12 @@ def commit_transaction
def rollback_transaction(transaction = nil)
@connection.lock.synchronize do
- transaction ||= @stack.pop
- transaction.rollback unless transaction.state.invalidated?
+ transaction ||= @stack.last
+ begin
+ transaction.rollback
+ ensure
+ @stack.pop if @stack.last == transaction
+ end
transaction.rollback_records
end
end
@@ -316,39 +531,41 @@ def rollback_transaction(transaction = nil)
def within_new_transaction(isolation: nil, joinable: true)
@connection.lock.synchronize do
transaction = begin_transaction(isolation: isolation, joinable: joinable)
- ret = yield
- completed = true
- ret
- rescue Exception => error
- if transaction
- transaction.state.invalidate! if error.is_a? ActiveRecord::TransactionRollbackError
+ begin
+ ret = yield
+ completed = true
+ ret
+ rescue Exception => error
rollback_transaction
after_failure_actions(transaction, error)
- end
- raise
- ensure
- if transaction
- if error
- # @connection still holds an open or invalid transaction, so we must not
- # put it back in the pool for reuse.
- @connection.throw_away! unless transaction.state.rolledback?
- else
+ raise
+ ensure
+ unless error
+ # In 7.1 we enforce timeout >= 0.4.0 which no longer use throw, so we can
+ # go back to the original behavior of committing on non-local return.
+ # If users are using throw, we assume it's not an error case.
+ completed = true if ActiveRecord.commit_transaction_on_non_local_return
+
if Thread.current.status == "aborting"
rollback_transaction
+ elsif !completed && transaction.written
+ ActiveRecord.deprecator.warn(<<~EOW)
+ A transaction is being rolled back because the transaction block was
+ exited using `return`, `break` or `throw`.
+ In Rails 7.2 this transaction will be committed instead.
+ To opt-in to the new behavior now and suppress this warning
+ you can set:
+
+ Rails.application.config.active_record.commit_transaction_on_non_local_return = true
+ EOW
+ rollback_transaction
else
- if !completed && transaction.written
- ActiveSupport::Deprecation.warn(<<~EOW)
- Using `return`, `break` or `throw` to exit a transaction block is
- deprecated without replacement. If the `throw` came from
- `Timeout.timeout(duration)`, pass an exception class as a second
- argument so it doesn't use `throw` to abort its block. This results
- in the transaction being committed, but in the next release of Rails
- it will rollback.
- EOW
- end
begin
commit_transaction
+ rescue ActiveRecord::ConnectionFailed
+ transaction.invalidate! unless transaction.state.completed?
+ raise
rescue Exception
rollback_transaction(transaction) unless transaction.state.completed?
raise
@@ -356,6 +573,11 @@ def within_new_transaction(isolation: nil, joinable: true)
end
end
end
+ ensure
+ unless transaction&.state&.completed?
+ @connection.throw_away!
+ transaction&.incomplete!
+ end
end
end
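The `within_new_transaction` changes above hinge on a `completed` flag: if the block exits via `return`, `break`, or `throw`, `completed` is never set, and 7.1 rolls the transaction back (with a deprecation warning) unless `commit_transaction_on_non_local_return` is enabled. A stripped-down sketch of just that control flow, with a toy transaction object and the config flag reduced to a constant:

```ruby
COMMIT_ON_NON_LOCAL_RETURN = false # 7.1 default; 7.2 flips this to true

class ToyTransaction
  attr_reader :state

  def initialize
    @state = :open
  end

  def commit
    @state = :committed
  end

  def rollback
    @state = :rolled_back
  end
end

def within_new_transaction(transaction)
  completed = false
  ret = yield
  completed = true
  ret
rescue Exception
  transaction.rollback
  raise
ensure
  if transaction.state == :open # not already resolved by the rescue clause
    if completed || COMMIT_ON_NON_LOCAL_RETURN
      transaction.commit
    else
      # Block exited via return/break/throw: rolled back in 7.1.
      transaction.rollback
    end
  end
end
```

Note that `throw` unwinds through `ensure` without hitting `rescue Exception`, which is exactly why the `completed` flag is needed to tell a non-local exit apart from a normal return.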
diff --git a/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb b/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb
index 9a02270880..d2392eeb36 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb
@@ -4,6 +4,7 @@
require "active_record/connection_adapters/sql_type_metadata"
require "active_record/connection_adapters/abstract/schema_dumper"
require "active_record/connection_adapters/abstract/schema_creation"
+require "active_support/concurrency/null_lock"
require "active_support/concurrency/load_interlock_aware_monitor"
require "arel/collectors/bind"
require "arel/collectors/composite"
@@ -12,6 +13,8 @@
module ActiveRecord
module ConnectionAdapters # :nodoc:
+ # = Active Record Abstract Adapter
+ #
# Active Record supports multiple database systems. AbstractAdapter and
# related classes form the abstraction layer which makes this possible.
# An AbstractAdapter represents a connection to a database, and provides an
@@ -36,12 +39,20 @@ class AbstractAdapter
include Savepoints
SIMPLE_INT = /\A\d+\z/
- COMMENT_REGEX = %r{(?:\-\-.*\n)*|/\*(?:[^\*]|\*[^/])*\*/}m
+ COMMENT_REGEX = %r{(?:--.*\n)|/\*(?:[^*]|\*[^/])*\*/}
- attr_accessor :pool
+ attr_reader :pool
attr_reader :visitor, :owner, :logger, :lock
alias :in_use? :owner
+ def pool=(value)
+ return if value.eql?(@pool)
+ @schema_cache = nil
+ @pool = value
+
+ @pool.schema_reflection.load!(self) if ActiveRecord.lazily_load_schema_cache
+ end
+
set_callback :checkin, :after, :enable_lazy_transactions!
def self.type_cast_config_to_integer(config)
@@ -62,44 +73,142 @@ def self.type_cast_config_to_boolean(config)
end
end
+ def self.validate_default_timezone(config)
+ case config
+ when nil
+ when "utc", "local"
+ config.to_sym
+ else
+ raise ArgumentError, "default_timezone must be either 'utc' or 'local'"
+ end
+ end
+
DEFAULT_READ_QUERY = [:begin, :commit, :explain, :release, :rollback, :savepoint, :select, :with] # :nodoc:
private_constant :DEFAULT_READ_QUERY
def self.build_read_query_regexp(*parts) # :nodoc:
parts += DEFAULT_READ_QUERY
parts = parts.map { |part| /#{part}/i }
- /\A(?:[\(\s]|#{COMMENT_REGEX})*#{Regexp.union(*parts)}/
+ /\A(?:[(\s]|#{COMMENT_REGEX})*#{Regexp.union(*parts)}/
end
- def self.quoted_column_names # :nodoc:
- @quoted_column_names ||= {}
+ def self.find_cmd_and_exec(commands, *args) # :doc:
+ commands = Array(commands)
+
+ dirs_on_path = ENV["PATH"].to_s.split(File::PATH_SEPARATOR)
+ unless (ext = RbConfig::CONFIG["EXEEXT"]).empty?
+ commands = commands.map { |cmd| "#{cmd}#{ext}" }
+ end
+
+ full_path_command = nil
+ found = commands.detect do |cmd|
+ dirs_on_path.detect do |path|
+ full_path_command = File.join(path, cmd)
+ begin
+ stat = File.stat(full_path_command)
+ rescue SystemCallError
+ else
+ stat.file? && stat.executable?
+ end
+ end
+ end
+
+ if found
+ exec full_path_command, *args
+ else
+ abort("Couldn't find database client: #{commands.join(', ')}. Check your $PATH and try again.")
+ end
end
- def self.quoted_table_names # :nodoc:
- @quoted_table_names ||= {}
+ # Opens a database console session.
+ def self.dbconsole(config, options = {})
+ raise NotImplementedError
end
- def initialize(connection, logger = nil, config = {}) # :nodoc:
+ def initialize(config_or_deprecated_connection, deprecated_logger = nil, deprecated_connection_options = nil, deprecated_config = nil) # :nodoc:
super()
- @connection = connection
- @owner = nil
- @instrumenter = ActiveSupport::Notifications.instrumenter
- @logger = logger
- @config = config
- @pool = ActiveRecord::ConnectionAdapters::NullPool.new
- @idle_since = Concurrent.monotonic_time
+ @raw_connection = nil
+ @unconfigured_connection = nil
+
+ if config_or_deprecated_connection.is_a?(Hash)
+ @config = config_or_deprecated_connection.symbolize_keys
+ @logger = ActiveRecord::Base.logger
+
+ if deprecated_logger || deprecated_connection_options || deprecated_config
+ raise ArgumentError, "when initializing an ActiveRecord adapter with a config hash, that should be the only argument"
+ end
+ else
+ # Soft-deprecated for now; we'll probably warn in future.
+
+ @unconfigured_connection = config_or_deprecated_connection
+ @logger = deprecated_logger || ActiveRecord::Base.logger
+ if deprecated_config
+ @config = (deprecated_config || {}).symbolize_keys
+ @connection_parameters = deprecated_connection_options
+ else
+ @config = (deprecated_connection_options || {}).symbolize_keys
+ @connection_parameters = nil
+ end
+ end
+
+ @owner = nil
+ @instrumenter = ActiveSupport::Notifications.instrumenter
+ @pool = ActiveRecord::ConnectionAdapters::NullPool.new
+ @idle_since = Process.clock_gettime(Process::CLOCK_MONOTONIC)
@visitor = arel_visitor
@statements = build_statement_pool
- @lock = ActiveSupport::Concurrency::LoadInterlockAwareMonitor.new
+ self.lock_thread = nil
- @prepared_statements = self.class.type_cast_config_to_boolean(
- config.fetch(:prepared_statements, true)
+ @prepared_statements = !ActiveRecord.disable_prepared_statements && self.class.type_cast_config_to_boolean(
+ @config.fetch(:prepared_statements) { default_prepared_statements }
)
@advisory_locks_enabled = self.class.type_cast_config_to_boolean(
- config.fetch(:advisory_locks, true)
+ @config.fetch(:advisory_locks, true)
)
+
+ @default_timezone = self.class.validate_default_timezone(@config[:default_timezone])
+
+ @raw_connection_dirty = false
+ @verified = false
+ end
+
+ THREAD_LOCK = ActiveSupport::Concurrency::ThreadLoadInterlockAwareMonitor.new
+ private_constant :THREAD_LOCK
+
+ FIBER_LOCK = ActiveSupport::Concurrency::LoadInterlockAwareMonitor.new
+ private_constant :FIBER_LOCK
+
+ def lock_thread=(lock_thread) # :nodoc:
+ @lock =
+ case lock_thread
+ when Thread
+ THREAD_LOCK
+ when Fiber
+ FIBER_LOCK
+ else
+ ActiveSupport::Concurrency::NullLock
+ end
+ end
+
+ EXCEPTION_NEVER = { Exception => :never }.freeze # :nodoc:
+ EXCEPTION_IMMEDIATE = { Exception => :immediate }.freeze # :nodoc:
+ private_constant :EXCEPTION_NEVER, :EXCEPTION_IMMEDIATE
+ def with_instrumenter(instrumenter, &block) # :nodoc:
+ Thread.handle_interrupt(EXCEPTION_NEVER) do
+ previous_instrumenter = @instrumenter
+ @instrumenter = instrumenter
+ Thread.handle_interrupt(EXCEPTION_IMMEDIATE, &block)
+ ensure
+ @instrumenter = previous_instrumenter
+ end
+ end
+
+ def check_if_write_query(sql) # :nodoc:
+ if preventing_writes? && write_query?(sql)
+ raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
+ end
end
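The tightened `COMMENT_REGEX` above (one comment per alternative, no `m` flag) still lets `build_read_query_regexp` skip leading comments, parens, and whitespace before classifying a query. A self-contained reconstruction of those two pieces, plus a hypothetical `write_query?` helper of the kind adapters build from the result:

```ruby
# Reconstructed from the hunk above: matches one `-- ...` line comment
# or one `/* ... */` block comment.
COMMENT_REGEX = %r{(?:--.*\n)|/\*(?:[^*]|\*[^/])*\*/}

DEFAULT_READ_QUERY = [:begin, :commit, :explain, :release, :rollback, :savepoint, :select, :with]

def build_read_query_regexp(*parts)
  parts += DEFAULT_READ_QUERY
  parts = parts.map { |part| /#{part}/i }
  # Leading parens, whitespace, and comments are skipped before the keyword.
  /\A(?:[(\s]|#{COMMENT_REGEX})*#{Regexp.union(*parts)}/
end

READ_QUERY = build_read_query_regexp # hypothetical: real adapters build their own

def write_query?(sql)
  !READ_QUERY.match?(sql)
end
```

So a `SELECT` preceded by a hint comment is still treated as a read, while anything whose first keyword is not in the read list counts as a write.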
def replica?
@@ -110,21 +219,31 @@ def use_metadata_table?
@config.fetch(:use_metadata_table, true)
end
+ def connection_retries
+ (@config[:connection_retries] || 1).to_i
+ end
+
+ def retry_deadline
+ if @config[:retry_deadline]
+ @config[:retry_deadline].to_f
+ else
+ nil
+ end
+ end
+
+ def default_timezone
+ @default_timezone || ActiveRecord.default_timezone
+ end
+
# Determines whether writes are currently being prevented.
#
- # Returns true if the connection is a replica.
- #
- # If the application is using legacy handling, returns
- # true if +connection_handler.prevent_writes+ is set.
- #
- # If the application is using the new connection handling
- # will return true based on +current_preventing_writes+.
+ # Returns true if the connection is a replica or returns
+ # the value of +current_preventing_writes+.
def preventing_writes?
return true if replica?
- return ActiveRecord::Base.connection_handler.prevent_writes if ActiveRecord::Base.legacy_connection_handling
- return false if connection_klass.nil?
+ return false if connection_class.nil?
- connection_klass.current_preventing_writes
+ connection_class.current_preventing_writes
end
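`preventing_writes?` above now has exactly three outcomes: always true on a replica, false when no connection class is attached, otherwise whatever the connection class's `current_preventing_writes` says. A toy version of that decision (all names hypothetical):

```ruby
class ToyConnectionClass
  attr_accessor :current_preventing_writes
end

class ToyAdapter
  def initialize(replica:, connection_class: nil)
    @replica = replica
    @connection_class = connection_class
  end

  def replica?
    @replica
  end

  # Mirrors the three-way check in the hunk above.
  def preventing_writes?
    return true if replica?
    return false if @connection_class.nil?
    @connection_class.current_preventing_writes
  end
end
```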
def migrations_paths # :nodoc:
@@ -132,25 +251,15 @@ def migrations_paths # :nodoc:
end
def migration_context # :nodoc:
- MigrationContext.new(migrations_paths, schema_migration)
+ MigrationContext.new(migrations_paths, schema_migration, internal_metadata)
end
def schema_migration # :nodoc:
- @schema_migration ||= begin
- conn = self
- spec_name = conn.pool.pool_config.connection_specification_name
-
- return ActiveRecord::SchemaMigration if spec_name == "ActiveRecord::Base"
-
- schema_migration_name = "#{spec_name}::SchemaMigration"
-
- Class.new(ActiveRecord::SchemaMigration) do
- define_singleton_method(:name) { schema_migration_name }
- define_singleton_method(:to_s) { schema_migration_name }
+ SchemaMigration.new(self)
+ end
- self.connection_specification_name = spec_name
- end
- end
+ def internal_metadata # :nodoc:
+ InternalMetadata.new(self)
end
def prepared_statements?
@@ -159,7 +268,7 @@ def prepared_statements?
alias :prepared_statements :prepared_statements?
def prepared_statements_disabled_cache # :nodoc:
- Thread.current[:ar_prepared_statements_disabled_cache] ||= Set.new
+ ActiveSupport::IsolatedExecutionState[:active_record_prepared_statements_disabled_cache] ||= Set.new
end
class Version
@@ -189,41 +298,48 @@ def valid_type?(type) # :nodoc:
def lease
if in_use?
msg = +"Cannot lease connection, "
- if @owner == Thread.current
+ if @owner == ActiveSupport::IsolatedExecutionState.context
msg << "it is already leased by the current thread."
else
msg << "it is already in use by a different thread: #{@owner}. " \
- "Current thread: #{Thread.current}."
+ "Current thread: #{ActiveSupport::IsolatedExecutionState.context}."
end
raise ActiveRecordError, msg
end
- @owner = Thread.current
+ @owner = ActiveSupport::IsolatedExecutionState.context
end
- def connection_klass # :nodoc:
- @pool.connection_klass
+ def connection_class # :nodoc:
+ @pool.connection_class
end
- def schema_cache
- @pool.get_schema_cache(self)
+ # The role (e.g. +:writing+) for the current connection. In a
+ # non-multi role application, +:writing+ is returned.
+ def role
+ @pool.role
end
- def schema_cache=(cache)
- cache.connection = self
- @pool.set_schema_cache(cache)
+ # The shard (e.g. +:default+) for the current connection. In
+ # a non-sharded application, +:default+ is returned.
+ def shard
+ @pool.shard
+ end
+
+ def schema_cache
+ @schema_cache ||= BoundSchemaReflection.new(@pool.schema_reflection, self)
end
# this method must only be called while holding connection pool's mutex
def expire
if in_use?
- if @owner != Thread.current
+ if @owner != ActiveSupport::IsolatedExecutionState.context
raise ActiveRecordError, "Cannot expire connection, " \
"it is owned by a different thread: #{@owner}. " \
- "Current thread: #{Thread.current}."
+ "Current thread: #{ActiveSupport::IsolatedExecutionState.context}."
end
- @idle_since = Concurrent.monotonic_time
+ @idle_since = Process.clock_gettime(Process::CLOCK_MONOTONIC)
@owner = nil
else
raise ActiveRecordError, "Cannot expire connection, it is not currently leased."
@@ -233,10 +349,10 @@ def expire
# this method must only be called while holding connection pool's mutex (and a desire for segfaults)
def steal! # :nodoc:
if in_use?
- if @owner != Thread.current
+ if @owner != ActiveSupport::IsolatedExecutionState.context
pool.send :remove_connection_from_thread_cache, self, @owner
- @owner = Thread.current
+ @owner = ActiveSupport::IsolatedExecutionState.context
end
else
raise ActiveRecordError, "Cannot steal connection, it is not currently leased."
@@ -246,7 +362,7 @@ def steal! # :nodoc:
# Seconds since this connection was returned to the pool
def seconds_idle # :nodoc:
return 0 if in_use?
- Concurrent.monotonic_time - @idle_since
+ Process.clock_gettime(Process::CLOCK_MONOTONIC) - @idle_since
end
def unprepared_statement
@@ -264,7 +380,14 @@ def adapter_name
# Does the database for this adapter exist?
def self.database_exists?(config)
- raise NotImplementedError
+ new(config).database_exists?
+ end
+
+ def database_exists?
+ connect!
+ true
+ rescue ActiveRecord::NoDatabaseError
+ false
end
# Does this adapter support DDL rollbacks in transactions? That is, would
@@ -282,6 +405,16 @@ def supports_savepoints?
false
end
+ # Do TransactionRollbackErrors on savepoints affect the parent
+ # transaction?
+ def savepoint_errors_invalidate_transactions?
+ false
+ end
+
+ def supports_restart_db_transaction?
+ false
+ end
+
# Does this adapter support application-enforced advisory locking?
def supports_advisory_locks?
false
@@ -308,6 +441,11 @@ def supports_partial_index?
false
end
+ # Does this adapter support including non-key columns?
+ def supports_index_include?
+ false
+ end
+
# Does this adapter support expression indices?
def supports_expression_index?
false
@@ -344,11 +482,26 @@ def supports_validate_constraints?
false
end
+ # Does this adapter support creating deferrable constraints?
+ def supports_deferrable_constraints?
+ false
+ end
+
# Does this adapter support creating check constraints?
def supports_check_constraints?
false
end
+ # Does this adapter support creating exclusion constraints?
+ def supports_exclusion_constraints?
+ false
+ end
+
+ # Does this adapter support creating unique constraints?
+ def supports_unique_constraints?
+ false
+ end
+
# Does this adapter support views?
def supports_views?
false
@@ -364,7 +517,7 @@ def supports_datetime_with_precision?
false
end
- # Does this adapter support json data type?
+ # Does this adapter support JSON data type?
def supports_json?
false
end
@@ -418,12 +571,49 @@ def supports_insert_conflict_target?
false
end
+ def supports_concurrent_connections?
+ true
+ end
+
+ def supports_nulls_not_distinct?
+ false
+ end
+
+ def return_value_after_insert?(column) # :nodoc:
+ column.auto_incremented_by_db?
+ end
+
+ def async_enabled? # :nodoc:
+ supports_concurrent_connections? &&
+ !ActiveRecord.async_query_executor.nil? && !pool.async_executor.nil?
+ end
+
# This is meant to be implemented by the adapters that support extensions
- def disable_extension(name)
+ def disable_extension(name, **)
end
# This is meant to be implemented by the adapters that support extensions
- def enable_extension(name)
+ def enable_extension(name, **)
+ end
+
+ # This is meant to be implemented by the adapters that support custom enum types
+ def create_enum(*) # :nodoc:
+ end
+
+ # This is meant to be implemented by the adapters that support custom enum types
+ def drop_enum(*) # :nodoc:
+ end
+
+ # This is meant to be implemented by the adapters that support custom enum types
+ def rename_enum(*) # :nodoc:
+ end
+
+ # This is meant to be implemented by the adapters that support custom enum types
+ def add_enum_value(*) # :nodoc:
+ end
+
+ # This is meant to be implemented by the adapters that support custom enum types
+ def rename_enum_value(*) # :nodoc:
end
def advisory_locks_enabled? # :nodoc:
@@ -461,6 +651,21 @@ def disable_referential_integrity
yield
end
+ # Override to check all foreign key constraints in a database.
+ def all_foreign_keys_valid?
+ check_all_foreign_keys_valid!
+ true
+ rescue ActiveRecord::StatementInvalid
+ false
+ end
+ deprecate :all_foreign_keys_valid?, deprecator: ActiveRecord.deprecator
+
+ # Override to check all foreign key constraints in a database.
+ # The adapter should raise a +ActiveRecord::StatementInvalid+ if foreign key
+ # constraints are not met.
+ def check_all_foreign_keys_valid!
+ end
+
# CONNECTION MANAGEMENT ====================================
# Checks whether the connection to the database is still active. This includes
@@ -469,19 +674,50 @@ def disable_referential_integrity
def active?
end
- # Disconnects from the database if already connected, and establishes a
- # new connection with the database. Implementors should call super if they
- # override the default implementation.
- def reconnect!
- clear_cache!
- reset_transaction
+ # Disconnects from the database if already connected, and establishes a new
+ # connection with the database. Implementors should define private #reconnect
+ # instead.
+ def reconnect!(restore_transactions: false)
+ retries_available = connection_retries
+ deadline = retry_deadline && Process.clock_gettime(Process::CLOCK_MONOTONIC) + retry_deadline
+
+ @lock.synchronize do
+ reconnect
+
+ enable_lazy_transactions!
+ @raw_connection_dirty = false
+ @verified = true
+
+ reset_transaction(restore: restore_transactions) do
+ clear_cache!(new_connection: true)
+ configure_connection
+ end
+ rescue => original_exception
+ translated_exception = translate_exception_class(original_exception, nil, nil)
+ retry_deadline_exceeded = deadline && deadline < Process.clock_gettime(Process::CLOCK_MONOTONIC)
+
+ if !retry_deadline_exceeded && retries_available > 0
+ retries_available -= 1
+
+ if retryable_connection_error?(translated_exception)
+ backoff(connection_retries - retries_available)
+ retry
+ end
+ end
+
+ @verified = false
+
+ raise translated_exception
+ end
end
+
# Disconnects from the database if already connected. Otherwise, this
# method does nothing.
def disconnect!
- clear_cache!
+ clear_cache!(new_connection: true)
reset_transaction
+ @raw_connection_dirty = false
end
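`reconnect!` above retries connection-level failures up to `connection_retries` times, bounded by an optional `retry_deadline` measured on the monotonic clock. The core loop, extracted into a standalone helper (names hypothetical; the real method also backs off between attempts and restores transaction state):

```ruby
def with_connection_retries(connection_retries:, retry_deadline: nil, retryable: ->(e) { true })
  retries_available = connection_retries
  deadline = retry_deadline && Process.clock_gettime(Process::CLOCK_MONOTONIC) + retry_deadline

  begin
    yield
  rescue => error
    deadline_exceeded = deadline && deadline < Process.clock_gettime(Process::CLOCK_MONOTONIC)
    if !deadline_exceeded && retries_available > 0 && retryable.call(error)
      retries_available -= 1
      retry
    end
    raise
  end
end
```

With `connection_retries: 3` a block may run up to four times in total (one initial attempt plus three retries), and the deadline caps the whole sequence regardless of how many retries remain.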
# Immediately forget this connection ever existed. Unlike disconnect!,
@@ -492,22 +728,20 @@ def disconnect!
# rid of a connection that belonged to its parent.
def discard!
# This should be overridden by concrete adapters.
- #
- # Prevent @connection's finalizer from touching the socket, or
- # otherwise communicating with its server, when it is collected.
- if schema_cache.connection == self
- schema_cache.connection = nil
- end
end
# Reset the state of this connection, directing the DBMS to clear
# transactions and other connection-related server-side state. Usually a
# database-dependent operation.
#
- # The default implementation does nothing; the implementation should be
- # overridden by concrete adapters.
+ # If a database driver or protocol does not support such a feature,
+ # implementors may alias this to #reconnect!. Otherwise, implementors
+ # should call super immediately after resetting the connection (and while
+ # still holding @lock).
def reset!
- # this should be overridden by concrete adapters
+ clear_cache!(new_connection: true)
+ reset_transaction
+ configure_connection
end
# Removes the connection from the pool and disconnect it.
@@ -517,8 +751,16 @@ def throw_away!
end
# Clear any caching the database adapter may be doing.
- def clear_cache!
- @lock.synchronize { @statements.clear } if @statements
+ def clear_cache!(new_connection: false)
+ if @statements
+ @lock.synchronize do
+ if new_connection
+ @statements.reset
+ else
+ @statements.clear
+ end
+ end
+ end
end
# Returns true if its required to reload the connection between requests for development mode.
@@ -530,7 +772,33 @@ def requires_reloading?
# This is done under the hood by calling #active?. If the connection
# is no longer active, then this method will reconnect to the database.
def verify!
- reconnect! unless active?
+ unless active?
+ if @unconfigured_connection
+ @lock.synchronize do
+ if @unconfigured_connection
+ @raw_connection = @unconfigured_connection
+ @unconfigured_connection = nil
+ configure_connection
+ @verified = true
+ return
+ end
+ end
+ end
+
+ reconnect!(restore_transactions: true)
+ end
+
+ @verified = true
+ end
+
+ def connect!
+ verify!
+ self
+ end
+
+ def clean! # :nodoc:
+ @raw_connection_dirty = false
+ @verified = nil
end
# Provides access to the underlying database driver for this adapter. For
@@ -539,9 +807,16 @@ def verify!
#
# This is useful for when you need to call a proprietary method such as
# PostgreSQL's lo_* methods.
+ #
+ # Active Record cannot track if the database is getting modified using
+ # this client. If that is the case, generally you'll want to invalidate
+ # the query cache using +ActiveRecord::Base.clear_query_cache+.
def raw_connection
- disable_lazy_transactions!
- @connection
+ with_raw_connection do |conn|
+ disable_lazy_transactions!
+ @raw_connection_dirty = true
+ conn
+ end
end
def default_uniqueness_comparison(attribute, value) # :nodoc:
@@ -599,78 +874,259 @@ def database_version # :nodoc:
def check_version # :nodoc:
end
- private
- def type_map
- @type_map ||= Type::TypeMap.new.tap do |mapping|
- initialize_type_map(mapping)
+ # Returns the version identifier of the schema currently available in
+ # the database. This is generally equal to the number of the highest-
+ # numbered migration that has been executed, or 0 if no schema
+ # information is present / the database is empty.
+ def schema_version
+ migration_context.current_version
+ end
+
+ class << self
+ def register_class_with_precision(mapping, key, klass, **kwargs) # :nodoc:
+ mapping.register_type(key) do |*args|
+ precision = extract_precision(args.last)
+ klass.new(precision: precision, **kwargs)
end
end
- def initialize_type_map(m = type_map)
- register_class_with_limit m, %r(boolean)i, Type::Boolean
- register_class_with_limit m, %r(char)i, Type::String
- register_class_with_limit m, %r(binary)i, Type::Binary
- register_class_with_limit m, %r(text)i, Type::Text
- register_class_with_precision m, %r(date)i, Type::Date
- register_class_with_precision m, %r(time)i, Type::Time
- register_class_with_precision m, %r(datetime)i, Type::DateTime
- register_class_with_limit m, %r(float)i, Type::Float
- register_class_with_limit m, %r(int)i, Type::Integer
-
- m.alias_type %r(blob)i, "binary"
- m.alias_type %r(clob)i, "text"
- m.alias_type %r(timestamp)i, "datetime"
- m.alias_type %r(numeric)i, "decimal"
- m.alias_type %r(number)i, "decimal"
- m.alias_type %r(double)i, "float"
-
- m.register_type %r(^json)i, Type::Json.new
-
- m.register_type(%r(decimal)i) do |sql_type|
- scale = extract_scale(sql_type)
- precision = extract_precision(sql_type)
-
- if scale == 0
- # FIXME: Remove this class as well
- Type::DecimalWithoutScale.new(precision: precision)
+ def extended_type_map(default_timezone:) # :nodoc:
+ Type::TypeMap.new(self::TYPE_MAP).tap do |m|
+ register_class_with_precision m, %r(\A[^\(]*time)i, Type::Time, timezone: default_timezone
+ register_class_with_precision m, %r(\A[^\(]*datetime)i, Type::DateTime, timezone: default_timezone
+ m.alias_type %r(\A[^\(]*timestamp)i, "datetime"
+ end
+ end
+
+ private
+ def initialize_type_map(m)
+ register_class_with_limit m, %r(boolean)i, Type::Boolean
+ register_class_with_limit m, %r(char)i, Type::String
+ register_class_with_limit m, %r(binary)i, Type::Binary
+ register_class_with_limit m, %r(text)i, Type::Text
+ register_class_with_precision m, %r(date)i, Type::Date
+ register_class_with_precision m, %r(time)i, Type::Time
+ register_class_with_precision m, %r(datetime)i, Type::DateTime
+ register_class_with_limit m, %r(float)i, Type::Float
+ register_class_with_limit m, %r(int)i, Type::Integer
+
+ m.alias_type %r(blob)i, "binary"
+ m.alias_type %r(clob)i, "text"
+ m.alias_type %r(timestamp)i, "datetime"
+ m.alias_type %r(numeric)i, "decimal"
+ m.alias_type %r(number)i, "decimal"
+ m.alias_type %r(double)i, "float"
+
+ m.register_type %r(^json)i, Type::Json.new
+
+ m.register_type(%r(decimal)i) do |sql_type|
+ scale = extract_scale(sql_type)
+ precision = extract_precision(sql_type)
+
+ if scale == 0
+ # FIXME: Remove this class as well
+ Type::DecimalWithoutScale.new(precision: precision)
+ else
+ Type::Decimal.new(precision: precision, scale: scale)
+ end
+ end
+ end
+
+ def register_class_with_limit(mapping, key, klass)
+ mapping.register_type(key) do |*args|
+ limit = extract_limit(args.last)
+ klass.new(limit: limit)
+ end
+ end
+
+ def extract_scale(sql_type)
+ case sql_type
+ when /\((\d+)\)/ then 0
+ when /\((\d+)(,(\d+))\)/ then $3.to_i
+ end
+ end
+
+ def extract_precision(sql_type)
+ $1.to_i if sql_type =~ /\((\d+)(,\d+)?\)/
+ end
+
+ def extract_limit(sql_type)
+ $1.to_i if sql_type =~ /\((.*)\)/
+ end
+ end
+
+ TYPE_MAP = Type::TypeMap.new.tap { |m| initialize_type_map(m) }
+ EXTENDED_TYPE_MAPS = Concurrent::Map.new
+
+ private
+ def reconnect_can_restore_state?
+ transaction_manager.restorable? && !@raw_connection_dirty
+ end
+
+ # Lock the monitor, ensure we're properly connected and
+ # transactions are materialized, and then yield the underlying
+ # raw connection object.
+ #
+ # If +allow_retry+ is true, a connection-related exception will
+ # cause an automatic reconnect and re-run of the block, up to
+ # the connection's configured +connection_retries+ setting
+ # and the configured +retry_deadline+ limit. (Note that when
+ # +allow_retry+ is true, it's possible to return without having marked
+ # the connection as verified. If the block is guaranteed to exercise the
+ # connection, consider calling `verified!` to avoid needless
+ # verification queries in subsequent calls.)
+ #
+ # If +materialize_transactions+ is false, the block will be run without
+ # ensuring virtual transactions have been materialized in the DB
+ # server's state. The active transaction will also remain clean
+ # (if it is not already dirty), meaning it's able to be restored
+ # by reconnecting and opening an equivalent-depth set of new
+ # transactions. This should only be used by transaction control
+ # methods, and internal transaction-agnostic queries.
+ #
+ ###
+ #
+ # It's not the primary use case, so not something to optimize
+ # for, but note that this method does need to be re-entrant:
+ # +materialize_transactions+ will re-enter if it has work to do,
+ # and the yield block can also do so under some circumstances.
+ #
+ # In the latter case, we really ought to guarantee the inner
+ # call will not reconnect (which would interfere with the
+ # still-yielded connection in the outer block), but we currently
+ # provide no special enforcement there.
+ #
+ def with_raw_connection(allow_retry: false, materialize_transactions: true)
+ @lock.synchronize do
+ connect! if @raw_connection.nil? && reconnect_can_restore_state?
+
+ self.materialize_transactions if materialize_transactions
+
+ retries_available = allow_retry ? connection_retries : 0
+ deadline = retry_deadline && Process.clock_gettime(Process::CLOCK_MONOTONIC) + retry_deadline
+ reconnectable = reconnect_can_restore_state?
+
+ if @verified
+ # Cool, we're confident the connection's ready to use. (Note this might have
+ # become true during the above #materialize_transactions.)
+ elsif reconnectable
+ if allow_retry
+ # Not sure about the connection yet, but if anything goes wrong we can
+ # just reconnect and re-run our query
+ else
+ # We can reconnect if needed, but we don't trust the upcoming query to be
+ # safely re-runnable: let's verify the connection to be sure
+ verify!
+ end
else
- Type::Decimal.new(precision: precision, scale: scale)
+ # We don't know whether the connection is okay, but it also doesn't matter:
+ # we wouldn't be able to reconnect anyway. We're just going to run our query
+ # and hope for the best.
+ end
+
+ begin
+ yield @raw_connection
+ rescue => original_exception
+ translated_exception = translate_exception_class(original_exception, nil, nil)
+ invalidate_transaction(translated_exception)
+ retry_deadline_exceeded = deadline && deadline < Process.clock_gettime(Process::CLOCK_MONOTONIC)
+
+ if !retry_deadline_exceeded && retries_available > 0
+ retries_available -= 1
+
+ if retryable_query_error?(translated_exception)
+ backoff(connection_retries - retries_available)
+ retry
+ elsif reconnectable && retryable_connection_error?(translated_exception)
+ reconnect!(restore_transactions: true)
+ # Only allowed to reconnect once, because reconnect! has its own retry
+ # loop
+ reconnectable = false
+ retry
+ end
+ end
+
+ unless retryable_query_error?(translated_exception)
+ # Barring a known-retryable error inside the query (regardless of
+ # whether we were in a _position_ to retry it), we should infer that
+ # there's likely a real problem with the connection.
+ @verified = false
+ end
+
+ raise translated_exception
+ ensure
+ dirty_current_transaction if materialize_transactions
end
end
end
- def reload_type_map
- type_map.clear
- initialize_type_map
+ # Mark the connection as verified. Call this inside a
+ # `with_raw_connection` block only when the block is guaranteed to
+ # exercise the raw connection.
+ def verified!
+ @verified = true
end
- def register_class_with_limit(mapping, key, klass)
- mapping.register_type(key) do |*args|
- limit = extract_limit(args.last)
- klass.new(limit: limit)
- end
+ def retryable_connection_error?(exception)
+ exception.is_a?(ConnectionNotEstablished) || exception.is_a?(ConnectionFailed)
end
- def register_class_with_precision(mapping, key, klass)
- mapping.register_type(key) do |*args|
- precision = extract_precision(args.last)
- klass.new(precision: precision)
- end
+ def invalidate_transaction(exception)
+ return unless exception.is_a?(TransactionRollbackError)
+ return unless savepoint_errors_invalidate_transactions?
+
+ current_transaction.invalidate!
end
- def extract_scale(sql_type)
- case sql_type
- when /\((\d+)\)/ then 0
- when /\((\d+)(,(\d+))\)/ then $3.to_i
- end
+ def retryable_query_error?(exception)
+ # We definitely can't retry if we were inside an invalidated transaction.
+ return false if current_transaction.invalidated?
+
+ exception.is_a?(Deadlocked) || exception.is_a?(LockWaitTimeout)
end
- def extract_precision(sql_type)
- $1.to_i if sql_type =~ /\((\d+)(,\d+)?\)/
+ def backoff(counter)
+ sleep 0.1 * counter
end
- def extract_limit(sql_type)
- $1.to_i if sql_type =~ /\((.*)\)/
+ def reconnect
+ raise NotImplementedError
+ end
+
+ # Returns a raw connection for internal use with methods that are known
+ # to both be thread-safe and not rely upon actual server communication.
+ # This is useful for e.g. string escaping methods.
+ def any_raw_connection
+ @raw_connection || valid_raw_connection
+ end
+
+ # Similar to any_raw_connection, but ensures it is validated and
+ # connected. Any method called on this result still needs to be
+ # independently thread-safe, so it probably shouldn't talk to the
+ # server... but some drivers fail if they know the connection has gone
+ # away.
+ def valid_raw_connection
+ (@verified && @raw_connection) ||
+ # `allow_retry: false`, to force verification: the block won't
+ # raise, so a retry wouldn't help us get the valid connection we
+ # need.
+ with_raw_connection(allow_retry: false, materialize_transactions: false) { |conn| conn }
+ end
+
+ def extended_type_map_key
+ if @default_timezone
+ { default_timezone: @default_timezone }
+ end
+ end
+
+ def type_map
+ if key = extended_type_map_key
+ self.class::EXTENDED_TYPE_MAPS.compute_if_absent(key) do
+ self.class.extended_type_map(**key)
+ end
+ else
+ self.class::TYPE_MAP
+ end
end
def translate_exception_class(e, sql, binds)
@@ -683,7 +1139,7 @@ def translate_exception_class(e, sql, binds)
exception
end
- def log(sql, name = "SQL", binds = [], type_casted_binds = [], statement_name = nil) # :doc:
+ def log(sql, name = "SQL", binds = [], type_casted_binds = [], statement_name = nil, async: false, &block) # :doc:
@instrumenter.instrument(
"sql.active_record",
sql: sql,
@@ -691,22 +1147,28 @@ def log(sql, name = "SQL", binds = [], type_casted_binds = [], statement_name =
binds: binds,
type_casted_binds: type_casted_binds,
statement_name: statement_name,
- connection: self) do
- @lock.synchronize do
- yield
- end
- rescue => e
- raise translate_exception_class(e, sql, binds)
+ async: async,
+ connection: self,
+ &block
+ )
+ rescue ActiveRecord::StatementInvalid => ex
+ raise ex.set_query(sql, binds)
+ end
+
+ def transform_query(sql)
+ ActiveRecord.query_transformers.each do |transformer|
+ sql = transformer.call(sql, self)
end
+ sql
end
def translate_exception(exception, message:, sql:, binds:)
# override in derived class
case exception
- when RuntimeError
+ when RuntimeError, ActiveRecord::ActiveRecordError
exception
else
- ActiveRecord::StatementInvalid.new(message, sql: sql, binds: binds)
+ ActiveRecord::StatementInvalid.new(message, sql: sql, binds: binds, connection_pool: @pool)
end
end
@@ -753,6 +1215,26 @@ def build_statement_pool
def build_result(columns:, rows:, column_types: {})
ActiveRecord::Result.new(columns, rows, column_types)
end
+
+ # Perform any necessary initialization upon the newly-established
+ # @raw_connection -- this is the place to modify the adapter's
+ # connection settings, run queries to configure any application-global
+ # "session" variables, etc.
+ #
+ # Implementations may assume this method will only be called while
+ # holding @lock (or from #initialize).
+ def configure_connection
+ end
+
+ def default_prepared_statements
+ true
+ end
+
+ def warning_ignored?(warning)
+ ActiveRecord.db_warnings_ignore.any? do |warning_matcher|
+ warning.message.match?(warning_matcher) || warning.code.to_s.match?(warning_matcher)
+ end
+ end
end
end
end
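The `transform_query` hook added in the hunk above runs every registered query transformer over the SQL string before execution. A minimal standalone sketch of that pipeline, using illustrative lambdas in place of real registered transformers (the names and the comment tag are assumptions, not part of the diff):

```ruby
# Each transformer receives the SQL (and, in the real adapter, the connection)
# and returns a possibly rewritten statement; they are applied in order.
query_transformers = [
  ->(sql, _conn) { sql.strip },                      # normalize whitespace
  ->(sql, _conn) { "/* app:sketch */ #{sql}" }       # e.g. tag queries with a comment
]

def transform_query(sql, transformers, conn = nil)
  transformers.each { |t| sql = t.call(sql, conn) }
  sql
end

transform_query("  SELECT 1  ", query_transformers)
# => "/* app:sketch */ SELECT 1"
```

In the real adapter the equivalent entry point is `ActiveRecord.query_transformers`, which this diff threads through `transform_query(sql)`.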
diff --git a/activerecord/lib/active_record/connection_adapters/abstract_mysql_adapter.rb b/activerecord/lib/active_record/connection_adapters/abstract_mysql_adapter.rb
index e065721eef..8b2ae616f1 100644
--- a/activerecord/lib/active_record/connection_adapters/abstract_mysql_adapter.rb
+++ b/activerecord/lib/active_record/connection_adapters/abstract_mysql_adapter.rb
@@ -3,6 +3,7 @@
require "active_record/connection_adapters/abstract_adapter"
require "active_record/connection_adapters/statement_pool"
require "active_record/connection_adapters/mysql/column"
+require "active_record/connection_adapters/mysql/database_statements"
require "active_record/connection_adapters/mysql/explain_pretty_printer"
require "active_record/connection_adapters/mysql/quoting"
require "active_record/connection_adapters/mysql/schema_creation"
@@ -14,6 +15,7 @@
module ActiveRecord
module ConnectionAdapters
class AbstractMysqlAdapter < AbstractAdapter
+ include MySQL::DatabaseStatements
include MySQL::Quoting
include MySQL::SchemaStatements
@@ -31,6 +33,7 @@ class AbstractMysqlAdapter < AbstractAdapter
string: { name: "varchar", limit: 255 },
text: { name: "text" },
integer: { name: "int", limit: 4 },
+ bigint: { name: "bigint" },
float: { name: "float", limit: 24 },
decimal: { name: "decimal" },
datetime: { name: "datetime" },
@@ -50,11 +53,37 @@ def dealloc(stmt)
end
end
- def initialize(connection, logger, connection_options, config)
- super(connection, logger, config)
+ class << self
+ def dbconsole(config, options = {})
+ mysql_config = config.configuration_hash
+
+ args = {
+ host: "--host",
+ port: "--port",
+ socket: "--socket",
+ username: "--user",
+ encoding: "--default-character-set",
+ sslca: "--ssl-ca",
+ sslcert: "--ssl-cert",
+ sslcapath: "--ssl-capath",
+ sslcipher: "--ssl-cipher",
+ sslkey: "--ssl-key",
+ ssl_mode: "--ssl-mode"
+ }.filter_map { |opt, arg| "#{arg}=#{mysql_config[opt]}" if mysql_config[opt] }
+
+ if mysql_config[:password] && options[:include_password]
+ args << "--password=#{mysql_config[:password]}"
+ elsif mysql_config[:password] && !mysql_config[:password].to_s.empty?
+ args << "-p"
+ end
+
+ args << config.database
+
+ find_cmd_and_exec(["mysql", "mysql5"], *args)
+ end
end
- def get_database_version #:nodoc:
+ def get_database_version # :nodoc:
full_version_string = get_full_version
version_string = version_string(full_version_string)
Version.new(version_string, full_version_string)
@@ -80,6 +109,10 @@ def supports_transaction_isolation?
true
end
+ def supports_restart_db_transaction?
+ true
+ end
+
def supports_explain?
true
end
@@ -94,7 +127,7 @@ def supports_foreign_keys?
def supports_check_constraints?
if mariadb?
- database_version >= "10.2.1"
+ database_version >= "10.3.10" || (database_version < "10.3" && database_version >= "10.2.22")
else
database_version >= "8.0.16"
end
@@ -174,7 +207,7 @@ def error_number(exception) # :nodoc:
# REFERENTIAL INTEGRITY ====================================
- def disable_referential_integrity #:nodoc:
+ def disable_referential_integrity # :nodoc:
old = query_value("SELECT @@FOREIGN_KEY_CHECKS")
begin
@@ -185,54 +218,43 @@ def disable_referential_integrity #:nodoc:
end
end
- # CONNECTION MANAGEMENT ====================================
-
- def clear_cache! # :nodoc:
- reload_type_map
- super
- end
-
#--
# DATABASE STATEMENTS ======================================
#++
- # Executes the SQL statement in the context of this connection.
- def execute(sql, name = nil)
- materialize_transactions
- mark_transaction_written_if_write(sql)
-
- log(sql, name) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.query(sql)
- end
- end
- end
-
# Mysql2Adapter doesn't have to free a result after using it, but we use this method
# to write stuff in an abstract way without concerning ourselves about whether it
# needs to be explicitly freed or not.
- def execute_and_free(sql, name = nil) # :nodoc:
- yield execute(sql, name)
+ def execute_and_free(sql, name = nil, async: false) # :nodoc:
+ sql = transform_query(sql)
+ check_if_write_query(sql)
+
+ mark_transaction_written_if_write(sql)
+ yield raw_execute(sql, name, async: async)
end
- def begin_db_transaction
- execute("BEGIN", "TRANSACTION")
+ def begin_db_transaction # :nodoc:
+ internal_execute("BEGIN", "TRANSACTION", allow_retry: true, materialize_transactions: false)
end
- def begin_isolated_db_transaction(isolation)
- execute "SET TRANSACTION ISOLATION LEVEL #{transaction_isolation_levels.fetch(isolation)}"
+ def begin_isolated_db_transaction(isolation) # :nodoc:
+ internal_execute("SET TRANSACTION ISOLATION LEVEL #{transaction_isolation_levels.fetch(isolation)}", "TRANSACTION", allow_retry: true, materialize_transactions: false)
begin_db_transaction
end
- def commit_db_transaction #:nodoc:
- execute("COMMIT", "TRANSACTION")
+ def commit_db_transaction # :nodoc:
+ internal_execute("COMMIT", "TRANSACTION", allow_retry: false, materialize_transactions: true)
+ end
+
+ def exec_rollback_db_transaction # :nodoc:
+ internal_execute("ROLLBACK", "TRANSACTION", allow_retry: false, materialize_transactions: true)
end
- def exec_rollback_db_transaction #:nodoc:
- execute("ROLLBACK", "TRANSACTION")
+ def exec_restart_db_transaction # :nodoc:
+ internal_execute("ROLLBACK AND CHAIN", "TRANSACTION", allow_retry: false, materialize_transactions: true)
end
- def empty_insert_statement_value(primary_key = nil)
+ def empty_insert_statement_value(primary_key = nil) # :nodoc:
"VALUES ()"
end
@@ -270,7 +292,7 @@ def create_database(name, options = {})
#
# Example:
# drop_database('sebastian_development')
- def drop_database(name) #:nodoc:
+ def drop_database(name) # :nodoc:
execute "DROP DATABASE IF EXISTS #{quote_table_name(name)}"
end
@@ -309,7 +331,8 @@ def change_table_comment(table_name, comment_or_changes) # :nodoc:
#
# Example:
# rename_table('octopuses', 'octopi')
- def rename_table(table_name, new_name)
+ def rename_table(table_name, new_name, **options)
+ validate_table_length!(new_name) unless options[:_uses_legacy_table_name]
schema_cache.clear_data_source_cache!(table_name.to_s)
schema_cache.clear_data_source_cache!(new_name.to_s)
execute "RENAME TABLE #{quote_table_name(table_name)} TO #{quote_table_name(new_name)}"
@@ -346,12 +369,21 @@ def rename_index(table_name, old_name, new_name)
end
end
- def change_column_default(table_name, column_name, default_or_changes) #:nodoc:
+ def change_column_default(table_name, column_name, default_or_changes) # :nodoc:
+ execute "ALTER TABLE #{quote_table_name(table_name)} #{change_column_default_for_alter(table_name, column_name, default_or_changes)}"
+ end
+
+ def build_change_column_default_definition(table_name, column_name, default_or_changes) # :nodoc:
+ column = column_for(table_name, column_name)
+ return unless column
+
default = extract_new_default_value(default_or_changes)
- change_column table_name, column_name, nil, default: default
+ ChangeColumnDefaultDefinition.new(column, default)
end
- def change_column_null(table_name, column_name, null, default = nil) #:nodoc:
+ def change_column_null(table_name, column_name, null, default = nil) # :nodoc:
+ validate_change_column_null_argument!(null)
+
unless null || default.nil?
execute("UPDATE #{quote_table_name(table_name)} SET #{quote_column_name(column_name)}=#{quote(default)} WHERE #{quote_column_name(column_name)} IS NULL")
end
@@ -364,22 +396,64 @@ def change_column_comment(table_name, column_name, comment_or_changes) # :nodoc:
change_column table_name, column_name, nil, comment: comment
end
- def change_column(table_name, column_name, type, **options) #:nodoc:
+ def change_column(table_name, column_name, type, **options) # :nodoc:
execute("ALTER TABLE #{quote_table_name(table_name)} #{change_column_for_alter(table_name, column_name, type, **options)}")
end
- def rename_column(table_name, column_name, new_column_name) #:nodoc:
+ # Builds a ChangeColumnDefinition object.
+ #
+ # This definition object contains information about the column change that would occur
+ # if the same arguments were passed to #change_column. See #change_column for information about
+ # passing a +table_name+, +column_name+, +type+ and other options that can be passed.
+ def build_change_column_definition(table_name, column_name, type, **options) # :nodoc:
+ column = column_for(table_name, column_name)
+ type ||= column.sql_type
+
+ unless options.key?(:default)
+ options[:default] = column.default
+ end
+
+ unless options.key?(:null)
+ options[:null] = column.null
+ end
+
+ unless options.key?(:comment)
+ options[:comment] = column.comment
+ end
+
+ if options[:collation] == :no_collation
+ options.delete(:collation)
+ else
+ options[:collation] ||= column.collation if text_type?(type)
+ end
+
+ unless options.key?(:auto_increment)
+ options[:auto_increment] = column.auto_increment?
+ end
+
+ td = create_table_definition(table_name)
+ cd = td.new_column_definition(column.name, type, **options)
+ ChangeColumnDefinition.new(cd, column.name)
+ end
+
+ def rename_column(table_name, column_name, new_column_name) # :nodoc:
execute("ALTER TABLE #{quote_table_name(table_name)} #{rename_column_for_alter(table_name, column_name, new_column_name)}")
rename_column_indexes(table_name, column_name, new_column_name)
end
- def add_index(table_name, column_name, **options) #:nodoc:
+ def add_index(table_name, column_name, **options) # :nodoc:
+ create_index = build_create_index_definition(table_name, column_name, **options)
+ return unless create_index
+
+ execute schema_creation.accept(create_index)
+ end
+
+ def build_create_index_definition(table_name, column_name, **options) # :nodoc:
index, algorithm, if_not_exists = add_index_options(table_name, column_name, **options)
return if if_not_exists && index_exists?(table_name, column_name, name: index.name)
- create_index = CreateIndexDefinition.new(index, algorithm)
- execute schema_creation.accept(create_index)
+ CreateIndexDefinition.new(index, algorithm)
end
def add_sql_comment!(sql, comment) # :nodoc:
@@ -392,11 +466,13 @@ def foreign_keys(table_name)
scope = quoted_scope(table_name)
- fk_info = exec_query(<<~SQL, "SCHEMA")
+ # MySQL returns 1 row for each column of composite foreign keys.
+ fk_info = internal_exec_query(<<~SQL, "SCHEMA")
SELECT fk.referenced_table_name AS 'to_table',
fk.referenced_column_name AS 'primary_key',
fk.column_name AS 'column',
fk.constraint_name AS 'name',
+ fk.ordinal_position AS 'position',
rc.update_rule AS 'on_update',
rc.delete_rule AS 'on_delete'
FROM information_schema.referential_constraints rc
@@ -409,17 +485,24 @@ def foreign_keys(table_name)
AND rc.table_name = #{scope[:name]}
SQL
- fk_info.map do |row|
+ grouped_fk = fk_info.group_by { |row| row["name"] }.values.each { |group| group.sort_by! { |row| row["position"] } }
+ grouped_fk.map do |group|
+ row = group.first
options = {
- column: row["column"],
name: row["name"],
- primary_key: row["primary_key"]
+ on_update: extract_foreign_key_action(row["on_update"]),
+ on_delete: extract_foreign_key_action(row["on_delete"])
}
- options[:on_update] = extract_foreign_key_action(row["on_update"])
- options[:on_delete] = extract_foreign_key_action(row["on_delete"])
+ if group.one?
+ options[:column] = unquote_identifier(row["column"])
+ options[:primary_key] = row["primary_key"]
+ else
+ options[:column] = group.map { |row| unquote_identifier(row["column"]) }
+ options[:primary_key] = group.map { |row| row["primary_key"] }
+ end
- ForeignKeyDefinition.new(table_name, row["to_table"], options)
+ ForeignKeyDefinition.new(table_name, unquote_identifier(row["to_table"]), options)
end
end
@@ -427,7 +510,7 @@ def check_constraints(table_name)
if supports_check_constraints?
scope = quoted_scope(table_name)
- chk_info = exec_query(<<~SQL, "SCHEMA")
+ sql = <<~SQL
SELECT cc.constraint_name AS 'name',
cc.check_clause AS 'expression'
FROM information_schema.check_constraints cc
@@ -437,13 +520,24 @@ def check_constraints(table_name)
AND tc.table_name = #{scope[:name]}
AND cc.constraint_schema = #{scope[:schema]}
SQL
+ sql += " AND cc.table_name = #{scope[:name]}" if mariadb?
+
+ chk_info = internal_exec_query(sql, "SCHEMA")
chk_info.map do |row|
options = {
name: row["name"]
}
expression = row["expression"]
- expression = expression[1..-2] unless mariadb? # remove parentheses added by mysql
+ expression = expression[1..-2] if expression.start_with?("(") && expression.end_with?(")")
+ expression = strip_whitespace_characters(expression)
+
+ unless mariadb?
+ # MySQL returns check constraints expression in an already escaped form.
+ # This leads to duplicate escaping later (e.g. when the expression is used in the SchemaDumper).
+ expression = expression.gsub("\\'", "'")
+ end
+
CheckConstraintDefinition.new(table_name, expression, options)
end
else
@@ -548,8 +642,12 @@ def build_insert_sql(insert) # :nodoc:
sql << " ON DUPLICATE KEY UPDATE #{no_op_column}=#{no_op_column}"
elsif insert.update_duplicates?
sql << " ON DUPLICATE KEY UPDATE "
- sql << insert.touch_model_timestamps_unless { |column| "#{column}<=>VALUES(#{column})" }
- sql << insert.updatable_columns.map { |column| "#{column}=VALUES(#{column})" }.join(",")
+ if insert.raw_update_sql?
+ sql << insert.raw_update_sql
+ else
+ sql << insert.touch_model_timestamps_unless { |column| "#{column}<=>VALUES(#{column})" }
+ sql << insert.updatable_columns.map { |column| "#{column}=VALUES(#{column})" }.join(",")
+ end
end
sql
@@ -561,58 +659,99 @@ def check_version # :nodoc:
end
end
- private
- def initialize_type_map(m = type_map)
- super
+ class << self
+ def extended_type_map(default_timezone: nil, emulate_booleans:) # :nodoc:
+ super(default_timezone: default_timezone).tap do |m|
+ if emulate_booleans
+ m.register_type %r(^tinyint\(1\))i, Type::Boolean.new
+ end
+ end
+ end
+
+ private
+ def initialize_type_map(m)
+ super
+
+ m.register_type %r(tinytext)i, Type::Text.new(limit: 2**8 - 1)
+ m.register_type %r(tinyblob)i, Type::Binary.new(limit: 2**8 - 1)
+ m.register_type %r(text)i, Type::Text.new(limit: 2**16 - 1)
+ m.register_type %r(blob)i, Type::Binary.new(limit: 2**16 - 1)
+ m.register_type %r(mediumtext)i, Type::Text.new(limit: 2**24 - 1)
+ m.register_type %r(mediumblob)i, Type::Binary.new(limit: 2**24 - 1)
+ m.register_type %r(longtext)i, Type::Text.new(limit: 2**32 - 1)
+ m.register_type %r(longblob)i, Type::Binary.new(limit: 2**32 - 1)
+ m.register_type %r(^float)i, Type::Float.new(limit: 24)
+ m.register_type %r(^double)i, Type::Float.new(limit: 53)
+
+ register_integer_type m, %r(^bigint)i, limit: 8
+ register_integer_type m, %r(^int)i, limit: 4
+ register_integer_type m, %r(^mediumint)i, limit: 3
+ register_integer_type m, %r(^smallint)i, limit: 2
+ register_integer_type m, %r(^tinyint)i, limit: 1
+
+ m.alias_type %r(year)i, "integer"
+ m.alias_type %r(bit)i, "binary"
+ end
- m.register_type(%r(char)i) do |sql_type|
- limit = extract_limit(sql_type)
- Type.lookup(:string, adapter: :mysql2, limit: limit)
+ def register_integer_type(mapping, key, **options)
+ mapping.register_type(key) do |sql_type|
+ if /\bunsigned\b/.match?(sql_type)
+ Type::UnsignedInteger.new(**options)
+ else
+ Type::Integer.new(**options)
+ end
+ end
end
- m.register_type %r(tinytext)i, Type::Text.new(limit: 2**8 - 1)
- m.register_type %r(tinyblob)i, Type::Binary.new(limit: 2**8 - 1)
- m.register_type %r(text)i, Type::Text.new(limit: 2**16 - 1)
- m.register_type %r(blob)i, Type::Binary.new(limit: 2**16 - 1)
- m.register_type %r(mediumtext)i, Type::Text.new(limit: 2**24 - 1)
- m.register_type %r(mediumblob)i, Type::Binary.new(limit: 2**24 - 1)
- m.register_type %r(longtext)i, Type::Text.new(limit: 2**32 - 1)
- m.register_type %r(longblob)i, Type::Binary.new(limit: 2**32 - 1)
- m.register_type %r(^float)i, Type::Float.new(limit: 24)
- m.register_type %r(^double)i, Type::Float.new(limit: 53)
-
- register_integer_type m, %r(^bigint)i, limit: 8
- register_integer_type m, %r(^int)i, limit: 4
- register_integer_type m, %r(^mediumint)i, limit: 3
- register_integer_type m, %r(^smallint)i, limit: 2
- register_integer_type m, %r(^tinyint)i, limit: 1
-
- m.register_type %r(^tinyint\(1\))i, Type::Boolean.new if emulate_booleans
- m.alias_type %r(year)i, "integer"
- m.alias_type %r(bit)i, "binary"
-
- m.register_type %r(^enum)i, Type.lookup(:string, adapter: :mysql2)
- m.register_type %r(^set)i, Type.lookup(:string, adapter: :mysql2)
- end
-
- def register_integer_type(mapping, key, **options)
- mapping.register_type(key) do |sql_type|
- if /\bunsigned\b/.match?(sql_type)
- Type::UnsignedInteger.new(**options)
+ def extract_precision(sql_type)
+ if /\A(?:date)?time(?:stamp)?\b/.match?(sql_type)
+ super || 0
else
- Type::Integer.new(**options)
+ super
end
end
+ end
+
+ EXTENDED_TYPE_MAPS = Concurrent::Map.new
+ EMULATE_BOOLEANS_TRUE = { emulate_booleans: true }.freeze
+
+ private
+ def strip_whitespace_characters(expression)
+ expression = expression.gsub(/\\n|\\\\/, "")
+ expression = expression.gsub(/\s{2,}/, " ")
+ expression
end
- def extract_precision(sql_type)
- if /\A(?:date)?time(?:stamp)?\b/.match?(sql_type)
- super || 0
- else
- super
+ def extended_type_map_key
+ if @default_timezone
+ { default_timezone: @default_timezone, emulate_booleans: emulate_booleans }
+ elsif emulate_booleans
+ EMULATE_BOOLEANS_TRUE
end
end
+ def handle_warnings(sql)
+ return if ActiveRecord.db_warnings_action.nil? || @raw_connection.warning_count == 0
+
+ @affected_rows_before_warnings = @raw_connection.affected_rows
+ result = @raw_connection.query("SHOW WARNINGS")
+ result.each do |level, code, message|
+ warning = SQLWarning.new(message, code, level, sql, @pool)
+ next if warning_ignored?(warning)
+
+ ActiveRecord.db_warnings_action.call(warning)
+ end
+ end
+
+ def warning_ignored?(warning)
+ warning.level == "Note" || super
+ end
+
+ # Make sure we carry over any changes to ActiveRecord.default_timezone that have been
+ # made since we established the connection
+ def sync_timezone_changes(raw_connection)
+ end
+
# See https://dev.mysql.com/doc/mysql-errors/en/server-error-reference.html
ER_DB_CREATE_EXISTS = 1007
ER_FILSORT_ABORT = 1028
@@ -630,69 +769,59 @@ def extract_precision(sql_type)
ER_CANNOT_CREATE_TABLE = 1005
ER_LOCK_WAIT_TIMEOUT = 1205
ER_QUERY_INTERRUPTED = 1317
+ ER_CONNECTION_KILLED = 1927
+ CR_SERVER_GONE_ERROR = 2006
+ CR_SERVER_LOST = 2013
ER_QUERY_TIMEOUT = 3024
ER_FK_INCOMPATIBLE_COLUMNS = 3780
+ ER_CLIENT_INTERACTION_TIMEOUT = 4031
def translate_exception(exception, message:, sql:, binds:)
case error_number(exception)
when nil
if exception.message.match?(/MySQL client is not connected/i)
- ConnectionNotEstablished.new(exception)
+ ConnectionNotEstablished.new(exception, connection_pool: @pool)
else
super
end
+ when ER_CONNECTION_KILLED, CR_SERVER_GONE_ERROR, CR_SERVER_LOST, ER_CLIENT_INTERACTION_TIMEOUT
+ ConnectionFailed.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_DB_CREATE_EXISTS
- DatabaseAlreadyExists.new(message, sql: sql, binds: binds)
+ DatabaseAlreadyExists.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_DUP_ENTRY
- RecordNotUnique.new(message, sql: sql, binds: binds)
+ RecordNotUnique.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_NO_REFERENCED_ROW, ER_ROW_IS_REFERENCED, ER_ROW_IS_REFERENCED_2, ER_NO_REFERENCED_ROW_2
- InvalidForeignKey.new(message, sql: sql, binds: binds)
+ InvalidForeignKey.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_CANNOT_ADD_FOREIGN, ER_FK_INCOMPATIBLE_COLUMNS
- mismatched_foreign_key(message, sql: sql, binds: binds)
+ mismatched_foreign_key(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_CANNOT_CREATE_TABLE
if message.include?("errno: 150")
- mismatched_foreign_key(message, sql: sql, binds: binds)
+ mismatched_foreign_key(message, sql: sql, binds: binds, connection_pool: @pool)
else
super
end
when ER_DATA_TOO_LONG
- ValueTooLong.new(message, sql: sql, binds: binds)
+ ValueTooLong.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_OUT_OF_RANGE
- RangeError.new(message, sql: sql, binds: binds)
+ RangeError.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_NOT_NULL_VIOLATION, ER_DO_NOT_HAVE_DEFAULT
- NotNullViolation.new(message, sql: sql, binds: binds)
+ NotNullViolation.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_LOCK_DEADLOCK
- Deadlocked.new(message, sql: sql, binds: binds)
+ Deadlocked.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_LOCK_WAIT_TIMEOUT
- LockWaitTimeout.new(message, sql: sql, binds: binds)
+ LockWaitTimeout.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_QUERY_TIMEOUT, ER_FILSORT_ABORT
- StatementTimeout.new(message, sql: sql, binds: binds)
+ StatementTimeout.new(message, sql: sql, binds: binds, connection_pool: @pool)
when ER_QUERY_INTERRUPTED
- QueryCanceled.new(message, sql: sql, binds: binds)
+ QueryCanceled.new(message, sql: sql, binds: binds, connection_pool: @pool)
else
super
end
end
def change_column_for_alter(table_name, column_name, type, **options)
- column = column_for(table_name, column_name)
- type ||= column.sql_type
-
- unless options.key?(:default)
- options[:default] = column.default
- end
-
- unless options.key?(:null)
- options[:null] = column.null
- end
-
- unless options.key?(:comment)
- options[:comment] = column.comment
- end
-
- td = create_table_definition(table_name)
- cd = td.new_column_definition(column.name, type, **options)
- schema_creation.accept(ChangeColumnDefinition.new(cd, column.name))
+ cd = build_change_column_definition(table_name, column_name, type, **options)
+ schema_creation.accept(cd)
end
def rename_column_for_alter(table_name, column_name, new_column_name)
@@ -706,7 +835,7 @@ def rename_column_for_alter(table_name, column_name, new_column_name)
comment: column.comment
}
- current_type = exec_query("SHOW COLUMNS FROM #{quote_table_name(table_name)} LIKE #{quote(column_name)}", "SCHEMA").first["Type"]
+ current_type = internal_exec_query("SHOW COLUMNS FROM #{quote_table_name(table_name)} LIKE #{quote(column_name)}", "SCHEMA").first["Type"]
td = create_table_definition(table_name)
cd = td.new_column_definition(new_column_name, current_type, **options)
schema_creation.accept(ChangeColumnDefinition.new(cd, column.name))
@@ -743,9 +872,6 @@ def supports_rename_column?
def configure_connection
variables = @config.fetch(:variables, {}).stringify_keys
- # By default, MySQL 'where id is null' selects the last inserted id; Turn this off.
- variables["sql_auto_is_null"] = 0
-
# Increase timeout so the server doesn't disconnect us.
wait_timeout = self.class.type_cast_config_to_integer(@config[:wait_timeout])
wait_timeout = 2147483 unless wait_timeout.is_a?(Integer)
@@ -780,17 +906,16 @@ def configure_connection
end
# Gather up all of the SET variables...
- variable_assignments = variables.map do |k, v|
+ variable_assignments = variables.filter_map do |k, v|
if defaults.include?(v)
"@@SESSION.#{k} = DEFAULT" # Sets the value to the global or compile default
elsif !v.nil?
"@@SESSION.#{k} = #{quote(v)}"
end
- # or else nil; compact to clear nils out
- end.compact.join(", ")
+ end.join(", ")
# ...and send them all in one query
- execute("SET #{encoding} #{sql_mode_assignment} #{variable_assignments}", "SCHEMA")
+ internal_execute("SET #{encoding} #{sql_mode_assignment} #{variable_assignments}")
end
def column_definitions(table_name) # :nodoc:
@@ -800,7 +925,7 @@ def column_definitions(table_name) # :nodoc:
end
def create_table_info(table_name) # :nodoc:
- exec_query("SHOW CREATE TABLE #{quote_table_name(table_name)}", "SCHEMA").first["Create Table"]
+ internal_exec_query("SHOW CREATE TABLE #{quote_table_name(table_name)}", "SCHEMA").first["Create Table"]
end
def arel_visitor
@@ -811,18 +936,17 @@ def build_statement_pool
StatementPool.new(self.class.type_cast_config_to_integer(@config[:statement_limit]))
end
- def mismatched_foreign_key(message, sql:, binds:)
+ def mismatched_foreign_key_details(message:, sql:)
+ foreign_key_pat =
+ /Referencing column '(\w+)' and referenced/i =~ message ? $1 : '\w+'
+
match = %r/
(?:CREATE|ALTER)\s+TABLE\s*(?:`?\w+`?\.)?`?(?<table>\w+)`?.+?
- FOREIGN\s+KEY\s*\(`?(?<foreign_key>\w+)`?\)\s*
+ FOREIGN\s+KEY\s*\(`?(?<foreign_key>#{foreign_key_pat})`?\)\s*
REFERENCES\s*(`?(?<target_table>\w+)`?)\s*\(`?(?<primary_key>\w+)`?\)
/xmi.match(sql)
- options = {
- message: message,
- sql: sql,
- binds: binds,
- }
+ options = {}
if match
options[:table] = match[:table]
@@ -832,24 +956,29 @@ def mismatched_foreign_key(message, sql:, binds:)
options[:primary_key_column] = column_for(match[:target_table], match[:primary_key])
end
- MismatchedForeignKey.new(**options)
+ options
end
- def version_string(full_version_string)
- full_version_string.match(/^(?:5\.5\.5-)?(\d+\.\d+\.\d+)/)[1]
- end
+ def mismatched_foreign_key(message, sql:, binds:, connection_pool:)
+ options = {
+ message: message,
+ sql: sql,
+ binds: binds,
+ connection_pool: connection_pool
+ }
- # Alias MysqlString to work Mashal.load(File.read("legacy_record.dump")).
- # TODO: Remove the constant alias once Rails 6.1 has released.
- MysqlString = Type::String # :nodoc:
+ if sql
+ options.update mismatched_foreign_key_details(message: message, sql: sql)
+ else
+ options[:query_parser] = ->(sql) { mismatched_foreign_key_details(message: message, sql: sql) }
+ end
- ActiveRecord::Type.register(:immutable_string, adapter: :mysql2) do |_, **args|
- Type::ImmutableString.new(true: "1", false: "0", **args)
+ MismatchedForeignKey.new(**options)
end
- ActiveRecord::Type.register(:string, adapter: :mysql2) do |_, **args|
- Type::String.new(true: "1", false: "0", **args)
+
+ def version_string(full_version_string)
+ full_version_string.match(/^(?:5\.5\.5-)?(\d+\.\d+\.\d+)/)[1]
end
- ActiveRecord::Type.register(:unsigned_integer, Type::UnsignedInteger, adapter: :mysql2)
end
end
end
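The `foreign_keys` change above handles MySQL returning one `information_schema` row per column of a composite foreign key: rows are grouped by constraint name, ordered by `ordinal_position`, and single-column keys keep scalar options while composite keys get arrays. A reduced sketch of that grouping logic, with plain hashes standing in for the result set (the constraint and column names are illustrative):

```ruby
# One row per (constraint, column) pair, as information_schema reports them.
rows = [
  { "name" => "fk_rails_1", "column" => "order_id",   "primary_key" => "id",      "position" => 1 },
  { "name" => "fk_rails_2", "column" => "shop_id",    "primary_key" => "shop_id", "position" => 1 },
  { "name" => "fk_rails_2", "column" => "product_id", "primary_key" => "id",      "position" => 2 }
]

grouped = rows.group_by { |r| r["name"] }.values
grouped.each { |g| g.sort_by! { |r| r["position"] } }

fks = grouped.map do |group|
  row = group.first
  if group.one?
    { name: row["name"], column: row["column"], primary_key: row["primary_key"] }
  else
    { name: row["name"],
      column: group.map { |r| r["column"] },
      primary_key: group.map { |r| r["primary_key"] } }
  end
end
# fks has one entry per constraint; the composite key carries column arrays.
```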
diff --git a/activerecord/lib/active_record/connection_adapters/column.rb b/activerecord/lib/active_record/connection_adapters/column.rb
index 85521ce9ea..4c05be416a 100644
--- a/activerecord/lib/active_record/connection_adapters/column.rb
+++ b/activerecord/lib/active_record/connection_adapters/column.rb
@@ -63,6 +63,15 @@ def encode_with(coder)
coder["comment"] = @comment
end
+ # whether the column is auto-populated by the database using a sequence
+ def auto_incremented_by_db?
+ false
+ end
+
+ def auto_populated?
+ auto_incremented_by_db? || default_function
+ end
+
def ==(other)
other.is_a?(Column) &&
name == other.name &&
@@ -87,6 +96,10 @@ def hash
comment.hash
end
+ def virtual?
+ false
+ end
+
private
def deduplicated
@name = -name
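The new `Column#auto_populated?` predicate above treats a column as database-populated when it is auto-incremented by the database or carries a default function. A reduced sketch of that predicate on a stand-in struct (the real method returns the truthy `default_function` itself; this sketch normalizes to a boolean):

```ruby
# Stand-in for ConnectionAdapters::Column, keeping only the two inputs
# the predicate consults.
ColumnSketch = Struct.new(:auto_incremented_by_db, :default_function) do
  def auto_populated?
    auto_incremented_by_db || !default_function.nil?
  end
end

ColumnSketch.new(true, nil).auto_populated?        # => true
ColumnSketch.new(false, "uuid()").auto_populated?  # => true
ColumnSketch.new(false, nil).auto_populated?       # => false
```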
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/column.rb b/activerecord/lib/active_record/connection_adapters/mysql/column.rb
index c21529b0a8..0d4b022548 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/column.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/column.rb
@@ -17,6 +17,7 @@ def case_sensitive?
def auto_increment?
extra == "auto_increment"
end
+ alias_method :auto_incremented_by_db?, :auto_increment?
def virtual?
/\b(?:VIRTUAL|STORED|PERSISTENT)\b/.match?(extra)
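The MySQL column predicates above key off the `Extra` field reported by the server: `auto_increment?` is an exact match, while `virtual?` matches the generated-column keywords used by MySQL (`VIRTUAL`, `STORED`) and MariaDB (`PERSISTENT`). A small demonstration of that regex against illustrative `Extra` values:

```ruby
# Same pattern as MySQL::Column#virtual? in the hunk above.
VIRTUAL_EXTRA = /\b(?:VIRTUAL|STORED|PERSISTENT)\b/

VIRTUAL_EXTRA.match?("VIRTUAL GENERATED")  # => true
VIRTUAL_EXTRA.match?("STORED GENERATED")   # => true
VIRTUAL_EXTRA.match?("auto_increment")     # => false

# auto_increment? is an equality check, not a pattern match:
"auto_increment" == "auto_increment"       # => true
```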
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/database_statements.rb b/activerecord/lib/active_record/connection_adapters/mysql/database_statements.rb
index 1ef278c240..86509a136c 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/database_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/database_statements.rb
@@ -4,125 +4,57 @@ module ActiveRecord
module ConnectionAdapters
module MySQL
module DatabaseStatements
- # Returns an ActiveRecord::Result instance.
- def select_all(*, **) # :nodoc:
- result = if ExplainRegistry.collect? && prepared_statements
- unprepared_statement { super }
- else
- super
- end
- @connection.abandon_results!
- result
- end
-
- def query(sql, name = nil) # :nodoc:
- execute(sql, name).to_a
- end
-
- READ_QUERY = ActiveRecord::ConnectionAdapters::AbstractAdapter.build_read_query_regexp(
- :desc, :describe, :set, :show, :use
+ READ_QUERY = AbstractAdapter.build_read_query_regexp(
+ :desc, :describe, :set, :show, :use, :kill
) # :nodoc:
private_constant :READ_QUERY
+ # https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_current-timestamp
+ # https://dev.mysql.com/doc/refman/5.7/en/date-and-time-type-syntax.html
+ HIGH_PRECISION_CURRENT_TIMESTAMP = Arel.sql("CURRENT_TIMESTAMP(6)").freeze # :nodoc:
+ private_constant :HIGH_PRECISION_CURRENT_TIMESTAMP
+
def write_query?(sql) # :nodoc:
!READ_QUERY.match?(sql)
rescue ArgumentError # Invalid encoding
!READ_QUERY.match?(sql.b)
end
- def explain(arel, binds = [])
- sql = "EXPLAIN #{to_sql(arel, binds)}"
- start = Concurrent.monotonic_time
- result = exec_query(sql, "EXPLAIN", binds)
- elapsed = Concurrent.monotonic_time - start
-
- MySQL::ExplainPrettyPrinter.new.pp(result, elapsed)
+ def high_precision_current_timestamp
+ HIGH_PRECISION_CURRENT_TIMESTAMP
end
- # Executes the SQL statement in the context of this connection.
- def execute(sql, name = nil)
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
-
- # make sure we carry over any changes to ActiveRecord::Base.default_timezone that have been
- # made since we established the connection
- @connection.query_options[:database_timezone] = ActiveRecord::Base.default_timezone
+ def explain(arel, binds = [], options = [])
+ sql = build_explain_clause(options) + " " + to_sql(arel, binds)
+ start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
+ result = internal_exec_query(sql, "EXPLAIN", binds)
+ elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
- super
+ MySQL::ExplainPrettyPrinter.new.pp(result, elapsed)
end
- def exec_query(sql, name = "SQL", binds = [], prepare: false)
- if without_prepared_statement?(binds)
- execute_and_free(sql, name) do |result|
- if result
- build_result(columns: result.fields, rows: result.to_a)
- else
- build_result(columns: [], rows: [])
- end
- end
- else
- exec_stmt_and_free(sql, name, binds, cache_stmt: prepare) do |_, result|
- if result
- build_result(columns: result.fields, rows: result.to_a)
- else
- build_result(columns: [], rows: [])
- end
- end
- end
- end
+ def build_explain_clause(options = [])
+ return "EXPLAIN" if options.empty?
- def exec_delete(sql, name = nil, binds = [])
- if without_prepared_statement?(binds)
- @lock.synchronize do
- execute_and_free(sql, name) { @connection.affected_rows }
- end
+ explain_clause = "EXPLAIN #{options.join(" ").upcase}"
+
+ if analyze_without_explain? && explain_clause.include?("ANALYZE")
+ explain_clause.sub("EXPLAIN ", "")
else
- exec_stmt_and_free(sql, name, binds) { |stmt| stmt.affected_rows }
+ explain_clause
end
end
- alias :exec_update :exec_delete
private
- def execute_batch(statements, name = nil)
- combine_multi_statements(statements).each do |statement|
- execute(statement, name)
- end
- @connection.abandon_results!
+ # https://mariadb.com/kb/en/analyze-statement/
+ def analyze_without_explain?
+ mariadb? && database_version >= "10.1.0"
end
def default_insert_value(column)
super unless column.auto_increment?
end
- def last_inserted_id(result)
- @connection.last_id
- end
-
- def multi_statements_enabled?
- flags = @config[:flags]
-
- if flags.is_a?(Array)
- flags.include?("MULTI_STATEMENTS")
- else
- flags.anybits?(Mysql2::Client::MULTI_STATEMENTS)
- end
- end
-
- def with_multi_statements
- multi_statements_was = multi_statements_enabled?
-
- unless multi_statements_was
- @connection.set_server_option(Mysql2::Client::OPTION_MULTI_STATEMENTS_ON)
- end
-
- yield
- ensure
- unless multi_statements_was
- @connection.set_server_option(Mysql2::Client::OPTION_MULTI_STATEMENTS_OFF)
- end
- end
-
def combine_multi_statements(total_sql)
total_sql.each_with_object([]) do |sql, total_sql_chunks|
previous_packet = total_sql_chunks.last
@@ -149,47 +81,6 @@ def max_allowed_packet_reached?(current_packet, previous_packet)
def max_allowed_packet
@max_allowed_packet ||= show_variable("max_allowed_packet")
end
-
- def exec_stmt_and_free(sql, name, binds, cache_stmt: false)
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
-
- materialize_transactions
- mark_transaction_written_if_write(sql)
-
- # make sure we carry over any changes to ActiveRecord::Base.default_timezone that have been
- # made since we established the connection
- @connection.query_options[:database_timezone] = ActiveRecord::Base.default_timezone
-
- type_casted_binds = type_casted_binds(binds)
-
- log(sql, name, binds, type_casted_binds) do
- if cache_stmt
- stmt = @statements[sql] ||= @connection.prepare(sql)
- else
- stmt = @connection.prepare(sql)
- end
-
- begin
- result = ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- stmt.execute(*type_casted_binds)
- end
- rescue Mysql2::Error => e
- if cache_stmt
- @statements.delete(sql)
- else
- stmt.close
- end
- raise e
- end
-
- ret = yield stmt, result
- result.free if result
- stmt.close unless cache_stmt
- ret
- end
- end
end
end
end
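The new `build_explain_clause` above composes the `EXPLAIN` prefix from user options, with a MariaDB special case where `ANALYZE <statement>` replaces `EXPLAIN ANALYZE <statement>` entirely. A standalone sketch of that logic (the `mariadb_analyze:` keyword is a stand-in for the adapter's `analyze_without_explain?` check, which tests `mariadb? && database_version >= "10.1.0"`):

```ruby
# Sketch of build_explain_clause: joins and upcases the options, then on
# MariaDB strips the leading "EXPLAIN " when ANALYZE is requested, because
# MariaDB uses a bare ANALYZE statement (https://mariadb.com/kb/en/analyze-statement/).
def build_explain_clause(options = [], mariadb_analyze: false)
  return "EXPLAIN" if options.empty?

  explain_clause = "EXPLAIN #{options.join(" ").upcase}"

  if mariadb_analyze && explain_clause.include?("ANALYZE")
    explain_clause.sub("EXPLAIN ", "")
  else
    explain_clause
  end
end

puts build_explain_clause                                      # EXPLAIN
puts build_explain_clause(["analyze"])                         # EXPLAIN ANALYZE
puts build_explain_clause(["analyze"], mariadb_analyze: true)  # ANALYZE
```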
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/quoting.rb b/activerecord/lib/active_record/connection_adapters/mysql/quoting.rb
index 4a88ed1834..4392349028 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/quoting.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/quoting.rb
@@ -6,12 +6,35 @@ module ActiveRecord
module ConnectionAdapters
module MySQL
module Quoting # :nodoc:
+ QUOTED_COLUMN_NAMES = Concurrent::Map.new # :nodoc:
+ QUOTED_TABLE_NAMES = Concurrent::Map.new # :nodoc:
+
+ def cast_bound_value(value)
+ case value
+ when Rational
+ value.to_f.to_s
+ when Numeric
+ value.to_s
+ when BigDecimal
+ value.to_s("F")
+ when true
+ "1"
+ when false
+ "0"
+ when ActiveSupport::Duration
+ warn_quote_duration_deprecated
+ value.to_s
+ else
+ value
+ end
+ end
+
def quote_column_name(name)
- self.class.quoted_column_names[name] ||= "`#{super.gsub('`', '``')}`"
+ QUOTED_COLUMN_NAMES[name] ||= "`#{super.gsub('`', '``')}`"
end
def quote_table_name(name)
- self.class.quoted_table_names[name] ||= super.gsub(".", "`.`").freeze
+ QUOTED_TABLE_NAMES[name] ||= super.gsub(".", "`.`").freeze
end
def unquoted_true
@@ -34,6 +57,34 @@ def quoted_binary(value)
"x'#{value.hex}'"
end
+ def unquote_identifier(identifier)
+ if identifier && identifier.start_with?("`")
+ identifier[1..-2]
+ else
+ identifier
+ end
+ end
+
+ # Override +type_cast+ we pass to mysql2 Date and Time objects instead
+ # of Strings since MySQL adapters are able to handle those classes more efficiently.
+ def type_cast(value) # :nodoc:
+ case value
+ when ActiveSupport::TimeWithZone
+ # We need to check explicitly for ActiveSupport::TimeWithZone because
+ # we need to transform it to Time objects but we don't want to
+ # transform Time objects to themselves.
+ if default_timezone == :utc
+ value.getutc
+ else
+ value.getlocal
+ end
+ when Date, Time
+ value
+ else
+ super
+ end
+ end
+
def column_name_matcher
COLUMN_NAME
end
@@ -47,7 +98,7 @@ def column_name_with_order_matcher
(
(?:
# `table_name`.`column_name` | function(one or no argument)
- ((?:\w+\.|`\w+`\.)?(?:\w+|`\w+`)) | \w+\((?:|\g<2>)\)
+ ((?:\w+\.|`\w+`\.)?(?:\w+|`\w+`) | \w+\((?:|\g<2>)\))
)
(?:(?:\s+AS)?\s+(?:\w+|`\w+`))?
)
@@ -60,8 +111,9 @@ def column_name_with_order_matcher
(
(?:
# `table_name`.`column_name` | function(one or no argument)
- ((?:\w+\.|`\w+`\.)?(?:\w+|`\w+`)) | \w+\((?:|\g<2>)\)
+ ((?:\w+\.|`\w+`\.)?(?:\w+|`\w+`) | \w+\((?:|\g<2>)\))
)
+ (?:\s+COLLATE\s+(?:\w+|"\w+"))?
(?:\s+ASC|\s+DESC)?
)
(?:\s*,\s*\g<1>)*
@@ -69,27 +121,6 @@ def column_name_with_order_matcher
/ix
private_constant :COLUMN_NAME, :COLUMN_NAME_WITH_ORDER
-
- private
- # Override +_type_cast+ we pass to mysql2 Date and Time objects instead
- # of Strings since mysql2 is able to handle those classes more efficiently.
- def _type_cast(value)
- case value
- when ActiveSupport::TimeWithZone
- # We need to check explicitly for ActiveSupport::TimeWithZone because
- # we need to transform it to Time objects but we don't want to
- # transform Time objects to themselves.
- if ActiveRecord::Base.default_timezone == :utc
- value.getutc
- else
- value.getlocal
- end
- when Date, Time
- value
- else
- super
- end
- end
end
end
end
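The quoting patch above moves the identifier caches from per-class hashes into the module-level `QUOTED_COLUMN_NAMES` / `QUOTED_TABLE_NAMES` constants. A minimal sketch of the backtick quoting those caches memoize, with a plain `Hash` standing in for `Concurrent::Map`:

```ruby
# Plain-Hash stand-ins for the Concurrent::Map constants in the patch.
QUOTED_COLUMN_NAMES = {}
QUOTED_TABLE_NAMES  = {}

# Wrap in backticks, doubling any embedded backtick (MySQL's escape rule).
def quote_column_name(name)
  QUOTED_COLUMN_NAMES[name] ||= "`#{name.to_s.gsub('`', '``')}`"
end

# Quote the whole name, then split a schema-qualified name at the dot.
def quote_table_name(name)
  QUOTED_TABLE_NAMES[name] ||= quote_column_name(name).gsub(".", "`.`")
end

p quote_column_name("weird`col")  # "`weird``col`"
p quote_table_name("db.users")    # "`db`.`users`"
```

Because the cache is module-level, every adapter instance in the process shares the memoized strings instead of rebuilding them per connection class.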
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/schema_creation.rb b/activerecord/lib/active_record/connection_adapters/mysql/schema_creation.rb
index 60dd7846e2..cc43892140 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/schema_creation.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/schema_creation.rb
@@ -24,6 +24,15 @@ def visit_ChangeColumnDefinition(o)
add_column_position!(change_column_sql, column_options(o.column))
end
+ def visit_ChangeColumnDefaultDefinition(o)
+ sql = +"ALTER COLUMN #{quote_column_name(o.column.name)} "
+ if o.default.nil? && !o.column.null
+ sql << "DROP DEFAULT"
+ else
+ sql << "SET DEFAULT #{quote_default_expression(o.default, o.column)}"
+ end
+ end
+
def visit_CreateIndexDefinition(o)
sql = visit_IndexDefinition(o.index, true)
sql << " #{o.algorithm}" if o.algorithm
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/schema_definitions.rb b/activerecord/lib/active_record/connection_adapters/mysql/schema_definitions.rb
index 52a8a0b97d..33669cd7c2 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/schema_definitions.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/schema_definitions.rb
@@ -57,6 +57,7 @@ module ColumnMethods
end
end
+ # = Active Record MySQL Adapter \Table Definition
class TableDefinition < ActiveRecord::ConnectionAdapters::TableDefinition
include ColumnMethods
@@ -85,16 +86,24 @@ def new_column_definition(name, type, **options) # :nodoc:
end
private
+ def valid_column_definition_options
+ super + [:auto_increment, :charset, :as, :size, :unsigned, :first, :after, :type, :stored]
+ end
+
def aliased_types(name, fallback)
fallback
end
def integer_like_primary_key_type(type, options)
- options[:auto_increment] = true
+ unless options[:auto_increment] == false
+ options[:auto_increment] = true
+ end
+
type
end
end
+ # = Active Record MySQL Adapter \Table
class Table < ActiveRecord::ConnectionAdapters::Table
include ColumnMethods
end
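The `integer_like_primary_key_type` change above is small but behavioral: `auto_increment` still defaults to true, but an explicit `auto_increment: false` from the caller is now respected instead of being overwritten. A standalone sketch (not the Rails class):

```ruby
# Only force auto_increment on when the caller hasn't explicitly opted out.
def integer_like_primary_key_type(type, options)
  unless options[:auto_increment] == false
    options[:auto_increment] = true
  end
  type
end

defaulted = {}
integer_like_primary_key_type(:bigint, defaulted)
p defaulted[:auto_increment]  # true

explicit = { auto_increment: false }
integer_like_primary_key_type(:bigint, explicit)
p explicit[:auto_increment]   # false
```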
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/schema_dumper.rb b/activerecord/lib/active_record/connection_adapters/mysql/schema_dumper.rb
index 0eff3131b6..da595356ed 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/schema_dumper.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/schema_dumper.rb
@@ -53,14 +53,20 @@ def schema_limit(column)
end
def schema_precision(column)
- super unless /\A(?:date)?time(?:stamp)?\b/.match?(column.sql_type) && column.precision == 0
+ if /\Atime(?:stamp)?\b/.match?(column.sql_type) && column.precision == 0
+ nil
+ elsif column.type == :datetime
+ column.precision == 0 ? "nil" : super
+ else
+ super
+ end
end
def schema_collation(column)
if column.collation
@table_collation_cache ||= {}
@table_collation_cache[table_name] ||=
- @connection.exec_query("SHOW TABLE STATUS LIKE #{@connection.quote(table_name)}", "SCHEMA").first["Collation"]
+ @connection.internal_exec_query("SHOW TABLE STATUS LIKE #{@connection.quote(table_name)}", "SCHEMA").first["Collation"]
column.collation.inspect if column.collation != @table_collation_cache[table_name]
end
end
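The `schema_precision` rewrite above distinguishes two cases that the old single-regexp guard conflated: `time`/`timestamp` columns with precision 0 omit precision from the dump, while `datetime` columns now dump an explicit `precision: nil`. A hedged sketch of that decision table, with the final argument standing in for what the superclass would otherwise emit:

```ruby
# Returns the string to dump for a column's :precision option, or nil to omit it.
def schema_precision(sql_type, type, precision)
  if /\Atime(?:stamp)?\b/.match?(sql_type) && precision == 0
    nil                                   # time/timestamp: drop precision 0
  elsif type == :datetime
    precision == 0 ? "nil" : precision.inspect  # datetime: explicit nil
  else
    precision&.inspect                    # stand-in for `super`
  end
end

p schema_precision("time", :time, 0)          # nil
p schema_precision("datetime(6)", :datetime, 0)  # "nil"
p schema_precision("datetime(6)", :datetime, 6)  # "6"
```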
diff --git a/activerecord/lib/active_record/connection_adapters/mysql/schema_statements.rb b/activerecord/lib/active_record/connection_adapters/mysql/schema_statements.rb
index 42ca9bc732..cfc351fa4b 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql/schema_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql/schema_statements.rb
@@ -36,7 +36,7 @@ def indexes(table_name)
end
if row[:Expression]
- expression = row[:Expression]
+ expression = row[:Expression].gsub("\\'", "'")
expression = +"(#{expression})" unless expression.start_with?("(")
indexes.last[-2] << expression
indexes.last[-1][:expressions] ||= {}
@@ -57,9 +57,9 @@ def indexes(table_name)
orders = options.delete(:orders)
lengths = options.delete(:lengths)
- columns = index[-1].map { |name|
+ columns = index[-1].to_h { |name|
[ name.to_sym, expressions[name] || +quote_column_name(name) ]
- }.to_h
+ }
index[-1] = add_options_for_index_columns(
columns, order: orders, length: lengths
@@ -125,6 +125,10 @@ def table_alias_length
256 # https://dev.mysql.com/doc/refman/en/identifiers.html
end
+ def schema_creation # :nodoc:
+ MySQL::SchemaCreation.new(self)
+ end
+
private
CHARSETS_OF_4BYTES_MAXLEN = ["utf8mb4", "utf16", "utf16le", "utf32"]
@@ -150,26 +154,45 @@ def default_row_format
@default_row_format
end
- def schema_creation
- MySQL::SchemaCreation.new(self)
+ def valid_primary_key_options
+ super + [:unsigned]
end
def create_table_definition(name, **options)
MySQL::TableDefinition.new(self, name, **options)
end
- def new_column_from_field(table_name, field)
+ def default_type(table_name, field_name)
+ match = create_table_info(table_name)&.match(/`#{field_name}` (.+) DEFAULT ('|\d+|[A-z]+)/)
+ default_pre = match[2] if match
+
+ if default_pre == "'"
+ :string
+ elsif default_pre&.match?(/^\d+$/)
+ :integer
+ elsif default_pre&.match?(/^[A-z]+$/)
+ :function
+ end
+ end
+
+ def new_column_from_field(table_name, field, _definitions)
+ field_name = field.fetch(:Field)
type_metadata = fetch_type_metadata(field[:Type], field[:Extra])
default, default_function = field[:Default], nil
if type_metadata.type == :datetime && /\ACURRENT_TIMESTAMP(?:\([0-6]?\))?\z/i.match?(default)
+ default = "#{default} ON UPDATE #{default}" if /on update CURRENT_TIMESTAMP/i.match?(field[:Extra])
default, default_function = nil, default
elsif type_metadata.extra == "DEFAULT_GENERATED"
default = +"(#{default})" unless default.start_with?("(")
default, default_function = nil, default
- elsif type_metadata.type == :text && default
+ elsif type_metadata.type == :text && default&.start_with?("'")
# strip and unescape quotes
default = default[1...-1].gsub("\\'", "'")
+ elsif default&.match?(/\A\d/)
+ # It's a number so we can skip the query to check if it is a function
+ # It's a number so we can skip the query to check if it is a function
+ elsif default && default_type(table_name, field_name) == :function
+ default, default_function = nil, default
end
MySQL::Column.new(
@@ -206,14 +229,15 @@ def add_options_for_index_columns(quoted_columns, **options)
def data_source_sql(name = nil, type: nil)
scope = quoted_scope(name, type: type)
- sql = +"SELECT table_name FROM (SELECT table_name, table_type FROM information_schema.tables "
- sql << " WHERE table_schema = #{scope[:schema]}) _subquery"
- if scope[:type] || scope[:name]
- conditions = []
- conditions << "_subquery.table_type = #{scope[:type]}" if scope[:type]
- conditions << "_subquery.table_name = #{scope[:name]}" if scope[:name]
- sql << " WHERE #{conditions.join(" AND ")}"
+ sql = +"SELECT table_name FROM information_schema.tables"
+ sql << " WHERE table_schema = #{scope[:schema]}"
+
+ if scope[:name]
+ sql << " AND table_name = #{scope[:name]}"
+ sql << " AND table_name IN (SELECT table_name FROM information_schema.tables WHERE table_schema = #{scope[:schema]})"
end
+
+ sql << " AND table_type = #{scope[:type]}" if scope[:type]
sql
end
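The reshaped `data_source_sql` above builds one flat `information_schema` query instead of the old subquery form. A simplified sketch of the string assembly (it omits the extra `table_name IN (...)` clause the patch adds when a name is given; the quoted literals stand in for what `quoted_scope` would produce):

```ruby
# Build a lookup against information_schema.tables, narrowing by name and
# type only when those scope values are present.
def data_source_sql(schema:, name: nil, type: nil)
  sql = +"SELECT table_name FROM information_schema.tables"
  sql << " WHERE table_schema = #{schema}"
  sql << " AND table_name = #{name}" if name
  sql << " AND table_type = #{type}" if type
  sql
end

puts data_source_sql(schema: "'app_db'", name: "'users'", type: "'BASE TABLE'")
```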
diff --git a/activerecord/lib/active_record/connection_adapters/mysql2/database_statements.rb b/activerecord/lib/active_record/connection_adapters/mysql2/database_statements.rb
new file mode 100644
index 0000000000..e488968c15
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/mysql2/database_statements.rb
@@ -0,0 +1,151 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module ConnectionAdapters
+ module Mysql2
+ module DatabaseStatements
+ # Returns an ActiveRecord::Result instance.
+ def select_all(*, **) # :nodoc:
+ result = nil
+ with_raw_connection do |conn|
+ result = if ExplainRegistry.collect? && prepared_statements
+ unprepared_statement { super }
+ else
+ super
+ end
+ conn.abandon_results!
+ end
+ result
+ end
+
+ def internal_exec_query(sql, name = "SQL", binds = [], prepare: false, async: false) # :nodoc:
+ if without_prepared_statement?(binds)
+ execute_and_free(sql, name, async: async) do |result|
+ if result
+ build_result(columns: result.fields, rows: result.to_a)
+ else
+ build_result(columns: [], rows: [])
+ end
+ end
+ else
+ exec_stmt_and_free(sql, name, binds, cache_stmt: prepare, async: async) do |_, result|
+ if result
+ build_result(columns: result.fields, rows: result.to_a)
+ else
+ build_result(columns: [], rows: [])
+ end
+ end
+ end
+ end
+
+ def exec_delete(sql, name = nil, binds = []) # :nodoc:
+ if without_prepared_statement?(binds)
+ with_raw_connection do |conn|
+ @affected_rows_before_warnings = nil
+ execute_and_free(sql, name) { @affected_rows_before_warnings || conn.affected_rows }
+ end
+ else
+ exec_stmt_and_free(sql, name, binds) { |stmt| stmt.affected_rows }
+ end
+ end
+ alias :exec_update :exec_delete
+
+ private
+ def sync_timezone_changes(raw_connection)
+ raw_connection.query_options[:database_timezone] = default_timezone
+ end
+
+ def execute_batch(statements, name = nil)
+ statements = statements.map { |sql| transform_query(sql) }
+ combine_multi_statements(statements).each do |statement|
+ with_raw_connection do |conn|
+ raw_execute(statement, name)
+ conn.abandon_results!
+ end
+ end
+ end
+
+ def last_inserted_id(result)
+ @raw_connection&.last_id
+ end
+
+ def multi_statements_enabled?
+ flags = @config[:flags]
+
+ if flags.is_a?(Array)
+ flags.include?("MULTI_STATEMENTS")
+ else
+ flags.anybits?(::Mysql2::Client::MULTI_STATEMENTS)
+ end
+ end
+
+ def with_multi_statements
+ if multi_statements_enabled?
+ return yield
+ end
+
+ with_raw_connection do |conn|
+ conn.set_server_option(::Mysql2::Client::OPTION_MULTI_STATEMENTS_ON)
+
+ yield
+ ensure
+ conn.set_server_option(::Mysql2::Client::OPTION_MULTI_STATEMENTS_OFF)
+ end
+ end
+
+ def raw_execute(sql, name, async: false, allow_retry: false, materialize_transactions: true)
+ log(sql, name, async: async) do
+ with_raw_connection(allow_retry: allow_retry, materialize_transactions: materialize_transactions) do |conn|
+ sync_timezone_changes(conn)
+ result = conn.query(sql)
+ verified!
+ handle_warnings(sql)
+ result
+ end
+ end
+ end
+
+ def exec_stmt_and_free(sql, name, binds, cache_stmt: false, async: false)
+ sql = transform_query(sql)
+ check_if_write_query(sql)
+
+ mark_transaction_written_if_write(sql)
+
+ type_casted_binds = type_casted_binds(binds)
+
+ log(sql, name, binds, type_casted_binds, async: async) do
+ with_raw_connection do |conn|
+ sync_timezone_changes(conn)
+
+ if cache_stmt
+ stmt = @statements[sql] ||= conn.prepare(sql)
+ else
+ stmt = conn.prepare(sql)
+ end
+
+ begin
+ result = ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
+ stmt.execute(*type_casted_binds)
+ end
+ verified!
+ result
+ rescue ::Mysql2::Error => e
+ if cache_stmt
+ @statements.delete(sql)
+ else
+ stmt.close
+ end
+ raise e
+ end
+
+ ret = yield stmt, result
+ result.free if result
+ stmt.close unless cache_stmt
+ ret
+ end
+ end
+ end
+ end
+ end
+ end
+end
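The reworked `with_multi_statements` in the new file above enables the server option only when the config flags don't already turn it on, and always restores it afterwards, even if the yielded work raises. A standalone sketch of that contract, with `FakeClient` standing in for `Mysql2::Client`:

```ruby
# Stand-in for Mysql2::Client: tracks a single MULTI_STATEMENTS toggle.
class FakeClient
  attr_reader :multi_statements

  def initialize(multi_statements: false)
    @multi_statements = multi_statements
  end

  def set_server_option(on)
    @multi_statements = on
  end
end

def with_multi_statements(conn)
  # Already enabled via config flags: nothing to toggle or restore.
  return yield if conn.multi_statements

  begin
    conn.set_server_option(true)
    yield
  ensure
    conn.set_server_option(false)  # restored even when the block raises
  end
end

conn = FakeClient.new
with_multi_statements(conn) { conn.multi_statements }  # true inside the block
conn.multi_statements                                  # false again afterwards
```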
diff --git a/activerecord/lib/active_record/connection_adapters/mysql2_adapter.rb b/activerecord/lib/active_record/connection_adapters/mysql2_adapter.rb
index 478b4e4729..da59f37af7 100644
--- a/activerecord/lib/active_record/connection_adapters/mysql2_adapter.rb
+++ b/activerecord/lib/active_record/connection_adapters/mysql2_adapter.rb
@@ -1,62 +1,80 @@
# frozen_string_literal: true
require "active_record/connection_adapters/abstract_mysql_adapter"
-require "active_record/connection_adapters/mysql/database_statements"
+require "active_record/connection_adapters/mysql2/database_statements"
gem "mysql2", "~> 0.5"
require "mysql2"
module ActiveRecord
module ConnectionHandling # :nodoc:
+ def mysql2_adapter_class
+ ConnectionAdapters::Mysql2Adapter
+ end
+
# Establishes a connection to the database that's used by all Active Record objects.
def mysql2_connection(config)
- config = config.symbolize_keys
- config[:flags] ||= 0
-
- if config[:flags].kind_of? Array
- config[:flags].push "FOUND_ROWS"
- else
- config[:flags] |= Mysql2::Client::FOUND_ROWS
- end
-
- ConnectionAdapters::Mysql2Adapter.new(
- ConnectionAdapters::Mysql2Adapter.new_client(config),
- logger,
- nil,
- config,
- )
+ mysql2_adapter_class.new(config)
end
end
module ConnectionAdapters
+ # = Active Record MySQL2 Adapter
class Mysql2Adapter < AbstractMysqlAdapter
- ER_BAD_DB_ERROR = 1049
+ ER_BAD_DB_ERROR = 1049
+ ER_DBACCESS_DENIED_ERROR = 1044
+ ER_ACCESS_DENIED_ERROR = 1045
+ ER_CONN_HOST_ERROR = 2003
+ ER_UNKNOWN_HOST_ERROR = 2005
+
ADAPTER_NAME = "Mysql2"
- include MySQL::DatabaseStatements
+ include Mysql2::DatabaseStatements
class << self
def new_client(config)
- Mysql2::Client.new(config)
- rescue Mysql2::Error => error
- if error.error_number == ConnectionAdapters::Mysql2Adapter::ER_BAD_DB_ERROR
- raise ActiveRecord::NoDatabaseError
+ ::Mysql2::Client.new(config)
+ rescue ::Mysql2::Error => error
+ case error.error_number
+ when ER_BAD_DB_ERROR
+ raise ActiveRecord::NoDatabaseError.db_error(config[:database])
+ when ER_DBACCESS_DENIED_ERROR, ER_ACCESS_DENIED_ERROR
+ raise ActiveRecord::DatabaseConnectionError.username_error(config[:username])
+ when ER_CONN_HOST_ERROR, ER_UNKNOWN_HOST_ERROR
+ raise ActiveRecord::DatabaseConnectionError.hostname_error(config[:host])
else
raise ActiveRecord::ConnectionNotEstablished, error.message
end
end
- end
- def initialize(connection, logger, connection_options, config)
- superclass_config = config.reverse_merge(prepared_statements: false)
- super(connection, logger, connection_options, superclass_config)
- configure_connection
+ private
+ def initialize_type_map(m)
+ super
+
+ m.register_type(%r(char)i) do |sql_type|
+ limit = extract_limit(sql_type)
+ Type.lookup(:string, adapter: :mysql2, limit: limit)
+ end
+
+ m.register_type %r(^enum)i, Type.lookup(:string, adapter: :mysql2)
+ m.register_type %r(^set)i, Type.lookup(:string, adapter: :mysql2)
+ end
end
- def self.database_exists?(config)
- !!ActiveRecord::Base.mysql2_connection(config)
- rescue ActiveRecord::NoDatabaseError
- false
+ TYPE_MAP = Type::TypeMap.new.tap { |m| initialize_type_map(m) }
+
+ def initialize(...)
+ super
+
+ @config[:flags] ||= 0
+
+ if @config[:flags].kind_of? Array
+ @config[:flags].push "FOUND_ROWS"
+ else
+ @config[:flags] |= ::Mysql2::Client::FOUND_ROWS
+ end
+
+ @connection_parameters ||= @config
end
def supports_json?
@@ -75,17 +93,19 @@ def supports_savepoints?
true
end
+ def savepoint_errors_invalidate_transactions?
+ true
+ end
+
def supports_lazy_transactions?
true
end
# HELPER METHODS ===========================================
- def each_hash(result) # :nodoc:
+ def each_hash(result, &block) # :nodoc:
if block_given?
- result.each(as: :hash, symbolize_keys: true) do |row|
- yield row
- end
+ result.each(as: :hash, symbolize_keys: true, &block)
else
to_enum(:each_hash, result)
end
@@ -99,10 +119,11 @@ def error_number(exception)
# QUOTING ==================================================
#++
+ # Quotes strings for use in SQL input.
def quote_string(string)
- @connection.escape(string)
- rescue Mysql2::Error => error
- raise translate_exception(error, message: error.message, sql: "<escape>", binds: [])
+ with_raw_connection(allow_retry: true, materialize_transactions: false) do |connection|
+ connection.escape(string)
+ end
end
#--
@@ -110,37 +131,45 @@ def quote_string(string)
#++
def active?
- @connection.ping
+ !!@raw_connection&.ping
end
- def reconnect!
- super
- disconnect!
- connect
- end
alias :reset! :reconnect!
# Disconnects from the database if already connected.
# Otherwise, this method does nothing.
def disconnect!
super
- @connection.close
+ @raw_connection&.close
+ @raw_connection = nil
end
def discard! # :nodoc:
super
- @connection.automatic_close = false
- @connection = nil
+ @raw_connection&.automatic_close = false
+ @raw_connection = nil
end
private
+ def text_type?(type)
+ TYPE_MAP.lookup(type).is_a?(Type::String) || TYPE_MAP.lookup(type).is_a?(Type::Text)
+ end
+
def connect
- @connection = self.class.new_client(@config)
- configure_connection
+ @raw_connection = self.class.new_client(@connection_parameters)
+ rescue ConnectionNotEstablished => ex
+ raise ex.set_pool(@pool)
+ end
+
+ def reconnect
+ @raw_connection&.close
+ @raw_connection = nil
+ connect
end
def configure_connection
- @connection.query_options[:as] = :array
+ @raw_connection.query_options[:as] = :array
+ @raw_connection.query_options[:database_timezone] = default_timezone
super
end
@@ -149,16 +178,38 @@ def full_version
end
def get_full_version
- @connection.server_info[:version]
+ any_raw_connection.server_info[:version]
end
def translate_exception(exception, message:, sql:, binds:)
- if exception.is_a?(Mysql2::Error::TimeoutError) && !exception.error_number
- ActiveRecord::AdapterTimeout.new(message, sql: sql, binds: binds)
+ if exception.is_a?(::Mysql2::Error::TimeoutError) && !exception.error_number
+ ActiveRecord::AdapterTimeout.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ elsif exception.is_a?(::Mysql2::Error::ConnectionError)
+ if exception.message.match?(/MySQL client is not connected/i)
+ ActiveRecord::ConnectionNotEstablished.new(exception, connection_pool: @pool)
+ else
+ ActiveRecord::ConnectionFailed.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ end
else
super
end
end
+
+ def default_prepared_statements
+ false
+ end
+
+ ActiveRecord::Type.register(:immutable_string, adapter: :mysql2) do |_, **args|
+ Type::ImmutableString.new(true: "1", false: "0", **args)
+ end
+
+ ActiveRecord::Type.register(:string, adapter: :mysql2) do |_, **args|
+ Type::String.new(true: "1", false: "0", **args)
+ end
+
+ ActiveRecord::Type.register(:unsigned_integer, Type::UnsignedInteger, adapter: :mysql2)
end
+
+ ActiveSupport.run_load_hooks(:active_record_mysql2adapter, Mysql2Adapter)
end
end
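The expanded `new_client` rescue above routes `Mysql2::Error` by MySQL error number to more specific connection failures. A hedged sketch of just the routing table; the constants mirror those defined on `Mysql2Adapter`, while the returned symbols are stand-ins for the `ActiveRecord` exceptions (`NoDatabaseError`, `DatabaseConnectionError.username_error` / `.hostname_error`, `ConnectionNotEstablished`):

```ruby
ER_BAD_DB_ERROR          = 1049
ER_DBACCESS_DENIED_ERROR = 1044
ER_ACCESS_DENIED_ERROR   = 1045
ER_CONN_HOST_ERROR       = 2003
ER_UNKNOWN_HOST_ERROR    = 2005

# Map a MySQL client error number to the category of exception raised.
def classify_mysql_error(error_number)
  case error_number
  when ER_BAD_DB_ERROR
    :no_database
  when ER_DBACCESS_DENIED_ERROR, ER_ACCESS_DENIED_ERROR
    :bad_username
  when ER_CONN_HOST_ERROR, ER_UNKNOWN_HOST_ERROR
    :bad_hostname
  else
    :connection_not_established
  end
end

p classify_mysql_error(1049)  # :no_database
p classify_mysql_error(2003)  # :bad_hostname
```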
diff --git a/activerecord/lib/active_record/connection_adapters/pool_config.rb b/activerecord/lib/active_record/connection_adapters/pool_config.rb
index 681880a408..160cc44851 100644
--- a/activerecord/lib/active_record/connection_adapters/pool_config.rb
+++ b/activerecord/lib/active_record/connection_adapters/pool_config.rb
@@ -5,8 +5,13 @@ module ConnectionAdapters
class PoolConfig # :nodoc:
include Mutex_m
- attr_reader :db_config, :connection_klass
- attr_accessor :schema_cache
+ attr_reader :db_config, :role, :shard
+ attr_writer :schema_reflection
+ attr_accessor :connection_class
+
+ def schema_reflection
+ @schema_reflection ||= SchemaReflection.new(db_config.lazy_schema_cache_path)
+ end
INSTANCES = ObjectSpace::WeakMap.new
private_constant :INSTANCES
@@ -15,27 +20,31 @@ class << self
def discard_pools!
INSTANCES.each_key(&:discard_pool!)
end
+
+ def disconnect_all!
+ INSTANCES.each_key { |c| c.disconnect!(automatic_reconnect: true) }
+ end
end
- def initialize(connection_klass, db_config)
+ def initialize(connection_class, db_config, role, shard)
super()
- @connection_klass = connection_klass
+ @connection_class = connection_class
@db_config = db_config
+ @role = role
+ @shard = shard
@pool = nil
INSTANCES[self] = self
end
- def connection_specification_name
- if connection_klass.is_a?(String)
- connection_klass
- elsif connection_klass.primary_class?
+ def connection_name
+ if connection_class.primary_class?
"ActiveRecord::Base"
else
- connection_klass.name
+ connection_class.name
end
end
- def disconnect!
+ def disconnect!(automatic_reconnect: false)
ActiveSupport::ForkTracker.check!
return unless @pool
@@ -43,7 +52,7 @@ def disconnect!
synchronize do
return unless @pool
- @pool.automatic_reconnect = false
+ @pool.automatic_reconnect = automatic_reconnect
@pool.disconnect!
end
diff --git a/activerecord/lib/active_record/connection_adapters/pool_manager.rb b/activerecord/lib/active_record/connection_adapters/pool_manager.rb
index 2ee0c3dc2a..cb09694327 100644
--- a/activerecord/lib/active_record/connection_adapters/pool_manager.rb
+++ b/activerecord/lib/active_record/connection_adapters/pool_manager.rb
@@ -4,40 +4,50 @@ module ActiveRecord
module ConnectionAdapters
class PoolManager # :nodoc:
def initialize
- @name_to_role_mapping = Hash.new { |h, k| h[k] = {} }
+ @role_to_shard_mapping = Hash.new { |h, k| h[k] = {} }
end
def shard_names
- @name_to_role_mapping.values.flat_map { |shard_map| shard_map.keys }
+ @role_to_shard_mapping.values.flat_map { |shard_map| shard_map.keys }.uniq
end
def role_names
- @name_to_role_mapping.keys
+ @role_to_shard_mapping.keys
end
def pool_configs(role = nil)
if role
- @name_to_role_mapping[role].values
+ @role_to_shard_mapping[role].values
else
- @name_to_role_mapping.flat_map { |_, shard_map| shard_map.values }
+ @role_to_shard_mapping.flat_map { |_, shard_map| shard_map.values }
+ end
+ end
+
+ def each_pool_config(role = nil, &block)
+ if role
+ @role_to_shard_mapping[role].each_value(&block)
+ else
+ @role_to_shard_mapping.each_value do |shard_map|
+ shard_map.each_value(&block)
+ end
end
end
def remove_role(role)
- @name_to_role_mapping.delete(role)
+ @role_to_shard_mapping.delete(role)
end
def remove_pool_config(role, shard)
- @name_to_role_mapping[role].delete(shard)
+ @role_to_shard_mapping[role].delete(shard)
end
def get_pool_config(role, shard)
- @name_to_role_mapping[role][shard]
+ @role_to_shard_mapping[role][shard]
end
def set_pool_config(role, shard, pool_config)
if pool_config
- @name_to_role_mapping[role][shard] = pool_config
+ @role_to_shard_mapping[role][shard] = pool_config
else
raise ArgumentError, "The `pool_config` for the :#{role} role and :#{shard} shard was `nil`. Please check your configuration. If you want your writing role to be something other than `:writing` set `config.active_record.writing_role` in your application configuration. The same setting should be applied for the `reading_role` if applicable."
end
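The rename from `@name_to_role_mapping` to `@role_to_shard_mapping` above makes the nesting explicit: a `role => { shard => pool_config }` Hash. Two roles pointing at the same shard name is exactly why `shard_names` now de-duplicates with `.uniq`. A small sketch of that structure:

```ruby
# role => { shard => pool_config }, with autovivified inner hashes.
role_to_shard_mapping = Hash.new { |h, k| h[k] = {} }
role_to_shard_mapping[:writing][:default] = :writer_pool
role_to_shard_mapping[:reading][:default] = :reader_pool

role_names  = role_to_shard_mapping.keys
shard_names = role_to_shard_mapping.values.flat_map(&:keys).uniq

p role_names   # [:writing, :reading]
p shard_names  # [:default] -- one entry, despite two roles using the shard
```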
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/column.rb b/activerecord/lib/active_record/connection_adapters/postgresql/column.rb
index b9d10e9e2f..926ab09d90 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/column.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/column.rb
@@ -6,37 +6,65 @@ module PostgreSQL
class Column < ConnectionAdapters::Column # :nodoc:
delegate :oid, :fmod, to: :sql_type_metadata
- def initialize(*, serial: nil, **)
+ def initialize(*, serial: nil, identity: nil, generated: nil, **)
super
@serial = serial
+ @identity = identity
+ @generated = generated
+ end
+
+ def identity?
+ @identity
end
def serial?
@serial
end
+ def auto_incremented_by_db?
+ serial? || identity?
+ end
+
+ def virtual?
+ # We assume every generated column is virtual, no matter the concrete type
+ @generated.present?
+ end
+
+ def has_default?
+ super && !virtual?
+ end
+
def array
sql_type_metadata.sql_type.end_with?("[]")
end
alias :array? :array
+ def enum?
+ type == :enum
+ end
+
def sql_type
super.delete_suffix("[]")
end
def init_with(coder)
@serial = coder["serial"]
+ @identity = coder["identity"]
+ @generated = coder["generated"]
super
end
def encode_with(coder)
coder["serial"] = @serial
+ coder["identity"] = @identity
+ coder["generated"] = @generated
super
end
def ==(other)
other.is_a?(Column) &&
super &&
+ identity? == other.identity? &&
serial? == other.serial?
end
alias :eql? :==
@@ -44,6 +72,7 @@ def ==(other)
def hash
Column.hash ^
super.hash ^
+ identity?.hash ^
serial?.hash
end
end
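The PostgreSQL `Column` additions above introduce three related predicates: a column is auto-incremented by the database when it is SERIAL-backed or an identity column, and any generated column counts as virtual (which in turn makes `has_default?` false for it). A minimal stand-in, using a plain `Struct` rather than ActiveRecord's `Column`:

```ruby
FakePGColumn = Struct.new(:serial, :identity, :generated, keyword_init: true) do
  def serial?;   !!serial;   end
  def identity?; !!identity; end

  # Either mechanism means the database assigns the value itself.
  def auto_incremented_by_db?
    serial? || identity?
  end

  # Stand-in for `@generated.present?` without ActiveSupport.
  def virtual?
    !generated.nil? && !generated.empty?
  end
end

p FakePGColumn.new(identity: true).auto_incremented_by_db?  # true
p FakePGColumn.new(generated: "s").virtual?                 # true
```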
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb b/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb
index 278cf58ea6..56e22c7f1b 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb
@@ -4,19 +4,21 @@ module ActiveRecord
module ConnectionAdapters
module PostgreSQL
module DatabaseStatements
- def explain(arel, binds = [])
- sql = "EXPLAIN #{to_sql(arel, binds)}"
- PostgreSQL::ExplainPrettyPrinter.new.pp(exec_query(sql, "EXPLAIN", binds))
+ def explain(arel, binds = [], options = [])
+ sql = build_explain_clause(options) + " " + to_sql(arel, binds)
+ result = internal_exec_query(sql, "EXPLAIN", binds)
+ PostgreSQL::ExplainPrettyPrinter.new.pp(result)
end
# Queries the database and returns the results in an Array-like object
- def query(sql, name = nil) #:nodoc:
- materialize_transactions
+ def query(sql, name = nil) # :nodoc:
mark_transaction_written_if_write(sql)
log(sql, name) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.async_exec(sql).map_types!(@type_map_for_results).values
+ with_raw_connection do |conn|
+ result = conn.async_exec(sql).map_types!(@type_map_for_results).values
+ verified!
+ result
end
end
end
@@ -34,65 +36,53 @@ def write_query?(sql) # :nodoc:
# Executes an SQL statement, returning a PG::Result object on success
# or raising a PG::Error exception otherwise.
+ #
+ # Setting +allow_retry+ to true causes the db to reconnect and retry
+ # executing the SQL statement in case of a connection-related exception.
+ # This option should only be enabled for known idempotent queries.
+ #
# Note: the PG::Result object is manually memory managed; if you don't
# need it specifically, you may want to consider the <tt>exec_query</tt> wrapper.
- def execute(sql, name = nil)
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
-
- materialize_transactions
- mark_transaction_written_if_write(sql)
+ def execute(...) # :nodoc:
+ super
+ ensure
+ @notice_receiver_sql_warnings = []
+ end
- log(sql, name) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.async_exec(sql)
+ def raw_execute(sql, name, async: false, allow_retry: false, materialize_transactions: true)
+ log(sql, name, async: async) do
+ with_raw_connection(allow_retry: allow_retry, materialize_transactions: materialize_transactions) do |conn|
+ result = conn.async_exec(sql)
+ verified!
+ handle_warnings(result)
+ result
end
end
end
- def exec_query(sql, name = "SQL", binds = [], prepare: false)
- execute_and_clear(sql, name, binds, prepare: prepare) do |result|
+ def internal_exec_query(sql, name = "SQL", binds = [], prepare: false, async: false, allow_retry: false, materialize_transactions: true) # :nodoc:
+ execute_and_clear(sql, name, binds, prepare: prepare, async: async, allow_retry: allow_retry, materialize_transactions: materialize_transactions) do |result|
types = {}
fields = result.fields
fields.each_with_index do |fname, i|
ftype = result.ftype i
fmod = result.fmod i
- case type = get_oid_type(ftype, fmod, fname)
- when Type::Integer, Type::Float, OID::Decimal, Type::String, Type::DateTime, Type::Boolean
- # skip if a column has already been type casted by pg decoders
- else types[fname] = type
- end
+ types[fname] = types[i] = get_oid_type(ftype, fmod, fname)
end
build_result(columns: fields, rows: result.values, column_types: types)
end
end
- def exec_delete(sql, name = nil, binds = [])
+ def exec_delete(sql, name = nil, binds = []) # :nodoc:
execute_and_clear(sql, name, binds) { |result| result.cmd_tuples }
end
alias :exec_update :exec_delete
- def sql_for_insert(sql, pk, binds) # :nodoc:
- if pk.nil?
- # Extract the table from the insert sql. Yuck.
- table_ref = extract_table_ref_from_insert_sql(sql)
- pk = primary_key(table_ref) if table_ref
- end
-
- if pk = suppress_composite_primary_key(pk)
- sql = "#{sql} RETURNING #{quote_column_name(pk)}"
- end
-
- super
- end
- private :sql_for_insert
-
- def exec_insert(sql, name = nil, binds = [], pk = nil, sequence_name = nil)
+ def exec_insert(sql, name = nil, binds = [], pk = nil, sequence_name = nil, returning: nil) # :nodoc:
if use_insert_returning? || pk == false
super
else
- result = exec_query(sql, name, binds)
+ result = internal_exec_query(sql, name, binds)
unless sequence_name
table_ref = extract_table_ref_from_insert_sql(sql)
if table_ref
@@ -107,26 +97,56 @@ def exec_insert(sql, name = nil, binds = [], pk = nil, sequence_name = nil)
end
# Begins a transaction.
- def begin_db_transaction
- execute("BEGIN", "TRANSACTION")
+ def begin_db_transaction # :nodoc:
+ internal_execute("BEGIN", "TRANSACTION", allow_retry: true, materialize_transactions: false)
end
- def begin_isolated_db_transaction(isolation)
- begin_db_transaction
- execute "SET TRANSACTION ISOLATION LEVEL #{transaction_isolation_levels.fetch(isolation)}"
+ def begin_isolated_db_transaction(isolation) # :nodoc:
+ internal_execute("BEGIN ISOLATION LEVEL #{transaction_isolation_levels.fetch(isolation)}", "TRANSACTION", allow_retry: true, materialize_transactions: false)
end
# Commits a transaction.
- def commit_db_transaction
- execute("COMMIT", "TRANSACTION")
+ def commit_db_transaction # :nodoc:
+ internal_execute("COMMIT", "TRANSACTION", allow_retry: false, materialize_transactions: true)
end
# Aborts a transaction.
- def exec_rollback_db_transaction
- execute("ROLLBACK", "TRANSACTION")
+ def exec_rollback_db_transaction # :nodoc:
+ cancel_any_running_query
+ internal_execute("ROLLBACK", "TRANSACTION", allow_retry: false, materialize_transactions: true)
+ end
+
+ def exec_restart_db_transaction # :nodoc:
+ cancel_any_running_query
+ internal_execute("ROLLBACK AND CHAIN", "TRANSACTION", allow_retry: false, materialize_transactions: true)
+ end
+
+ # From https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT
+ HIGH_PRECISION_CURRENT_TIMESTAMP = Arel.sql("CURRENT_TIMESTAMP").freeze # :nodoc:
+ private_constant :HIGH_PRECISION_CURRENT_TIMESTAMP
+
+ def high_precision_current_timestamp
+ HIGH_PRECISION_CURRENT_TIMESTAMP
+ end
+
+ def build_explain_clause(options = [])
+ return "EXPLAIN" if options.empty?
+
+ "EXPLAIN (#{options.join(", ").upcase})"
end
private
+ IDLE_TRANSACTION_STATUSES = [PG::PQTRANS_IDLE, PG::PQTRANS_INTRANS, PG::PQTRANS_INERROR]
+ private_constant :IDLE_TRANSACTION_STATUSES
+
+ def cancel_any_running_query
+ return if @raw_connection.nil? || IDLE_TRANSACTION_STATUSES.include?(@raw_connection.transaction_status)
+
+ @raw_connection.cancel
+ @raw_connection.block
+ rescue PG::Error
+ end
+
def execute_batch(statements, name = nil)
execute(combine_multi_statements(statements))
end
@@ -137,12 +157,29 @@ def build_truncate_statements(table_names)
# Returns the current ID of a table's sequence.
def last_insert_id_result(sequence_name)
- exec_query("SELECT currval(#{quote(sequence_name)})", "SQL")
+ internal_exec_query("SELECT currval(#{quote(sequence_name)})", "SQL")
+ end
+
+ def returning_column_values(result)
+ result.rows.first
end
def suppress_composite_primary_key(pk)
pk unless pk.is_a?(Array)
end
+
+ def handle_warnings(sql)
+ @notice_receiver_sql_warnings.each do |warning|
+ next if warning_ignored?(warning)
+
+ warning.sql = sql
+ ActiveRecord.db_warnings_action.call(warning)
+ end
+ end
+
+ def warning_ignored?(warning)
+ ["WARNING", "ERROR", "FATAL", "PANIC"].exclude?(warning.level) || super
+ end
end
end
end
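The `explain` change above routes its options through the new `build_explain_clause` helper, which joins and upcases whatever option names are passed. A minimal standalone sketch of that formatting logic (extracted out of the adapter; no database connection involved):

```ruby
# Standalone sketch of build_explain_clause from the diff above:
# no options yields plain EXPLAIN, otherwise EXPLAIN (OPT1, OPT2, ...).
def build_explain_clause(options = [])
  return "EXPLAIN" if options.empty?

  "EXPLAIN (#{options.join(", ").upcase})"
end

build_explain_clause                         # => "EXPLAIN"
build_explain_clause(["analyze", "verbose"]) # => "EXPLAIN (ANALYZE, VERBOSE)"
```

This is what lets `Model.all.explain(:analyze, :verbose)` produce `EXPLAIN (ANALYZE, VERBOSE) SELECT ...` on PostgreSQL.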
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid.rb
index 1540b2ee28..30f0d49adc 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid.rb
@@ -20,6 +20,8 @@
require "active_record/connection_adapters/postgresql/oid/legacy_point"
require "active_record/connection_adapters/postgresql/oid/range"
require "active_record/connection_adapters/postgresql/oid/specialized_string"
+require "active_record/connection_adapters/postgresql/oid/timestamp"
+require "active_record/connection_adapters/postgresql/oid/timestamp_with_time_zone"
require "active_record/connection_adapters/postgresql/oid/uuid"
require "active_record/connection_adapters/postgresql/oid/vector"
require "active_record/connection_adapters/postgresql/oid/xml"
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/array.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/array.rb
index 0bbe98145a..e46e47102b 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/array.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/array.rb
@@ -65,7 +65,7 @@ def type_cast_for_schema(value)
end
def map(value, &block)
- value.map(&block)
+ value.map { |v| subtype.map(v, &block) }
end
def changed_in_place?(raw_old_value, new_value)
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/date.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/date.rb
index 0fe72e01ea..633d7ddd9e 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/date.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/date.rb
@@ -16,6 +16,14 @@ def cast_value(value)
super
end
end
+
+ def type_cast_for_schema(value)
+ case value
+ when ::Float::INFINITY then "::Float::INFINITY"
+ when -::Float::INFINITY then "-::Float::INFINITY"
+ else super
+ end
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/date_time.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/date_time.rb
index 8fa052968e..fe29ebb8a0 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/date_time.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/date_time.rb
@@ -24,6 +24,11 @@ def type_cast_for_schema(value)
else super
end
end
+
+ protected
+ def real_type_unless_aliased(real_type)
+ ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.datetime_type == real_type ? :datetime : real_type
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/hstore.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/hstore.rb
index 8d4dacbd64..d6f60f563e 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/hstore.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/hstore.rb
@@ -1,10 +1,14 @@
# frozen_string_literal: true
+require "strscan"
+
module ActiveRecord
module ConnectionAdapters
module PostgreSQL
module OID # :nodoc:
class Hstore < Type::Value # :nodoc:
+ ERROR = "Invalid Hstore document: %s"
+
include ActiveModel::Type::Helpers::Mutable
def type
@@ -12,15 +16,56 @@ def type
end
def deserialize(value)
- if value.is_a?(::String)
- ::Hash[value.scan(HstorePair).map { |k, v|
- v = v.upcase == "NULL" ? nil : v.gsub(/\A"(.*)"\Z/m, '\1').gsub(/\\(.)/, '\1')
- k = k.gsub(/\A"(.*)"\Z/m, '\1').gsub(/\\(.)/, '\1')
- [k, v]
- }]
- else
- value
+ return value unless value.is_a?(::String)
+
+ scanner = StringScanner.new(value)
+ hash = {}
+
+ until scanner.eos?
+ unless scanner.skip(/"/)
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
+
+ unless key = scanner.scan(/^(\\[\\"]|[^\\"])*?(?=")/)
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
+
+ unless scanner.skip(/"=>?/)
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
+
+ if scanner.scan(/NULL/)
+ value = nil
+ else
+ unless scanner.skip(/"/)
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
+
+ unless value = scanner.scan(/^(\\[\\"]|[^\\"])*?(?=")/)
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
+
+ unless scanner.skip(/"/)
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
+ end
+
+ key.gsub!('\"', '"')
+ key.gsub!("\\\\", "\\")
+
+ if value
+ value.gsub!('\"', '"')
+ value.gsub!("\\\\", "\\")
+ end
+
+ hash[key] = value
+
+ unless scanner.skip(/, /) || scanner.eos?
+ raise(ArgumentError, ERROR % scanner.string.inspect)
+ end
end
+
+ hash
end
def serialize(value)
@@ -46,12 +91,6 @@ def changed_in_place?(raw_old_value, new_value)
end
private
- HstorePair = begin
- quoted_string = /"[^"\\]*(?:\\.[^"\\]*)*"/
- unquoted_string = /(?:\\.|[^\s,])[^\s=,\\]*(?:\\.[^\s=,\\]*|=[^,>])*/
- /(#{quoted_string}|#{unquoted_string})\s*=>\s*(#{quoted_string}|#{unquoted_string})/
- end
-
def escape_hstore(value)
if value.nil?
"NULL"
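The hstore change above replaces the old `HstorePair` regexp scan with a stricter `StringScanner` parser that rejects malformed documents. A standalone sketch of the same parsing loop (the method name is hypothetical; the real code lives in `OID::Hstore#deserialize`):

```ruby
require "strscan"

# Standalone sketch of the StringScanner-based hstore parser above:
# expects '"key"=>"value"' pairs separated by ", ", with NULL for nil,
# and raises ArgumentError on anything else.
def deserialize_hstore(value)
  scanner = StringScanner.new(value)
  error = -> { raise ArgumentError, "Invalid Hstore document: #{scanner.string.inspect}" }
  hash = {}

  until scanner.eos?
    error.call unless scanner.skip(/"/)
    key = scanner.scan(/(\\[\\"]|[^\\"])*?(?=")/) or error.call
    error.call unless scanner.skip(/"=>?/)

    if scanner.scan(/NULL/)
      val = nil
    else
      error.call unless scanner.skip(/"/)
      val = scanner.scan(/(\\[\\"]|[^\\"])*?(?=")/) or error.call
      error.call unless scanner.skip(/"/)
      val.gsub!('\"', '"')
      val.gsub!("\\\\", "\\")
    end

    key.gsub!('\"', '"')
    key.gsub!("\\\\", "\\")
    hash[key] = val

    error.call unless scanner.skip(/, /) || scanner.eos?
  end

  hash
end

deserialize_hstore('"a"=>"1", "b"=>NULL') # => {"a"=>"1", "b"=>nil}
```

Unlike the removed `HstorePair` scan, which silently skipped unmatched input, this loop must consume the entire string or fail.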
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/money.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/money.rb
index 3703e9a646..86310407bf 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/money.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/money.rb
@@ -27,9 +27,10 @@ def cast_value(value)
value = value.sub(/^\((.+)\)$/, '-\1') # (4)
case value
when /^-?\D*+[\d,]+\.\d{2}$/ # (1)
- value.gsub!(/[^-\d.]/, "")
+ value.delete!("^-0-9.")
when /^-?\D*+[\d.]+,\d{2}$/ # (2)
- value.gsub!(/[^-\d,]/, "").sub!(/,/, ".")
+ value.delete!("^-0-9,")
+ value.tr!(",", ".")
end
super(value)
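The money change above swaps `gsub!`/`sub!` for the cheaper `delete!`/`tr!`: strip everything except sign, digits, and the decimal separator, then normalize a comma separator to a dot. A standalone sketch of that normalization (function name hypothetical; the real code is `OID::Money#cast_value`, which then hands off to `super`):

```ruby
# Standalone sketch of the money normalization above:
# delete! keeps only "-", digits, and the decimal separator;
# tr! converts a comma decimal separator into a dot.
def normalize_money(value)
  value = value.sub(/^\((.+)\)$/, '-\1') # accounting negatives: (2.50) => -2.50
  case value
  when /^-?\D*+[\d,]+\.\d{2}$/ # "1,234.50" style
    value.delete!("^-0-9.")
  when /^-?\D*+[\d.]+,\d{2}$/  # "1.234,50" style
    value.delete!("^-0-9,")
    value.tr!(",", ".")
  end
  value
end

normalize_money("$1,234.50") # => "1234.50"
normalize_money("1.234,50")  # => "1234.50"
```

`delete!("^-0-9.")` uses `tr`-style character classes: the leading `^` negates, so everything outside `-`, `0-9`, and `.` is removed in one pass instead of a regexp substitution.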
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/range.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/range.rb
index 64dafbd89f..ed674a5053 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/range.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/range.rb
@@ -18,7 +18,7 @@ def type_cast_for_schema(value)
end
def cast_value(value)
- return if value == "empty"
+ return if ["empty", ""].include? value
return value unless value.is_a?(::String)
extracted = extract_bounds(value)
@@ -28,7 +28,7 @@ def cast_value(value)
if !infinity?(from) && extracted[:exclude_start]
raise ArgumentError, "The Ruby Range object does not support excluding the beginning of a Range. (unsupported value: '#{value}')"
end
- ::Range.new(from, to, extracted[:exclude_end])
+ ::Range.new(*sanitize_bounds(from, to), extracted[:exclude_end])
end
def serialize(value)
@@ -76,6 +76,15 @@ def extract_bounds(value)
}
end
+ INFINITE_FLOAT_RANGE = (-::Float::INFINITY)..(::Float::INFINITY) # :nodoc:
+
+ def sanitize_bounds(from, to)
+ [
+ (from == -::Float::INFINITY && !INFINITE_FLOAT_RANGE.cover?(to)) ? nil : from,
+ (to == ::Float::INFINITY && !INFINITE_FLOAT_RANGE.cover?(from)) ? nil : to
+ ]
+ end
+
# When formatting the bound values of range types, PostgreSQL quotes
# the bound value using double-quotes in certain conditions. Within
# a double-quoted string, literal " and \ characters are themselves
@@ -88,7 +97,7 @@ def unquote(value)
if value.start_with?('"') && value.end_with?('"')
unquoted_value = value[1..-2]
unquoted_value.gsub!('""', '"')
- unquoted_value.gsub!('\\\\', '\\')
+ unquoted_value.gsub!("\\\\", "\\")
unquoted_value
else
value
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/timestamp.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/timestamp.rb
new file mode 100644
index 0000000000..e6326ab0d5
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/timestamp.rb
@@ -0,0 +1,15 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module ConnectionAdapters
+ module PostgreSQL
+ module OID # :nodoc:
+ class Timestamp < DateTime # :nodoc:
+ def type
+ real_type_unless_aliased(:timestamp)
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/timestamp_with_time_zone.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/timestamp_with_time_zone.rb
new file mode 100644
index 0000000000..9b99eae87d
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/timestamp_with_time_zone.rb
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module ConnectionAdapters
+ module PostgreSQL
+ module OID # :nodoc:
+ class TimestampWithTimeZone < DateTime # :nodoc:
+ def type
+ real_type_unless_aliased(:timestamptz)
+ end
+
+ def cast_value(value)
+ return if value.blank?
+
+ time = super
+ return time if time.is_a?(ActiveSupport::TimeWithZone) || !time.acts_like?(:time)
+
+ # While in UTC mode, the PG gem may not return times back in "UTC" even if they were provided to PostgreSQL in UTC.
+ # We prefer times always in UTC, so here we convert back.
+ if is_utc?
+ time.getutc
+ else
+ time.getlocal
+ end
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/oid/type_map_initializer.rb b/activerecord/lib/active_record/connection_adapters/postgresql/oid/type_map_initializer.rb
index 203087bc36..ec020cb8f0 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/oid/type_map_initializer.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/oid/type_map_initializer.rb
@@ -33,15 +33,27 @@ def run(records)
composites.each { |row| register_composite_type(row) }
end
- def query_conditions_for_initial_load
+ def query_conditions_for_known_type_names
known_type_names = @store.keys.map { |n| "'#{n}'" }
- known_type_types = %w('r' 'e' 'd')
- <<~SQL % [known_type_names.join(", "), known_type_types.join(", ")]
+ <<~SQL % known_type_names.join(", ")
WHERE
t.typname IN (%s)
- OR t.typtype IN (%s)
- OR t.typinput = 'array_in(cstring,oid,integer)'::regprocedure
- OR t.typelem != 0
+ SQL
+ end
+
+ def query_conditions_for_known_type_types
+ known_type_types = %w('r' 'e' 'd')
+ <<~SQL % known_type_types.join(", ")
+ WHERE
+ t.typtype IN (%s)
+ SQL
+ end
+
+ def query_conditions_for_array_types
+ known_type_oids = @store.keys.reject { |k| k.is_a?(String) }
+ <<~SQL % [known_type_oids.join(", ")]
+ WHERE
+ t.typelem IN (%s)
SQL
end
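The `TypeMapInitializer` change above splits one combined `WHERE` clause into three focused helpers, each built with the same squiggly-heredoc-plus-`format` pattern. A simplified standalone sketch of that pattern (the real helpers read `@store` rather than taking an argument):

```ruby
# Standalone sketch of the <<~SQL % ... pattern used by the
# query_conditions_for_* helpers above (simplified signature).
def typname_conditions(known_type_names)
  quoted = known_type_names.map { |n| "'#{n}'" }
  <<~SQL % quoted.join(", ")
    WHERE
      t.typname IN (%s)
  SQL
end

typname_conditions(["hstore", "citext"])
# => "WHERE\n  t.typname IN ('hstore', 'citext')\n"
```

`<<~SQL` strips the common leading indentation, and `%` substitutes the quoted name list into the `%s` placeholder, keeping the clause readable next to the query that consumes it.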
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/quoting.rb b/activerecord/lib/active_record/connection_adapters/postgresql/quoting.rb
index 4db5f8f528..44243ed50e 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/quoting.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/quoting.rb
@@ -4,6 +4,9 @@ module ActiveRecord
module ConnectionAdapters
module PostgreSQL
module Quoting
+ QUOTED_COLUMN_NAMES = Concurrent::Map.new # :nodoc:
+ QUOTED_TABLE_NAMES = Concurrent::Map.new # :nodoc:
+
class IntegerOutOf64BitRange < StandardError
def initialize(msg)
super(msg)
@@ -12,19 +15,66 @@ def initialize(msg)
# Escapes binary strings for bytea input to the database.
def escape_bytea(value)
- @connection.escape_bytea(value) if value
+ valid_raw_connection.escape_bytea(value) if value
end
# Unescapes bytea output from a database to the binary string it represents.
# NOTE: This is NOT an inverse of escape_bytea! This is only to be used
# on escaped binary output from the database driver.
def unescape_bytea(value)
- @connection.unescape_bytea(value) if value
+ valid_raw_connection.unescape_bytea(value) if value
+ end
+
+ def check_int_in_range(value)
+ if value.to_int > 9223372036854775807 || value.to_int < -9223372036854775808
+ exception = <<~ERROR
+ Provided value outside of the range of a signed 64bit integer.
+
+ PostgreSQL will treat the column type in question as a numeric.
+ This may result in a slow sequential scan due to a comparison
+ being performed between an integer or bigint value and a numeric value.
+
+ To allow for this potentially unwanted behavior, set
+ ActiveRecord.raise_int_wider_than_64bit to false.
+ ERROR
+ raise IntegerOutOf64BitRange.new exception
+ end
+ end
+
+ def quote(value) # :nodoc:
+ if ActiveRecord.raise_int_wider_than_64bit && value.is_a?(Integer)
+ check_int_in_range(value)
+ end
+
+ case value
+ when OID::Xml::Data
+ "xml '#{quote_string(value.to_s)}'"
+ when OID::Bit::Data
+ if value.binary?
+ "B'#{value}'"
+ elsif value.hex?
+ "X'#{value}'"
+ end
+ when Numeric
+ if value.finite?
+ super
+ else
+ "'#{value}'"
+ end
+ when OID::Array::Data
+ quote(encode_array(value))
+ when Range
+ quote(encode_range(value))
+ else
+ super
+ end
end
# Quotes strings for use in SQL input.
- def quote_string(s) #:nodoc:
- PG::Connection.escape(s)
+ def quote_string(s) # :nodoc:
+ with_raw_connection(allow_retry: true, materialize_transactions: false) do |connection|
+ connection.escape(s)
+ end
end
# Checks the following cases:
@@ -36,7 +86,7 @@ def quote_string(s) #:nodoc:
# - "schema.name".table_name
# - "schema.name"."table.name"
def quote_table_name(name) # :nodoc:
- self.class.quoted_table_names[name] ||= Utils.extract_schema_qualified_name(name.to_s).quoted.freeze
+ QUOTED_TABLE_NAMES[name] ||= Utils.extract_schema_qualified_name(name.to_s).quoted.freeze
end
# Quotes schema names for use in SQL queries.
@@ -50,11 +100,11 @@ def quote_table_name_for_assignment(table, attr)
# Quotes column names for use in SQL queries.
def quote_column_name(name) # :nodoc:
- self.class.quoted_column_names[name] ||= PG::Connection.quote_ident(super).freeze
+ QUOTED_COLUMN_NAMES[name] ||= PG::Connection.quote_ident(super).freeze
end
# Quote date/time values for use in SQL input.
- def quoted_date(value) #:nodoc:
+ def quoted_date(value) # :nodoc:
if value.year <= 0
bce_year = format("%04d", -value.year + 1)
super.sub(/^-?\d+/, bce_year) + " BC"
@@ -70,7 +120,7 @@ def quoted_binary(value) # :nodoc:
def quote_default_expression(value, column) # :nodoc:
if value.is_a?(Proc)
value.call
- elsif column.type == :uuid && value.is_a?(String) && /\(\)/.match?(value)
+ elsif column.type == :uuid && value.is_a?(String) && value.include?("()")
value # Does not quote function default values for UUID columns
elsif column.respond_to?(:array?)
type = lookup_cast_type_from_column(column)
@@ -80,7 +130,26 @@ def quote_default_expression(value, column) # :nodoc:
end
end
+ def type_cast(value) # :nodoc:
+ case value
+ when Type::Binary::Data
+ # Return a bind param hash with format as binary.
+ # See https://deveiate.org/code/pg/PG/Connection.html#method-i-exec_prepared-doc
+ # for more information
+ { value: value.to_s, format: 1 }
+ when OID::Xml::Data, OID::Bit::Data
+ value.to_s
+ when OID::Array::Data
+ encode_array(value)
+ when Range
+ encode_range(value)
+ else
+ super
+ end
+ end
+
def lookup_cast_type_from_column(column) # :nodoc:
+ verify! if type_map.nil?
type_map.lookup(column.oid, column.fmod, column.sql_type)
end
@@ -96,8 +165,8 @@ def column_name_with_order_matcher
\A
(
(?:
- # "table_name"."column_name"::type_name | function(one or no argument)::type_name
- ((?:\w+\.|"\w+"\.)?(?:\w+|"\w+")(?:::\w+)?) | \w+\((?:|\g<2>)\)(?:::\w+)?
+ # "schema_name"."table_name"."column_name"::type_name | function(one or no argument)::type_name
+ ((?:\w+\.|"\w+"\.){,2}(?:\w+|"\w+")(?:::\w+)? | \w+\((?:|\g<2>)\)(?:::\w+)?)
)
(?:(?:\s+AS)?\s+(?:\w+|"\w+"))?
)
@@ -109,9 +178,10 @@ def column_name_with_order_matcher
\A
(
(?:
- # "table_name"."column_name"::type_name | function(one or no argument)::type_name
- ((?:\w+\.|"\w+"\.)?(?:\w+|"\w+")(?:::\w+)?) | \w+\((?:|\g<2>)\)(?:::\w+)?
+ # "schema_name"."table_name"."column_name"::type_name | function(one or no argument)::type_name
+ ((?:\w+\.|"\w+"\.){,2}(?:\w+|"\w+")(?:::\w+)? | \w+\((?:|\g<2>)\)(?:::\w+)?)
)
+ (?:\s+COLLATE\s+"\w+")?
(?:\s+ASC|\s+DESC)?
(?:\s+NULLS\s+(?:FIRST|LAST))?
)
@@ -126,69 +196,6 @@ def lookup_cast_type(sql_type)
super(query_value("SELECT #{quote(sql_type)}::regtype::oid", "SCHEMA").to_i)
end
- def check_int_in_range(value)
- if value.to_int > 9223372036854775807 || value.to_int < -9223372036854775808
- exception = <<~ERROR
- Provided value outside of the range of a signed 64bit integer.
-
- PostgreSQL will treat the column type in question as a numeric.
- This may result in a slow sequential scan due to a comparison
- being performed between an integer or bigint value and a numeric value.
-
- To allow for this potentially unwanted behavior, set
- ActiveRecord::Base.raise_int_wider_than_64bit to false.
- ERROR
- raise IntegerOutOf64BitRange.new exception
- end
- end
-
- def _quote(value)
- if ActiveRecord::Base.raise_int_wider_than_64bit && value.is_a?(Integer)
- check_int_in_range(value)
- end
-
- case value
- when OID::Xml::Data
- "xml '#{quote_string(value.to_s)}'"
- when OID::Bit::Data
- if value.binary?
- "B'#{value}'"
- elsif value.hex?
- "X'#{value}'"
- end
- when Numeric
- if value.finite?
- super
- else
- "'#{value}'"
- end
- when OID::Array::Data
- _quote(encode_array(value))
- when Range
- _quote(encode_range(value))
- else
- super
- end
- end
-
- def _type_cast(value)
- case value
- when Type::Binary::Data
- # Return a bind param hash with format as binary.
- # See https://deveiate.org/code/pg/PG/Connection.html#method-i-exec_prepared-doc
- # for more information
- { value: value.to_s, format: 1 }
- when OID::Xml::Data, OID::Bit::Data
- value.to_s
- when OID::Array::Data
- encode_array(value)
- when Range
- encode_range(value)
- else
- super
- end
- end
-
def encode_array(array_data)
encoder = array_data.encoder
values = type_cast_array(array_data.values)
@@ -214,7 +221,7 @@ def determine_encoding_of_strings_in_array(value)
def type_cast_array(values)
case values
when ::Array then values.map { |item| type_cast_array(item) }
- else _type_cast(values)
+ else type_cast(values)
end
end
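The quoting changes above move `check_int_in_range` into the public `quote` path: when `ActiveRecord.raise_int_wider_than_64bit` is enabled, an integer outside the signed 64-bit range raises instead of silently forcing PostgreSQL into a slow numeric comparison. A standalone sketch of just the bounds check (exception class simplified here; the real code raises `IntegerOutOf64BitRange`):

```ruby
# Standalone sketch of check_int_in_range above: values outside the
# signed 64-bit range would make PostgreSQL compare against a numeric,
# so the adapter raises instead.
INT64_MIN = -9_223_372_036_854_775_808
INT64_MAX =  9_223_372_036_854_775_807

def check_int_in_range(value)
  if value > INT64_MAX || value < INT64_MIN
    raise RangeError, "Provided value outside of the range of a signed 64bit integer."
  end
  value
end

check_int_in_range(2**63 - 1) # largest allowed value, returned unchanged
```

Passing `2**63` (one past `INT64_MAX`) raises, which is the behavior the adapter opts into under `raise_int_wider_than_64bit`.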
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/referential_integrity.rb b/activerecord/lib/active_record/connection_adapters/postgresql/referential_integrity.rb
index dfb0029daf..e609cdc19f 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/referential_integrity.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/referential_integrity.rb
@@ -37,6 +37,34 @@ def disable_referential_integrity # :nodoc:
rescue ActiveRecord::ActiveRecordError
end
end
+
+ def check_all_foreign_keys_valid! # :nodoc:
+ sql = <<~SQL
+ do $$
+ declare r record;
+ BEGIN
+ FOR r IN (
+ SELECT FORMAT(
+ 'UPDATE pg_constraint SET convalidated=false WHERE conname = ''%I'' AND connamespace::regnamespace = ''%I''::regnamespace; ALTER TABLE %I.%I VALIDATE CONSTRAINT %I;',
+ constraint_name,
+ table_schema,
+ table_schema,
+ table_name,
+ constraint_name
+ ) AS constraint_check
+ FROM information_schema.table_constraints WHERE constraint_type = 'FOREIGN KEY'
+ )
+ LOOP
+ EXECUTE (r.constraint_check);
+ END LOOP;
+ END;
+ $$;
+ SQL
+
+ transaction(requires_new: true) do
+ execute(sql)
+ end
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/schema_creation.rb b/activerecord/lib/active_record/connection_adapters/postgresql/schema_creation.rb
index ad84b30a78..c28312d74f 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/schema_creation.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/schema_creation.rb
@@ -5,12 +5,28 @@ module ConnectionAdapters
module PostgreSQL
class SchemaCreation < SchemaCreation # :nodoc:
private
+ delegate :quoted_include_columns_for_index, to: :@conn
+
def visit_AlterTable(o)
- super << o.constraint_validations.map { |fk| visit_ValidateConstraint fk }.join(" ")
+ sql = super
+ sql << o.constraint_validations.map { |fk| visit_ValidateConstraint fk }.join(" ")
+ sql << o.exclusion_constraint_adds.map { |con| visit_AddExclusionConstraint con }.join(" ")
+ sql << o.exclusion_constraint_drops.map { |con| visit_DropExclusionConstraint con }.join(" ")
+ sql << o.unique_constraint_adds.map { |con| visit_AddUniqueConstraint con }.join(" ")
+ sql << o.unique_constraint_drops.map { |con| visit_DropUniqueConstraint con }.join(" ")
end
def visit_AddForeignKey(o)
- super.dup.tap { |sql| sql << " NOT VALID" unless o.validate? }
+ super.dup.tap do |sql|
+ sql << " DEFERRABLE INITIALLY #{o.options[:deferrable].to_s.upcase}" if o.deferrable
+ sql << " NOT VALID" unless o.validate?
+ end
+ end
+
+ def visit_ForeignKeyDefinition(o)
+ super.dup.tap do |sql|
+ sql << " DEFERRABLE INITIALLY #{o.deferrable.to_s.upcase}" if o.deferrable
+ end
end
def visit_CheckConstraintDefinition(o)
@@ -21,6 +37,54 @@ def visit_ValidateConstraint(name)
"VALIDATE CONSTRAINT #{quote_column_name(name)}"
end
+ def visit_ExclusionConstraintDefinition(o)
+ sql = ["CONSTRAINT"]
+ sql << quote_column_name(o.name)
+ sql << "EXCLUDE"
+ sql << "USING #{o.using}" if o.using
+ sql << "(#{o.expression})"
+ sql << "WHERE (#{o.where})" if o.where
+ sql << "DEFERRABLE INITIALLY #{o.deferrable.to_s.upcase}" if o.deferrable
+
+ sql.join(" ")
+ end
+
+ def visit_UniqueConstraintDefinition(o)
+ column_name = Array(o.column).map { |column| quote_column_name(column) }.join(", ")
+
+ sql = ["CONSTRAINT"]
+ sql << quote_column_name(o.name)
+ sql << "UNIQUE"
+
+ if o.using_index
+ sql << "USING INDEX #{quote_column_name(o.using_index)}"
+ else
+ sql << "(#{column_name})"
+ end
+
+ if o.deferrable
+ sql << "DEFERRABLE INITIALLY #{o.deferrable.to_s.upcase}"
+ end
+
+ sql.join(" ")
+ end
+
+ def visit_AddExclusionConstraint(o)
+ "ADD #{accept(o)}"
+ end
+
+ def visit_DropExclusionConstraint(name)
+ "DROP CONSTRAINT #{quote_column_name(name)}"
+ end
+
+ def visit_AddUniqueConstraint(o)
+ "ADD #{accept(o)}"
+ end
+
+ def visit_DropUniqueConstraint(name)
+ "DROP CONSTRAINT #{quote_column_name(name)}"
+ end
+
def visit_ChangeColumnDefinition(o)
column = o.column
column.sql_type = type_to_sql(column.type, **column.options)
@@ -57,13 +121,39 @@ def visit_ChangeColumnDefinition(o)
change_column_sql
end
+ def visit_ChangeColumnDefaultDefinition(o)
+ sql = +"ALTER COLUMN #{quote_column_name(o.column.name)} "
+ if o.default.nil?
+ sql << "DROP DEFAULT"
+ else
+ sql << "SET DEFAULT #{quote_default_expression(o.default, o.column)}"
+ end
+ end
+
def add_column_options!(sql, options)
if options[:collation]
sql << " COLLATE \"#{options[:collation]}\""
end
+
+ if as = options[:as]
+ sql << " GENERATED ALWAYS AS (#{as})"
+
+ if options[:stored]
+ sql << " STORED"
+ else
+ raise ArgumentError, <<~MSG
+ PostgreSQL currently does not support VIRTUAL (not persisted) generated columns.
+ Specify 'stored: true' option for '#{options[:column].name}'
+ MSG
+ end
+ end
super
end
+ def quoted_include_columns(o)
+ String === o ? o : quoted_include_columns_for_index(o)
+ end
+
# Returns any SQL string to go between CREATE and TABLE. May be nil.
def table_modifier_in_create(o)
# A table cannot be both TEMPORARY and UNLOGGED, since all TEMPORARY
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/schema_definitions.rb b/activerecord/lib/active_record/connection_adapters/postgresql/schema_definitions.rb
index 754cbbdd6b..7f416d5825 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/schema_definitions.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/schema_definitions.rb
@@ -173,25 +173,117 @@ def primary_key(name, type = :primary_key, **options)
# :method: xml
# :call-seq: xml(*names, **options)
+ ##
+ # :method: timestamptz
+ # :call-seq: timestamptz(*names, **options)
+
+ ##
+ # :method: enum
+ # :call-seq: enum(*names, **options)
+
included do
define_column_methods :bigserial, :bit, :bit_varying, :cidr, :citext, :daterange,
:hstore, :inet, :interval, :int4range, :int8range, :jsonb, :ltree, :macaddr,
:money, :numrange, :oid, :point, :line, :lseg, :box, :path, :polygon, :circle,
- :serial, :tsrange, :tstzrange, :tsvector, :uuid, :xml
+ :serial, :tsrange, :tstzrange, :tsvector, :uuid, :xml, :timestamptz, :enum
+ end
+ end
+
+ ExclusionConstraintDefinition = Struct.new(:table_name, :expression, :options) do
+ def name
+ options[:name]
+ end
+
+ def using
+ options[:using]
+ end
+
+ def where
+ options[:where]
+ end
+
+ def deferrable
+ options[:deferrable]
+ end
+
+ def export_name_on_schema_dump?
+ !ActiveRecord::SchemaDumper.excl_ignore_pattern.match?(name) if name
+ end
+ end
+
+ UniqueConstraintDefinition = Struct.new(:table_name, :column, :options) do
+ def name
+ options[:name]
+ end
+
+ def deferrable
+ options[:deferrable]
+ end
+
+ def using_index
+ options[:using_index]
+ end
+
+ def export_name_on_schema_dump?
+ !ActiveRecord::SchemaDumper.unique_ignore_pattern.match?(name) if name
+ end
+
+ def defined_for?(name: nil, column: nil, **options)
+ (name.nil? || self.name == name.to_s) &&
+ (column.nil? || Array(self.column) == Array(column).map(&:to_s)) &&
+ options.all? { |k, v| self.options[k].to_s == v.to_s }
end
end
+ # = Active Record PostgreSQL Adapter \Table Definition
class TableDefinition < ActiveRecord::ConnectionAdapters::TableDefinition
include ColumnMethods
- attr_reader :unlogged
+ attr_reader :exclusion_constraints, :unique_constraints, :unlogged
def initialize(*, **)
super
+ @exclusion_constraints = []
+ @unique_constraints = []
@unlogged = ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables
end
+ def exclusion_constraint(expression, **options)
+ exclusion_constraints << new_exclusion_constraint_definition(expression, options)
+ end
+
+ def unique_constraint(column_name, **options)
+ unique_constraints << new_unique_constraint_definition(column_name, options)
+ end
+
+ def new_exclusion_constraint_definition(expression, options) # :nodoc:
+ options = @conn.exclusion_constraint_options(name, expression, options)
+ ExclusionConstraintDefinition.new(name, expression, options)
+ end
+
+ def new_unique_constraint_definition(column_name, options) # :nodoc:
+ options = @conn.unique_constraint_options(name, column_name, options)
+ UniqueConstraintDefinition.new(name, column_name, options)
+ end
+
+ def new_column_definition(name, type, **options) # :nodoc:
+ case type
+ when :virtual
+ type = options[:type]
+ end
+
+ super
+ end
+
private
+ def valid_column_definition_options
+ super + [:array, :using, :cast_as, :as, :type, :enum_type, :stored]
+ end
+
+ def aliased_types(name, fallback)
+ fallback
+ end
+
def integer_like_primary_key_type(type, options)
if type == :bigint || options[:limit] == 8
:bigserial
@@ -201,21 +293,79 @@ def integer_like_primary_key_type(type, options)
end
end
+ # = Active Record PostgreSQL Adapter \Table
class Table < ActiveRecord::ConnectionAdapters::Table
include ColumnMethods
+
+ # Adds an exclusion constraint.
+ #
+ # t.exclusion_constraint("price WITH =, availability_range WITH &&", using: :gist, name: "price_check")
+ #
+ # See {connection.add_exclusion_constraint}[rdoc-ref:SchemaStatements#add_exclusion_constraint]
+ def exclusion_constraint(*args)
+ @base.add_exclusion_constraint(name, *args)
+ end
+
+ # Removes the given exclusion constraint from the table.
+ #
+ # t.remove_exclusion_constraint(name: "price_check")
+ #
+ # See {connection.remove_exclusion_constraint}[rdoc-ref:SchemaStatements#remove_exclusion_constraint]
+ def remove_exclusion_constraint(*args)
+ @base.remove_exclusion_constraint(name, *args)
+ end
+
+ # Adds a unique constraint.
+ #
+ # t.unique_constraint(:position, name: 'unique_position', deferrable: :deferred)
+ #
+ # See {connection.add_unique_constraint}[rdoc-ref:SchemaStatements#add_unique_constraint]
+ def unique_constraint(*args)
+ @base.add_unique_constraint(name, *args)
+ end
+
+ # Removes the given unique constraint from the table.
+ #
+ # t.remove_unique_constraint(name: "unique_position")
+ #
+ # See {connection.remove_unique_constraint}[rdoc-ref:SchemaStatements#remove_unique_constraint]
+ def remove_unique_constraint(*args)
+ @base.remove_unique_constraint(name, *args)
+ end
end
+ # = Active Record PostgreSQL Adapter Alter \Table
class AlterTable < ActiveRecord::ConnectionAdapters::AlterTable
- attr_reader :constraint_validations
+ attr_reader :constraint_validations, :exclusion_constraint_adds, :exclusion_constraint_drops, :unique_constraint_adds, :unique_constraint_drops
def initialize(td)
super
@constraint_validations = []
+ @exclusion_constraint_adds = []
+ @exclusion_constraint_drops = []
+ @unique_constraint_adds = []
+ @unique_constraint_drops = []
end
def validate_constraint(name)
@constraint_validations << name
end
+
+ def add_exclusion_constraint(expression, options)
+ @exclusion_constraint_adds << @td.new_exclusion_constraint_definition(expression, options)
+ end
+
+ def drop_exclusion_constraint(constraint_name)
+ @exclusion_constraint_drops << constraint_name
+ end
+
+ def add_unique_constraint(column_name, options)
+ @unique_constraint_adds << @td.new_unique_constraint_definition(column_name, options)
+ end
+
+ def drop_unique_constraint(unique_constraint_name)
+ @unique_constraint_drops << unique_constraint_name
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/schema_dumper.rb b/activerecord/lib/active_record/connection_adapters/postgresql/schema_dumper.rb
index d201e40190..2e758b39eb 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/schema_dumper.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/schema_dumper.rb
@@ -16,9 +16,83 @@ def extensions(stream)
end
end
+ def types(stream)
+ types = @connection.enum_types
+ if types.any?
+ stream.puts " # Custom types defined in this database."
+ stream.puts " # Note that some types may not work with other database engines. Be careful if changing database."
+ types.sort.each do |name, values|
+ stream.puts " create_enum #{name.inspect}, #{values.split(",").inspect}"
+ end
+ stream.puts
+ end
+ end
+
+ def schemas(stream)
+ schema_names = @connection.schema_names - ["public"]
+
+ if schema_names.any?
+ schema_names.sort.each do |name|
+ stream.puts " create_schema #{name.inspect}"
+ end
+ stream.puts
+ end
+ end
+
+ def exclusion_constraints_in_create(table, stream)
+ if (exclusion_constraints = @connection.exclusion_constraints(table)).any?
+ add_exclusion_constraint_statements = exclusion_constraints.map do |exclusion_constraint|
+ parts = [
+ "t.exclusion_constraint #{exclusion_constraint.expression.inspect}"
+ ]
+
+ parts << "where: #{exclusion_constraint.where.inspect}" if exclusion_constraint.where
+ parts << "using: #{exclusion_constraint.using.inspect}" if exclusion_constraint.using
+ parts << "deferrable: #{exclusion_constraint.deferrable.inspect}" if exclusion_constraint.deferrable
+
+ if exclusion_constraint.export_name_on_schema_dump?
+ parts << "name: #{exclusion_constraint.name.inspect}"
+ end
+
+ " #{parts.join(', ')}"
+ end
+
+ stream.puts add_exclusion_constraint_statements.sort.join("\n")
+ end
+ end
+
+ def unique_constraints_in_create(table, stream)
+ if (unique_constraints = @connection.unique_constraints(table)).any?
+ add_unique_constraint_statements = unique_constraints.map do |unique_constraint|
+ parts = [
+ "t.unique_constraint #{unique_constraint.column.inspect}"
+ ]
+
+ parts << "deferrable: #{unique_constraint.deferrable.inspect}" if unique_constraint.deferrable
+
+ if unique_constraint.export_name_on_schema_dump?
+ parts << "name: #{unique_constraint.name.inspect}"
+ end
+
+ " #{parts.join(', ')}"
+ end
+
+ stream.puts add_unique_constraint_statements.sort.join("\n")
+ end
+ end
+
def prepare_column_options(column)
spec = super
spec[:array] = "true" if column.array?
+
+ if @connection.supports_virtual_columns? && column.virtual?
+ spec[:as] = extract_expression_for_virtual_column(column)
+ spec[:stored] = true
+ spec = { type: schema_type(column).inspect }.merge!(spec)
+ end
+
+ spec[:enum_type] = "\"#{column.sql_type}\"" if column.enum?
+
spec
end
@@ -43,6 +117,10 @@ def schema_type(column)
def schema_expression(column)
super unless column.serial?
end
+
+ def extract_expression_for_virtual_column(column)
+ column.default_function.inspect
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb b/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb
index f9cb5613ce..5f650d854b 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb
@@ -6,7 +6,7 @@ module PostgreSQL
module SchemaStatements
# Drops the database specified on the +name+ attribute
# and creates it again using the provided +options+.
- def recreate_database(name, options = {}) #:nodoc:
+ def recreate_database(name, options = {}) # :nodoc:
drop_database(name)
create_database(name, options)
end
@@ -50,7 +50,7 @@ def create_database(name, options = {})
#
# Example:
# drop_database 'matt_development'
- def drop_database(name) #:nodoc:
+ def drop_database(name) # :nodoc:
execute "DROP DATABASE IF EXISTS #{quote_table_name(name)}"
end
@@ -74,11 +74,11 @@ def index_name_exists?(table_name, index_name)
FROM pg_class t
INNER JOIN pg_index d ON t.oid = d.indrelid
INNER JOIN pg_class i ON d.indexrelid = i.oid
- LEFT JOIN pg_namespace n ON n.oid = i.relnamespace
+ LEFT JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE i.relkind IN ('i', 'I')
AND i.relname = #{index[:name]}
AND t.relname = #{table[:name]}
- AND n.nspname = #{index[:schema]}
+ AND n.nspname = #{table[:schema]}
SQL
end
@@ -88,11 +88,11 @@ def indexes(table_name) # :nodoc:
result = query(<<~SQL, "SCHEMA")
SELECT distinct i.relname, d.indisunique, d.indkey, pg_get_indexdef(d.indexrelid), t.oid,
- pg_catalog.obj_description(i.oid, 'pg_class') AS comment
+ pg_catalog.obj_description(i.oid, 'pg_class') AS comment, d.indisvalid
FROM pg_class t
INNER JOIN pg_index d ON t.oid = d.indrelid
INNER JOIN pg_class i ON d.indexrelid = i.oid
- LEFT JOIN pg_namespace n ON n.oid = i.relnamespace
+ LEFT JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE i.relkind IN ('i', 'I')
AND d.indisprimary = 'f'
AND t.relname = #{scope[:name]}
@@ -107,25 +107,24 @@ def indexes(table_name) # :nodoc:
inddef = row[3]
oid = row[4]
comment = row[5]
-
- using, expressions, where = inddef.scan(/ USING (\w+?) \((.+?)\)(?: WHERE (.+))?\z/m).flatten
+ valid = row[6]
+ using, expressions, include, nulls_not_distinct, where = inddef.scan(/ USING (\w+?) \((.+?)\)(?: INCLUDE \((.+?)\))?( NULLS NOT DISTINCT)?(?: WHERE (.+))?\z/m).flatten
orders = {}
opclasses = {}
+ include_columns = include ? include.split(",").map(&:strip) : []
if indkey.include?(0)
columns = expressions
else
- columns = Hash[query(<<~SQL, "SCHEMA")].values_at(*indkey).compact
- SELECT a.attnum, a.attname
- FROM pg_attribute a
- WHERE a.attrelid = #{oid}
- AND a.attnum IN (#{indkey.join(",")})
- SQL
+ columns = column_names_from_column_numbers(oid, indkey)
+
+ # prevent INCLUDE columns from being matched
+ columns.reject! { |c| include_columns.include?(c) }
# add info on sort order (only desc order is explicitly specified, asc is the default)
# and non-default opclasses
- expressions.scan(/(?<column>\w+)"?\s?(?<opclass>\w+_ops)?\s?(?<desc>DESC)?\s?(?<nulls>NULLS (?:FIRST|LAST))?/).each do |column, opclass, desc, nulls|
+ expressions.scan(/(?<column>\w+)"?\s?(?<opclass>\w+_ops(_\w+)?)?\s?(?<desc>DESC)?\s?(?<nulls>NULLS (?:FIRST|LAST))?/).each do |column, opclass, desc, nulls|
opclasses[column] = opclass.to_sym if opclass
if nulls
orders[column] = [desc, nulls].compact.join(" ")
@@ -144,7 +143,10 @@ def indexes(table_name) # :nodoc:
opclasses: opclasses,
where: where,
using: using.to_sym,
- comment: comment.presence
+ include: include_columns.presence,
+ nulls_not_distinct: nulls_not_distinct.present?,
+ comment: comment.presence,
+ valid: valid
)
end
end
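The extended `scan` regex above now captures an optional `INCLUDE (...)` column list and a `NULLS NOT DISTINCT` marker in addition to the access method, key expressions, and partial-index predicate. A plain-Ruby sketch against a hand-built `pg_get_indexdef()`-style string (the sample string is illustrative, not Rails test data):

```ruby
# Sketch: parsing an index definition string with the extended regex above.
INDEXDEF_RE = / USING (\w+?) \((.+?)\)(?: INCLUDE \((.+?)\))?( NULLS NOT DISTINCT)?(?: WHERE (.+))?\z/m

inddef = "CREATE UNIQUE INDEX idx ON public.users USING btree (email)" \
         " INCLUDE (name) NULLS NOT DISTINCT WHERE (active)"

using, expressions, include_cols, nulls_not_distinct, where =
  inddef.scan(INDEXDEF_RE).flatten

using               # => "btree"
expressions         # => "email"
include_cols        # => "name"
nulls_not_distinct  # => " NULLS NOT DISTINCT"
where               # => "(active)"
```

Note the `NULLS NOT DISTINCT` capture keeps its leading space, which is why the caller checks it with `present?` rather than comparing it to a literal.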
@@ -223,7 +225,7 @@ def drop_schema(schema_name, **options)
# This should not be called manually but set in database.yml.
def schema_search_path=(schema_csv)
if schema_csv
- execute("SET search_path TO #{schema_csv}", "SCHEMA")
+ internal_execute("SET search_path TO #{schema_csv}")
@schema_search_path = schema_csv
end
end
@@ -240,11 +242,11 @@ def client_min_messages
# Set the client message level.
def client_min_messages=(level)
- execute("SET client_min_messages TO '#{level}'", "SCHEMA")
+ internal_execute("SET client_min_messages TO '#{level}'")
end
# Returns the sequence name for a table's primary key or some other specified key.
- def default_sequence_name(table_name, pk = "id") #:nodoc:
+ def default_sequence_name(table_name, pk = "id") # :nodoc:
result = serial_sequence(table_name, pk)
return nil unless result
Utils.extract_schema_qualified_name(result).to_s
@@ -257,7 +259,7 @@ def serial_sequence(table, column)
end
# Sets the sequence of a table's primary key to the specified value.
- def set_pk_sequence!(table, value) #:nodoc:
+ def set_pk_sequence!(table, value) # :nodoc:
pk, sequence = pk_and_sequence_for(table)
if pk
@@ -272,7 +274,7 @@ def set_pk_sequence!(table, value) #:nodoc:
end
# Resets the sequence of a table's primary key to the maximum value.
- def reset_pk_sequence!(table, pk = nil, sequence = nil) #:nodoc:
+ def reset_pk_sequence!(table, pk = nil, sequence = nil) # :nodoc:
unless pk && sequence
default_pk, default_sequence = pk_and_sequence_for(table)
@@ -288,19 +290,19 @@ def reset_pk_sequence!(table, pk = nil, sequence = nil) #:nodoc:
quoted_sequence = quote_table_name(sequence)
max_pk = query_value("SELECT MAX(#{quote_column_name pk}) FROM #{quote_table_name(table)}", "SCHEMA")
if max_pk.nil?
- if database_version >= 100000
+ if database_version >= 10_00_00
minvalue = query_value("SELECT seqmin FROM pg_sequence WHERE seqrelid = #{quote(quoted_sequence)}::regclass", "SCHEMA")
else
minvalue = query_value("SELECT min_value FROM #{quoted_sequence}", "SCHEMA")
end
end
- query_value("SELECT setval(#{quote(quoted_sequence)}, #{max_pk ? max_pk : minvalue}, #{max_pk ? true : false})", "SCHEMA")
+ query_value("SELECT setval(#{quote(quoted_sequence)}, #{max_pk || minvalue}, #{max_pk ? true : false})", "SCHEMA")
end
end
# Returns a table's primary key and belonging sequence.
- def pk_and_sequence_for(table) #:nodoc:
+ def pk_and_sequence_for(table) # :nodoc:
# First try looking for a sequence with a dependency on the
# given table's primary key.
result = query(<<~SQL, "SCHEMA")[0]
@@ -339,7 +341,7 @@ def pk_and_sequence_for(table) #:nodoc:
JOIN pg_namespace nsp ON (t.relnamespace = nsp.oid)
WHERE t.oid = #{quote(quote_table_name(table))}::regclass
AND cons.contype = 'p'
- AND pg_get_expr(def.adbin, def.adrelid) ~* 'nextval|uuid_generate'
+ AND pg_get_expr(def.adbin, def.adrelid) ~* 'nextval|uuid_generate|gen_random_uuid'
SQL
end
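The broadened `~*` pattern above means a primary key whose default is `gen_random_uuid()` (built into PostgreSQL since version 13; previously from pgcrypto) is now recognized alongside `nextval()` and `uuid_generate_v4()` defaults. A case-insensitive Ruby equivalent of that SQL pattern, with illustrative default expressions:

```ruby
# Sketch: the case-insensitive match the query above applies to a
# primary key column's default expression.
PK_DEFAULT_RE = /nextval|uuid_generate|gen_random_uuid/i

PK_DEFAULT_RE.match?("nextval('users_id_seq'::regclass)")  # => true
PK_DEFAULT_RE.match?("uuid_generate_v4()")                 # => true
PK_DEFAULT_RE.match?("gen_random_uuid()")                  # => true
PK_DEFAULT_RE.match?("now()")                              # => false
```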
@@ -375,49 +377,78 @@ def primary_keys(table_name) # :nodoc:
#
# Example:
# rename_table('octopuses', 'octopi')
- def rename_table(table_name, new_name)
+ def rename_table(table_name, new_name, **options)
+ validate_table_length!(new_name) unless options[:_uses_legacy_table_name]
clear_cache!
schema_cache.clear_data_source_cache!(table_name.to_s)
schema_cache.clear_data_source_cache!(new_name.to_s)
execute "ALTER TABLE #{quote_table_name(table_name)} RENAME TO #{quote_table_name(new_name)}"
pk, seq = pk_and_sequence_for(new_name)
if pk
- idx = "#{table_name}_pkey"
- new_idx = "#{new_name}_pkey"
+ # PostgreSQL automatically creates an index for PRIMARY KEY with name consisting of
+ # truncated table name and "_pkey" suffix fitting into max_identifier_length number of characters.
+ max_pkey_prefix = max_identifier_length - "_pkey".size
+ idx = "#{table_name[0, max_pkey_prefix]}_pkey"
+ new_idx = "#{new_name[0, max_pkey_prefix]}_pkey"
execute "ALTER INDEX #{quote_table_name(idx)} RENAME TO #{quote_table_name(new_idx)}"
- if seq && seq.identifier == "#{table_name}_#{pk}_seq"
- new_seq = "#{new_name}_#{pk}_seq"
+
+ # PostgreSQL automatically creates a sequence for PRIMARY KEY with name consisting of
+ # truncated table name and "#{primary_key}_seq" suffix fitting into max_identifier_length number of characters.
+ max_seq_prefix = max_identifier_length - "_#{pk}_seq".size
+ if seq && seq.identifier == "#{table_name[0, max_seq_prefix]}_#{pk}_seq"
+ new_seq = "#{new_name[0, max_seq_prefix]}_#{pk}_seq"
execute "ALTER TABLE #{seq.quoted} RENAME TO #{quote_table_name(new_seq)}"
end
end
rename_table_indexes(table_name, new_name)
end
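The truncation logic in `rename_table` above exists because PostgreSQL silently truncates auto-generated `_pkey` index and `_<pk>_seq` sequence names to its identifier limit (63 bytes by default). A standalone sketch of that name computation, using an over-long table name as the assumed input:

```ruby
# Sketch: computing auto-generated pkey index / sequence names the way
# rename_table above does, under PostgreSQL's default 63-byte limit.
max_identifier_length = 63
table_name = "a" * 70  # deliberately longer than the identifier limit
pk = "id"

max_pkey_prefix = max_identifier_length - "_pkey".size
idx = "#{table_name[0, max_pkey_prefix]}_pkey"

max_seq_prefix = max_identifier_length - "_#{pk}_seq".size
seq = "#{table_name[0, max_seq_prefix]}_#{pk}_seq"

idx.size  # => 63
seq.size  # => 63
```

Both names come out exactly at the limit, matching what PostgreSQL itself would have generated, so the subsequent `ALTER INDEX` / `ALTER TABLE ... RENAME` statements target the right objects.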
- def add_column(table_name, column_name, type, **options) #:nodoc:
+ def add_column(table_name, column_name, type, **options) # :nodoc:
clear_cache!
super
change_column_comment(table_name, column_name, options[:comment]) if options.key?(:comment)
end
- def change_column(table_name, column_name, type, **options) #:nodoc:
+ def change_column(table_name, column_name, type, **options) # :nodoc:
clear_cache!
sqls, procs = Array(change_column_for_alter(table_name, column_name, type, **options)).partition { |v| v.is_a?(String) }
execute "ALTER TABLE #{quote_table_name(table_name)} #{sqls.join(", ")}"
procs.each(&:call)
end
+ # Builds a ChangeColumnDefinition object.
+ #
+ # This definition object contains information about the column change that would occur
+ # if the same arguments were passed to #change_column. See #change_column for information about
+ # passing a +table_name+, +column_name+, +type+ and other options that can be passed.
+ def build_change_column_definition(table_name, column_name, type, **options) # :nodoc:
+ td = create_table_definition(table_name)
+ cd = td.new_column_definition(column_name, type, **options)
+ ChangeColumnDefinition.new(cd, column_name)
+ end
+
# Changes the default value of a table column.
def change_column_default(table_name, column_name, default_or_changes) # :nodoc:
execute "ALTER TABLE #{quote_table_name(table_name)} #{change_column_default_for_alter(table_name, column_name, default_or_changes)}"
end
- def change_column_null(table_name, column_name, null, default = nil) #:nodoc:
+ def build_change_column_default_definition(table_name, column_name, default_or_changes) # :nodoc:
+ column = column_for(table_name, column_name)
+ return unless column
+
+ default = extract_new_default_value(default_or_changes)
+ ChangeColumnDefaultDefinition.new(column, default)
+ end
+
+ def change_column_null(table_name, column_name, null, default = nil) # :nodoc:
+ validate_change_column_null_argument!(null)
+
clear_cache!
unless null || default.nil?
column = column_for(table_name, column_name)
execute "UPDATE #{quote_table_name(table_name)} SET #{quote_column_name(column_name)}=#{quote_default_expression(default, column)} WHERE #{quote_column_name(column_name)} IS NULL" if column
end
- execute "ALTER TABLE #{quote_table_name(table_name)} #{change_column_null_for_alter(table_name, column_name, null, default)}"
+ execute "ALTER TABLE #{quote_table_name(table_name)} ALTER COLUMN #{quote_column_name(column_name)} #{null ? 'DROP' : 'SET'} NOT NULL"
end
# Adds comment for given table column or drops it if +comment+ is a +nil+
@@ -435,22 +466,26 @@ def change_table_comment(table_name, comment_or_changes) # :nodoc:
end
# Renames a column in a table.
- def rename_column(table_name, column_name, new_column_name) #:nodoc:
+ def rename_column(table_name, column_name, new_column_name) # :nodoc:
clear_cache!
execute("ALTER TABLE #{quote_table_name(table_name)} #{rename_column_sql(table_name, column_name, new_column_name)}")
rename_column_indexes(table_name, column_name, new_column_name)
end
- def add_index(table_name, column_name, **options) #:nodoc:
- index, algorithm, if_not_exists = add_index_options(table_name, column_name, **options)
-
- create_index = CreateIndexDefinition.new(index, algorithm, if_not_exists)
+ def add_index(table_name, column_name, **options) # :nodoc:
+ create_index = build_create_index_definition(table_name, column_name, **options)
result = execute schema_creation.accept(create_index)
+ index = create_index.index
execute "COMMENT ON INDEX #{quote_column_name(index.name)} IS #{quote(index.comment)}" if index.comment
result
end
+ def build_create_index_definition(table_name, column_name, **options) # :nodoc:
+ index, algorithm, if_not_exists = add_index_options(table_name, column_name, **options)
+ CreateIndexDefinition.new(index, algorithm, if_not_exists)
+ end
+
def remove_index(table_name, column_name = nil, **options) # :nodoc:
table = Utils.extract_schema_qualified_name(table_name.to_s)
@@ -477,13 +512,33 @@ def remove_index(table_name, column_name = nil, **options) # :nodoc:
def rename_index(table_name, old_name, new_name)
validate_index_length!(table_name, new_name)
- execute "ALTER INDEX #{quote_column_name(old_name)} RENAME TO #{quote_table_name(new_name)}"
+ schema, = extract_schema_qualified_name(table_name)
+ execute "ALTER INDEX #{quote_table_name(schema) + '.' if schema}#{quote_column_name(old_name)} RENAME TO #{quote_table_name(new_name)}"
+ end
+
+ def index_name(table_name, options) # :nodoc:
+ _schema, table_name = extract_schema_qualified_name(table_name.to_s)
+ super
+ end
+
+ def add_foreign_key(from_table, to_table, **options)
+ if options[:deferrable] == true
+ ActiveRecord.deprecator.warn(<<~MSG)
+ `deferrable: true` is deprecated in favor of `deferrable: :immediate`, and will be removed in Rails 7.2.
+ MSG
+
+ options[:deferrable] = :immediate
+ end
+
+ assert_valid_deferrable(options[:deferrable])
+
+ super
end
def foreign_keys(table_name)
scope = quoted_scope(table_name)
- fk_info = exec_query(<<~SQL, "SCHEMA")
- SELECT t2.oid::regclass::text AS to_table, a1.attname AS column, a2.attname AS primary_key, c.conname AS name, c.confupdtype AS on_update, c.confdeltype AS on_delete, c.convalidated AS valid
+ fk_info = internal_exec_query(<<~SQL, "SCHEMA", allow_retry: true, materialize_transactions: false)
+ SELECT t2.oid::regclass::text AS to_table, a1.attname AS column, a2.attname AS primary_key, c.conname AS name, c.confupdtype AS on_update, c.confdeltype AS on_delete, c.convalidated AS valid, c.condeferrable AS deferrable, c.condeferred AS deferred, c.conkey, c.confkey, c.conrelid, c.confrelid
FROM pg_constraint c
JOIN pg_class t1 ON c.conrelid = t1.oid
JOIN pg_class t2 ON c.confrelid = t2.oid
@@ -497,17 +552,31 @@ def foreign_keys(table_name)
SQL
fk_info.map do |row|
+ to_table = Utils.unquote_identifier(row["to_table"])
+ conkey = row["conkey"].scan(/\d+/).map(&:to_i)
+ confkey = row["confkey"].scan(/\d+/).map(&:to_i)
+
+ if conkey.size > 1
+ column = column_names_from_column_numbers(row["conrelid"], conkey)
+ primary_key = column_names_from_column_numbers(row["confrelid"], confkey)
+ else
+ column = Utils.unquote_identifier(row["column"])
+ primary_key = row["primary_key"]
+ end
+
options = {
- column: row["column"],
+ column: column,
name: row["name"],
- primary_key: row["primary_key"]
+ primary_key: primary_key
}
options[:on_delete] = extract_foreign_key_action(row["on_delete"])
options[:on_update] = extract_foreign_key_action(row["on_update"])
+ options[:deferrable] = extract_constraint_deferrable(row["deferrable"], row["deferred"])
+
options[:validate] = row["valid"]
- ForeignKeyDefinition.new(table_name, row["to_table"], options)
+ ForeignKeyDefinition.new(table_name, to_table, options)
end
end
@@ -522,12 +591,14 @@ def foreign_table_exists?(table_name)
def check_constraints(table_name) # :nodoc:
scope = quoted_scope(table_name)
- check_info = exec_query(<<-SQL, "SCHEMA")
- SELECT conname, pg_get_constraintdef(c.oid) AS constraintdef, c.convalidated AS valid
+ check_info = internal_exec_query(<<-SQL, "SCHEMA", allow_retry: true, materialize_transactions: false)
+ SELECT conname, pg_get_constraintdef(c.oid, true) AS constraintdef, c.convalidated AS valid
FROM pg_constraint c
JOIN pg_class t ON c.conrelid = t.oid
+ JOIN pg_namespace n ON n.oid = c.connamespace
WHERE c.contype = 'c'
AND t.relname = #{scope[:name]}
+ AND n.nspname = #{scope[:schema]}
SQL
check_info.map do |row|
@@ -535,14 +606,179 @@ def check_constraints(table_name) # :nodoc:
name: row["conname"],
validate: row["valid"]
}
- expression = row["constraintdef"][/CHECK \({2}(.+)\){2}/, 1]
+ expression = row["constraintdef"][/CHECK \((.+)\)/m, 1]
CheckConstraintDefinition.new(table_name, expression, options)
end
end
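The expression regex is loosened here because `pg_get_constraintdef(c.oid, true)` (the pretty-printed form now used in the query above) does not double the outer parentheses the way the non-pretty form does. A plain-Ruby comparison of the old and new extraction, with illustrative constraintdef strings:

```ruby
# Sketch: old vs. new CHECK expression extraction.
old_re = /CHECK \({2}(.+)\){2}/
new_re = /CHECK \((.+)\)/m

pretty  = "CHECK (price > 0)"    # pg_get_constraintdef(oid, true)
doubled = "CHECK ((price > 0))"  # non-pretty form

pretty[old_re, 1]   # => nil  -- old regex required doubled parens
pretty[new_re, 1]   # => "price > 0"
doubled[new_re, 1]  # => "(price > 0)"
```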
+ # Returns an array of exclusion constraints for the given table.
+ # The exclusion constraints are represented as ExclusionConstraintDefinition objects.
+ def exclusion_constraints(table_name)
+ scope = quoted_scope(table_name)
+
+ exclusion_info = internal_exec_query(<<-SQL, "SCHEMA")
+ SELECT conname, pg_get_constraintdef(c.oid) AS constraintdef, c.condeferrable, c.condeferred
+ FROM pg_constraint c
+ JOIN pg_class t ON c.conrelid = t.oid
+ JOIN pg_namespace n ON n.oid = c.connamespace
+ WHERE c.contype = 'x'
+ AND t.relname = #{scope[:name]}
+ AND n.nspname = #{scope[:schema]}
+ SQL
+
+ exclusion_info.map do |row|
+ method_and_elements, predicate = row["constraintdef"].split(" WHERE ")
+ method_and_elements_parts = method_and_elements.match(/EXCLUDE(?: USING (?<using>\S+))? \((?<expression>.+)\)/)
+ predicate.remove!(/ DEFERRABLE(?: INITIALLY (?:IMMEDIATE|DEFERRED))?/) if predicate
+ predicate = predicate.from(2).to(-3) if predicate # strip 2 opening and closing parentheses
+
+ deferrable = extract_constraint_deferrable(row["condeferrable"], row["condeferred"])
+
+ options = {
+ name: row["conname"],
+ using: method_and_elements_parts["using"].to_sym,
+ where: predicate,
+ deferrable: deferrable
+ }
+
+ ExclusionConstraintDefinition.new(table_name, method_and_elements_parts["expression"], options)
+ end
+ end
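The parsing above splits a `pg_get_constraintdef()` string at `" WHERE "`, extracts the access method and exclusion elements with a named-group regex, then strips the deferrable clause and the doubled parentheses from the predicate. A plain-Ruby sketch (`String#remove!`, `#from`, and `#to` are ActiveSupport; plain equivalents are used here, and the sample string is illustrative):

```ruby
# Plain-Ruby sketch of the EXCLUDE constraintdef parsing above.
constraintdef = "EXCLUDE USING gist (price WITH =, availability_range WITH &&)" \
                " WHERE ((price > 0)) DEFERRABLE INITIALLY DEFERRED"

method_and_elements, predicate = constraintdef.split(" WHERE ")
parts = method_and_elements.match(
  /EXCLUDE(?: USING (?<using>\S+))? \((?<expression>.+)\)/
)

if predicate
  predicate = predicate.sub(/ DEFERRABLE(?: INITIALLY (?:IMMEDIATE|DEFERRED))?/, "")
  predicate = predicate[2..-3]  # strip the two wrapping parentheses
end

parts[:using]       # => "gist"
parts[:expression]  # => "price WITH =, availability_range WITH &&"
predicate           # => "price > 0"
```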
+
+ # Returns an array of unique constraints for the given table.
+ # The unique constraints are represented as UniqueConstraintDefinition objects.
+ def unique_constraints(table_name)
+ scope = quoted_scope(table_name)
+
+ unique_info = internal_exec_query(<<~SQL, "SCHEMA", allow_retry: true, materialize_transactions: false)
+ SELECT c.conname, c.conrelid, c.conkey, c.condeferrable, c.condeferred
+ FROM pg_constraint c
+ JOIN pg_class t ON c.conrelid = t.oid
+ JOIN pg_namespace n ON n.oid = c.connamespace
+ WHERE c.contype = 'u'
+ AND t.relname = #{scope[:name]}
+ AND n.nspname = #{scope[:schema]}
+ SQL
+
+ unique_info.map do |row|
+ conkey = row["conkey"].delete("{}").split(",").map(&:to_i)
+ columns = column_names_from_column_numbers(row["conrelid"], conkey)
+
+ deferrable = extract_constraint_deferrable(row["condeferrable"], row["condeferred"])
+
+ options = {
+ name: row["conname"],
+ deferrable: deferrable
+ }
+
+ UniqueConstraintDefinition.new(table_name, columns, options)
+ end
+ end
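The `conkey` column of `pg_constraint` arrives as a PostgreSQL array literal string such as `"{2,3}"`, which the method above decodes into attribute numbers before resolving them to column names. A one-line sketch of both decodings used in this file:

```ruby
# Sketch: decoding a pg_constraint smallint[] literal into Ruby integers.
conkey = "{2,3}".delete("{}").split(",").map(&:to_i)
conkey  # => [2, 3]

# foreign_keys (further above) reads the same shape with a scan instead:
"{2,3}".scan(/\d+/).map(&:to_i)  # => [2, 3]
```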
+
+ # Adds a new exclusion constraint to the table. +expression+ is a String
+ # representation of a list of exclusion elements and operators.
+ #
+ # add_exclusion_constraint :products, "price WITH =, availability_range WITH &&", using: :gist, name: "price_check"
+ #
+ # generates:
+ #
+ # ALTER TABLE "products" ADD CONSTRAINT price_check EXCLUDE USING gist (price WITH =, availability_range WITH &&)
+ #
+ # The +options+ hash can include the following keys:
+ # [<tt>:name</tt>]
+ # The constraint name. Defaults to <tt>excl_rails_<identifier></tt>.
+ # [<tt>:deferrable</tt>]
+ # Specify whether or not the exclusion constraint should be deferrable. Valid values are +false+ or +:immediate+ or +:deferred+ to specify the default behavior. Defaults to +false+.
+ def add_exclusion_constraint(table_name, expression, **options)
+ options = exclusion_constraint_options(table_name, expression, options)
+ at = create_alter_table(table_name)
+ at.add_exclusion_constraint(expression, options)
+
+ execute schema_creation.accept(at)
+ end
+
+ def exclusion_constraint_options(table_name, expression, options) # :nodoc:
+ assert_valid_deferrable(options[:deferrable])
+
+ options = options.dup
+ options[:name] ||= exclusion_constraint_name(table_name, expression: expression, **options)
+ options
+ end
+
+ # Removes the given exclusion constraint from the table.
+ #
+ # remove_exclusion_constraint :products, name: "price_check"
+ #
+ # The +expression+ parameter will be ignored if present. It can be helpful
+ # to provide this in a migration's +change+ method so it can be reverted.
+ # In that case, +expression+ will be used by #add_exclusion_constraint.
+ def remove_exclusion_constraint(table_name, expression = nil, **options)
+ excl_name_to_delete = exclusion_constraint_for!(table_name, expression: expression, **options).name
+
+ at = create_alter_table(table_name)
+ at.drop_exclusion_constraint(excl_name_to_delete)
+
+ execute schema_creation.accept(at)
+ end
+
+ # Adds a new unique constraint to the table.
+ #
+ # add_unique_constraint :sections, [:position], deferrable: :deferred, name: "unique_position"
+ #
+ # generates:
+ #
+ # ALTER TABLE "sections" ADD CONSTRAINT unique_position UNIQUE (position) DEFERRABLE INITIALLY DEFERRED
+ #
+ # If you want to change an existing unique index to deferrable, you can use :using_index to create deferrable unique constraints.
+ #
+ # add_unique_constraint :sections, deferrable: :deferred, name: "unique_position", using_index: "index_sections_on_position"
+ #
+ # The +options+ hash can include the following keys:
+ # [<tt>:name</tt>]
+ # The constraint name. Defaults to <tt>uniq_rails_<identifier></tt>.
+ # [<tt>:deferrable</tt>]
+ # Specify whether or not the unique constraint should be deferrable. Valid values are +false+ or +:immediate+ or +:deferred+ to specify the default behavior. Defaults to +false+.
+ # [<tt>:using_index</tt>]
+ # To specify an existing unique index name. Defaults to +nil+.
+ def add_unique_constraint(table_name, column_name = nil, **options)
+ options = unique_constraint_options(table_name, column_name, options)
+ at = create_alter_table(table_name)
+ at.add_unique_constraint(column_name, options)
+
+ execute schema_creation.accept(at)
+ end
+
+ def unique_constraint_options(table_name, column_name, options) # :nodoc:
+ assert_valid_deferrable(options[:deferrable])
+
+ if column_name && options[:using_index]
+ raise ArgumentError, "Cannot specify both column_name and :using_index options."
+ end
+
+ options = options.dup
+ options[:name] ||= unique_constraint_name(table_name, column: column_name, **options)
+ options
+ end
+
+ # Removes the given unique constraint from the table.
+ #
+ # remove_unique_constraint :sections, name: "unique_position"
+ #
+ # The +column_name+ parameter will be ignored if present. It can be helpful
+ # to provide this in a migration's +change+ method so it can be reverted.
+ # In that case, +column_name+ will be used by #add_unique_constraint.
+ def remove_unique_constraint(table_name, column_name = nil, **options)
+ unique_name_to_delete = unique_constraint_for!(table_name, column: column_name, **options).name
+
+ at = create_alter_table(table_name)
+ at.drop_unique_constraint(unique_name_to_delete)
+
+ execute schema_creation.accept(at)
+ end
+
# Maps logical Rails types to PostgreSQL-specific data types.
- def type_to_sql(type, limit: nil, precision: nil, scale: nil, array: nil, **) # :nodoc:
+ def type_to_sql(type, limit: nil, precision: nil, scale: nil, array: nil, enum_type: nil, **) # :nodoc:
sql = \
case type.to_s
when "binary"
@@ -566,6 +802,10 @@ def type_to_sql(type, limit: nil, precision: nil, scale: nil, array: nil, **) #
when 5..8; "bigint"
else raise ArgumentError, "No integer type has byte size #{limit}. Use a numeric with scale 0 instead."
end
+ when "enum"
+ raise ArgumentError, "enum_type is required for enums" if enum_type.nil?
+
+ enum_type
else
super
end
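The new `"enum"` branch in `type_to_sql` above simply passes the user-supplied custom type name through as the column's SQL type, and refuses to guess when it is missing. A minimal standalone sketch of that branch (the `enum_sql` helper name is hypothetical, for illustration only):

```ruby
# Minimal sketch of the :enum branch: the SQL type for an enum column is
# the caller-provided enum_type, and omitting it raises.
def enum_sql(enum_type: nil)
  raise ArgumentError, "enum_type is required for enums" if enum_type.nil?
  enum_type
end

enum_sql(enum_type: "mood")  # => "mood"
```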
@@ -576,7 +816,7 @@ def type_to_sql(type, limit: nil, precision: nil, scale: nil, array: nil, **) #
# PostgreSQL requires the ORDER BY columns in the select list for distinct queries, and
# requires that the ORDER BY include the distinct column.
- def columns_for_distinct(columns, orders) #:nodoc:
+ def columns_for_distinct(columns, orders) # :nodoc:
order_columns = orders.compact_blank.map { |s|
# Convert Arel node to string
s = visitor.compile(s) unless s.is_a?(String)
@@ -640,11 +880,32 @@ def validate_check_constraint(table_name, **options)
validate_constraint table_name, chk_name_to_validate
end
- private
- def schema_creation
- PostgreSQL::SchemaCreation.new(self)
+ def foreign_key_column_for(table_name, column_name) # :nodoc:
+ _schema, table_name = extract_schema_qualified_name(table_name)
+ super
+ end
+
+ def add_index_options(table_name, column_name, **options) # :nodoc:
+ if (where = options[:where]) && table_exists?(table_name) && column_exists?(table_name, where)
+ options[:where] = quote_column_name(where)
end
+ super
+ end
+
+ def quoted_include_columns_for_index(column_names) # :nodoc:
+ return quote_column_name(column_names) if column_names.is_a?(Symbol)
+
+ quoted_columns = column_names.each_with_object({}) do |name, result|
+ result[name.to_sym] = quote_column_name(name).dup
+ end
+ add_options_for_index_columns(quoted_columns).values.join(", ")
+ end
+ def schema_creation # :nodoc:
+ PostgreSQL::SchemaCreation.new(self)
+ end
+
+ private
def create_table_definition(name, **options)
PostgreSQL::TableDefinition.new(self, name, **options)
end
@@ -653,11 +914,16 @@ def create_alter_table(name)
PostgreSQL::AlterTable.new create_table_definition(name)
end
- def new_column_from_field(table_name, field)
- column_name, type, default, notnull, oid, fmod, collation, comment = field
+ def new_column_from_field(table_name, field, _definitions)
+ column_name, type, default, notnull, oid, fmod, collation, comment, identity, attgenerated = field
type_metadata = fetch_type_metadata(column_name, type, oid.to_i, fmod.to_i)
default_value = extract_value_from_default(default)
- default_function = extract_default_function(default_value, default)
+
+ if attgenerated.present?
+ default_function = default
+ else
+ default_function = extract_default_function(default_value, default)
+ end
if match = default_function&.match(/\Anextval\('"?(?<sequence_name>.+_(?<suffix>seq\d*))"?'::regclass\)\z/)
serial = sequence_name_from_parts(table_name, column_name, match[:suffix]) == match[:sequence_name]
@@ -671,7 +937,9 @@ def new_column_from_field(table_name, field)
default_function,
collation: collation,
comment: comment.presence,
- serial: serial
+ serial: serial,
+ identity: identity.presence,
+ generated: attgenerated
)
end
@@ -711,38 +979,41 @@ def extract_foreign_key_action(specifier)
end
end
+ def assert_valid_deferrable(deferrable)
+ return if !deferrable || %i(immediate deferred).include?(deferrable)
+
+ raise ArgumentError, "deferrable must be `:immediate` or `:deferred`, got: `#{deferrable.inspect}`"
+ end
+
+ def extract_constraint_deferrable(deferrable, deferred)
+ deferrable && (deferred ? :deferred : :immediate)
+ end
+
+ def reference_name_for_table(table_name)
+ _schema, table_name = extract_schema_qualified_name(table_name.to_s)
+ table_name.singularize
+ end
+
def add_column_for_alter(table_name, column_name, type, **options)
return super unless options.key?(:comment)
[super, Proc.new { change_column_comment(table_name, column_name, options[:comment]) }]
end
def change_column_for_alter(table_name, column_name, type, **options)
- td = create_table_definition(table_name)
- cd = td.new_column_definition(column_name, type, **options)
- sqls = [schema_creation.accept(ChangeColumnDefinition.new(cd, column_name))]
+ change_col_def = build_change_column_definition(table_name, column_name, type, **options)
+ sqls = [schema_creation.accept(change_col_def)]
sqls << Proc.new { change_column_comment(table_name, column_name, options[:comment]) } if options.key?(:comment)
sqls
end
- def change_column_default_for_alter(table_name, column_name, default_or_changes)
- column = column_for(table_name, column_name)
- return unless column
-
- default = extract_new_default_value(default_or_changes)
- alter_column_query = "ALTER COLUMN #{quote_column_name(column_name)} %s"
+ def change_column_null_for_alter(table_name, column_name, null, default = nil)
if default.nil?
- # <tt>DEFAULT NULL</tt> results in the same behavior as <tt>DROP DEFAULT</tt>. However, PostgreSQL will
- # cast the default to the columns type, which leaves us with a default like "default NULL::character varying".
- alter_column_query % "DROP DEFAULT"
+ "ALTER COLUMN #{quote_column_name(column_name)} #{null ? 'DROP' : 'SET'} NOT NULL"
else
- alter_column_query % "SET DEFAULT #{quote_default_expression(default, column)}"
+ Proc.new { change_column_null(table_name, column_name, null, default) }
end
end
- def change_column_null_for_alter(table_name, column_name, null, default = nil)
- "ALTER COLUMN #{quote_column_name(column_name)} #{null ? 'DROP' : 'SET'} NOT NULL"
- end
-
def add_index_opclass(quoted_columns, **options)
opclasses = options_for_index_columns(options[:opclass])
quoted_columns.each do |name, column|
@@ -755,6 +1026,46 @@ def add_options_for_index_columns(quoted_columns, **options)
super
end
+ def exclusion_constraint_name(table_name, **options)
+ options.fetch(:name) do
+ expression = options.fetch(:expression)
+ identifier = "#{table_name}_#{expression}_excl"
+ hashed_identifier = Digest::SHA256.hexdigest(identifier).first(10)
+
+ "excl_rails_#{hashed_identifier}"
+ end
+ end
+
+ def exclusion_constraint_for(table_name, **options)
+ excl_name = exclusion_constraint_name(table_name, **options)
+ exclusion_constraints(table_name).detect { |excl| excl.name == excl_name }
+ end
+
+ def exclusion_constraint_for!(table_name, expression: nil, **options)
+ exclusion_constraint_for(table_name, expression: expression, **options) ||
+ raise(ArgumentError, "Table '#{table_name}' has no exclusion constraint for #{expression || options}")
+ end
+
+ def unique_constraint_name(table_name, **options)
+ options.fetch(:name) do
+ column_or_index = Array(options[:column] || options[:using_index]).map(&:to_s)
+ identifier = "#{table_name}_#{column_or_index * '_and_'}_unique"
+ hashed_identifier = Digest::SHA256.hexdigest(identifier).first(10)
+
+ "uniq_rails_#{hashed_identifier}"
+ end
+ end
+
+ def unique_constraint_for(table_name, **options)
+ name = unique_constraint_name(table_name, **options) unless options.key?(:column)
+ unique_constraints(table_name).detect { |unique_constraint| unique_constraint.defined_for?(name: name, **options) }
+ end
+
+ def unique_constraint_for!(table_name, column: nil, **options)
+ unique_constraint_for(table_name, column: column, **options) ||
+ raise(ArgumentError, "Table '#{table_name}' has no unique constraint for #{column || options}")
+ end
+
def data_source_sql(name = nil, type: nil)
scope = quoted_scope(name, type: type)
scope[:type] ||= "'r','v','m','p','f'" # (r)elation/table, (v)iew, (m)aterialized view, (p)artitioned table, (f)oreign table
@@ -788,6 +1099,15 @@ def extract_schema_qualified_name(string)
name = Utils.extract_schema_qualified_name(string.to_s)
[name.schema, name.identifier]
end
+
+ def column_names_from_column_numbers(table_oid, column_numbers)
+ Hash[query(<<~SQL, "SCHEMA")].values_at(*column_numbers).compact
+ SELECT a.attnum, a.attname
+ FROM pg_attribute a
+ WHERE a.attrelid = #{table_oid}
+ AND a.attnum IN (#{column_numbers.join(", ")})
+ SQL
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql/utils.rb b/activerecord/lib/active_record/connection_adapters/postgresql/utils.rb
index e8caeb8132..110b8017fa 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql/utils.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql/utils.rb
@@ -12,7 +12,7 @@ class Name # :nodoc:
attr_reader :schema, :identifier
def initialize(schema, identifier)
- @schema, @identifier = unquote(schema), unquote(identifier)
+ @schema, @identifier = Utils.unquote_identifier(schema), Utils.unquote_identifier(identifier)
end
def to_s
@@ -40,15 +40,6 @@ def hash
def parts
@parts ||= [@schema, @identifier].compact
end
-
- private
- def unquote(part)
- if part && part.start_with?('"')
- part[1..-2]
- else
- part
- end
- end
end
module Utils # :nodoc:
@@ -74,6 +65,14 @@ def extract_schema_qualified_name(string)
end
PostgreSQL::Name.new(schema, table)
end
+
+ def unquote_identifier(identifier)
+ if identifier && identifier.start_with?('"')
+ identifier[1..-2]
+ else
+ identifier
+ end
+ end
end
end
end
diff --git a/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb b/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
index 4cb41272bb..f2afd0ab45 100644
--- a/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
+++ b/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
@@ -21,28 +21,19 @@
module ActiveRecord
module ConnectionHandling # :nodoc:
+ def postgresql_adapter_class
+ ConnectionAdapters::PostgreSQLAdapter
+ end
+
# Establishes a connection to the database that's used by all Active Record objects
def postgresql_connection(config)
- conn_params = config.symbolize_keys.compact
-
- # Map ActiveRecords param names to PGs.
- conn_params[:user] = conn_params.delete(:username) if conn_params[:username]
- conn_params[:dbname] = conn_params.delete(:database) if conn_params[:database]
-
- # Forward only valid config params to PG::Connection.connect.
- valid_conn_param_keys = PG::Connection.conndefaults_hash.keys + [:requiressl]
- conn_params.slice!(*valid_conn_param_keys)
-
- ConnectionAdapters::PostgreSQLAdapter.new(
- ConnectionAdapters::PostgreSQLAdapter.new_client(conn_params),
- logger,
- conn_params,
- config,
- )
+ postgresql_adapter_class.new(config)
end
end
module ConnectionAdapters
+ # = Active Record PostgreSQL Adapter
+ #
# The PostgreSQL adapter works with the native C (https://github.com/ged/ruby-pg) driver.
#
# Options:
@@ -52,7 +43,7 @@ module ConnectionAdapters
# * <tt>:port</tt> - Defaults to 5432.
# * <tt>:username</tt> - Defaults to be the same as the operating system name of the user running the application.
# * <tt>:password</tt> - Password to be used if the server demands password authentication.
- # * <tt>:database</tt> - Defaults to be the same as the user name.
+ # * <tt>:database</tt> - Defaults to be the same as the username.
# * <tt>:schema_search_path</tt> - An optional schema search path for the connection given
# as a string of comma-separated schema names. This is backward-compatible with the <tt>:schema_order</tt> option.
# * <tt>:encoding</tt> - An optional client encoding that is used in a <tt>SET client_encoding TO
@@ -77,12 +68,37 @@ class << self
def new_client(conn_params)
PG.connect(**conn_params)
rescue ::PG::Error => error
- if conn_params && conn_params[:dbname] && error.message.include?(conn_params[:dbname])
- raise ActiveRecord::NoDatabaseError
+ if conn_params && conn_params[:dbname] == "postgres"
+ raise ActiveRecord::ConnectionNotEstablished, error.message
+ elsif conn_params && conn_params[:dbname] && error.message.include?(conn_params[:dbname])
+ raise ActiveRecord::NoDatabaseError.db_error(conn_params[:dbname])
+ elsif conn_params && conn_params[:user] && error.message.include?(conn_params[:user])
+ raise ActiveRecord::DatabaseConnectionError.username_error(conn_params[:user])
+ elsif conn_params && conn_params[:host] && error.message.include?(conn_params[:host])
+ raise ActiveRecord::DatabaseConnectionError.hostname_error(conn_params[:host])
else
raise ActiveRecord::ConnectionNotEstablished, error.message
end
end
+
+ def dbconsole(config, options = {})
+ pg_config = config.configuration_hash
+
+ ENV["PGUSER"] = pg_config[:username] if pg_config[:username]
+ ENV["PGHOST"] = pg_config[:host] if pg_config[:host]
+ ENV["PGPORT"] = pg_config[:port].to_s if pg_config[:port]
+ ENV["PGPASSWORD"] = pg_config[:password].to_s if pg_config[:password] && options[:include_password]
+ ENV["PGSSLMODE"] = pg_config[:sslmode].to_s if pg_config[:sslmode]
+ ENV["PGSSLCERT"] = pg_config[:sslcert].to_s if pg_config[:sslcert]
+ ENV["PGSSLKEY"] = pg_config[:sslkey].to_s if pg_config[:sslkey]
+ ENV["PGSSLROOTCERT"] = pg_config[:sslrootcert].to_s if pg_config[:sslrootcert]
+ if pg_config[:variables]
+ ENV["PGOPTIONS"] = pg_config[:variables].filter_map do |name, value|
+ "-c #{name}=#{value.to_s.gsub(/[ \\]/, '\\\\\0')}" unless value == ":default" || value == :default
+ end.join(" ")
+ end
+ find_cmd_and_exec("psql", config.database)
+ end
end
##
@@ -92,20 +108,42 @@ def new_client(conn_params)
# but significantly increases the risk of data loss if the database
# crashes. As a result, this should not be used in production
# environments. If you would like all created tables to be unlogged in
- # the test environment you can add the following line to your test.rb
- # file:
+ # the test environment you can add the following to your test.rb file:
#
- # ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables = true
+ # ActiveSupport.on_load(:active_record_postgresqladapter) do
+ # self.create_unlogged_tables = true
+ # end
class_attribute :create_unlogged_tables, default: false
+ ##
+ # :singleton-method:
+ # PostgreSQL supports multiple types for DateTimes. By default, if you use +datetime+
+ # in migrations, \Rails will translate this to a PostgreSQL "timestamp without time zone".
+ # Change this in an initializer to use another NATIVE_DATABASE_TYPES. For example, to
+ # store DateTimes as "timestamp with time zone":
+ #
+ # ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.datetime_type = :timestamptz
+ #
+ # Or if you are adding a custom type:
+ #
+ # ActiveRecord::ConnectionAdapters::PostgreSQLAdapter::NATIVE_DATABASE_TYPES[:my_custom_type] = { name: "my_custom_type_name" }
+ # ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.datetime_type = :my_custom_type
+ #
+ # If you're using +:ruby+ as your +config.active_record.schema_format+ and you change this
+ # setting, you should immediately run <tt>bin/rails db:migrate</tt> to update the types in your schema.rb.
+ class_attribute :datetime_type, default: :timestamp
+
NATIVE_DATABASE_TYPES = {
primary_key: "bigserial primary key",
string: { name: "character varying" },
text: { name: "text" },
integer: { name: "integer", limit: 4 },
+ bigint: { name: "bigint" },
float: { name: "float" },
decimal: { name: "decimal" },
- datetime: { name: "timestamp" },
+ datetime: {}, # set dynamically based on datetime_type
+ timestamp: { name: "timestamp" },
+ timestamptz: { name: "timestamptz" },
time: { name: "time" },
date: { name: "date" },
daterange: { name: "daterange" },
@@ -139,9 +177,10 @@ def new_client(conn_params)
money: { name: "money" },
interval: { name: "interval" },
oid: { name: "oid" },
+ enum: {} # special type https://www.postgresql.org/docs/current/datatype-enum.html
}
- OID = PostgreSQL::OID #:nodoc:
+ OID = PostgreSQL::OID # :nodoc:
include PostgreSQL::Quoting
include PostgreSQL::ReferentialIntegrity
@@ -157,13 +196,17 @@ def supports_index_sort_order?
end
def supports_partitioned_indexes?
- database_version >= 110_000
+ database_version >= 11_00_00 # >= 11.0
end
def supports_partial_index?
true
end
+ def supports_index_include?
+ database_version >= 11_00_00 # >= 11.0
+ end
+
def supports_expression_index?
true
end
@@ -180,10 +223,22 @@ def supports_check_constraints?
true
end
+ def supports_exclusion_constraints?
+ true
+ end
+
+ def supports_unique_constraints?
+ true
+ end
+
def supports_validate_constraints?
true
end
+ def supports_deferrable_constraints?
+ true
+ end
+
def supports_views?
true
end
@@ -204,21 +259,41 @@ def supports_savepoints?
true
end
+ def supports_restart_db_transaction?
+ database_version >= 12_00_00 # >= 12.0
+ end
+
def supports_insert_returning?
true
end
def supports_insert_on_conflict?
- database_version >= 90500
+ database_version >= 9_05_00 # >= 9.5
end
alias supports_insert_on_duplicate_skip? supports_insert_on_conflict?
alias supports_insert_on_duplicate_update? supports_insert_on_conflict?
alias supports_insert_conflict_target? supports_insert_on_conflict?
+ def supports_virtual_columns?
+ database_version >= 12_00_00 # >= 12.0
+ end
+
+ def supports_identity_columns? # :nodoc:
+ database_version >= 10_00_00 # >= 10.0
+ end
+
+ def supports_nulls_not_distinct?
+ database_version >= 15_00_00 # >= 15.0
+ end
+
def index_algorithms
{ concurrently: "CONCURRENTLY" }
end
+ def return_value_after_insert?(column) # :nodoc:
+ column.auto_populated?
+ end
+
class StatementPool < ConnectionAdapters::StatementPool # :nodoc:
def initialize(connection, max)
super(max)
@@ -232,73 +307,74 @@ def next_key
private
def dealloc(key)
- @connection.query "DEALLOCATE #{key}" if connection_active?
- rescue PG::Error
- end
-
- def connection_active?
- @connection.status == PG::CONNECTION_OK
+ # This is ugly, but safe: the statement pool is only
+ # accessed while holding the connection's lock. (And we
+ # don't need the complication of with_raw_connection because
+ # a reconnect would invalidate the entire statement pool.)
+ if conn = @connection.instance_variable_get(:@raw_connection)
+ conn.query "DEALLOCATE #{key}" if conn.status == PG::CONNECTION_OK
+ end
rescue PG::Error
- false
end
end
# Initializes and connects a PostgreSQL adapter.
- def initialize(connection, logger, connection_parameters, config)
- super(connection, logger, config)
+ def initialize(...)
+ super
- @connection_parameters = connection_parameters || {}
+ conn_params = @config.compact
- # @local_tz is initialized as nil to avoid warnings when connect tries to use it
- @local_tz = nil
- @max_identifier_length = nil
+ # Map ActiveRecord's param names to PG's.
+ conn_params[:user] = conn_params.delete(:username) if conn_params[:username]
+ conn_params[:dbname] = conn_params.delete(:database) if conn_params[:database]
- configure_connection
- add_pg_encoders
- add_pg_decoders
+ # Forward only valid config params to PG::Connection.connect.
+ valid_conn_param_keys = PG::Connection.conndefaults_hash.keys + [:requiressl]
+ conn_params.slice!(*valid_conn_param_keys)
- @type_map = Type::HashLookupTypeMap.new
- initialize_type_map
- @local_tz = execute("SHOW TIME ZONE", "SCHEMA").first["TimeZone"]
- @use_insert_returning = @config.key?(:insert_returning) ? self.class.type_cast_config_to_boolean(@config[:insert_returning]) : true
- end
+ @connection_parameters = conn_params
- def self.database_exists?(config)
- !!ActiveRecord::Base.postgresql_connection(config)
- rescue ActiveRecord::NoDatabaseError
- false
+ @max_identifier_length = nil
+ @type_map = nil
+ @raw_connection = nil
+ @notice_receiver_sql_warnings = []
+
+ @use_insert_returning = @config.key?(:insert_returning) ? self.class.type_cast_config_to_boolean(@config[:insert_returning]) : true
end
# Is this connection alive and ready for queries?
def active?
@lock.synchronize do
- @connection.query "SELECT 1"
+ return false unless @raw_connection
+ @raw_connection.query ";"
end
true
rescue PG::Error
false
end
- # Close then reopen the connection.
- def reconnect!
+ def reload_type_map # :nodoc:
@lock.synchronize do
- super
- @connection.reset
- configure_connection
- rescue PG::ConnectionBad
- connect
+ if @type_map
+ type_map.clear
+ else
+ @type_map = Type::HashLookupTypeMap.new
+ end
+
+ initialize_type_map
end
end
def reset!
@lock.synchronize do
- clear_cache!
- reset_transaction
- unless @connection.transaction_status == ::PG::PQTRANS_IDLE
- @connection.query "ROLLBACK"
+ return connect! unless @raw_connection
+
+ unless @raw_connection.transaction_status == ::PG::PQTRANS_IDLE
+ @raw_connection.query "ROLLBACK"
end
- @connection.query "DISCARD ALL"
- configure_connection
+ @raw_connection.query "DISCARD ALL"
+
+ super
end
end
@@ -307,22 +383,31 @@ def reset!
def disconnect!
@lock.synchronize do
super
- @connection.close rescue nil
+ @raw_connection&.close rescue nil
+ @raw_connection = nil
end
end
def discard! # :nodoc:
super
- @connection.socket_io.reopen(IO::NULL) rescue nil
- @connection = nil
+ @raw_connection&.socket_io&.reopen(IO::NULL) rescue nil
+ @raw_connection = nil
+ end
+
+ def native_database_types # :nodoc:
+ self.class.native_database_types
end
- def native_database_types #:nodoc:
- NATIVE_DATABASE_TYPES
+ def self.native_database_types # :nodoc:
+ @native_database_types ||= begin
+ types = NATIVE_DATABASE_TYPES.dup
+ types[:datetime] = types[datetime_type]
+ types
+ end
end
def set_standard_conforming_strings
- execute("SET standard_conforming_strings = on", "SCHEMA")
+ internal_execute("SET standard_conforming_strings = on")
end
def supports_ddl_transactions?
@@ -350,7 +435,7 @@ def supports_foreign_tables?
end
def supports_pgcrypto_uuid?
- database_version >= 90400
+ database_version >= 9_04_00 # >= 9.4
end
def supports_optimizer_hints?
@@ -382,14 +467,21 @@ def release_advisory_lock(lock_id) # :nodoc:
query_value("SELECT pg_advisory_unlock(#{lock_id})")
end
- def enable_extension(name)
- exec_query("CREATE EXTENSION IF NOT EXISTS \"#{name}\"").tap {
- reload_type_map
- }
+ def enable_extension(name, **)
+ schema, name = name.to_s.split(".").values_at(-2, -1)
+ sql = +"CREATE EXTENSION IF NOT EXISTS \"#{name}\""
+ sql << " SCHEMA #{schema}" if schema
+
+ internal_exec_query(sql).tap { reload_type_map }
end
- def disable_extension(name)
- exec_query("DROP EXTENSION IF EXISTS \"#{name}\" CASCADE").tap {
+ # Removes an extension from the database.
+ #
+ # [<tt>:force</tt>]
+ # Set to +:cascade+ to drop dependent objects as well.
+ # Defaults to false.
+ def disable_extension(name, force: false)
+ internal_exec_query("DROP EXTENSION IF EXISTS \"#{name}\"#{' CASCADE' if force == :cascade}").tap {
reload_type_map
}
end
@@ -403,7 +495,105 @@ def extension_enabled?(name)
end
def extensions
- exec_query("SELECT extname FROM pg_extension", "SCHEMA").cast_values
+ internal_exec_query("SELECT extname FROM pg_extension", "SCHEMA", allow_retry: true, materialize_transactions: false).cast_values
+ end
+
+ # Returns a list of defined enum types, and their values.
+ def enum_types
+ query = <<~SQL
+ SELECT
+ type.typname AS name,
+ type.OID AS oid,
+ n.nspname AS schema,
+ string_agg(enum.enumlabel, ',' ORDER BY enum.enumsortorder) AS value
+ FROM pg_enum AS enum
+ JOIN pg_type AS type ON (type.oid = enum.enumtypid)
+ JOIN pg_namespace n ON type.typnamespace = n.oid
+ WHERE n.nspname = ANY (current_schemas(false))
+ GROUP BY type.OID, n.nspname, type.typname;
+ SQL
+
+ internal_exec_query(query, "SCHEMA", allow_retry: true, materialize_transactions: false).cast_values.each_with_object({}) do |row, memo|
+ name, schema = row[0], row[2]
+ schema = nil if schema == current_schema
+ full_name = [schema, name].compact.join(".")
+ memo[full_name] = row.last
+ end.to_a
+ end
+
+ # Given a name and an array of values, creates an enum type.
+ def create_enum(name, values, **options)
+ sql_values = values.map { |s| quote(s) }.join(", ")
+ scope = quoted_scope(name)
+ query = <<~SQL
+ DO $$
+ BEGIN
+ IF NOT EXISTS (
+ SELECT 1
+ FROM pg_type t
+ JOIN pg_namespace n ON t.typnamespace = n.oid
+ WHERE t.typname = #{scope[:name]}
+ AND n.nspname = #{scope[:schema]}
+ ) THEN
+ CREATE TYPE #{quote_table_name(name)} AS ENUM (#{sql_values});
+ END IF;
+ END
+ $$;
+ SQL
+ internal_exec_query(query).tap { reload_type_map }
+ end
+
+ # Drops an enum type.
+ #
+ # If the <tt>if_exists: true</tt> option is provided, the enum is dropped
+ # only if it exists. Otherwise, if the enum doesn't exist, an error is
+ # raised.
+ #
+ # The +values+ parameter will be ignored if present. It can be helpful
+ # to provide this in a migration's +change+ method so it can be reverted.
+ # In that case, +values+ will be used by #create_enum.
+ def drop_enum(name, values = nil, **options)
+ query = <<~SQL
+ DROP TYPE#{' IF EXISTS' if options[:if_exists]} #{quote_table_name(name)};
+ SQL
+ internal_exec_query(query).tap { reload_type_map }
+ end
+
+ # Rename an existing enum type to something else.
+ def rename_enum(name, options = {})
+ to = options.fetch(:to) { raise ArgumentError, ":to is required" }
+
+ exec_query("ALTER TYPE #{quote_table_name(name)} RENAME TO #{to}").tap { reload_type_map }
+ end
+
+ # Add enum value to an existing enum type.
+ def add_enum_value(type_name, value, options = {})
+ before, after = options.values_at(:before, :after)
+ sql = +"ALTER TYPE #{quote_table_name(type_name)} ADD VALUE '#{value}'"
+
+ if before && after
+ raise ArgumentError, "Cannot have both :before and :after at the same time"
+ elsif before
+ sql << " BEFORE '#{before}'"
+ elsif after
+ sql << " AFTER '#{after}'"
+ end
+
+ execute(sql).tap { reload_type_map }
+ end
+
+ # Rename enum value on an existing enum type.
+ def rename_enum_value(type_name, options = {})
+ unless database_version >= 10_00_00 # >= 10.0
+ raise ArgumentError, "Renaming enum values is only supported in PostgreSQL 10 or later"
+ end
+
+ from = options.fetch(:from) { raise ArgumentError, ":from is required" }
+ to = options.fetch(:to) { raise ArgumentError, ":to is required" }
+
+ execute("ALTER TYPE #{quote_table_name(type_name)} RENAME VALUE '#{from}' TO '#{to}'").tap {
+ reload_type_map
+ }
end
# Returns the configured supported identifier length supported by PostgreSQL
@@ -414,7 +604,7 @@ def max_identifier_length
# Set the authorized user for this session
def session_auth=(user)
clear_cache!
- execute("SET SESSION AUTHORIZATION #{user}")
+ internal_execute("SET SESSION AUTHORIZATION #{user}", nil, materialize_transactions: true)
end
def use_insert_returning?
@@ -423,7 +613,7 @@ def use_insert_returning?
# Returns the version of the connected PostgreSQL server.
def get_database_version # :nodoc:
- @connection.server_version
+ valid_raw_connection.server_version
end
alias :postgresql_version :database_version
@@ -438,8 +628,12 @@ def build_insert_sql(insert) # :nodoc:
sql << " ON CONFLICT #{insert.conflict_target} DO NOTHING"
elsif insert.update_duplicates?
sql << " ON CONFLICT #{insert.conflict_target} DO UPDATE SET "
- sql << insert.touch_model_timestamps_unless { |column| "#{insert.model.quoted_table_name}.#{column} IS NOT DISTINCT FROM excluded.#{column}" }
- sql << insert.updatable_columns.map { |column| "#{column}=excluded.#{column}" }.join(",")
+ if insert.raw_update_sql?
+ sql << insert.raw_update_sql
+ else
+ sql << insert.touch_model_timestamps_unless { |column| "#{insert.model.quoted_table_name}.#{column} IS NOT DISTINCT FROM excluded.#{column}" }
+ sql << insert.updatable_columns.map { |column| "#{column}=excluded.#{column}" }.join(",")
+ end
end
sql << " RETURNING #{insert.returning}" if insert.returning
@@ -447,73 +641,13 @@ def build_insert_sql(insert) # :nodoc:
end
def check_version # :nodoc:
- if database_version < 90300
+ if database_version < 9_03_00 # < 9.3
raise "Your version of PostgreSQL (#{database_version}) is too old. Active Record supports PostgreSQL >= 9.3."
end
end
- private
- # See https://www.postgresql.org/docs/current/static/errcodes-appendix.html
- VALUE_LIMIT_VIOLATION = "22001"
- NUMERIC_VALUE_OUT_OF_RANGE = "22003"
- NOT_NULL_VIOLATION = "23502"
- FOREIGN_KEY_VIOLATION = "23503"
- UNIQUE_VIOLATION = "23505"
- SERIALIZATION_FAILURE = "40001"
- DEADLOCK_DETECTED = "40P01"
- DUPLICATE_DATABASE = "42P04"
- LOCK_NOT_AVAILABLE = "55P03"
- QUERY_CANCELED = "57014"
-
- def translate_exception(exception, message:, sql:, binds:)
- return exception unless exception.respond_to?(:result)
-
- case exception.result.try(:error_field, PG::PG_DIAG_SQLSTATE)
- when nil
- if exception.message.match?(/connection is closed/i)
- ConnectionNotEstablished.new(exception)
- else
- super
- end
- when UNIQUE_VIOLATION
- RecordNotUnique.new(message, sql: sql, binds: binds)
- when FOREIGN_KEY_VIOLATION
- InvalidForeignKey.new(message, sql: sql, binds: binds)
- when VALUE_LIMIT_VIOLATION
- ValueTooLong.new(message, sql: sql, binds: binds)
- when NUMERIC_VALUE_OUT_OF_RANGE
- RangeError.new(message, sql: sql, binds: binds)
- when NOT_NULL_VIOLATION
- NotNullViolation.new(message, sql: sql, binds: binds)
- when SERIALIZATION_FAILURE
- SerializationFailure.new(message, sql: sql, binds: binds)
- when DEADLOCK_DETECTED
- Deadlocked.new(message, sql: sql, binds: binds)
- when DUPLICATE_DATABASE
- DatabaseAlreadyExists.new(message, sql: sql, binds: binds)
- when LOCK_NOT_AVAILABLE
- LockWaitTimeout.new(message, sql: sql, binds: binds)
- when QUERY_CANCELED
- QueryCanceled.new(message, sql: sql, binds: binds)
- else
- super
- end
- end
-
- def get_oid_type(oid, fmod, column_name, sql_type = "")
- if !type_map.key?(oid)
- load_additional_types([oid])
- end
-
- type_map.fetch(oid, fmod, sql_type) {
- warn "unknown OID #{oid}: failed to recognize type of '#{column_name}'. It will be treated as String."
- Type.default_value.tap do |cast_type|
- type_map.register_type(oid, cast_type)
- end
- }
- end
-
- def initialize_type_map(m = type_map)
+ class << self
+ def initialize_type_map(m) # :nodoc:
m.register_type "int2", Type::Integer.new(limit: 2)
m.register_type "int4", Type::Integer.new(limit: 4)
m.register_type "int8", Type::Integer.new(limit: 8)
@@ -528,7 +662,6 @@ def initialize_type_map(m = type_map)
m.register_type "bool", Type::Boolean.new
register_class_with_limit m, "bit", OID::Bit
register_class_with_limit m, "varbit", OID::BitVarying
- m.alias_type "timestamptz", "timestamp"
m.register_type "date", OID::Date.new
m.register_type "money", OID::Money.new
@@ -552,9 +685,6 @@ def initialize_type_map(m = type_map)
m.register_type "polygon", OID::SpecializedString.new(:polygon)
m.register_type "circle", OID::SpecializedString.new(:circle)
- register_class_with_precision m, "time", Type::Time
- register_class_with_precision m, "timestamp", OID::DateTime
-
m.register_type "numeric" do |_, fmod, sql_type|
precision = extract_precision(sql_type)
scale = extract_scale(sql_type)
@@ -579,6 +709,18 @@ def initialize_type_map(m = type_map)
precision = extract_precision(sql_type)
OID::Interval.new(precision: precision)
end
+ end
+ end
+
+ private
+ attr_reader :type_map
+
+ def initialize_type_map(m = type_map)
+ self.class.initialize_type_map(m)
+
+ self.class.register_class_with_precision m, "time", Type::Time, timezone: @default_timezone
+ self.class.register_class_with_precision m, "timestamp", OID::Timestamp, timezone: @default_timezone
+ self.class.register_class_with_precision m, "timestamptz", OID::TimestampWithTimeZone
load_additional_types
end
@@ -587,7 +729,7 @@ def initialize_type_map(m = type_map)
def extract_value_from_default(default)
case default
# Quoted types
- when /\A[\(B]?'(.*)'.*::"?([\w. ]+)"?(?:\[\])?\z/m
+ when /\A[(B]?'(.*)'.*::"?([\w. ]+)"?(?:\[\])?\z/m
# The default 'now'::date is CURRENT_DATE
if $1 == "now" && $2 == "date"
nil
@@ -618,37 +760,118 @@ def has_default_function?(default_value, default)
!default_value && %r{\w+\(.*\)|\(.*\)::\w+|CURRENT_DATE|CURRENT_TIMESTAMP}.match?(default)
end
+ # See https://www.postgresql.org/docs/current/static/errcodes-appendix.html
+ VALUE_LIMIT_VIOLATION = "22001"
+ NUMERIC_VALUE_OUT_OF_RANGE = "22003"
+ NOT_NULL_VIOLATION = "23502"
+ FOREIGN_KEY_VIOLATION = "23503"
+ UNIQUE_VIOLATION = "23505"
+ SERIALIZATION_FAILURE = "40001"
+ DEADLOCK_DETECTED = "40P01"
+ DUPLICATE_DATABASE = "42P04"
+ LOCK_NOT_AVAILABLE = "55P03"
+ QUERY_CANCELED = "57014"
+
+ def translate_exception(exception, message:, sql:, binds:)
+ return exception unless exception.respond_to?(:result)
+
+ case exception.result.try(:error_field, PG::PG_DIAG_SQLSTATE)
+ when nil
+ if exception.message.match?(/connection is closed/i)
+ ConnectionNotEstablished.new(exception, connection_pool: @pool)
+ elsif exception.is_a?(PG::ConnectionBad)
+ # libpq message style always ends with a newline; the pg gem's internal
+ # errors do not. We separate these cases because a pg-internal
+ # ConnectionBad means it failed before it managed to send the query,
+ # whereas a libpq failure could have occurred at any time (meaning the
+ # server may have already executed part or all of the query).
+ if exception.message.end_with?("\n")
+ ConnectionFailed.new(exception, connection_pool: @pool)
+ else
+ ConnectionNotEstablished.new(exception, connection_pool: @pool)
+ end
+ else
+ super
+ end
+ when UNIQUE_VIOLATION
+ RecordNotUnique.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when FOREIGN_KEY_VIOLATION
+ InvalidForeignKey.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when VALUE_LIMIT_VIOLATION
+ ValueTooLong.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when NUMERIC_VALUE_OUT_OF_RANGE
+ RangeError.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when NOT_NULL_VIOLATION
+ NotNullViolation.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when SERIALIZATION_FAILURE
+ SerializationFailure.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when DEADLOCK_DETECTED
+ Deadlocked.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when DUPLICATE_DATABASE
+ DatabaseAlreadyExists.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when LOCK_NOT_AVAILABLE
+ LockWaitTimeout.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ when QUERY_CANCELED
+ QueryCanceled.new(message, sql: sql, binds: binds, connection_pool: @pool)
+ else
+ super
+ end
+ end
+
+ def retryable_query_error?(exception)
+ # We cannot retry anything if we're inside a broken transaction; we need to at
+ # least raise until the innermost savepoint is rolled back
+ @raw_connection&.transaction_status != ::PG::PQTRANS_INERROR &&
+ super
+ end
+
+ def get_oid_type(oid, fmod, column_name, sql_type = "")
+ if !type_map.key?(oid)
+ load_additional_types([oid])
+ end
+
+ type_map.fetch(oid, fmod, sql_type) {
+ warn "unknown OID #{oid}: failed to recognize type of '#{column_name}'. It will be treated as String."
+ Type.default_value.tap do |cast_type|
+ type_map.register_type(oid, cast_type)
+ end
+ }
+ end
+
def load_additional_types(oids = nil)
initializer = OID::TypeMapInitializer.new(type_map)
+ load_types_queries(initializer, oids) do |query|
+ execute_and_clear(query, "SCHEMA", [], allow_retry: true, materialize_transactions: false) do |records|
+ initializer.run(records)
+ end
+ end
+ end
+ def load_types_queries(initializer, oids)
query = <<~SQL
SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
FROM pg_type as t
LEFT JOIN pg_range as r ON oid = rngtypid
SQL
-
if oids
- query += "WHERE t.oid IN (%s)" % oids.join(", ")
+ yield query + "WHERE t.oid IN (%s)" % oids.join(", ")
else
- query += initializer.query_conditions_for_initial_load
- end
-
- execute_and_clear(query, "SCHEMA", []) do |records|
- initializer.run(records)
+ yield query + initializer.query_conditions_for_known_type_names
+ yield query + initializer.query_conditions_for_known_type_types
+ yield query + initializer.query_conditions_for_array_types
end
end
- FEATURE_NOT_SUPPORTED = "0A000" #:nodoc:
+ FEATURE_NOT_SUPPORTED = "0A000" # :nodoc:
- def execute_and_clear(sql, name, binds, prepare: false)
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
+ def execute_and_clear(sql, name, binds, prepare: false, async: false, allow_retry: false, materialize_transactions: true)
+ sql = transform_query(sql)
+ check_if_write_query(sql)
if !prepare || without_prepared_statement?(binds)
- result = exec_no_cache(sql, name, binds)
+ result = exec_no_cache(sql, name, binds, async: async, allow_retry: allow_retry, materialize_transactions: materialize_transactions)
else
- result = exec_cache(sql, name, binds)
+ result = exec_cache(sql, name, binds, async: async, allow_retry: allow_retry, materialize_transactions: materialize_transactions)
end
begin
ret = yield result
@@ -658,33 +881,36 @@ def execute_and_clear(sql, name, binds, prepare: false)
ret
end
- def exec_no_cache(sql, name, binds)
- materialize_transactions
+ def exec_no_cache(sql, name, binds, async:, allow_retry:, materialize_transactions:)
mark_transaction_written_if_write(sql)
- # make sure we carry over any changes to ActiveRecord::Base.default_timezone that have been
+ # make sure we carry over any changes to ActiveRecord.default_timezone that have been
# made since we established the connection
update_typemap_for_default_timezone
type_casted_binds = type_casted_binds(binds)
- log(sql, name, binds, type_casted_binds) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.exec_params(sql, type_casted_binds)
+ log(sql, name, binds, type_casted_binds, async: async) do
+ with_raw_connection do |conn|
+ result = conn.exec_params(sql, type_casted_binds)
+ verified!
+ result
end
end
end
- def exec_cache(sql, name, binds)
- materialize_transactions
+ def exec_cache(sql, name, binds, async:, allow_retry:, materialize_transactions:)
mark_transaction_written_if_write(sql)
+
update_typemap_for_default_timezone
- stmt_key = prepare_statement(sql, binds)
- type_casted_binds = type_casted_binds(binds)
+ with_raw_connection do |conn|
+ stmt_key = prepare_statement(sql, binds, conn)
+ type_casted_binds = type_casted_binds(binds)
- log(sql, name, binds, type_casted_binds, stmt_key) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.exec_prepared(stmt_key, type_casted_binds)
+ log(sql, name, binds, type_casted_binds, stmt_key, async: async) do
+ result = conn.exec_prepared(stmt_key, type_casted_binds)
+ verified!
+ result
end
end
rescue ActiveRecord::StatementInvalid => e
@@ -732,70 +958,98 @@ def sql_key(sql)
# Prepare the statement if it hasn't been prepared, return
# the statement key.
- def prepare_statement(sql, binds)
- @lock.synchronize do
- sql_key = sql_key(sql)
- unless @statements.key? sql_key
- nextkey = @statements.next_key
- begin
- @connection.prepare nextkey, sql
- rescue => e
- raise translate_exception_class(e, sql, binds)
- end
- # Clear the queue
- @connection.get_last_result
- @statements[sql_key] = nextkey
+ def prepare_statement(sql, binds, conn)
+ sql_key = sql_key(sql)
+ unless @statements.key? sql_key
+ nextkey = @statements.next_key
+ begin
+ conn.prepare nextkey, sql
+ rescue => e
+ raise translate_exception_class(e, sql, binds)
end
- @statements[sql_key]
+ # Clear the queue
+ conn.get_last_result
+ @statements[sql_key] = nextkey
end
+ @statements[sql_key]
end
# Connects to a PostgreSQL server and sets up the adapter depending on the
# connected server's characteristics.
def connect
- @connection = self.class.new_client(@connection_parameters)
- configure_connection
- add_pg_encoders
- add_pg_decoders
+ @raw_connection = self.class.new_client(@connection_parameters)
+ rescue ConnectionNotEstablished => ex
+ raise ex.set_pool(@pool)
+ end
+
+ def reconnect
+ begin
+ @raw_connection&.reset
+ rescue PG::ConnectionBad
+ @raw_connection = nil
+ end
+
+ connect unless @raw_connection
end
# Configures the encoding, verbosity, schema search path, and time zone of the connection.
# This is called by #connect and should not be called manually.
def configure_connection
if @config[:encoding]
- @connection.set_client_encoding(@config[:encoding])
+ @raw_connection.set_client_encoding(@config[:encoding])
end
self.client_min_messages = @config[:min_messages] || "warning"
self.schema_search_path = @config[:schema_search_path] || @config[:schema_order]
+ unless ActiveRecord.db_warnings_action.nil?
+ @raw_connection.set_notice_receiver do |result|
+ message = result.error_field(PG::Result::PG_DIAG_MESSAGE_PRIMARY)
+ code = result.error_field(PG::Result::PG_DIAG_SQLSTATE)
+ level = result.error_field(PG::Result::PG_DIAG_SEVERITY)
+ @notice_receiver_sql_warnings << SQLWarning.new(message, code, level, nil, @pool)
+ end
+ end
+
# Use standard-conforming strings so we don't have to do the E'...' dance.
set_standard_conforming_strings
variables = @config.fetch(:variables, {}).stringify_keys
- # If using Active Record's time zone support configure the connection to return
- # TIMESTAMP WITH ZONE types in UTC.
- unless variables["timezone"]
- if ActiveRecord::Base.default_timezone == :utc
- variables["timezone"] = "UTC"
- elsif @local_tz
- variables["timezone"] = @local_tz
- end
- end
-
# Set interval output format to ISO 8601 for ease of parsing by ActiveSupport::Duration.parse
- execute("SET intervalstyle = iso_8601", "SCHEMA")
+ internal_execute("SET intervalstyle = iso_8601")
# SET statements from :variables config hash
# https://www.postgresql.org/docs/current/static/sql-set.html
variables.map do |k, v|
if v == ":default" || v == :default
# Sets the value to the global or compile default
- execute("SET SESSION #{k} TO DEFAULT", "SCHEMA")
+ internal_execute("SET SESSION #{k} TO DEFAULT")
elsif !v.nil?
- execute("SET SESSION #{k} TO #{quote(v)}", "SCHEMA")
+ internal_execute("SET SESSION #{k} TO #{quote(v)}")
end
end
+
+ add_pg_encoders
+ add_pg_decoders
+
+ reload_type_map
+ end
+
+ def reconfigure_connection_timezone
+ variables = @config.fetch(:variables, {}).stringify_keys
+
+ # If it's been directly configured as a connection variable, we don't
+ # need to do anything here; it will be set up by configure_connection
+ # and then never changed.
+ return if variables["timezone"]
+
+ # If using Active Record's time zone support configure the connection
+ # to return TIMESTAMP WITH ZONE types in UTC.
+ if default_timezone == :utc
+ internal_execute("SET SESSION timezone TO 'UTC'")
+ else
+ internal_execute("SET SESSION timezone TO DEFAULT")
+ end
end
# Returns the list of a table's column names, data types, and default values.
@@ -820,7 +1074,9 @@ def column_definitions(table_name)
query(<<~SQL, "SCHEMA")
SELECT a.attname, format_type(a.atttypid, a.atttypmod),
pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod,
- c.collname, col_description(a.attrelid, a.attnum) AS comment
+ c.collname, col_description(a.attrelid, a.attnum) AS comment,
+ #{supports_identity_columns? ? 'attidentity' : quote('')} AS identity,
+ #{supports_virtual_columns? ? 'attgenerated' : quote('')} as attgenerated
FROM pg_attribute a
LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
LEFT JOIN pg_type t ON a.atttypid = t.oid
@@ -831,37 +1087,37 @@ def column_definitions(table_name)
SQL
end
- def extract_table_ref_from_insert_sql(sql)
- sql[/into\s("[A-Za-z0-9_."\[\]\s]+"|[A-Za-z0-9_."\[\]]+)\s*/im]
- $1.strip if $1
- end
-
def arel_visitor
Arel::Visitors::PostgreSQL.new(self)
end
def build_statement_pool
- StatementPool.new(@connection, self.class.type_cast_config_to_integer(@config[:statement_limit]))
+ StatementPool.new(self, self.class.type_cast_config_to_integer(@config[:statement_limit]))
end
def can_perform_case_insensitive_comparison_for?(column)
- @case_insensitive_cache ||= {}
- @case_insensitive_cache[column.sql_type] ||= begin
- sql = <<~SQL
- SELECT exists(
- SELECT * FROM pg_proc
- WHERE proname = 'lower'
- AND proargtypes = ARRAY[#{quote column.sql_type}::regtype]::oidvector
- ) OR exists(
- SELECT * FROM pg_proc
- INNER JOIN pg_cast
- ON ARRAY[casttarget]::oidvector = proargtypes
- WHERE proname = 'lower'
- AND castsource = #{quote column.sql_type}::regtype
- )
- SQL
- execute_and_clear(sql, "SCHEMA", []) do |result|
- result.getvalue(0, 0)
+ # NOTE: citext is an exception. It is possible to perform a
+ # case-insensitive comparison using `LOWER()`, but it is
+ # unnecessary, as `citext` is case-insensitive by definition.
+ @case_insensitive_cache ||= { "citext" => false }
+ @case_insensitive_cache.fetch(column.sql_type) do
+ @case_insensitive_cache[column.sql_type] = begin
+ sql = <<~SQL
+ SELECT exists(
+ SELECT * FROM pg_proc
+ WHERE proname = 'lower'
+ AND proargtypes = ARRAY[#{quote column.sql_type}::regtype]::oidvector
+ ) OR exists(
+ SELECT * FROM pg_proc
+ INNER JOIN pg_cast
+ ON ARRAY[casttarget]::oidvector = proargtypes
+ WHERE proname = 'lower'
+ AND castsource = #{quote column.sql_type}::regtype
+ )
+ SQL
+ execute_and_clear(sql, "SCHEMA", [], allow_retry: true, materialize_transactions: false) do |result|
+ result.getvalue(0, 0)
+ end
end
end
end
@@ -871,23 +1127,30 @@ def add_pg_encoders
map[Integer] = PG::TextEncoder::Integer.new
map[TrueClass] = PG::TextEncoder::Boolean.new
map[FalseClass] = PG::TextEncoder::Boolean.new
- @connection.type_map_for_queries = map
+ @raw_connection.type_map_for_queries = map
end
def update_typemap_for_default_timezone
- if @default_timezone != ActiveRecord::Base.default_timezone && @timestamp_decoder
- decoder_class = ActiveRecord::Base.default_timezone == :utc ?
+ if @raw_connection && @mapped_default_timezone != default_timezone && @timestamp_decoder
+ decoder_class = default_timezone == :utc ?
PG::TextDecoder::TimestampUtc :
PG::TextDecoder::TimestampWithoutTimeZone
- @timestamp_decoder = decoder_class.new(@timestamp_decoder.to_h)
- @connection.type_map_for_results.add_coder(@timestamp_decoder)
- @default_timezone = ActiveRecord::Base.default_timezone
+ @timestamp_decoder = decoder_class.new(**@timestamp_decoder.to_h)
+ @raw_connection.type_map_for_results.add_coder(@timestamp_decoder)
+
+ @mapped_default_timezone = default_timezone
+
+ # if default timezone has changed, we need to reconfigure the connection
+ # (specifically, the session time zone)
+ reconfigure_connection_timezone
+
+ true
end
end
def add_pg_decoders
- @default_timezone = nil
+ @mapped_default_timezone = nil
@timestamp_decoder = nil
coders_by_name = {
@@ -909,15 +1172,13 @@ def add_pg_decoders
FROM pg_type as t
WHERE t.typname IN (%s)
SQL
- coders = execute_and_clear(query, "SCHEMA", []) do |result|
- result
- .map { |row| construct_coder(row, coders_by_name[row["typname"]]) }
- .compact
+ coders = execute_and_clear(query, "SCHEMA", [], allow_retry: true, materialize_transactions: false) do |result|
+ result.filter_map { |row| construct_coder(row, coders_by_name[row["typname"]]) }
end
map = PG::TypeMapByOid.new
coders.each { |coder| map.add_coder(coder) }
- @connection.type_map_for_results = map
+ @raw_connection.type_map_for_results = map
@type_map_for_results = PG::TypeMapByOid.new
@type_map_for_results.default_type_map = map
@@ -963,5 +1224,6 @@ def decode(value, tuple = nil, field = nil)
ActiveRecord::Type.register(:vector, OID::Vector, adapter: :postgresql)
ActiveRecord::Type.register(:xml, OID::Xml, adapter: :postgresql)
end
+ ActiveSupport.run_load_hooks(:active_record_postgresqladapter, PostgreSQLAdapter)
end
end
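The PostgreSQL adapter hunks above route failures through a SQLSTATE-code lookup: the five-character error code from `PG::Error` picks which `ActiveRecord` exception class wraps the failure. The sketch below is illustrative, not the Rails implementation — the lookup table and `translate_exception` helper are mine, though the SQLSTATE codes shown are PostgreSQL's documented values for each condition.

```ruby
# Illustrative sketch of SQLSTATE-driven error translation.
# Maps PostgreSQL condition codes to the exception class names seen in
# the adapter diff; unknown codes fall through to a generic error.
SQLSTATE_ERRORS = {
  "23505" => "RecordNotUnique",      # unique_violation
  "23503" => "InvalidForeignKey",    # foreign_key_violation
  "22001" => "ValueTooLong",         # string_data_right_truncation
  "23502" => "NotNullViolation",     # not_null_violation
  "40001" => "SerializationFailure", # serialization_failure
  "40P01" => "Deadlocked",           # deadlock_detected
}.freeze

def translate_exception(sqlstate)
  SQLSTATE_ERRORS.fetch(sqlstate, "StatementInvalid")
end
```

The real adapter additionally threads `sql:`, `binds:`, and the new `connection_pool:` keyword into each exception constructor, so the raised error can report which pool it came from.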
diff --git a/activerecord/lib/active_record/connection_adapters/schema_cache.rb b/activerecord/lib/active_record/connection_adapters/schema_cache.rb
index 84a559562b..1603b3b5d2 100644
--- a/activerecord/lib/active_record/connection_adapters/schema_cache.rb
+++ b/activerecord/lib/active_record/connection_adapters/schema_cache.rb
@@ -4,8 +4,234 @@
module ActiveRecord
module ConnectionAdapters
+ class SchemaReflection
+ class << self
+ attr_accessor :use_schema_cache_dump
+ attr_accessor :check_schema_cache_dump_version
+ end
+
+ self.use_schema_cache_dump = true
+ self.check_schema_cache_dump_version = true
+
+ def initialize(cache_path, cache = nil)
+ @cache = cache
+ @cache_path = cache_path
+ end
+
+ def set_schema_cache(cache)
+ @cache = cache
+ end
+
+ def clear!
+ @cache = empty_cache
+
+ nil
+ end
+
+ def load!(connection)
+ cache(connection)
+
+ self
+ end
+
+ def primary_keys(connection, table_name)
+ cache(connection).primary_keys(connection, table_name)
+ end
+
+ def data_source_exists?(connection, name)
+ cache(connection).data_source_exists?(connection, name)
+ end
+
+ def add(connection, name)
+ cache(connection).add(connection, name)
+ end
+
+ def data_sources(connection, name)
+ cache(connection).data_sources(connection, name)
+ end
+
+ def columns(connection, table_name)
+ cache(connection).columns(connection, table_name)
+ end
+
+ def columns_hash(connection, table_name)
+ cache(connection).columns_hash(connection, table_name)
+ end
+
+ def columns_hash?(connection, table_name)
+ cache(connection).columns_hash?(connection, table_name)
+ end
+
+ def indexes(connection, table_name)
+ cache(connection).indexes(connection, table_name)
+ end
+
+ def database_version(connection) # :nodoc:
+ cache(connection).database_version(connection)
+ end
+
+ def version(connection)
+ cache(connection).version(connection)
+ end
+
+ def size(connection)
+ cache(connection).size
+ end
+
+ def clear_data_source_cache!(connection, name)
+ return if @cache.nil? && !possible_cache_available?
+
+ cache(connection).clear_data_source_cache!(connection, name)
+ end
+
+ def cached?(table_name)
+ if @cache.nil?
+ # If `check_schema_cache_dump_version` is enabled we can't load
+ # the schema cache dump without connecting to the database.
+ unless self.class.check_schema_cache_dump_version
+ @cache = load_cache(nil)
+ end
+ end
+
+ @cache&.cached?(table_name)
+ end
+
+ def dump_to(connection, filename)
+ fresh_cache = empty_cache
+ fresh_cache.add_all(connection)
+ fresh_cache.dump_to(filename)
+
+ @cache = fresh_cache
+ end
+
+ private
+ def empty_cache
+ new_cache = SchemaCache.allocate
+ new_cache.send(:initialize)
+ new_cache
+ end
+
+ def cache(connection)
+ @cache ||= load_cache(connection) || empty_cache
+ end
+
+ def possible_cache_available?
+ self.class.use_schema_cache_dump &&
+ @cache_path &&
+ File.file?(@cache_path)
+ end
+
+ def load_cache(connection)
+ # Can't load if schema dumps are disabled
+ return unless possible_cache_available?
+
+ # Check we can find one
+ return unless new_cache = SchemaCache._load_from(@cache_path)
+
+ if self.class.check_schema_cache_dump_version
+ begin
+ current_version = connection.schema_version
+
+ if new_cache.version(connection) != current_version
+ warn "Ignoring #{@cache_path} because it has expired. The current schema version is #{current_version}, but the one in the schema cache file is #{new_cache.schema_version}."
+ return
+ end
+ rescue ActiveRecordError => error
+ warn "Failed to validate the schema cache because of #{error.class}: #{error.message}"
+ return
+ end
+ end
+
+ new_cache
+ end
+ end
+
+ class BoundSchemaReflection
+ def initialize(abstract_schema_reflection, connection)
+ @schema_reflection = abstract_schema_reflection
+ @connection = connection
+ end
+
+ def clear!
+ @schema_reflection.clear!
+ end
+
+ def load!
+ @schema_reflection.load!(@connection)
+ end
+
+ def cached?(table_name)
+ @schema_reflection.cached?(table_name)
+ end
+
+ def primary_keys(table_name)
+ @schema_reflection.primary_keys(@connection, table_name)
+ end
+
+ def data_source_exists?(name)
+ @schema_reflection.data_source_exists?(@connection, name)
+ end
+
+ def add(name)
+ @schema_reflection.add(@connection, name)
+ end
+
+ def data_sources(name)
+ @schema_reflection.data_sources(@connection, name)
+ end
+
+ def columns(table_name)
+ @schema_reflection.columns(@connection, table_name)
+ end
+
+ def columns_hash(table_name)
+ @schema_reflection.columns_hash(@connection, table_name)
+ end
+
+ def columns_hash?(table_name)
+ @schema_reflection.columns_hash?(@connection, table_name)
+ end
+
+ def indexes(table_name)
+ @schema_reflection.indexes(@connection, table_name)
+ end
+
+ def database_version # :nodoc:
+ @schema_reflection.database_version(@connection)
+ end
+
+ def version
+ @schema_reflection.version(@connection)
+ end
+
+ def size
+ @schema_reflection.size(@connection)
+ end
+
+ def clear_data_source_cache!(name)
+ @schema_reflection.clear_data_source_cache!(@connection, name)
+ end
+
+ def dump_to(filename)
+ @schema_reflection.dump_to(@connection, filename)
+ end
+ end
+
+ # = Active Record Connection Adapters Schema Cache
class SchemaCache
- def self.load_from(filename)
+ class << self
+ def new(connection)
+ BoundSchemaReflection.new(SchemaReflection.new(nil), connection)
+ end
+ deprecate new: "use ActiveRecord::ConnectionAdapters::SchemaReflection instead", deprecator: ActiveRecord.deprecator
+
+ def load_from(filename) # :nodoc:
+ BoundSchemaReflection.new(SchemaReflection.new(filename), nil)
+ end
+ deprecate load_from: "use ActiveRecord::ConnectionAdapters::SchemaReflection instead", deprecator: ActiveRecord.deprecator
+ end
+
+ def self._load_from(filename) # :nodoc:
return unless File.file?(filename)
read(filename) do |file|
@@ -32,20 +258,17 @@ def self.read(filename, &block)
end
private_class_method :read
- attr_reader :version
- attr_accessor :connection
-
- def initialize(conn)
- @connection = conn
-
+ def initialize
@columns = {}
@columns_hash = {}
@primary_keys = {}
@data_sources = {}
@indexes = {}
+ @database_version = nil
+ @version = nil
end
- def initialize_dup(other)
+ def initialize_dup(other) # :nodoc:
super
@columns = @columns.dup
@columns_hash = @columns_hash.dup
@@ -54,60 +277,71 @@ def initialize_dup(other)
@indexes = @indexes.dup
end
- def encode_with(coder)
- reset_version!
-
- coder["columns"] = @columns
- coder["primary_keys"] = @primary_keys
- coder["data_sources"] = @data_sources
- coder["indexes"] = @indexes
+ def encode_with(coder) # :nodoc:
+ coder["columns"] = @columns.sort.to_h
+ coder["primary_keys"] = @primary_keys.sort.to_h
+ coder["data_sources"] = @data_sources.sort.to_h
+ coder["indexes"] = @indexes.sort.to_h
coder["version"] = @version
- coder["database_version"] = database_version
+ coder["database_version"] = @database_version
end
def init_with(coder)
@columns = coder["columns"]
+ @columns_hash = coder["columns_hash"]
@primary_keys = coder["primary_keys"]
@data_sources = coder["data_sources"]
@indexes = coder["indexes"] || {}
@version = coder["version"]
@database_version = coder["database_version"]
- derive_columns_hash_and_deduplicate_values
+ unless coder["deduplicated"]
+ derive_columns_hash_and_deduplicate_values
+ end
end
- def primary_keys(table_name)
+ def cached?(table_name)
+ @columns.key?(table_name)
+ end
+
+ def primary_keys(connection, table_name)
@primary_keys.fetch(table_name) do
- if data_source_exists?(table_name)
+ if data_source_exists?(connection, table_name)
@primary_keys[deep_deduplicate(table_name)] = deep_deduplicate(connection.primary_key(table_name))
end
end
end
# A cached lookup for table existence.
- def data_source_exists?(name)
- prepare_data_sources if @data_sources.empty?
+ def data_source_exists?(connection, name)
+ return if ignored_table?(name)
+ prepare_data_sources(connection) if @data_sources.empty?
return @data_sources[name] if @data_sources.key? name
@data_sources[deep_deduplicate(name)] = connection.data_source_exists?(name)
end
# Add internal cache for table with +table_name+.
- def add(table_name)
- if data_source_exists?(table_name)
- primary_keys(table_name)
- columns(table_name)
- columns_hash(table_name)
- indexes(table_name)
+ def add(connection, table_name)
+ if data_source_exists?(connection, table_name)
+ primary_keys(connection, table_name)
+ columns(connection, table_name)
+ columns_hash(connection, table_name)
+ indexes(connection, table_name)
end
end
- def data_sources(name)
+ def data_sources(_connection, name) # :nodoc:
@data_sources[name]
end
+ deprecate data_sources: :data_source_exists?, deprecator: ActiveRecord.deprecator
# Get the columns for a table
- def columns(table_name)
+ def columns(connection, table_name)
+ if ignored_table?(table_name)
+ raise ActiveRecord::StatementInvalid, "Table '#{table_name}' doesn't exist"
+ end
+
@columns.fetch(table_name) do
@columns[deep_deduplicate(table_name)] = deep_deduplicate(connection.columns(table_name))
end
@@ -115,36 +349,37 @@ def columns(table_name)
# Get the columns for a table as a hash, key is the column name
# value is the column object.
- def columns_hash(table_name)
+ def columns_hash(connection, table_name)
@columns_hash.fetch(table_name) do
- @columns_hash[deep_deduplicate(table_name)] = columns(table_name).index_by(&:name).freeze
+ @columns_hash[deep_deduplicate(table_name)] = columns(connection, table_name).index_by(&:name).freeze
end
end
# Checks whether the columns hash is already cached for a table.
- def columns_hash?(table_name)
+ def columns_hash?(connection, table_name)
@columns_hash.key?(table_name)
end
- def indexes(table_name)
+ def indexes(connection, table_name)
@indexes.fetch(table_name) do
- @indexes[deep_deduplicate(table_name)] = deep_deduplicate(connection.indexes(table_name))
+ if data_source_exists?(connection, table_name)
+ @indexes[deep_deduplicate(table_name)] = deep_deduplicate(connection.indexes(table_name))
+ else
+ []
+ end
end
end
- def database_version # :nodoc:
+ def database_version(connection) # :nodoc:
@database_version ||= connection.get_database_version
end
- # Clears out internal caches
- def clear!
- @columns.clear
- @columns_hash.clear
- @primary_keys.clear
- @data_sources.clear
- @indexes.clear
- @version = nil
- @database_version = nil
+ def version(connection)
+ @version ||= connection.schema_version
+ end
+
+ def schema_version
+ @version
end
def size
@@ -152,7 +387,7 @@ def size
end
# Clear out internal caches for the data source +name+.
- def clear_data_source_cache!(name)
+ def clear_data_source_cache!(_connection, name)
@columns.delete name
@columns_hash.delete name
@primary_keys.delete name
@@ -160,9 +395,16 @@ def clear_data_source_cache!(name)
@indexes.delete name
end
+ def add_all(connection) # :nodoc:
+ tables_to_cache(connection).each do |table|
+ add(connection, table)
+ end
+
+ version(connection)
+ database_version(connection)
+ end
+
def dump_to(filename)
- clear!
- connection.data_sources.each { |table| add(table) }
open(filename) { |f|
if filename.include?(".dump")
f.write(Marshal.dump(self))
@@ -172,13 +414,11 @@ def dump_to(filename)
}
end
- def marshal_dump
- reset_version!
-
- [@version, @columns, {}, @primary_keys, @data_sources, @indexes, database_version]
+ def marshal_dump # :nodoc:
+ [@version, @columns, {}, @primary_keys, @data_sources, @indexes, @database_version]
end
- def marshal_load(array)
+ def marshal_load(array) # :nodoc:
@version, @columns, _columns_hash, @primary_keys, @data_sources, @indexes, @database_version = array
@indexes ||= {}
@@ -186,8 +426,16 @@ def marshal_load(array)
end
private
- def reset_version!
- @version = connection.migration_context.current_version
+ def tables_to_cache(connection)
+ connection.data_sources.reject do |table|
+ ignored_table?(table)
+ end
+ end
+
+ def ignored_table?(table_name)
+ ActiveRecord.schema_cache_ignored_tables.any? do |ignored|
+ ignored === table_name
+ end
end
def derive_columns_hash_and_deduplicate_values
@@ -198,51 +446,32 @@ def derive_columns_hash_and_deduplicate_values
@indexes = deep_deduplicate(@indexes)
end
- if RUBY_VERSION < "2.7"
- def deep_deduplicate(value)
- case value
- when Hash
- value.transform_keys { |k| deep_deduplicate(k) }.transform_values { |v| deep_deduplicate(v) }
- when Array
- value.map { |i| deep_deduplicate(i) }
- when String
- if value.tainted?
- # Ruby 2.6 and 2.7 have slightly different implementations of the String#-@ method.
- # In Ruby 2.6, the receiver of the String#-@ method is modified under certain
- # circumstances, and this was later identified as a bug
- # (https://bugs.ruby-lang.org/issues/15926) and only fixed in Ruby 2.7.
- value = value.dup
- end
- -value
- when Deduplicable
- -value
- else
- value
- end
- end
- else
- def deep_deduplicate(value)
- case value
- when Hash
- value.transform_keys { |k| deep_deduplicate(k) }.transform_values { |v| deep_deduplicate(v) }
- when Array
- value.map { |i| deep_deduplicate(i) }
- when String, Deduplicable
- -value
- else
- value
- end
+ def deep_deduplicate(value)
+ case value
+ when Hash
+ value.transform_keys { |k| deep_deduplicate(k) }.transform_values { |v| deep_deduplicate(v) }
+ when Array
+ value.map { |i| deep_deduplicate(i) }
+ when String, Deduplicable
+ -value
+ else
+ value
end
end
- def prepare_data_sources
- connection.data_sources.each { |source| @data_sources[source] = true }
+ def prepare_data_sources(connection)
+ tables_to_cache(connection).each do |source|
+ @data_sources[source] = true
+ end
end
def open(filename)
+ FileUtils.mkdir_p(File.dirname(filename))
+
File.atomic_write(filename) do |file|
if File.extname(filename) == ".gz"
zipper = Zlib::GzipWriter.new file
+ zipper.mtime = 0
yield zipper
zipper.flush
zipper.close
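The `deep_deduplicate` helper consolidated in the schema-cache hunk above interns every string nested inside the cached hashes and arrays via `String#-@`, so identical column names and types share one frozen object. A self-contained version of that pattern:

```ruby
# Recursively intern strings inside nested hashes/arrays so equal
# values share a single frozen object (mirrors the schema-cache diff).
def deep_deduplicate(value)
  case value
  when Hash
    value.transform_keys { |k| deep_deduplicate(k) }
         .transform_values { |v| deep_deduplicate(v) }
  when Array
    value.map { |i| deep_deduplicate(i) }
  when String
    -value # returns the interned, frozen copy
  else
    value
  end
end

deduped = deep_deduplicate({ "name" => ["id".dup, "id".dup] })
deduped["name"][0].equal?(deduped["name"][1]) # => true (same object)
```

This matters for a schema cache because the same identifiers ("id", "varchar", table names) recur across many tables; interning them keeps a loaded cache dump compact.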
diff --git a/activerecord/lib/active_record/connection_adapters/sqlite3/column.rb b/activerecord/lib/active_record/connection_adapters/sqlite3/column.rb
new file mode 100644
index 0000000000..93d3d08685
--- /dev/null
+++ b/activerecord/lib/active_record/connection_adapters/sqlite3/column.rb
@@ -0,0 +1,49 @@
+# frozen_string_literal: true
+
+module ActiveRecord
+ module ConnectionAdapters
+ module SQLite3
+ class Column < ConnectionAdapters::Column # :nodoc:
+ attr_reader :rowid
+
+ def initialize(*, auto_increment: nil, rowid: false, **)
+ super
+ @auto_increment = auto_increment
+ @rowid = rowid
+ end
+
+ def auto_increment?
+ @auto_increment
+ end
+
+ def auto_incremented_by_db?
+ auto_increment? || rowid
+ end
+
+ def init_with(coder)
+ @auto_increment = coder["auto_increment"]
+ super
+ end
+
+ def encode_with(coder)
+ coder["auto_increment"] = @auto_increment
+ super
+ end
+
+ def ==(other)
+ other.is_a?(Column) &&
+ super &&
+ auto_increment? == other.auto_increment?
+ end
+ alias :eql? :==
+
+ def hash
+ Column.hash ^
+ super.hash ^
+ auto_increment?.hash ^
+ rowid.hash
+ end
+ end
+ end
+ end
+end
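The new `SQLite3::Column` above follows the standard Ruby value-object contract: override `==`, alias `eql?` to it, and fold the same fields into `#hash` so equal instances work as hash keys. A minimal standalone example of that contract (the `Point` class is illustrative, not part of the diff):

```ruby
# Value-object equality contract: ==, eql?, and hash must agree on the
# same set of fields, or instances misbehave as Hash keys.
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end

  def ==(other)
    other.is_a?(Point) && x == other.x && y == other.y
  end
  alias eql? ==

  def hash
    [Point, x, y].hash
  end
end
```

With both `eql?` and `hash` defined consistently, `{ Point.new(1, 2) => :a }[Point.new(1, 2)]` finds the entry.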
diff --git a/activerecord/lib/active_record/connection_adapters/sqlite3/database_statements.rb b/activerecord/lib/active_record/connection_adapters/sqlite3/database_statements.rb
index 582f9f4e0a..e57f6eac45 100644
--- a/activerecord/lib/active_record/connection_adapters/sqlite3/database_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/sqlite3/database_statements.rb
@@ -15,41 +15,25 @@ def write_query?(sql) # :nodoc:
!READ_QUERY.match?(sql.b)
end
- def explain(arel, binds = [])
- sql = "EXPLAIN QUERY PLAN #{to_sql(arel, binds)}"
- SQLite3::ExplainPrettyPrinter.new.pp(exec_query(sql, "EXPLAIN", []))
+ def explain(arel, binds = [], _options = [])
+ sql = "EXPLAIN QUERY PLAN " + to_sql(arel, binds)
+ result = internal_exec_query(sql, "EXPLAIN", [])
+ SQLite3::ExplainPrettyPrinter.new.pp(result)
end
- def execute(sql, name = nil) #:nodoc:
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
+ def internal_exec_query(sql, name = nil, binds = [], prepare: false, async: false) # :nodoc:
+ sql = transform_query(sql)
+ check_if_write_query(sql)
- materialize_transactions
- mark_transaction_written_if_write(sql)
-
- log(sql, name) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.execute(sql)
- end
- end
- end
-
- def exec_query(sql, name = nil, binds = [], prepare: false)
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
-
- materialize_transactions
mark_transaction_written_if_write(sql)
type_casted_binds = type_casted_binds(binds)
- log(sql, name, binds, type_casted_binds) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
+ log(sql, name, binds, type_casted_binds, async: async) do
+ with_raw_connection do |conn|
# Don't cache statements if they are not prepared
unless prepare
- stmt = @connection.prepare(sql)
+ stmt = conn.prepare(sql)
begin
cols = stmt.columns
unless without_prepared_statement?(binds)
@@ -60,76 +44,107 @@ def exec_query(sql, name = nil, binds = [], prepare: false)
stmt.close
end
else
- stmt = @statements[sql] ||= @connection.prepare(sql)
+ stmt = @statements[sql] ||= conn.prepare(sql)
cols = stmt.columns
stmt.reset!
stmt.bind_params(type_casted_binds)
records = stmt.to_a
end
+ verified!
build_result(columns: cols, rows: records)
end
end
end
- def exec_delete(sql, name = "SQL", binds = [])
- exec_query(sql, name, binds)
- @connection.changes
+ def exec_delete(sql, name = "SQL", binds = []) # :nodoc:
+ internal_exec_query(sql, name, binds)
+ @raw_connection.changes
end
alias :exec_update :exec_delete
- def begin_isolated_db_transaction(isolation) #:nodoc
+ def begin_isolated_db_transaction(isolation) # :nodoc:
raise TransactionIsolationError, "SQLite3 only supports the `read_uncommitted` transaction isolation level" if isolation != :read_uncommitted
raise StandardError, "You need to enable the shared-cache mode in SQLite mode before attempting to change the transaction isolation level" unless shared_cache?
- Thread.current.thread_variable_set("read_uncommitted", @connection.get_first_value("PRAGMA read_uncommitted"))
- @connection.read_uncommitted = true
- begin_db_transaction
+ with_raw_connection(allow_retry: true, materialize_transactions: false) do |conn|
+ ActiveSupport::IsolatedExecutionState[:active_record_read_uncommitted] = conn.get_first_value("PRAGMA read_uncommitted")
+ conn.read_uncommitted = true
+ begin_db_transaction
+ end
end
- def begin_db_transaction #:nodoc:
- log("begin transaction", "TRANSACTION") { @connection.transaction }
+ def begin_db_transaction # :nodoc:
+ log("begin transaction", "TRANSACTION") do
+ with_raw_connection(allow_retry: true, materialize_transactions: false) do |conn|
+ result = conn.transaction
+ verified!
+ result
+ end
+ end
end
- def commit_db_transaction #:nodoc:
- log("commit transaction", "TRANSACTION") { @connection.commit }
+ def commit_db_transaction # :nodoc:
+ log("commit transaction", "TRANSACTION") do
+ with_raw_connection(allow_retry: true, materialize_transactions: false) do |conn|
+ conn.commit
+ end
+ end
reset_read_uncommitted
end
- def exec_rollback_db_transaction #:nodoc:
- log("rollback transaction", "TRANSACTION") { @connection.rollback }
+ def exec_rollback_db_transaction # :nodoc:
+ log("rollback transaction", "TRANSACTION") do
+ with_raw_connection(allow_retry: true, materialize_transactions: false) do |conn|
+ conn.rollback
+ end
+ end
reset_read_uncommitted
end
+ # https://stackoverflow.com/questions/17574784
+ # https://www.sqlite.org/lang_datefunc.html
+ HIGH_PRECISION_CURRENT_TIMESTAMP = Arel.sql("STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')").freeze # :nodoc:
+ private_constant :HIGH_PRECISION_CURRENT_TIMESTAMP
+
+ def high_precision_current_timestamp
+ HIGH_PRECISION_CURRENT_TIMESTAMP
+ end
+
private
+ def raw_execute(sql, name, async: false, allow_retry: false, materialize_transactions: false)
+ log(sql, name, async: async) do
+ with_raw_connection(allow_retry: allow_retry, materialize_transactions: materialize_transactions) do |conn|
+ result = conn.execute(sql)
+ verified!
+ result
+ end
+ end
+ end
+
def reset_read_uncommitted
- read_uncommitted = Thread.current.thread_variable_get("read_uncommitted")
+ read_uncommitted = ActiveSupport::IsolatedExecutionState[:active_record_read_uncommitted]
return unless read_uncommitted
- @connection.read_uncommitted = read_uncommitted
+ @raw_connection&.read_uncommitted = read_uncommitted
end
def execute_batch(statements, name = nil)
+ statements = statements.map { |sql| transform_query(sql) }
sql = combine_multi_statements(statements)
- if preventing_writes? && write_query?(sql)
- raise ActiveRecord::ReadOnlyError, "Write query attempted while in readonly mode: #{sql}"
- end
-
- materialize_transactions
+ check_if_write_query(sql)
mark_transaction_written_if_write(sql)
log(sql, name) do
- ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
- @connection.execute_batch2(sql)
+ with_raw_connection do |conn|
+ result = conn.execute_batch2(sql)
+ verified!
+ result
end
end
end
- def last_inserted_id(result)
- @connection.last_insert_row_id
- end
-
def build_fixture_statements(fixture_set)
fixture_set.flat_map do |table_name, fixtures|
next if fixtures.empty?
@@ -140,6 +155,10 @@ def build_fixture_statements(fixture_set)
def build_truncate_statement(table_name)
"DELETE FROM #{quote_table_name(table_name)}"
end
+
+ def returning_column_values(result)
+ result.rows.first
+ end
end
end
end
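The adapter changes above swap `Thread.current.thread_variable_set("read_uncommitted", ...)` for `ActiveSupport::IsolatedExecutionState[:active_record_read_uncommitted]`. A minimal stdlib-only sketch (no ActiveSupport; the key name is illustrative) of why that matters: thread variables are visible to every fiber running on a thread, while fiber-local storage is scoped to one fiber, so fiber-based servers need the isolated form to avoid leaking the saved PRAGMA value between requests. Note that `IsolatedExecutionState` picks thread or fiber scope based on configuration; this sketch shows only the fiber-local case.

```ruby
# Thread variables (thread_variable_set) are visible to every fiber running
# on the thread; fiber-locals (Thread.current[]) are scoped to one fiber.
Thread.current.thread_variable_set(:read_uncommitted, 1)
Thread.current[:read_uncommitted] = 1

seen = {}
Fiber.new do
  seen[:thread_var]  = Thread.current.thread_variable_get(:read_uncommitted)
  seen[:fiber_local] = Thread.current[:read_uncommitted]
end.resume

seen[:thread_var]   # => 1   (thread variable leaks into the other fiber)
seen[:fiber_local]  # => nil (fiber-local state stays isolated)
```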
diff --git a/activerecord/lib/active_record/connection_adapters/sqlite3/quoting.rb b/activerecord/lib/active_record/connection_adapters/sqlite3/quoting.rb
index 9b74a774e5..58d1c1f49c 100644
--- a/activerecord/lib/active_record/connection_adapters/sqlite3/quoting.rb
+++ b/activerecord/lib/active_record/connection_adapters/sqlite3/quoting.rb
@@ -4,8 +4,11 @@ module ActiveRecord
module ConnectionAdapters
module SQLite3
module Quoting # :nodoc:
+ QUOTED_COLUMN_NAMES = Concurrent::Map.new # :nodoc:
+ QUOTED_TABLE_NAMES = Concurrent::Map.new # :nodoc:
+
def quote_string(s)
- @connection.class.quote(s)
+ ::SQLite3::Database.quote(s)
end
def quote_table_name_for_assignment(table, attr)
@@ -13,11 +16,11 @@ def quote_table_name_for_assignment(table, attr)
end
def quote_table_name(name)
- self.class.quoted_table_names[name] ||= super.gsub(".", "\".\"").freeze
+ QUOTED_TABLE_NAMES[name] ||= super.gsub(".", "\".\"").freeze
end
def quote_column_name(name)
- self.class.quoted_column_names[name] ||= %Q("#{super.gsub('"', '""')}")
+ QUOTED_COLUMN_NAMES[name] ||= %Q("#{super.gsub('"', '""')}")
end
def quoted_time(value)
@@ -45,6 +48,34 @@ def unquoted_false
0
end
+ def quote_default_expression(value, column) # :nodoc:
+ if value.is_a?(Proc)
+ value = value.call
+ if value.match?(/\A\w+\(.*\)\z/)
+ "(#{value})"
+ else
+ value
+ end
+ else
+ super
+ end
+ end
+
+ def type_cast(value) # :nodoc:
+ case value
+ when BigDecimal
+ value.to_f
+ when String
+ if value.encoding == Encoding::ASCII_8BIT
+ super(value.encode(Encoding::UTF_8))
+ else
+ super
+ end
+ else
+ super
+ end
+ end
+
def column_name_matcher
COLUMN_NAME
end
@@ -58,7 +89,7 @@ def column_name_with_order_matcher
(
(?:
# "table_name"."column_name" | function(one or no argument)
- ((?:\w+\.|"\w+"\.)?(?:\w+|"\w+")) | \w+\((?:|\g<2>)\)
+ ((?:\w+\.|"\w+"\.)?(?:\w+|"\w+") | \w+\((?:|\g<2>)\))
)
(?:(?:\s+AS)?\s+(?:\w+|"\w+"))?
)
@@ -71,8 +102,9 @@ def column_name_with_order_matcher
(
(?:
# "table_name"."column_name" | function(one or no argument)
- ((?:\w+\.|"\w+"\.)?(?:\w+|"\w+")) | \w+\((?:|\g<2>)\)
+ ((?:\w+\.|"\w+"\.)?(?:\w+|"\w+") | \w+\((?:|\g<2>)\))
)
+ (?:\s+COLLATE\s+(?:\w+|"\w+"))?
(?:\s+ASC|\s+DESC)?
)
(?:\s*,\s*\g<1>)*
@@ -80,22 +112,6 @@ def column_name_with_order_matcher
/ix
private_constant :COLUMN_NAME, :COLUMN_NAME_WITH_ORDER
-
- private
- def _type_cast(value)
- case value
- when BigDecimal
- value.to_f
- when String
- if value.encoding == Encoding::ASCII_8BIT
- super(value.encode(Encoding::UTF_8))
- else
- super
- end
- else
- super
- end
- end
end
end
end
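The quoting changes move the memoized name caches from per-adapter-class hashes onto module-level `Concurrent::Map`s. A rough stand-in (a plain Hash instead of `Concurrent::Map`, so not thread-safe; method bodies mirror the diff) showing the quoting rules themselves: an embedded `"` in an identifier is escaped by doubling it, and a dotted table name is split into separately quoted parts. Unlike the real adapter, which keeps a second cache for the post-`gsub` table names, this sketch recomputes the split on every call.

```ruby
QUOTED_COLUMN_NAMES = {} # stand-in for Concurrent::Map; not thread-safe

def quote_column_name(name)
  # SQLite escapes a double quote inside an identifier by doubling it.
  QUOTED_COLUMN_NAMES[name] ||= %Q("#{name.to_s.gsub('"', '""')}")
end

def quote_table_name(name)
  # "schema.table" must become "schema"."table", not "schema.table".
  quote_column_name(name).gsub(".", "\".\"")
end

quote_column_name('say "hi"')  # => %q("say ""hi""")
quote_table_name("main.users") # => %q("main"."users")
```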
diff --git a/activerecord/lib/active_record/connection_adapters/sqlite3/schema_definitions.rb b/activerecord/lib/active_record/connection_adapters/sqlite3/schema_definitions.rb
index c9855019c1..71a9b44bb7 100644
--- a/activerecord/lib/active_record/connection_adapters/sqlite3/schema_definitions.rb
+++ b/activerecord/lib/active_record/connection_adapters/sqlite3/schema_definitions.rb
@@ -3,7 +3,14 @@
module ActiveRecord
module ConnectionAdapters
module SQLite3
+ # = Active Record SQLite3 Adapter \Table Definition
class TableDefinition < ActiveRecord::ConnectionAdapters::TableDefinition
+ def change_column(column_name, type, **options)
+ name = column_name.to_s
+ @columns_hash[name] = nil
+ column(name, type, **options)
+ end
+
def references(*args, **options)
super(*args, type: :integer, **options)
end
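The new `change_column` on the SQLite `TableDefinition` nils out the cached column entry before calling `column` again, so a redefinition starts from a clean slot instead of colliding with the existing definition. A toy model of that reset-then-redefine pattern (class and `||=` no-op behavior are invented for illustration; the real `TableDefinition#column` handles duplicates differently):

```ruby
class MiniTableDefinition
  def initialize
    @columns_hash = {}
  end

  # In this toy model, adding a column that already exists is a no-op.
  def column(name, type, **options)
    @columns_hash[name] ||= { type: type, **options }
    self
  end

  # Mirrors the diff: clear the slot, then define the column fresh.
  def change_column(name, type, **options)
    @columns_hash[name] = nil
    column(name, type, **options)
  end

  def [](name)
    @columns_hash[name]
  end
end

td = MiniTableDefinition.new
td.column("age", :integer, null: false)
td.column("age", :bigint)        # ignored: column already defined
td["age"]                        # => { type: :integer, null: false }
td.change_column("age", :bigint) # redefines from scratch
td["age"]                        # => { type: :bigint }
```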
diff --git a/activerecord/lib/active_record/connection_adapters/sqlite3/schema_statements.rb b/activerecord/lib/active_record/connection_adapters/sqlite3/schema_statements.rb
index d9698b01ca..286cedd155 100644
--- a/activerecord/lib/active_record/connection_adapters/sqlite3/schema_statements.rb
+++ b/activerecord/lib/active_record/connection_adapters/sqlite3/schema_statements.rb
@@ -6,7 +6,7 @@ module SQLite3
module SchemaStatements # :nodoc:
# Returns an array of indexes for the given table.
def indexes(table_name)
- exec_query("PRAGMA index_list(#{quote_table_name(table_name)})", "SCHEMA").map do |row|
+ internal_exec_query("PRAGMA index_list(#{quote_table_name(table_name)})", "SCHEMA").filter_map do |row|
# Indexes SQLite creates implicitly for internal use start with "sqlite_".
# See https://www.sqlite.org/fileformat2.html#intschema
next if row["name"].start_with?("sqlite_")
@@ -21,9 +21,9 @@ def indexes(table_name)
WHERE name = #{quote(row['name'])} AND type = 'index'
SQL
- /\bON\b\s*"?(\w+?)"?\s*\((?<expressions>.+?)\)(?:\s*WHERE\b\s*(?<where>.+))?\z/i =~ index_sql
+ /\bON\b\s*"?(\w+?)"?\s*\((?<expressions>.+?)\)(?:\s*WHERE\b\s*(?<where>.+))?(?:\s*\/\*.*\*\/)?\z/i =~ index_sql
- columns = exec_query("PRAGMA index_info(#{quote(row['name'])})", "SCHEMA").map do |col|
+ columns = internal_exec_query("PRAGMA index_info(#{quote(row['name'])})", "SCHEMA").map do |col|
col["name"]
end
@@ -49,7 +49,7 @@ def indexes(table_name)
where: where,
orders: orders
)
- end.compact
+ end
end
def add_foreign_key(from_table, to_table, **options)
@@ -60,6 +60,8 @@ def add_foreign_key(from_table, to_table, **options)
end
def remove_foreign_key(from_table, to_table = nil, **options)
+ return if options.delete(:if_exists) == true && !foreign_key_exists?(from_table, to_table)
+
to_table ||= options[:to_table]
options = options.except(:name, :to_table, :validate)
foreign_keys = foreign_keys(from_table)
@@ -82,11 +84,11 @@ def check_constraints(table_name)
table_sql = query_value(<<-SQL, "SCHEMA")
SELECT sql
FROM sqlite_master
- WHERE name = #{quote_table_name(table_name)} AND type = 'table'
+ WHERE name = #{quote(table_name)} AND type = 'table'
UNION ALL
SELECT sql
FROM sqlite_temp_master
- WHERE name = #{quote_table_name(table_name)} AND type = 'table'
+ WHERE name = #{quote(table_name)} AND type = 'table'
SQL
table_sql.to_s.scan(/CONSTRAINT\s+(?<name>\w+)\s+CHECK\s+\((?<expression>(:?[^()]|\(\g<expression>\))+)\)/i).map do |name, expression|
@@ -100,7 +102,9 @@ def add_check_constraint(table_name, expression, **options)
end
end
- def remove_check_constraint(table_name, expression = nil, **options)
+ def remove_check_constraint(table_name, expression = nil, if_exists: false, **options)
+ return if if_exists && !check_constraint_exists?(table_name, **options)
+
check_constraints = check_constraints(table_name)
chk_name_to_delete = check_constraint_for!(table_name, expression: expression, **options).name
check_constraints.delete_if { |chk| chk.name == chk_name_to_delete }
@@ -111,9 +115,13 @@ def create_schema_dumper(options)
SQLite3::SchemaDumper.create(self, options)
end
+ def schema_creation # :nodoc
+ SQLite3::SchemaCreation.new(self)
+ end
+
private
- def schema_creation
- SQLite3::SchemaCreation.new(self)
+ def valid_table_definition_options
+ super + [:rename]
end
def create_table_definition(name, **options)
@@ -124,21 +132,34 @@ def validate_index_length!(table_name, new_name, internal = false)
super unless internal
end
- def new_column_from_field(table_name, field)
- default = \
- case field["dflt_value"]
- when /^null$/i
- nil
- when /^'(.*)'$/m
- $1.gsub("''", "'")
- when /^"(.*)"$/m
- $1.gsub('""', '"')
- else
- field["dflt_value"]
- end
+ def new_column_from_field(table_name, field, definitions)
+ default = field["dflt_value"]
type_metadata = fetch_type_metadata(field["type"])
- Column.new(field["name"], default, type_metadata, field["notnull"].to_i == 0, collation: field["collation"])
+ default_value = extract_value_from_default(default)
+ default_function = extract_default_function(default_value, default)
+ rowid = is_column_the_rowid?(field, definitions)
+
+ Column.new(
+ field["name"],
+ default_value,
+ type_metadata,
+ field["notnull"].to_i == 0,
+ default_function,
+ collation: field["collation"],
+ auto_increment: field["auto_increment"],
+ rowid: rowid
+ )
+ end
+
+ INTEGER_REGEX = /integer/i
+ # if a rowid table has a primary key that consists of a single column
+ # and the declared type of that column is "INTEGER" in any mixture of upper and lower case,
+ # then the column becomes an alias for the rowid.
+ def is_column_the_rowid?(field, column_definitions)
+ return false unless INTEGER_REGEX.match?(field["type"]) && field["pk"] == 1
+ # is the primary key a single column?
+ column_definitions.one? { |c| c["pk"] > 0 }
end
def data_source_sql(name = nil, type: nil)
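The new `is_column_the_rowid?` helper implements the SQLite rule quoted in the diff's comment: a column aliases the implicit `rowid` only when it is declared `INTEGER` (in any mixture of case) and is the table's single primary-key column. A standalone restatement of that predicate over `PRAGMA table_info`-shaped rows (method name and sample data are illustrative):

```ruby
INTEGER_REGEX = /integer/i

# field / column_definitions are hashes shaped like PRAGMA table_info rows:
# "type" is the declared type, "pk" is the 1-based position of the column in
# the primary key (0 when the column is not part of it).
def column_is_rowid_alias?(field, column_definitions)
  return false unless INTEGER_REGEX.match?(field["type"]) && field["pk"] == 1
  # A composite primary key never aliases the rowid.
  column_definitions.one? { |c| c["pk"] > 0 }
end

single = [
  { "name" => "id",   "type" => "INTEGER", "pk" => 1 },
  { "name" => "name", "type" => "varchar", "pk" => 0 },
]
composite = [
  { "name" => "a", "type" => "integer", "pk" => 1 },
  { "name" => "b", "type" => "integer", "pk" => 2 },
]

column_is_rowid_alias?(single.first, single)       # => true
column_is_rowid_alias?(composite.first, composite) # => false
```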
diff --git a/activerecord/lib/active_record/connection_adapters/sqli