ORM API Features for Querying
ORM Loader Options
Loader options are objects which, when passed to the Select.options()
method of a Select
object or similar SQL construct, affect the loading of both column and relationship-oriented attributes. The majority of loader options descend from the Load
hierarchy. For a complete overview of using loader options, see the linked sections below.
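As a brief sketch of the pattern described above (assuming the User and Address mapping used in later examples on this page), a loader option such as selectinload(), which descends from the Load hierarchy, is attached to a Select via Select.options():

from sqlalchemy import select
from sqlalchemy.orm import selectinload

# load User objects, eagerly loading the related Address collection
stmt = select(User).options(selectinload(User.addresses))
users = session.scalars(stmt).all()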
See also

Column Loading Options - details mapper and loading options that affect how column and SQL-expression mapped attributes are loaded

Relationship Loading Techniques - details relationship and loading options that affect how
relationship()
mapped attributes are loaded
ORM Execution Options
ORM-level execution options are keyword options that may be associated with a statement execution using either the Session.execute.execution_options
parameter, which is a dictionary argument accepted by Session
methods such as Session.execute()
and Session.scalars()
, or by associating them directly with the statement to be invoked itself using the Executable.execution_options()
method, which accepts them as arbitrary keyword arguments.
ORM-level options are distinct from the Core level execution options documented at Connection.execution_options()
. It’s important to note that the ORM options discussed below are not compatible with Core level methods Connection.execution_options()
or Engine.execution_options()
; the options are ignored at this level, even if the Engine
or Connection
is associated with the Session
in use.
Within this section, the Executable.execution_options()
method style will be illustrated for examples.
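As a short sketch of the two styles described above (assuming the User mapping used in the examples that follow), the same option may be attached to the statement itself or passed to the Session method:

from sqlalchemy import select

# style 1: associate the option with the statement
stmt = select(User).execution_options(populate_existing=True)
result = session.execute(stmt)

# style 2: pass a dictionary to the execution_options parameter of Session.execute()
result = session.execute(
    select(User), execution_options={"populate_existing": True}
)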
Populate Existing
The populate_existing
execution option ensures that, for all rows loaded, the corresponding instances in the Session
will be fully refreshed – erasing any existing data within the objects (including pending changes) and replacing with the data loaded from the result.
Example use looks like:
>>> stmt = select(User).execution_options(populate_existing=True)
>>> result = session.execute(stmt)
SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
...
Normally, ORM objects are only loaded once, and if they are matched up to the primary key in a subsequent result row, the row is not applied to the object. This is both to preserve pending, unflushed changes on the object as well as to avoid the overhead and complexity of refreshing data which is already there. The Session
assumes a default working model of a highly isolated transaction, and to the degree that data is expected to change within the transaction outside of the local changes being made, those use cases would be handled using explicit steps such as this method.
Using populate_existing
, any set of objects that matches a query can be refreshed, and it also allows control over relationship loader options. E.g. to refresh an instance while also refreshing a related set of objects:
stmt = (
select(User)
.where(User.name.in_(names))
.execution_options(populate_existing=True)
.options(selectinload(User.addresses))
)
# will refresh all matching User objects as well as the related
# Address objects
users = session.execute(stmt).scalars().all()
Another use case for populate_existing
is in support of various attribute loading features that can change how an attribute is loaded on a per-query basis. Options to which this applies include (a short combined sketch follows this list):

- The with_expression() option
- The PropComparator.and_() method that can modify what a loader strategy loads
- The contains_eager() option
- The with_loader_criteria() option
- The load_only() option to select what attributes to refresh
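A minimal combined sketch, assuming the User mapping from the example above; load_only() limits which column attributes are refreshed, while populate_existing ensures the refresh actually takes place for rows already present in the Session:

from sqlalchemy import select
from sqlalchemy.orm import load_only

# refresh only the "name" and "fullname" attributes of matching User objects
stmt = (
    select(User)
    .execution_options(populate_existing=True)
    .options(load_only(User.name, User.fullname))
)
users = session.scalars(stmt).all()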
The populate_existing
execution option is equivalent to the Query.populate_existing()
method in 1.x style ORM queries.
Autoflush
This option, when passed as False
, will cause the Session
to not invoke the “autoflush” step. It is equivalent to using the Session.no_autoflush
context manager to disable autoflush:
>>> stmt = select(User).execution_options(autoflush=False)
>>> session.execute(stmt)
SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
...
This option will also work on ORM-enabled Update
and Delete
queries.
The autoflush
execution option is equivalent to the Query.autoflush()
method in 1.x style ORM queries.
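A brief sketch of applying the option to an ORM-enabled UPDATE, assuming the User mapping and sample data used elsewhere on this page:

from sqlalchemy import update

# run the UPDATE without triggering the autoflush step first
stmt = (
    update(User)
    .where(User.name == "sandy")
    .values(fullname="Sandy Squirrel")
    .execution_options(autoflush=False)
)
session.execute(stmt)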
Fetching Large Result Sets with Yield Per
The yield_per
execution option is an integer value which will cause the Result
to buffer only a limited number of rows and/or ORM objects at a time, before making data available to the client.
Normally, the ORM will fetch all rows immediately, constructing ORM objects for each and assembling those objects into a single buffer, before passing this buffer to the Result
object as a source of rows to be returned. The rationale for this behavior is to allow correct behavior for features such as joined eager loading, uniquifying of results, and the general case of result handling logic that relies upon the identity map maintaining a consistent state for every object in a result set as it is fetched.
The purpose of the yield_per
option is to change this behavior so that the ORM result set is optimized for iteration through very large result sets (e.g. > 10K rows), where the user has determined that the above patterns don’t apply. When yield_per
is used, the ORM will instead batch ORM results into sub-collections and yield rows from each sub-collection individually as the Result
object is iterated, so that the Python interpreter doesn’t need to declare very large areas of memory which is both time consuming and leads to excessive memory use. The option affects both the way the database cursor is used as well as how the ORM constructs rows and objects to be passed to the Result
.
Tip
From the above, it follows that the Result
must be consumed in an iterable fashion, that is, using iteration such as for row in result
or using partial row methods such as Result.fetchmany()
or Result.partitions()
. Calling Result.all()
will defeat the purpose of using yield_per
.
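A minimal sketch of partial-row consumption using Result.fetchmany(), under the same assumptions as the examples that follow:

stmt = select(User).execution_options(yield_per=10)
result = session.scalars(stmt)
while True:
    # fetch at most ten ORM objects at a time from the buffered result
    batch = result.fetchmany(10)
    if not batch:
        break
    for user_obj in batch:
        print(user_obj)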
Using yield_per
is equivalent to making use of both the Connection.execution_options.stream_results
execution option, which selects for server side cursors to be used by the backend if supported, and the Result.yield_per()
method on the returned Result
object, which establishes a fixed size of rows to be fetched as well as a corresponding limit to how many ORM objects will be constructed at once.
Tip
yield_per
is now available as a Core execution option as well, described in detail at Using Server Side Cursors (a.k.a. stream results). This section details the use of yield_per
as an execution option with an ORM Session
. The option behaves as similarly as possible in both contexts.
When used with the ORM, yield_per
must be established either via the Executable.execution_options()
method on the given statement or by passing it to the Session.execute.execution_options
parameter of Session.execute()
or other similar Session
method such as Session.scalars()
. Typical use for fetching ORM objects is illustrated below:
>>> stmt = select(User).execution_options(yield_per=10)
>>> for user_obj in session.scalars(stmt):
... print(user_obj)
SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
[...] ()
User(id=1, name='spongebob', fullname='Spongebob Squarepants')
User(id=2, name='sandy', fullname='Sandy Cheeks')
...
>>> # ... rows continue ...
The above code is equivalent to the example below, which uses Connection.execution_options.stream_results
and Connection.execution_options.max_row_buffer
Core-level execution options in conjunction with the Result.yield_per()
method of Result
:
# equivalent code
>>> stmt = select(User).execution_options(stream_results=True, max_row_buffer=10)
>>> for user_obj in session.scalars(stmt).yield_per(10):
... print(user_obj)
SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
[...] ()
User(id=1, name='spongebob', fullname='Spongebob Squarepants')
User(id=2, name='sandy', fullname='Sandy Cheeks')
...
>>> # ... rows continue ...
yield_per
is also commonly used in combination with the Result.partitions()
method, which will iterate rows in grouped partitions. The size of each partition defaults to the integer value passed to yield_per
, as in the below example:
>>> stmt = select(User).execution_options(yield_per=10)
>>> for partition in session.scalars(stmt).partitions():
... for user_obj in partition:
... print(user_obj)
SELECT user_account.id, user_account.name, user_account.fullname
FROM user_account
[...] ()
User(id=1, name='spongebob', fullname='Spongebob Squarepants')
User(id=2, name='sandy', fullname='Sandy Cheeks')
...
>>> # ... rows continue ...
The yield_per
execution option is not compatible with “subquery” eager loading or “joined” eager loading when using collections. It is potentially compatible with “select in” eager loading, provided the database driver supports multiple, independent cursors.
Additionally, the yield_per
execution option is not compatible with the Result.unique()
method; as this method relies upon storing a complete set of identities for all rows, it would necessarily defeat the purpose of using yield_per
which is to handle an arbitrarily large number of rows.
Changed in version 1.4.6: An exception is raised when ORM rows are fetched from a Result
object that makes use of the Result.unique()
filter, at the same time as the yield_per
execution option is used.
When using the legacy Query
object with 1.x style ORM use, the Query.yield_per()
method will have the same result as that of the yield_per
execution option.
Identity Token
Deep Alchemy
This option is an advanced-use feature mostly intended to be used with the Horizontal Sharding extension. For typical cases of loading objects with identical primary keys from different “shards” or partitions, consider using individual Session
objects per shard first.
The “identity token” is an arbitrary value that can be associated within the identity key of newly loaded objects. This element exists first and foremost to support extensions which perform per-row “sharding”, where objects may be loaded from any number of replicas of a particular database table that nonetheless have overlapping primary key values. The primary consumer of “identity token” is the Horizontal Sharding extension, which supplies a general framework for persisting objects among multiple “shards” of a particular database table.
The identity_token
execution option may be used on a per-query basis to directly affect this token. Using it directly, one can populate a Session
with multiple instances of an object that have the same primary key and source table, but different “identities”.
One such example is to populate a Session
with objects that come from same-named tables in different schemas, using the Translation of Schema Names feature which can affect the choice of schema within the scope of queries. Given a mapping as:
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
pass
class MyTable(Base):
__tablename__ = "my_table"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str]
The default “schema” name for the class above is None
, meaning, no schema qualification will be written into SQL statements. However, if we make use of Connection.execution_options.schema_translate_map
, mapping None
to an alternate schema, we can place instances of MyTable
into two different schemas:
engine = create_engine(
"postgresql+psycopg://scott:tiger@localhost/test",
)
with Session(
engine.execution_options(schema_translate_map={None: "test_schema"})
) as sess:
sess.add(MyTable(name="this is schema one"))
sess.commit()
with Session(
engine.execution_options(schema_translate_map={None: "test_schema_2"})
) as sess:
sess.add(MyTable(name="this is schema two"))
sess.commit()
The above two blocks create a Session
object linked to a different schema translate map each time, and an instance of MyTable
is persisted into both test_schema.my_table
as well as test_schema_2.my_table
.
The Session
objects above are independent. If we wanted to persist both objects in one transaction, we would need to use the Horizontal Sharding extension to do this.
However, we can illustrate querying for these objects in one session as follows:
with Session(engine) as sess:
obj1 = sess.scalar(
select(MyTable)
.where(MyTable.id == 1)
.execution_options(
schema_translate_map={None: "test_schema"},
identity_token="test_schema",
)
)
obj2 = sess.scalar(
select(MyTable)
.where(MyTable.id == 1)
.execution_options(
schema_translate_map={None: "test_schema_2"},
identity_token="test_schema_2",
)
)
Both obj1
and obj2
are distinct from each other. However, they both refer to primary key id 1 for the MyTable
class, yet are distinct. This is how the identity_token
comes into play, which we can see in the inspection of each object, where we look at InstanceState.key
to view the two distinct identity tokens:
>>> from sqlalchemy import inspect
>>> inspect(obj1).key
(<class '__main__.MyTable'>, (1,), 'test_schema')
>>> inspect(obj2).key
(<class '__main__.MyTable'>, (1,), 'test_schema_2')
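As a small illustrative check (not part of the original example), the differing identity tokens are what allow both objects to occupy the same Session at once, since their identity keys are distinct:

# the keys differ only in the identity token element
assert obj1 is not obj2
assert inspect(obj1).key != inspect(obj2).key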
The above logic takes place automatically when using the Horizontal Sharding extension.
Added in version 2.0.0rc1:
added the
identity_token
ORM level execution option.
See also
Horizontal Sharding - in the Core and ORM Examples section. See the script separate_schema_translates.py
for a demonstration of the above use case using the full sharding API.
Inspecting entities and columns from ORM-enabled SELECT and DML statements
The select()
construct, as well as the insert()
, update()
and delete()
constructs (for the latter DML constructs, as of SQLAlchemy 1.4.33), all support the ability to inspect the entities in which these statements are created against, as well as the columns and datatypes that would be returned in a result set.
For a Select
object, this information is available from the Select.column_descriptions
attribute. This attribute operates in the same way as the legacy Query.column_descriptions
attribute. The format returned is a list of dictionaries:
>>> from pprint import pprint
>>> user_alias = aliased(User, name="user2")
>>> stmt = select(User, User.id, user_alias)
>>> pprint(stmt.column_descriptions)
[{'aliased': False,
'entity': <class 'User'>,
'expr': <class 'User'>,
'name': 'User',
'type': <class 'User'>},
{'aliased': False,
'entity': <class 'User'>,
'expr': <....InstrumentedAttribute object at ...>,
'name': 'id',
'type': Integer()},
{'aliased': True,
'entity': <AliasedClass ...; User>,
'expr': <AliasedClass ...; User>,
'name': 'user2',
'type': <class 'User'>}]
When Select.column_descriptions
is used with non-ORM objects such as plain Table
or Column
objects, the entries will contain basic information about individual columns returned in all cases:
>>> stmt = select(user_table, address_table.c.id)
>>> pprint(stmt.column_descriptions)
[{'expr': Column('id', Integer(), table=<user_account>, primary_key=True, nullable=False),
'name': 'id',
'type': Integer()},
{'expr': Column('name', String(), table=<user_account>, nullable=False),
'name': 'name',
'type': String()},
{'expr': Column('fullname', String(), table=<user_account>),
'name': 'fullname',
'type': String()},
{'expr': Column('id', Integer(), table=<address>, primary_key=True, nullable=False),
'name': 'id_1',
'type': Integer()}]
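As a brief sketch of consuming this structure programmatically (using the same statement as above), the list of dictionaries can be reduced to a simple mapping of result-column name to type:

# map each returned column name to its type object
name_to_type = {desc["name"]: desc["type"] for desc in stmt.column_descriptions}
# e.g. {'id': Integer(), 'name': String(), 'fullname': String(), 'id_1': Integer()}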
Changed in version 1.4.33: The Select.column_descriptions
attribute now returns a value when used against a Select
that is not ORM-enabled. Previously, this would raise NotImplementedError
.
For insert()
, update()
and delete()
constructs, there are two separate attributes. One is UpdateBase.entity_description
which returns information about the primary ORM entity and database table which the DML construct would be affecting:
>>> from sqlalchemy import update
>>> stmt = update(User).values(name="somename").returning(User.id)
>>> pprint(stmt.entity_description)
{'entity': <class 'User'>,
'expr': <class 'User'>,
'name': 'User',
'table': Table('user_account', ...),
'type': <class 'User'>}
Tip
The UpdateBase.entity_description
includes an entry "table"
which is actually the table to be inserted, updated or deleted by the statement, which is not always the same as the SQL “selectable” to which the class may be mapped. For example, in a joined-table inheritance scenario, "table"
will refer to the local table for the given entity.
The other is UpdateBase.returning_column_descriptions
which delivers information about the columns present in the RETURNING collection in a manner roughly similar to that of Select.column_descriptions
:
>>> pprint(stmt.returning_column_descriptions)
[{'aliased': False,
'entity': <class 'User'>,
'expr': <sqlalchemy.orm.attributes.InstrumentedAttribute ...>,
'name': 'id',
'type': Integer()}]
Added in version 1.4.33: Added the UpdateBase.entity_description
and UpdateBase.returning_column_descriptions
attributes.
Additional ORM API Constructs
Object Name | Description
---|---
aliased(element[, alias, name, flat, ...]) | Produce an alias of the given element, usually an AliasedClass instance.
AliasedClass | Represents an “aliased” form of a mapped class for usage with Query.
AliasedInsp | Provide an inspection interface for an AliasedClass object.
Bundle | A grouping of SQL expressions that are returned by a Query under one namespace.
join(left, right[, onclause, isouter, ...]) | Produce an inner join between left and right clauses.
outerjoin(left, right[, onclause, full]) | Produce a left outer join between left and right clauses.
with_loader_criteria(entity_or_base, where_criteria[, loader_only, include_aliases, ...]) | Add additional WHERE criteria to the load for all occurrences of a particular entity.
with_parent(instance, prop[, from_entity]) | Create filtering criterion that relates this query’s primary entity to the given related instance, using established relationship() configuration.
- function sqlalchemy.orm.aliased(element: _EntityType[_O] | FromClause, alias: FromClause | None = None, name: str | None = None, flat: bool = False, adapt_on_names: bool = False) AliasedClass[_O] | FromClause | AliasedType[_O] ¶
Produce an alias of the given element, usually an
AliasedClass
instance. E.g.:

my_alias = aliased(MyClass)

stmt = select(MyClass, my_alias).filter(MyClass.id > my_alias.id)
result = session.execute(stmt)

The aliased() function is used to create an ad-hoc mapping of a mapped class to a new selectable. By default, a selectable is generated from the normally mapped selectable (typically a Table) using the FromClause.alias() method. However, aliased() can also be used to link the class to a new select() statement. Also, the with_polymorphic() function is a variant of aliased() that is intended to specify a so-called “polymorphic selectable”, that corresponds to the union of several joined-inheritance subclasses at once.

For convenience, the aliased() function also accepts plain FromClause constructs, such as a Table or select() construct. In those cases, the FromClause.alias() method is called on the object and the new Alias object returned. The returned Alias is not ORM-mapped in this case.

- Parameters:
element – element to be aliased. Is normally a mapped class, but for convenience can also be a FromClause element.

alias – Optional selectable unit to map the element to. This is usually used to link the object to a subquery, and should be an aliased select construct as one would produce from the Query.subquery() method or the Select.subquery() or Select.alias() methods of the select() construct.

name – optional string name to use for the alias, if not specified by the alias parameter. The name, among other things, forms the attribute name that will be accessible via tuples returned by a Query object. Not supported when creating aliases of Join objects.

flat – Boolean, will be passed through to the FromClause.alias() call so that aliases of Join objects will alias the individual tables inside the join, rather than creating a subquery. This is generally supported by all modern databases with regards to right-nested joins and generally produces more efficient queries.

When aliased.flat is combined with aliased.name, the resulting joins will alias individual tables using a naming scheme similar to <prefix>_<tablename>. This naming scheme is for visibility / debugging purposes only and the specific scheme is subject to change without notice.

Added in version 2.0.32: added support for combining aliased.name with aliased.flat. Previously, this would raise NotImplementedError.

adapt_on_names – if True, more liberal “matching” will be used when mapping the mapped columns of the ORM entity to those of the given selectable - a name-based match will be performed if the given selectable doesn’t otherwise have a column that corresponds to one on the entity. The use case for this is when associating an entity with some derived selectable such as one that uses aggregate functions:

class UnitPrice(Base):
    __tablename__ = "unit_price"
    ...
    unit_id = Column(Integer)
    price = Column(Numeric)

aggregated_unit_price = (
    Session.query(func.sum(UnitPrice.price).label("price"))
    .group_by(UnitPrice.unit_id)
    .subquery()
)

aggregated_unit_price = aliased(
    UnitPrice, alias=aggregated_unit_price, adapt_on_names=True
)

Above, functions on aggregated_unit_price which refer to .price will return the func.sum(UnitPrice.price).label('price') column, as it is matched on the name “price”. Ordinarily, the “price” function wouldn’t have any “column correspondence” to the actual UnitPrice.price column as it is not a proxy of the original.
- class sqlalchemy.orm.util.AliasedClass¶
Represents an “aliased” form of a mapped class for usage with Query.
The ORM equivalent of an alias() construct, this object mimics the mapped class using a __getattr__ scheme and maintains a reference to a real Alias object.

A primary purpose of AliasedClass is to serve as an alternate within a SQL statement generated by the ORM, such that an existing mapped entity can be used in multiple contexts. A simple example:

# find all pairs of users with the same name
user_alias = aliased(User)
session.query(User, user_alias).join(
    (user_alias, User.id > user_alias.id)
).filter(User.name == user_alias.name)

AliasedClass is also capable of mapping an existing mapped class to an entirely new selectable, provided this selectable is column-compatible with the existing mapped selectable, and it can also be configured in a mapping as the target of a relationship(). See the links below for examples.

The AliasedClass object is constructed typically using the aliased() function. It also is produced with additional configuration when using the with_polymorphic() function.

The resulting object is an instance of AliasedClass. This object implements an attribute scheme which produces the same attribute and method interface as the original mapped class, allowing AliasedClass to be compatible with any attribute technique which works on the original class, including hybrid attributes (see Hybrid Attributes).

The AliasedClass can be inspected for its underlying Mapper, aliased selectable, and other information using inspect():

from sqlalchemy import inspect

my_alias = aliased(MyClass)
insp = inspect(my_alias)

The resulting inspection object is an instance of AliasedInsp.

Class signature

class sqlalchemy.orm.AliasedClass (sqlalchemy.inspection.Inspectable, sqlalchemy.orm.ORMColumnsClauseRole)
- class sqlalchemy.orm.util.AliasedInsp¶
Provide an inspection interface for an AliasedClass object.

The AliasedInsp object is returned given an AliasedClass using the inspect() function:

from sqlalchemy import inspect
from sqlalchemy.orm import aliased

my_alias = aliased(MyMappedClass)
insp = inspect(my_alias)

Attributes on AliasedInsp include:

- entity - the AliasedClass represented.
- mapper - the Mapper mapping the underlying class.
- selectable - the Alias construct which ultimately represents an aliased Table or Select construct.
- name - the name of the alias. Also is used as the attribute name when returned in a result tuple from Query.
- with_polymorphic_mappers - collection of Mapper objects indicating all those mappers expressed in the select construct for the AliasedClass.
- polymorphic_on - an alternate column or SQL expression which will be used as the “discriminator” for a polymorphic load.
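A short sketch of reading a few of these attributes from the inspection object created above (MyMappedClass is the placeholder mapped class from that example):

# continuing from the inspect() example above
print(insp.entity)      # the AliasedClass being represented
print(insp.mapper)      # the Mapper for the underlying class
print(insp.selectable)  # the aliased Table / SELECT construct
print(insp.name)        # the name of the alias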
Class signature

class sqlalchemy.orm.AliasedInsp (sqlalchemy.orm.ORMEntityColumnsClauseRole, sqlalchemy.orm.ORMFromClauseRole, sqlalchemy.sql.cache_key.HasCacheKey, sqlalchemy.orm.base.InspectionAttr, sqlalchemy.util.langhelpers.MemoizedSlots, sqlalchemy.inspection.Inspectable, typing.Generic)
- class sqlalchemy.orm.Bundle¶
A grouping of SQL expressions that are returned by a Query under one namespace.

The Bundle essentially allows nesting of the tuple-based results returned by a column-oriented Query object. It also is extensible via simple subclassing, where the primary capability to override is that of how the set of expressions should be returned, allowing post-processing as well as custom return types, without involving ORM identity-mapped classes.

Members

__init__(), c, columns, create_row_processor(), is_aliased_class, is_bundle, is_clause_element, is_mapper, label(), single_entity

Class signature

class sqlalchemy.orm.Bundle (sqlalchemy.orm.ORMColumnsClauseRole, sqlalchemy.sql.annotation.SupportsCloneAnnotations, sqlalchemy.sql.cache_key.MemoizedHasCacheKey, sqlalchemy.inspection.Inspectable, sqlalchemy.orm.base.InspectionAttr)

-
method
sqlalchemy.orm.Bundle.
__init__(name: str, *exprs: _ColumnExpressionArgument[Any], **kw: Any)

Construct a new Bundle.

e.g.:

bn = Bundle("mybundle", MyClass.x, MyClass.y)

for row in session.query(bn).filter(bn.c.x == 5).filter(bn.c.y == 4):
    print(row.mybundle.x, row.mybundle.y)
-
attribute
sqlalchemy.orm.Bundle.
c: ReadOnlyColumnCollection[str, KeyedColumnElement[Any]]

An alias for Bundle.columns.
-
attribute
sqlalchemy.orm.Bundle.
columns: ReadOnlyColumnCollection[str, KeyedColumnElement[Any]]

A namespace of SQL expressions referred to by this Bundle.

e.g.:

bn = Bundle("mybundle", MyClass.x, MyClass.y)
q = sess.query(bn).filter(bn.c.x == 5)
Nesting of bundles is also supported:
b1 = Bundle(
    "b1",
    Bundle("b2", MyClass.a, MyClass.b),
    Bundle("b3", MyClass.x, MyClass.y),
)
q = sess.query(b1).filter(b1.c.b2.c.a == 5).filter(b1.c.b3.c.y == 9)
-
method
sqlalchemy.orm.Bundle.
create_row_processor(query: Select[Unpack[TupleAny]], procs: Sequence[Callable[[Row[Unpack[TupleAny]]], Any]], labels: Sequence[str]) → Callable[[Row[Unpack[TupleAny]]], Any]

Produce the “row processing” function for this Bundle.

May be overridden by subclasses to provide custom behaviors when results are fetched. The method is passed the statement object and a set of “row processor” functions at query execution time; these processor functions when given a result row will return the individual attribute value, which can then be adapted into any kind of return data structure.

The example below illustrates replacing the usual Row return structure with a straight Python dictionary:

from sqlalchemy.orm import Bundle

class DictBundle(Bundle):
    def create_row_processor(self, query, procs, labels):
        "Override create_row_processor to return values as dictionaries"

        def proc(row):
            return dict(zip(labels, (proc(row) for proc in procs)))

        return proc

A result from the above Bundle will return dictionary values:

bn = DictBundle("mybundle", MyClass.data1, MyClass.data2)
for row in session.execute(select(bn).where(bn.c.data1 == "d1")):
    print(row.mybundle["data1"], row.mybundle["data2"])
-
attribute
sqlalchemy.orm.Bundle.
is_aliased_class = False¶ True if this object is an instance of
AliasedClass
.
-
attribute
sqlalchemy.orm.Bundle.
is_bundle = True¶ True if this object is an instance of
Bundle
.
-
attribute
sqlalchemy.orm.Bundle.
is_clause_element = False¶ True if this object is an instance of
ClauseElement
.
-
attribute
sqlalchemy.orm.Bundle.
is_mapper = False¶ True if this object is an instance of
Mapper
.
-
method
sqlalchemy.orm.Bundle.
label(name)¶ Provide a copy of this
Bundle
passing a new label.
-
attribute
sqlalchemy.orm.Bundle.
single_entity = False¶ If True, queries for a single Bundle will be returned as a single entity, rather than an element within a keyed tuple.
-
method
- function sqlalchemy.orm.with_loader_criteria(entity_or_base: _EntityType[Any], where_criteria: _ColumnExpressionArgument[bool] | Callable[[Any], _ColumnExpressionArgument[bool]], loader_only: bool = False, include_aliases: bool = False, propagate_to_loaders: bool = True, track_closure_variables: bool = True) LoaderCriteriaOption ¶
Add additional WHERE criteria to the load for all occurrences of a particular entity.
Added in version 1.4.
The
with_loader_criteria()
option is intended to add limiting criteria to a particular kind of entity in a query, globally, meaning it will apply to the entity as it appears in the SELECT query as well as within any subqueries, join conditions, and relationship loads, including both eager and lazy loaders, without the need for it to be specified in any particular part of the query. The rendering logic uses the same system used by single table inheritance to ensure a certain discriminator is applied to a table.

E.g., using 2.0-style queries, we can limit the way the User.addresses collection is loaded, regardless of the kind of loading used:

from sqlalchemy.orm import with_loader_criteria

stmt = select(User).options(
    selectinload(User.addresses),
    with_loader_criteria(Address, Address.email_address != "foo"),
)
Above, the “selectinload” for
User.addresses
will apply the given filtering criteria to the WHERE clause.

Another example, where the filtering will be applied to the ON clause of the join, in this example using 1.x style queries:

q = (
    session.query(User)
    .outerjoin(User.addresses)
    .options(with_loader_criteria(Address, Address.email_address != "foo"))
)
The primary purpose of
with_loader_criteria()
is to use it in the SessionEvents.do_orm_execute() event handler to ensure that all occurrences of a particular entity are filtered in a certain way, such as filtering for access control roles. It also can be used to apply criteria to relationship loads. In the example below, we can apply a certain set of rules to all queries emitted by a particular Session:

session = Session(bind=engine)

# the event target comes first, then the event name
@event.listens_for(session, "do_orm_execute")
def _add_filtering_criteria(execute_state):
    if (
        execute_state.is_select
        and not execute_state.is_column_load
        and not execute_state.is_relationship_load
    ):
        execute_state.statement = execute_state.statement.options(
            with_loader_criteria(
                SecurityRole,
                lambda cls: cls.role.in_(["some_role"]),
                include_aliases=True,
            )
        )
In the above example, the
SessionEvents.do_orm_execute()
event will intercept all queries emitted using the Session. For those queries which are SELECT statements and are not attribute or relationship loads a custom with_loader_criteria() option is added to the query. The with_loader_criteria() option will be used in the given statement and will also be automatically propagated to all relationship loads that descend from this query.

The criteria argument given is a lambda that accepts a cls argument. The given class will expand to include all mapped subclasses and need not itself be a mapped class.

Tip
When using
with_loader_criteria()
option in conjunction with the contains_eager() loader option, it’s important to note that with_loader_criteria() only affects the part of the query that determines what SQL is rendered in terms of the WHERE and FROM clauses. The contains_eager() option does not affect the rendering of the SELECT statement outside of the columns clause, so does not have any interaction with the with_loader_criteria() option. However, the way things “work” is that contains_eager() is meant to be used with a query that is already selecting from the additional entities in some way, where with_loader_criteria() can apply its additional criteria.

In the example below, assuming a mapping relationship as A -> A.bs -> B, the given with_loader_criteria() option will affect the way in which the JOIN is rendered:

stmt = (
    select(A)
    .join(A.bs)
    .options(contains_eager(A.bs), with_loader_criteria(B, B.flag == 1))
)
Above, the given
with_loader_criteria()
option will affect the ON clause of the JOIN that is specified by .join(A.bs), so is applied as expected. The contains_eager() option has the effect that columns from B are added to the columns clause:

SELECT b.id, b.a_id, b.data, b.flag,
    a.id AS id_1, a.data AS data_1
FROM a JOIN b ON a.id = b.a_id AND b.flag = :flag_1
The use of the
contains_eager()
option within the above statement has no effect on the behavior of the with_loader_criteria() option. If the contains_eager() option were omitted, the SQL would be the same as regards the FROM and WHERE clauses, where with_loader_criteria() continues to add its criteria to the ON clause of the JOIN. The addition of contains_eager() only affects the columns clause, in that additional columns against b are added which are then consumed by the ORM to produce B instances.

Warning
The use of a lambda inside of the call to
with_loader_criteria()
is only invoked once per unique class. Custom functions should not be invoked within this lambda. See Using Lambdas to add significant speed gains to statement production for an overview of the “lambda SQL” feature, which is for advanced use only.

- Parameters:
entity_or_base¶ – a mapped class, or a class that is a super class of a particular set of mapped classes, to which the rule will apply.
where_criteria¶ –
a Core SQL expression that applies limiting criteria. This may also be a “lambda:” or Python function that accepts a target class as an argument, when the given class is a base with many different mapped subclasses.
Note
To support pickling, use a module-level Python function to produce the SQL expression instead of a lambda or a fixed SQL expression, which tend to not be picklable.
include_aliases¶ – if True, apply the rule to
aliased()
constructs as well.

propagate_to_loaders –
defaults to True, apply to relationship loaders such as lazy loaders. This indicates that the option object itself including SQL expression is carried along with each loaded instance. Set to
False
to prevent the object from being assigned to individual instances.

See also
ORM Query Events - includes examples of using
with_loader_criteria()
.

Adding global WHERE / ON criteria - basic example on how to combine
with_loader_criteria()
with the SessionEvents.do_orm_execute()
event.

track_closure_variables –
when False, closure variables inside of a lambda expression will not be used as part of any cache key. This allows more complex expressions to be used inside of a lambda expression but requires that the lambda ensures it returns the identical SQL every time given a particular class.
Added in version 1.4.0b2.
- function sqlalchemy.orm.join(left: _FromClauseArgument, right: _FromClauseArgument, onclause: _OnClauseArgument | None = None, isouter: bool = False, full: bool = False) _ORMJoin ¶
Produce an inner join between left and right clauses.
join()
is an extension to the core join interface provided by join(), where the left and right selectable may be not only core selectable objects such as Table, but also mapped classes or AliasedClass instances. The “on” clause can be a SQL expression or an ORM mapped attribute referencing a configured relationship().

join() is not commonly needed in modern usage, as its functionality is encapsulated within that of the Select.join() and Query.join() methods, which feature a significant amount of automation beyond join() by itself. Explicit use of join() with ORM-enabled SELECT statements involves use of the Select.select_from() method, as in:

from sqlalchemy.orm import join

stmt = (
    select(User)
    .select_from(join(User, Address, User.addresses))
    .filter(Address.email_address == "foo@bar.com")
)
In modern SQLAlchemy the above join can be written more succinctly as:
stmt = (
    select(User)
    .join(User.addresses)
    .filter(Address.email_address == "foo@bar.com")
)
Warning
using
join()
directly may not work properly with modern ORM options such as with_loader_criteria(). It is strongly recommended to use the idiomatic join patterns provided by methods such as Select.join() and Select.join_from() when creating ORM joins.
- function sqlalchemy.orm.outerjoin(left: _FromClauseArgument, right: _FromClauseArgument, onclause: _OnClauseArgument | None = None, full: bool = False) _ORMJoin ¶
Produce a left outer join between left and right clauses.
This is the “outer join” version of the
join()
function, featuring the same behavior except that an OUTER JOIN is generated. See that function’s documentation for other usage details.
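As a brief sketch mirroring the join() examples above (with the same assumed User / Address mapping), the explicit and the idiomatic forms of an outer join look like:

from sqlalchemy.orm import outerjoin

stmt = (
    select(User)
    .select_from(outerjoin(User, Address, User.addresses))
    .filter(Address.email_address == "foo@bar.com")
)

# the equivalent idiomatic spelling uses Select.outerjoin()
stmt = (
    select(User)
    .outerjoin(User.addresses)
    .filter(Address.email_address == "foo@bar.com")
)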
- function sqlalchemy.orm.with_parent(instance: object, prop: attributes.QueryableAttribute[Any], from_entity: _EntityType[Any] | None = None) ColumnElement[bool] ¶
Create filtering criterion that relates this query’s primary entity to the given related instance, using established
relationship()
configuration.

E.g.:
stmt = select(Address).where(with_parent(some_user, User.addresses))
The SQL rendered is the same as that rendered when a lazy loader would fire off from the given parent on that attribute, meaning that the appropriate state is taken from the parent object in Python without the need to render joins to the parent table in the rendered statement.
The given property may also make use of
PropComparator.of_type()
to indicate the left side of the criteria:

a1 = aliased(Address)
a2 = aliased(Address)
stmt = select(a1, a2).where(with_parent(u1, User.addresses.of_type(a2)))
The above use is equivalent to using the
from_entity()
argument:

a1 = aliased(Address)
a2 = aliased(Address)
stmt = select(a1, a2).where(
    with_parent(u1, User.addresses, from_entity=a2)
)
- Parameters:
instance¶ – An instance which has some
relationship()
.

property – Class-bound attribute, which indicates what relationship from the instance should be used to reconcile the parent/child relationship.
from_entity¶ – Entity in which to consider as the left side. This defaults to the “zero” entity of the
Query
itself.