Table Configuration with Declarative
As introduced at Declarative Mapping, the Declarative style
includes the ability to generate a mapped Table
object
at the same time, or to accommodate a Table
or other
FromClause
object directly.
The following examples assume a declarative base class as:
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
All of the examples that follow illustrate a class inheriting from the above
Base
. The decorator style introduced at Declarative Mapping using a Decorator (no declarative base)
is fully supported with all the following examples as well, as are legacy
forms of Declarative Base including base classes generated by
declarative_base()
.
Declarative Table with mapped_column()
When using Declarative, the body of the class to be mapped in most cases
includes an attribute __tablename__
that indicates the string name of a
Table
that should be generated along with the mapping. The
mapped_column()
construct, which features additional ORM-specific
configuration capabilities not present in the plain Column
class, is then used within the class body to indicate columns in the table. The
example below illustrates the most basic use of this construct within a
Declarative mapping:
from sqlalchemy import Integer, String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user"

    id = mapped_column(Integer, primary_key=True)
    name = mapped_column(String(50), nullable=False)
    fullname = mapped_column(String)
    nickname = mapped_column(String(30))
Above, mapped_column()
constructs are placed inline within the class
definition as class level attributes. At the point at which the class is
declared, the Declarative mapping process will generate a new
Table
object against the MetaData
collection
associated with the Declarative Base
; each instance of
mapped_column()
will then be used to generate a
Column
object during this process, which will become part of
the Table.columns
collection of this Table
object.
In the above example, Declarative will build a Table
construct that is equivalent to the following:
# equivalent Table object produced
user_table = Table(
"user",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String(50)),
Column("fullname", String()),
Column("nickname", String(30)),
)
When the User
class above is mapped, this Table
object
can be accessed directly via the __table__
attribute; this is described
further at Accessing Table and Metadata.
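As a minimal sketch of that access, continuing the User mapping above, the generated Table may be inspected directly from the mapped class:

# the Table generated by Declarative is available on the class
user_table = User.__table__
print(user_table.name)        # "user"
print(user_table.c.keys())    # ["id", "name", "fullname", "nickname"]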
mapped_column()
supersedes the use of Column()
Users of 1.x SQLAlchemy will note the use of the mapped_column()
construct, which is new as of the SQLAlchemy 2.0 series. This ORM-specific
construct is intended first and foremost to be a drop-in replacement for
the use of Column
within Declarative mappings only, adding
new ORM-specific convenience features such as the ability to establish
mapped_column.deferred
within the construct, and most
importantly to indicate to typing tools such as Mypy and Pylance an
accurate representation of how the attribute will behave at runtime at
both the class level as well as the instance level. As will be seen in
the following sections, it’s also at the forefront of a new
annotation-driven configuration style introduced in SQLAlchemy 2.0.
Users of legacy code should be aware that the Column
form
will always work in Declarative in the same way it always has. The different
forms of attribute mapping may also be mixed within a single mapping on an
attribute by attribute basis, so migration to the new form can be at
any pace. See the section ORM Declarative Models for
a step by step guide to migrating a Declarative model to the new form.
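As a brief illustration of mixing the two forms within one mapping (a sketch only; the LegacyUser class here is an illustrative assumption), Column and mapped_column() attributes may appear side by side in the same class:

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import mapped_column


class LegacyUser(Base):
    __tablename__ = "legacy_user"

    # the legacy Column form and the newer mapped_column() form
    # may coexist within the same Declarative mapping
    id = Column(Integer, primary_key=True)
    name = mapped_column(String(50))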
The mapped_column()
construct accepts all arguments that are
accepted by the Column
construct, as well as additional
ORM-specific arguments. The mapped_column.__name
field,
indicating the name of the database column, is typically omitted, as the
Declarative process will make use of the attribute name given to the construct
and assign this as the name of the column (in the above example, this refers to
the names id
, name
, fullname
, nickname
). Assigning an alternate
mapped_column.__name
is valid as well, where the resulting
Column
will use the given name in SQL and DDL statements,
while the User
mapped class will continue to allow access to the attribute
using the attribute name given, independent of the name given to the column
itself (more on this at 显式命名声明性映射列).
Tip
The mapped_column()
construct is only valid within a
Declarative class mapping. When constructing a Table
object using Core as well as when using
imperative table configuration,
the Column
construct is still required in order to
indicate the presence of a database column.
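For contrast, a minimal sketch of the Core form mentioned above, where Column continues to be used directly (the address table name and columns are illustrative assumptions):

from sqlalchemy import Table, Column, Integer, String, MetaData

metadata_obj = MetaData()

# outside of a Declarative class mapping, Column is used directly
address_table = Table(
    "address",
    metadata_obj,
    Column("id", Integer, primary_key=True),
    Column("email_address", String(100)),
)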
Using Annotated Declarative Table (Type Annotated Forms for mapped_column())
The mapped_column()
construct is capable of deriving its column-configuration
information from PEP 484 type annotations associated with the attribute
as declared in the Declarative mapped class. These type annotations,
if used, must
be present within a special SQLAlchemy type called Mapped
, which
is a generic type that then indicates a specific Python type within it.
Below illustrates the mapping from the previous section, adding the use of
Mapped
:
from typing import Optional
from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(50))
    fullname: Mapped[Optional[str]]
    nickname: Mapped[Optional[str]] = mapped_column(String(30))
Above, when Declarative processes each class attribute, each
mapped_column()
will derive additional arguments from the
corresponding Mapped
type annotation on the left side, if
present. Additionally, Declarative will generate an empty
mapped_column()
directive implicitly, whenever a
Mapped
type annotation is encountered that does not have
a value assigned to the attribute (this form is inspired by the similar
style used in Python dataclasses); this mapped_column()
construct
proceeds to derive its configuration from the Mapped
annotation present.
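As a minimal sketch of this implicit behavior (the Profile class below is an illustrative assumption, not part of the original mapping), an annotation-only attribute behaves as though an empty mapped_column() had been assigned to it:

class Profile(Base):
    __tablename__ = "profile"

    id: Mapped[int] = mapped_column(primary_key=True)

    # annotation-only attribute; Declarative implicitly generates an
    # empty mapped_column() for it, equivalent to writing:
    # bio: Mapped[Optional[str]] = mapped_column()
    bio: Mapped[Optional[str]]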
mapped_column() derives the datatype and nullability from the Mapped annotation
The two qualities that mapped_column()
derives from the
Mapped
annotation are:
datatype - the Python type given inside Mapped, as contained within the typing.Optional construct if present, is associated with a TypeEngine subclass such as Integer, String, DateTime, or Uuid, to name a few common types.

The datatype is determined based on a dictionary of Python type to SQLAlchemy datatype. This dictionary is completely customizable, as detailed in the next section Customizing the Type Map. The default type map is implemented as in the code example below:

from typing import Any
from typing import Dict
from typing import Type

import datetime
import decimal
import uuid

from sqlalchemy import types

# default type mapping, deriving the type for mapped_column()
# from a Mapped[] annotation
type_map: Dict[Type[Any], TypeEngine[Any]] = {
    bool: types.Boolean(),
    bytes: types.LargeBinary(),
    datetime.date: types.Date(),
    datetime.datetime: types.DateTime(),
    datetime.time: types.Time(),
    datetime.timedelta: types.Interval(),
    decimal.Decimal: types.Numeric(),
    float: types.Float(),
    int: types.Integer(),
    str: types.String(),
    uuid.UUID: types.Uuid(),
}

If the mapped_column() construct indicates an explicit type as passed to the mapped_column.__type argument, then the given Python type is disregarded (a short sketch of this appears after this list).

nullability - The mapped_column() construct will indicate its Column as NULL or NOT NULL first and foremost by the presence of the mapped_column.nullable parameter, passed either as True or False. Additionally, if the mapped_column.primary_key parameter is present and set to True, that will also imply that the column should be NOT NULL.

In the absence of both of these parameters, the presence of typing.Optional[] within the Mapped type annotation will be used to determine nullability, where typing.Optional[] means NULL, and the absence of typing.Optional[] means NOT NULL. If there is no Mapped[] annotation present at all, and there is no mapped_column.nullable or mapped_column.primary_key parameter, then SQLAlchemy's usual default for Column of NULL is used.

In the example below, the id and data columns will be NOT NULL, and the additional_info column will be NULL:

from typing import Optional

from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column


class Base(DeclarativeBase):
    pass


class SomeClass(Base):
    __tablename__ = "some_table"

    # primary_key=True, therefore will be NOT NULL
    id: Mapped[int] = mapped_column(primary_key=True)

    # not Optional[], therefore will be NOT NULL
    data: Mapped[str]

    # Optional[], therefore will be NULL
    additional_info: Mapped[Optional[str]]

It is also perfectly valid to have a mapped_column() whose nullability is different from what would be implied by the annotation. For example, an ORM mapped attribute may be annotated as allowing None within Python code that works with the object as it is first being created and populated, however the value will ultimately be written to a database column that is NOT NULL. The mapped_column.nullable parameter, when present, will always take precedence:

class SomeClass(Base):
    # ...

    # will be String() NOT NULL, but can be None in Python
    data: Mapped[Optional[str]] = mapped_column(nullable=False)

Similarly, for a non-None attribute that's written to a database column that for whatever reason needs to be NULL at the schema level, mapped_column.nullable may be set to True:

class SomeClass(Base):
    # ...

    # will be String() NULL, but the type checker will not expect
    # the attribute to be None
    data: Mapped[str] = mapped_column(nullable=True)
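The datatype item above notes that an explicit type passed to mapped_column() takes precedence over the annotation-derived type. A minimal sketch of that precedence (the Document class and the use of Text here are illustrative assumptions):

from sqlalchemy import Text


class Document(Base):
    __tablename__ = "document"

    id: Mapped[int] = mapped_column(primary_key=True)

    # the annotation alone would map str to String(); the explicit
    # Text type passed to mapped_column() is used instead
    body: Mapped[str] = mapped_column(Text)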
Customizing the Type Map
The mapping of Python types to SQLAlchemy TypeEngine
types
described in the previous section defaults to a hardcoded dictionary
present in the sqlalchemy.sql.sqltypes
module. However, the registry
object that coordinates the Declarative mapping process will first consult
a local, user defined dictionary of types which may be passed
as the registry.type_annotation_map
parameter when
constructing the registry
, which may be associated with
the DeclarativeBase
superclass when first used.
As an example, if we wish to make use of the BIGINT
datatype for
int
, the TIMESTAMP
datatype with timezone=True
for
datetime.datetime
, and then only on Microsoft SQL Server we’d like to use
NVARCHAR
datatype when Python str
is used,
the registry and Declarative base could be configured as:
import datetime
from sqlalchemy import BIGINT, NVARCHAR, String, TIMESTAMP
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class Base(DeclarativeBase):
    type_annotation_map = {
        int: BIGINT,
        datetime.datetime: TIMESTAMP(timezone=True),
        str: String().with_variant(NVARCHAR, "mssql"),
    }


class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    date: Mapped[datetime.datetime]
    status: Mapped[str]
Below illustrates the CREATE TABLE statement generated for the above mapping,
first on the Microsoft SQL Server backend, illustrating the NVARCHAR
datatype:
>>> from sqlalchemy.schema import CreateTable
>>> from sqlalchemy.dialects import mssql, postgresql
>>> print(CreateTable(SomeClass.__table__).compile(dialect=mssql.dialect()))
CREATE TABLE some_table (
id BIGINT NOT NULL IDENTITY,
date TIMESTAMP NOT NULL,
status NVARCHAR(max) NOT NULL,
PRIMARY KEY (id)
)
Then on the PostgreSQL backend, illustrating TIMESTAMP WITH TIME ZONE
:
>>> print(CreateTable(SomeClass.__table__).compile(dialect=postgresql.dialect()))
CREATE TABLE some_table (
id BIGSERIAL NOT NULL,
date TIMESTAMP WITH TIME ZONE NOT NULL,
status VARCHAR NOT NULL,
PRIMARY KEY (id)
)
By making use of methods such as TypeEngine.with_variant()
, we’re able
to build up a type map that’s customized to what we need for different backends,
while still being able to use succinct annotation-only mapped_column()
configurations. There are two more levels of Python-type configurability
available beyond this, described in the next two sections.
Union types inside the Type Map
Changed in version 2.0.37: The features described in this section have been
repaired and enhanced to work consistently. Prior to this change, union
types were supported in type_annotation_map
, however the feature
exhibited inconsistent behaviors between union syntaxes as well as in how
None
was handled. Please ensure SQLAlchemy is up to date before
attempting to use the features described in this section.
SQLAlchemy supports mapping union types inside the type_annotation_map
to
allow mapping database types that can support multiple Python types, such as
JSON
or JSONB
:
from typing import Optional
from typing import Union

from sqlalchemy import JSON
from sqlalchemy.dialects import postgresql
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from sqlalchemy.schema import CreateTable

# new style Union using a pipe operator
json_list = list[int] | list[str]

# old style Union using Union explicitly
json_scalar = Union[float, str, bool]


class Base(DeclarativeBase):
    type_annotation_map = {
        json_list: postgresql.JSONB,
        json_scalar: JSON,
    }


class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)

    list_col: Mapped[list[str] | list[int]]

    # uses JSON
    scalar_col: Mapped[json_scalar]

    # uses JSON and is also nullable=True
    scalar_col_nullable: Mapped[json_scalar | None]

    # these forms all use JSON as well due to the json_scalar entry
    scalar_col_newstyle: Mapped[float | str | bool]
    scalar_col_oldstyle: Mapped[Union[float, str, bool]]
    scalar_col_mixedstyle: Mapped[Optional[float | str | bool]]
The above example maps the union of list[int]
and list[str]
to the Postgresql
JSONB
datatype, while naming a union of float,
str, bool
will match to the JSON
datatype. An equivalent
union, stated in the Mapped
construct, will match into the
corresponding entry in the type map.
The matching of a union type is based on the contents of the union regardless
of how the individual types are named, and additionally excluding the use of
the None
type. That is, json_scalar
will also match to str | bool |
float | None
. It will not match to a union that is a subset or superset
of this union; that is, str | bool
would not match, nor would str | bool
| float | int
. The individual contents of the union excluding None
must
be an exact match.
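Restated as a brief sketch against the json_scalar entry above (the commented annotations are illustrative, not additional mapped attributes):

# given the type map entry:  json_scalar = Union[float, str, bool]
#
# these Mapped annotations match the json_scalar entry:
#   Mapped[float | str | bool]
#   Mapped[str | bool | float | None]    # None is ignored for matching
#
# these do not match (subset / superset of the union):
#   Mapped[str | bool]
#   Mapped[str | bool | float | int]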
The None value is never significant as far as matching from type_annotation_map to Mapped, however it is significant as an indicator for nullability of the Column. When None is present in the union as placed within the Mapped construct, it indicates that the Column would be nullable, in the absence of more specific indicators. This logic works in the same way as indicating an Optional type as described at mapped_column() derives the datatype and nullability from the Mapped annotation.
The CREATE TABLE statement for the above mapping will look as below:
>>> print(CreateTable(SomeClass.__table__).compile(dialect=postgresql.dialect()))
CREATE TABLE some_table (
id SERIAL NOT NULL,
list_col JSONB NOT NULL,
scalar_col JSON,
scalar_col_nullable JSON,
scalar_col_newstyle JSON,
scalar_col_oldstyle JSON,
scalar_col_mixedstyle JSON,
PRIMARY KEY (id)
)
While union types use a “loose” matching approach that matches on any equivalent
set of subtypes, Python typing also features a way to create “type aliases”
that are treated as distinct types that are non-equivalent to another type that
includes the same composition. Integration of these types with type_annotation_map
is described in the next section, Support for Type Alias Types (defined by PEP 695) and NewType.
Support for Type Alias Types (defined by PEP 695) and NewType
In contrast to the typing lookup described in
Union types inside the Type Map, Python typing also includes two
ways to create a composed type in a more formal way, using typing.NewType
as
well as the type
keyword introduced in PEP 695. These types behave
differently from ordinary type aliases (i.e. assigning a type to a variable
name), and this difference is honored in how SQLAlchemy resolves these
types from the type map.
Changed in version 2.0.37: The behaviors described in this section for typing.NewType
as well as PEP 695 type
have been formalized and corrected.
Deprecation warnings are now emitted for “loose matching” patterns that have
worked in some 2.0 releases, but are to be removed in SQLAlchemy 2.1.
Please ensure SQLAlchemy is up to date before attempting to use the features
described in this section.
The typing module allows the creation of “new types” using typing.NewType
:
from typing import NewType
nstr30 = NewType("nstr30", str)
nstr50 = NewType("nstr50", str)
Additionally, in Python 3.12, a new feature defined by PEP 695 was introduced which
provides the type
keyword to accomplish a similar task; using
type
produces an object that is similar in many ways to typing.NewType
which is internally referred to as typing.TypeAliasType
:
type SmallInt = int
type BigInt = int
type JsonScalar = str | float | bool | None
For the purposes of how SQLAlchemy treats these type objects when used
for SQL type lookup inside of Mapped
, it’s important to note
that Python does not consider two equivalent typing.TypeAliasType
or typing.NewType
objects to be equal:
# two typing.NewType objects are not equal even if they are both str
>>> nstr50 == nstr30
False
# two TypeAliasType objects are not equal even if they are both int
>>> SmallInt == BigInt
False
# an equivalent union is not equal to JsonScalar
>>> JsonScalar == str | float | bool | None
False
This is the opposite behavior from how ordinary unions are compared, and
informs the correct behavior for SQLAlchemy’s type_annotation_map
. When
using typing.NewType
or PEP 695 type
objects, the type object is
expected to be explicit within the type_annotation_map
for it to be matched
from a Mapped
type, where the same object must be stated in order
for a match to be made (excluding whether or not the type inside of
Mapped
also unions on None
). This is distinct from the
behavior described at Union types inside the Type Map, where a
plain Union
that is referenced directly will match to other Unions
based on the composition, rather than the object identity, of a particular type
in type_annotation_map
.
In the example below, the composed types for nstr30
, nstr50
,
SmallInt
, BigInt
, and JsonScalar
have no overlap with each other
and can be named distinctly within each Mapped
construct, and
are also all explicit in type_annotation_map
. Any of these types may
also be unioned with None
or declared as Optional[]
without affecting
the lookup, only deriving column nullability:
from typing import NewType
from sqlalchemy import SmallInteger, BigInteger, JSON, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from sqlalchemy.schema import CreateTable
nstr30 = NewType("nstr30", str)
nstr50 = NewType("nstr50", str)
type SmallInt = int
type BigInt = int
type JsonScalar = str | float | bool | None
class TABase(DeclarativeBase):
    type_annotation_map = {
        nstr30: String(30),
        nstr50: String(50),
        SmallInt: SmallInteger,
        BigInt: BigInteger,
        JsonScalar: JSON,
    }


class SomeClass(TABase):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    normal_str: Mapped[str]

    short_str: Mapped[nstr30]
    long_str_nullable: Mapped[nstr50 | None]

    small_int: Mapped[SmallInt]
    big_int: Mapped[BigInt]
    scalar_col: Mapped[JsonScalar]
A CREATE TABLE for the above mapping will illustrate the different variants of integer and string we've configured, and looks like:
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
id INTEGER NOT NULL,
normal_str VARCHAR NOT NULL,
short_str VARCHAR(30) NOT NULL,
long_str_nullable VARCHAR(50),
small_int SMALLINT NOT NULL,
big_int BIGINT NOT NULL,
scalar_col JSON,
PRIMARY KEY (id)
)
Regarding nullability, the JsonScalar
type includes None
in its
definition, which indicates a nullable column. Similarly the
long_str_nullable
column applies a union of None
to nstr50
,
which matches to the nstr50
type in the type_annotation_map
while
also applying nullability to the mapped column. The other columns all remain
NOT NULL as they are not indicated as optional.
Mapping Multiple Type Configurations to Python Types
As individual Python types may be associated with TypeEngine
configurations of any variety by using the registry.type_annotation_map
parameter, an additional
capability is the ability to associate a single Python type with different
variants of a SQL type based on additional type qualifiers. One typical
example of this is mapping the Python str
datatype to VARCHAR
SQL types of different lengths. Another is mapping different varieties of
decimal.Decimal
to differently sized NUMERIC
columns.
Python’s typing system provides a great way to add additional metadata to a
Python type which is by using the PEP 593 Annotated
generic type, which
allows additional information to be bundled along with a Python type. The
mapped_column()
construct will correctly interpret an Annotated
object by identity when resolving it in the
registry.type_annotation_map
, as in the example below where we
declare two variants of String
and Numeric
:
from decimal import Decimal
from typing_extensions import Annotated
from sqlalchemy import Numeric
from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import registry
str_30 = Annotated[str, 30]
str_50 = Annotated[str, 50]
num_12_4 = Annotated[Decimal, 12]
num_6_2 = Annotated[Decimal, 6]
class Base(DeclarativeBase):
    registry = registry(
        type_annotation_map={
            str_30: String(30),
            str_50: String(50),
            num_12_4: Numeric(12, 4),
            num_6_2: Numeric(6, 2),
        }
    )
The Python type passed to the Annotated
container, in the above example the
str
and Decimal
types, is important only for the benefit of typing
tools; as far as the mapped_column()
construct is concerned, it will only need to perform a lookup of each type object in the
registry.type_annotation_map
dictionary without actually
looking inside of the Annotated
object, at least in this particular
context. Similarly, the arguments passed to Annotated
beyond the underlying
Python type itself are also not important, it’s only that at least one argument
must be present for the Annotated
construct to be valid. We can then use
these augmented types directly in our mapping where they will be matched to the
more specific type constructions, as in the following example:
class SomeClass(Base):
    __tablename__ = "some_table"

    short_name: Mapped[str_30] = mapped_column(primary_key=True)
    long_name: Mapped[str_50]
    num_value: Mapped[num_12_4]
    short_num_value: Mapped[num_6_2]
A CREATE TABLE for the above mapping will illustrate the different variants
of VARCHAR
and NUMERIC
we’ve configured, and looks like:
>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
short_name VARCHAR(30) NOT NULL,
long_name VARCHAR(50) NOT NULL,
num_value NUMERIC(12, 4) NOT NULL,
short_num_value NUMERIC(6, 2) NOT NULL,
PRIMARY KEY (short_name)
)
While variety in linking Annotated
types to different SQL types grants
us a wide degree of flexibility, the next section illustrates a second
way in which Annotated
may be used with Declarative that is even
more open ended.
Note
While a typing.TypeAliasType
can be assigned to unions, like in the
case of JsonScalar
defined above, it has a different behavior than normal
unions defined without the type ...
syntax.
The following mapping includes unions that are compatible with JsonScalar
,
but they will not be recognized:
class SomeClass(TABase):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    col_a: Mapped[str | float | bool | None]
    col_b: Mapped[str | float | bool]
This raises an error, since the union types used by col_a or col_b are not found in the TABase type map; JsonScalar must be referenced directly.
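A minimal sketch of the working form, which references the TypeAliasType itself in the annotation so that it matches the JsonScalar entry by identity:

class SomeClass(TABase):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)

    # referencing JsonScalar directly matches the type map entry
    col_a: Mapped[JsonScalar]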
Mapping Whole Column Declarations to Python Types
The previous section illustrated using PEP 593 Annotated
type
instances as keys within the registry.type_annotation_map
dictionary. In this form, the mapped_column()
construct does not
actually look inside the Annotated
object itself, it’s instead
used only as a dictionary key. However, Declarative also has the ability to extract
an entire pre-established mapped_column()
construct from
an Annotated
object directly. Using this form, we can define not only
different varieties of SQL datatypes linked to Python types without using
the registry.type_annotation_map
dictionary, we can also
set up any number of arguments such as nullability, column defaults,
and constraints in a reusable fashion.
A set of ORM models will usually have some kind of primary
key style that is common to all mapped classes. There also may be
common column configurations such as timestamps with defaults and other fields of
pre-established sizes and configurations. We can compose these configurations
into mapped_column()
instances that we then bundle directly into
instances of Annotated
, which are then re-used in any number of class
declarations. Declarative will unpack an Annotated
object
when provided in this manner, skipping over any other directives that don’t
apply to SQLAlchemy and searching only for SQLAlchemy ORM constructs.
The example below illustrates a variety of pre-configured field types used
in this way, where we define intpk
that represents an Integer
primary
key column, timestamp
that represents a DateTime
type
which will use CURRENT_TIMESTAMP
as a DDL level column default,
and required_name
which is a String
of length 30 that’s
NOT NULL
:
import datetime
from typing_extensions import Annotated
from sqlalchemy import func
from sqlalchemy import String
from sqlalchemy.orm import mapped_column
intpk = Annotated[int, mapped_column(primary_key=True)]
timestamp = Annotated[
datetime.datetime,
mapped_column(nullable=False, server_default=func.CURRENT_TIMESTAMP()),
]
required_name = Annotated[str, mapped_column(String(30), nullable=False)]
The above Annotated
objects can then be used directly within
Mapped
, where the pre-configured mapped_column()
constructs will be extracted and copied to a new instance that will be
specific to each attribute:
class Base(DeclarativeBase):
    pass


class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[intpk]
    name: Mapped[required_name]
    created_at: Mapped[timestamp]
A CREATE TABLE for our above mapping looks like:
>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
id INTEGER NOT NULL,
name VARCHAR(30) NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
PRIMARY KEY (id)
)
When using Annotated
types in this way, the configuration of the type
may also be affected on a per-attribute basis. For the types in the above
example that feature explicit use of mapped_column.nullable
,
we can apply the Optional[]
generic modifier to any of our types so that
the field is optional or not at the Python level, which will be independent
of the NULL
/ NOT NULL
setting that takes place in the database:
from typing_extensions import Annotated
import datetime
from typing import Optional
from sqlalchemy.orm import DeclarativeBase
timestamp = Annotated[
    datetime.datetime,
    mapped_column(nullable=False),
]


class Base(DeclarativeBase):
    pass


class SomeClass(Base):
    # ...

    # pep-484 type will be Optional, but column will be
    # NOT NULL
    created_at: Mapped[Optional[timestamp]]
The mapped_column()
construct is also reconciled with an explicitly
passed mapped_column()
construct, whose arguments will take precedence
over those of the Annotated
construct. Below we add a ForeignKey
constraint to our integer primary key and also use an alternate server
default for the created_at
column:
import datetime
from typing_extensions import Annotated
from sqlalchemy import ForeignKey
from sqlalchemy import func
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.schema import CreateTable
intpk = Annotated[int, mapped_column(primary_key=True)]
timestamp = Annotated[
datetime.datetime,
mapped_column(nullable=False, server_default=func.CURRENT_TIMESTAMP()),
]
class Base(DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    id: Mapped[intpk]


class SomeClass(Base):
    __tablename__ = "some_table"

    # add ForeignKey to mapped_column(Integer, primary_key=True)
    id: Mapped[intpk] = mapped_column(ForeignKey("parent.id"))

    # change server default from CURRENT_TIMESTAMP to UTC_TIMESTAMP
    created_at: Mapped[timestamp] = mapped_column(server_default=func.UTC_TIMESTAMP())
The CREATE TABLE statement illustrates these per-attribute settings,
adding a FOREIGN KEY
constraint as well as substituting
UTC_TIMESTAMP
for CURRENT_TIMESTAMP
:
>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
id INTEGER NOT NULL,
created_at DATETIME DEFAULT UTC_TIMESTAMP() NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(id) REFERENCES parent (id)
)
Note
The feature of mapped_column()
just described, where
a fully constructed set of column arguments may be indicated using
PEP 593 Annotated
objects that contain a “template”
mapped_column()
object to be copied into the attribute, is
currently not implemented for other ORM constructs such as
relationship()
and composite()
. While this functionality
is in theory possible, for the moment attempting to use Annotated
to indicate further arguments for relationship()
and similar
will raise a NotImplementedError
exception at runtime, but
may be implemented in future releases.
Using Python Enum or pep-586 Literal types in the type map
Added in version 2.0.0b4: Enum support.
Added in version 2.0.1: Literal support.
User-defined Python types which derive from the Python built-in enum.Enum
as well as the typing.Literal
class are automatically linked to the SQLAlchemy Enum
datatype
when used in an ORM declarative mapping. The example below uses
a custom enum.Enum
within the Mapped[]
constructor:
import enum
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
    pass


class Status(enum.Enum):
    PENDING = "pending"
    RECEIVED = "received"
    COMPLETED = "completed"


class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[Status]
In the above example, the mapped attribute SomeClass.status
will be
linked to a Column
with the datatype of Enum(Status)
.
We can see this for example in the CREATE TABLE output for the PostgreSQL
database:
CREATE TYPE status AS ENUM ('PENDING', 'RECEIVED', 'COMPLETED')

CREATE TABLE some_table (
id SERIAL NOT NULL,
status status NOT NULL,
PRIMARY KEY (id)
)
In a similar way, typing.Literal
may be used instead, using
a typing.Literal
that consists of all strings:
from typing import Literal
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
class Base(DeclarativeBase):
    pass


Status = Literal["pending", "received", "completed"]


class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[Status]
The entries used in registry.type_annotation_map
link the base
enum.Enum
Python type as well as the typing.Literal
type to the
SQLAlchemy Enum
SQL type, using a special form which indicates to the
Enum
datatype that it should automatically configure itself against
an arbitrary enumerated type. This configuration, which is implicit by default,
would be indicated explicitly as:
import enum
import typing
import sqlalchemy
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
    type_annotation_map = {
        enum.Enum: sqlalchemy.Enum(enum.Enum),
        typing.Literal: sqlalchemy.Enum(enum.Enum),
    }
The resolution logic within Declarative is able to resolve subclasses
of enum.Enum
as well as instances of typing.Literal
to match the
enum.Enum
or typing.Literal
entry in the
registry.type_annotation_map
dictionary. The Enum
SQL type then knows how to produce a configured version of itself with the
appropriate settings, including default string length. If a typing.Literal
that does not consist of only string values is passed, an informative
error is raised.
typing.TypeAliasType
can also be used to create enums, by assigning them
to a typing.Literal
of strings:
from typing import Literal
type Status = Literal["on", "off", "unknown"]
Since this is a typing.TypeAliasType
, it represents a unique type object,
so it must be placed in the type_annotation_map
for it to be looked up
successfully, keyed to the Enum
type as follows:
import enum
import sqlalchemy
class Base(DeclarativeBase):
    type_annotation_map = {Status: sqlalchemy.Enum(enum.Enum)}
Since SQLAlchemy supports mapping different typing.TypeAliasType
objects that are otherwise structurally equivalent individually,
these must be present in type_annotation_map
to avoid ambiguity.
Native Enums and Naming
The Enum.native_enum
parameter refers to whether the
Enum
datatype should create a so-called “native”
enum, which on MySQL/MariaDB is the ENUM
datatype and on PostgreSQL is
a new TYPE
object created by CREATE TYPE
, or a “non-native” enum,
which means that VARCHAR
will be used to create the datatype. For
backends other than MySQL/MariaDB or PostgreSQL, VARCHAR
is used in
all cases (third party dialects may have their own behaviors).
Because PostgreSQL’s CREATE TYPE
requires that there’s an explicit name
for the type to be created, special fallback logic exists when working
with implicitly generated Enum
without specifying an
explicit Enum
datatype within a mapping:
If the Enum is linked to an enum.Enum object, the Enum.native_enum parameter defaults to True and the name of the enum will be taken from the name of the enum.Enum datatype. The PostgreSQL backend will assume CREATE TYPE with this name.

If the Enum is linked to a typing.Literal object, the Enum.native_enum parameter defaults to False; no name is generated and VARCHAR is assumed.
To use typing.Literal
with a PostgreSQL CREATE TYPE
type, an
explicit Enum
must be used, either within the
type map:
import enum
import typing
from typing import Literal

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase

Status = Literal["pending", "received", "completed"]


class Base(DeclarativeBase):
    type_annotation_map = {
        Status: sqlalchemy.Enum("pending", "received", "completed", name="status_enum"),
    }
Or alternatively within mapped_column()
:
import enum
import typing
from typing import Literal

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

Status = Literal["pending", "received", "completed"]


class Base(DeclarativeBase):
    pass


class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[Status] = mapped_column(
        sqlalchemy.Enum("pending", "received", "completed", name="status_enum")
    )
Altering the Configuration of the Default Enum
In order to modify the fixed configuration of the Enum
datatype
that’s generated implicitly, specify new entries in the
registry.type_annotation_map
, indicating additional arguments.
For example, to use “non native enumerations” unconditionally, the
Enum.native_enum
parameter may be set to False for all types:
import enum
import typing
import sqlalchemy
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
type_annotation_map = {
enum.Enum: sqlalchemy.Enum(enum.Enum, native_enum=False),
typing.Literal: sqlalchemy.Enum(enum.Enum, native_enum=False),
}
在 2.0.1 版本发生变更: Implemented support for overriding parameters
such as Enum.native_enum
within the
Enum
datatype when establishing the
registry.type_annotation_map
. Previously, this
functionality was not working.
To use a specific configuration for a specific enum.Enum
subtype, such
as setting the string length to 50 when using the example Status
datatype:
import enum
import sqlalchemy
from sqlalchemy.orm import DeclarativeBase
class Status(enum.Enum):
PENDING = "pending"
RECEIVED = "received"
COMPLETED = "completed"
class Base(DeclarativeBase):
type_annotation_map = {
Status: sqlalchemy.Enum(Status, length=50, native_enum=False)
}
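As a brief sketch of the Status configuration above in use (the SomeClass mapping here is an illustrative assumption, not part of the original examples), a Mapped[Status] attribute under this Base would render as a plain VARCHAR(50) rather than a native enumeration:
import enum
import sqlalchemy
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

class Status(enum.Enum):
    PENDING = "pending"
    RECEIVED = "received"
    COMPLETED = "completed"

class Base(DeclarativeBase):
    type_annotation_map = {Status: sqlalchemy.Enum(Status, length=50, native_enum=False)}

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    # resolves to the non-native Enum(Status, length=50) entry configured above
    status: Mapped[Status]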
By default, Enum datatypes that are automatically generated are not associated with the MetaData instance used by the Base, so if the metadata defines a schema it will not be automatically associated with the enum. To automatically associate the enum with the schema in the metadata or table they belong to, the Enum.inherit_schema parameter can be set:
from enum import Enum
import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
metadata = sa.MetaData(schema="my_schema")
type_annotation_map = {Enum: sa.Enum(Enum, inherit_schema=True)}
将特定的 enum.Enum
或 typing.Literal
链接到其他数据类型¶
Linking Specific enum.Enum
or typing.Literal
to other datatypes
上述示例中使用的 Enum
可以自动配置自身以适应 enum.Enum
或 typing.Literal
类型对象上存在的参数/属性。对于需要将特定种类的 enum.Enum
或 typing.Literal
链接到其他类型的用例,也可以将这些特定类型放置在类型映射中。在下面的示例中,包含非字符串类型的 Literal[]
条目链接到 JSON
数据类型:
from typing import Literal
from sqlalchemy import JSON
from sqlalchemy.orm import DeclarativeBase
my_literal = Literal[0, 1, True, False, "true", "false"]
class Base(DeclarativeBase):
type_annotation_map = {my_literal: JSON}
在上述配置中, my_literal
数据类型将解析为 JSON
实例。其他 Literal
变体将继续解析为 Enum
数据类型。
The above examples feature the use of an Enum
that is
automatically configuring itself to the arguments / attributes present on
an enum.Enum
or typing.Literal
type object. For use cases where
specific kinds of enum.Enum
or typing.Literal
should be linked to
other types, these specific types may be placed in the type map also.
In the example below, an entry for Literal[]
that contains non-string
types is linked to the JSON
datatype:
from typing import Literal
from sqlalchemy import JSON
from sqlalchemy.orm import DeclarativeBase
my_literal = Literal[0, 1, True, False, "true", "false"]
class Base(DeclarativeBase):
type_annotation_map = {my_literal: JSON}
In the above configuration, the my_literal
datatype will resolve to a
JSON
instance. Other Literal
variants will continue
to resolve to Enum
datatypes.
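A minimal sketch of the above type map in use; the SomeClass mapping and its attribute names are illustrative assumptions, not part of the original example:
from typing import Literal

from sqlalchemy import JSON
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

my_literal = Literal[0, 1, True, False, "true", "false"]
Status = Literal["pending", "received", "completed"]

class Base(DeclarativeBase):
    type_annotation_map = {my_literal: JSON}

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    # resolves to JSON via the explicit entry above
    flag: Mapped[my_literal]
    # other Literal variants continue to resolve to Enum
    status: Mapped[Status]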
mapped_column()
中的数据类功能¶
Dataclass features in mapped_column()
mapped_column()
构造与SQLAlchemy的“本地数据类”功能集成,讨论见 声明式Dataclass映射。有关 mapped_column()
支持的其他指令的当前背景,请参见该部分。
The mapped_column()
construct integrates with SQLAlchemy’s
“native dataclasses” feature, discussed at
声明式Dataclass映射. See that section for current
background on additional directives supported by mapped_column()
.
访问表和元数据¶
Accessing Table and Metadata
声明式映射类将始终包含一个名为 __table__
的属性;当使用 __tablename__
的上述配置完成时,声明式过程通过 __table__
属性提供 Table
对象:
# 访问 Table
user_table = User.__table__
上述表最终与 Mapper.local_table
属性对应,我们可以通过 runtime inspection system 看到:
from sqlalchemy import inspect
user_table = inspect(User).local_table
与声明式 registry
以及基类关联的 MetaData
集合通常是运行DDL操作(如CREATE)以及与Alembic等迁移工具一起使用所必需的。此对象可通过 registry
以及声明式基类的 .metadata
属性获取。下面,对于一个小脚本,我们可能希望针对SQLite数据库发出所有表的CREATE语句:
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
A declaratively mapped class will always include an attribute called
__table__
; when the above configuration using __tablename__
is
complete, the declarative process makes the Table
available via the __table__
attribute:
# access the Table
user_table = User.__table__
The above table is ultimately the same one that corresponds to the
Mapper.local_table
attribute, which we can see through the
runtime inspection system:
from sqlalchemy import inspect
user_table = inspect(User).local_table
The MetaData
collection associated with both the declarative
registry
as well as the base class is frequently necessary in
order to run DDL operations such as CREATE, as well as in use with migration
tools such as Alembic. This object is available via the .metadata
attribute of registry
as well as the declarative base class.
Below, for a small script we may wish to emit a CREATE for all tables against a
SQLite database:
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
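As a small additional sketch building on the User mapping above, the same Table is also reachable through the MetaData collection itself:
# the Table is present in the MetaData collection, keyed by its name
user_table = Base.metadata.tables["user"]

# tables in dependency (foreign key) order, e.g. for hand-rolled DDL scripts
for table in Base.metadata.sorted_tables:
    print(table.name)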
声明性表配置¶
Declarative Table Configuration
在使用带有 __tablename__
声明类属性的声明式表配置时,应使用 __table_args__
声明类属性提供要传递给 Table
构造函数的附加参数。
此属性适用于通常传递给 Table
构造函数的定位参数和关键字参数。
该属性可以通过两种形式之一指定。 一种是作为字典:
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = {"mysql_engine": "InnoDB"}
另一种是元组形式,每个参数都是定位参数(通常是约束):
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = (
ForeignKeyConstraint(["id"], ["remote_table.id"]),
UniqueConstraint("foo"),
)
可以通过将最后一个参数指定为字典来使用上述形式指定关键字参数:
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = (
ForeignKeyConstraint(["id"], ["remote_table.id"]),
UniqueConstraint("foo"),
{"autoload": True},
)
类还可以使用 declared_attr()
方法装饰器以动态样式指定 __table_args__
声明属性和 __tablename__
属性。 有关背景,请参见 使用 Mixins 组合映射层次结构。
When using Declarative Table configuration with the __tablename__
declarative class attribute, additional arguments to be supplied to the
Table
constructor should be provided using the
__table_args__
declarative class attribute.
This attribute accommodates both positional as well as keyword
arguments that are normally sent to the
Table
constructor.
The attribute can be specified in one of two forms. One is as a
dictionary:
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = {"mysql_engine": "InnoDB"}
The other, a tuple, where each argument is positional (usually constraints):
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = (
ForeignKeyConstraint(["id"], ["remote_table.id"]),
UniqueConstraint("foo"),
)
Keyword arguments can be specified with the above form by specifying the last argument as a dictionary:
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = (
ForeignKeyConstraint(["id"], ["remote_table.id"]),
UniqueConstraint("foo"),
{"autoload": True},
)
A class may also specify the __table_args__
declarative attribute,
as well as the __tablename__
attribute, in a dynamic style using the
declared_attr()
method decorator. See
使用 Mixins 组合映射层次结构 for background.
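A short sketch of that dynamic style, using declared_attr on a mixin; the mixin name and the mysql_engine value are illustrative assumptions:
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import declared_attr
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

class TableConfigMixin:
    @declared_attr.directive
    def __tablename__(cls) -> str:
        # derive the table name from the class name
        return cls.__name__.lower()

    @declared_attr.directive
    def __table_args__(cls):
        return {"mysql_engine": "InnoDB"}

class MyClass(TableConfigMixin, Base):
    id: Mapped[int] = mapped_column(primary_key=True)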
使用声明性表的显式架构名称¶
Explicit Schema Name with Declarative Table
类 Table
的模式名称如 指定架构名称 中所述,使用 Table.schema
参数应用于单个 Table
。在使用声明式表时,此选项与其他选项一样传递给 __table_args__
字典:
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = {"schema": "some_schema"}
也可以使用 使用元数据指定默认架构名称 中记录的 MetaData.schema
参数将模式名称全局应用于所有 Table
对象。可以单独构建 MetaData
对象,并通过直接分配给 metadata
属性将其与 DeclarativeBase
子类关联:
from sqlalchemy import MetaData
from sqlalchemy.orm import DeclarativeBase
metadata_obj = MetaData(schema="some_schema")
class Base(DeclarativeBase):
metadata = metadata_obj
class MyClass(Base):
# 默认将使用 "some_schema"
__tablename__ = "sometable"
参见
使用元数据描述数据库 中的 指定架构名称 。
The schema name for a Table
as documented at
指定架构名称 is applied to an individual Table
using the Table.schema
argument. When using Declarative
tables, this option is passed like any other to the __table_args__
dictionary:
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
class MyClass(Base):
__tablename__ = "sometable"
__table_args__ = {"schema": "some_schema"}
The schema name can also be applied to all Table
objects
globally by using the MetaData.schema
parameter documented
at 使用元数据指定默认架构名称. The MetaData
object
may be constructed separately and associated with a DeclarativeBase
subclass by assigning to the metadata
attribute directly:
from sqlalchemy import MetaData
from sqlalchemy.orm import DeclarativeBase
metadata_obj = MetaData(schema="some_schema")
class Base(DeclarativeBase):
metadata = metadata_obj
class MyClass(Base):
# will use "some_schema" by default
__tablename__ = "sometable"
参见
指定架构名称 - in the 使用元数据描述数据库 documentation.
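As a quick sanity-check sketch (assuming MyClass above is completed with its mapped columns), either approach results in the schema name being part of the generated Table and appearing in rendered DDL:
from sqlalchemy.schema import CreateTable

# "some_schema" under either configuration shown above
print(MyClass.__table__.schema)

# the schema-qualified name appears in the emitted DDL as well
print(CreateTable(MyClass.__table__))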
为声明性映射列设置加载和持久性选项¶
Setting Load and Persistence Options for Declarative Mapped Columns
mapped_column()
构造接受影响生成的 Column
映射的其他ORM特定参数,影响其加载和持久化行为。常用的选项包括:
延迟列加载 - mapped_column.deferred 布尔值默认使用 延迟列加载 建立 Column 。在下面的示例中, User.bio 列默认不会加载,而是仅在访问时加载:
class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    bio: Mapped[str] = mapped_column(Text, deferred=True)
参见
使用【列延迟】限制加载哪些列 - 延迟列加载的完整描述
活动历史 - mapped_column.active_history 确保在更改属性值时,先前的值将已加载并成为在检查属性历史时的 AttributeState.history 集合的一部分。这可能会产生额外的SQL语句:
class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    important_identifier: Mapped[str] = mapped_column(active_history=True)
有关支持参数的列表,请参见 mapped_column()
的文档字符串。
参见
为命令式表应用加载、持久性和映射选项列 - 描述使用 column_property()
和 deferred()
配置命令式表
The mapped_column()
construct accepts additional ORM-specific
arguments that affect how the generated Column
is
mapped, affecting its load and persistence-time behavior. Options
that are commonly used include:
deferred column loading - The mapped_column.deferred boolean establishes the Column using deferred column loading by default. In the example below, the User.bio column will not be loaded by default, but only when accessed:
class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    bio: Mapped[str] = mapped_column(Text, deferred=True)
参见
使用【列延迟】限制加载哪些列 - full description of deferred column loading
active history - The mapped_column.active_history ensures that upon change of value for the attribute, the previous value will have been loaded and made part of the AttributeState.history collection when inspecting the history of the attribute. This may incur additional SQL statements:
class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    important_identifier: Mapped[str] = mapped_column(active_history=True)
See the docstring for mapped_column()
for a list of supported
parameters.
参见
为命令式表应用加载、持久性和映射选项列 - describes using
column_property()
and deferred()
for use with
Imperative Table configuration
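As a usage sketch for the deferred example above, a column that is deferred by default can still be opted back in per query with the undefer() loader option:
from sqlalchemy import select
from sqlalchemy.orm import undefer

# load User.bio up front for this query only
stmt = select(User).options(undefer(User.bio))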
显式命名声明性映射列¶
Naming Declarative Mapped Columns Explicitly
到目前为止的所有示例中, mapped_column() 构造都链接到ORM映射属性,其中提供给 mapped_column() 的Python属性名称也就是我们在CREATE TABLE语句以及查询中看到的列名称。列在SQL中的名称可以通过将字符串名称 mapped_column.__name 作为第一个位置参数传递来指定。在下面的示例中, User 类映射时为列本身指定了备用名称:
class User(Base):
__tablename__ = "user"
id: Mapped[int] = mapped_column("user_id", primary_key=True)
name: Mapped[str] = mapped_column("user_name")
上述示例中, User.id
解析为名为 user_id
的列, User.name
解析为名为 user_name
的列。我们可以使用Python属性名称编写 select()
语句,并将看到生成的SQL名称:
>>> from sqlalchemy import select
>>> print(select(User.id, User.name).where(User.name == "x"))
SELECT "user".user_id, "user".user_name
FROM "user"
WHERE "user".user_name = :user_name_1
参见
用于映射表列的备用属性名称 - 适用于命令式表
All of the examples thus far feature the mapped_column()
construct
linked to an ORM mapped attribute, where the Python attribute name given
to the mapped_column()
is also that of the column as we see in
CREATE TABLE statements as well as queries. The name for a column as
expressed in SQL may be indicated by passing the string positional argument
mapped_column.__name
as the first positional argument.
In the example below, the User
class is mapped with alternate names
given to the columns themselves:
class User(Base):
__tablename__ = "user"
id: Mapped[int] = mapped_column("user_id", primary_key=True)
name: Mapped[str] = mapped_column("user_name")
Where above User.id
resolves to a column named user_id
and User.name
resolves to a column named user_name
. We
may write a select()
statement using our Python attribute names
and will see the SQL names generated:
>>> from sqlalchemy import select
>>> print(select(User.id, User.name).where(User.name == "x"))
SELECT "user".user_id, "user".user_name
FROM "user"
WHERE "user".user_name = :user_name_1
参见
用于映射表列的备用属性名称 - applies to Imperative Table
将其他列附加到现有的声明性映射类¶
Appending additional columns to an existing Declarative mapped class
声明式表配置允许在 Table
元数据已经生成后,向现有映射添加新的 Column
对象。
对于使用声明式基类声明的声明式类,底层元类 DeclarativeMeta
包含一个 __setattr__()
方法,该方法将拦截额外的 mapped_column()
或核心 Column
对象,并将它们添加到 Table (使用 Table.append_column() )以及现有的 Mapper (使用 Mapper.add_property() ):
MyClass.some_new_column = mapped_column(String)
使用核心 Column
:
MyClass.some_new_column = Column(String)
支持所有参数,包括备用名称,例如 MyClass.some_new_column = mapped_column("some_name", String)
。但是,必须将SQL类型显式传递给 mapped_column()
或 Column
对象,如上述示例中传递的 String
类型。 Mapped
注释类型无法参与此操作。
在使用单表继承的特定情况下,也可以将其他 Column
对象添加到映射中,其中映射子类上存在其他列,但它们没有自己的 Table
。这在 单表继承 部分中进行了说明。
在声明后向映射类添加关系 - 类似的 relationship()
示例
备注
将映射属性分配给已映射的类仅在使用“声明式基类”时才会正确运行,这意味着 DeclarativeBase
的用户定义子类或 declarative_base()
或 registry.generate_base()
返回的动态生成类。此“基类”包含一个实现特殊 __setattr__()
方法的Python元类,用于拦截这些操作。
如果类是使用 registry.mapped()
之类的装饰器或 registry.map_imperatively()
之类的命令式函数进行映射的,则在运行时将类映射的属性分配给映射类将 不会 起作用。
A declarative table configuration allows the addition of new
Column
objects to an existing mapping after the Table
metadata has already been generated.
For a declarative class that is declared using a declarative base class,
the underlying metaclass DeclarativeMeta
includes a __setattr__()
method that will intercept additional mapped_column()
or Core
Column
objects and
add them to both the Table
using Table.append_column()
as well as to the existing Mapper
using Mapper.add_property()
:
MyClass.some_new_column = mapped_column(String)
Using core Column
:
MyClass.some_new_column = Column(String)
All arguments are supported including an alternate name, such as
MyClass.some_new_column = mapped_column("some_name", String)
. However,
the SQL type must be passed to the mapped_column()
or
Column
object explicitly, as in the above examples where
the String
type is passed. There’s no capability for
the Mapped
annotation type to take part in the operation.
Additional Column
objects may also be added to a mapping
in the specific circumstance of using single table inheritance, where
additional columns are present on mapped subclasses that have
no Table
of their own. This is illustrated in the section
单表继承.
参见
在声明后向映射类添加关系 - similar examples for relationship()
备注
Assignment of mapped
properties to an already mapped class will only
function correctly if the “declarative base” class is used, meaning
the user-defined subclass of DeclarativeBase
or the
dynamically generated class returned by declarative_base()
or registry.generate_base()
. This “base” class includes
a Python metaclass which implements a special __setattr__()
method
that intercepts these operations.
Runtime assignment of class-mapped attributes to a mapped class will not work
if the class is mapped using decorators like registry.mapped()
or imperative functions like registry.map_imperatively()
.
使用命令式表的声明式(又名混合声明式)¶
Declarative with Imperative Table (a.k.a. Hybrid Declarative)
声明式映射还可以使用预先存在的 Table
对象,或者其他任意的 FromClause
构造(例如 Join
或 Subquery
),这些构造是单独构建的。
这被称为“混合声明式(hybrid declarative)”映射,因为类使用声明式样式进行所有涉及映射器配置的操作,但映射的 Table
对象是单独生成的,并直接传递给声明式过程:
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
# 直接构建Table。 Base.metadata 集合通常是MetaData的一个不错选择,但可以使用任何MetaData集合。
user_table = Table(
"user",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String),
Column("fullname", String),
Column("nickname", String),
)
# 使用此表构建User类。
class User(Base):
__table__ = user_table
以上,Table
对象是使用 使用元数据描述数据库 中描述的方法构建的。然后可以直接应用于声明式映射的类。此形式不使用 __tablename__
和 __table_args__
声明类属性。上面的配置通常更易读为内联定义:
class User(Base):
__table__ = Table(
"user",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String),
Column("fullname", String),
Column("nickname", String),
)
上述样式的一个自然结果是 __table__
属性本身在类定义块中定义。因此,可以在后续属性中立即引用,例如下面的示例,该示例说明在多态映射器配置中引用 type
列:
class Person(Base):
__table__ = Table(
"person",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String(50)),
Column("type", String(50)),
)
__mapper_args__ = {
"polymorphic_on": __table__.c.type,
"polymorphic_identity": "person",
}
当要映射非 Table
构造(例如 Join
或 Subquery
对象)时,也使用“命令式表(imperative table)”形式。下面是一个示例:
from sqlalchemy import func, select
subq = (
select(
func.count(orders.c.id).label("order_count"),
func.max(orders.c.price).label("highest_order"),
orders.c.customer_id,
)
.group_by(orders.c.customer_id)
.subquery()
)
customer_select = (
select(customers, subq)
.join_from(customers, subq, customers.c.id == subq.c.customer_id)
.subquery()
)
class Customer(Base):
__table__ = customer_select
有关映射到非 Table
构造的背景,请参见 将一个类映射到多个表 和 将一个类映射到任意子查询 部分。
当类本身使用替代形式的属性声明(例如Python数据类)时,“命令式表(imperative table)”形式特别有用。有关详细信息,请参见 将 ORM 映射应用于现有数据类(旧式数据类使用) 部分。
Declarative mappings may also be provided with a pre-existing
Table
object, or otherwise a Table
or other
arbitrary FromClause
construct (such as a Join
or Subquery
) that is constructed separately.
This is referred to as a “hybrid declarative”
mapping, as the class is mapped using the declarative style for everything
involving the mapper configuration, however the mapped Table
object is produced separately and passed to the declarative process
directly:
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
# construct a Table directly. The Base.metadata collection is
# usually a good choice for MetaData but any MetaData
# collection may be used.
user_table = Table(
"user",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String),
Column("fullname", String),
Column("nickname", String),
)
# construct the User class using this table.
class User(Base):
__table__ = user_table
Above, a Table
object is constructed using the approach
described at 使用元数据描述数据库. It can then be applied directly
to a class that is declaratively mapped. The __tablename__
and
__table_args__
declarative class attributes are not used in this form.
The above configuration is often more readable as an inline definition:
class User(Base):
__table__ = Table(
"user",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String),
Column("fullname", String),
Column("nickname", String),
)
A natural effect of the above style is that the __table__
attribute is
itself defined within the class definition block. As such it may be
immediately referenced within subsequent attributes, such as the example
below which illustrates referring to the type
column in a polymorphic
mapper configuration:
class Person(Base):
__table__ = Table(
"person",
Base.metadata,
Column("id", Integer, primary_key=True),
Column("name", String(50)),
Column("type", String(50)),
)
__mapper_args__ = {
"polymorphic_on": __table__.c.type,
"polymorphic_identity": "person",
}
The “imperative table” form is also used when a non-Table
construct, such as a Join
or Subquery
object,
is to be mapped. An example below:
from sqlalchemy import func, select
subq = (
select(
func.count(orders.c.id).label("order_count"),
func.max(orders.c.price).label("highest_order"),
orders.c.customer_id,
)
.group_by(orders.c.customer_id)
.subquery()
)
customer_select = (
select(customers, subq)
.join_from(customers, subq, customers.c.id == subq.c.customer_id)
.subquery()
)
class Customer(Base):
__table__ = customer_select
For background on mapping to non-Table
constructs see
the sections 将一个类映射到多个表 and 将一个类映射到任意子查询.
The “imperative table” form is of particular use when the class itself is using an alternative form of attribute declaration, such as Python dataclasses. See the section 将 ORM 映射应用于现有数据类(旧式数据类使用) for detail.
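As a brief usage sketch for the Customer mapping above, the labeled columns of the mapped subquery become ordinary mapped attributes that can be used in queries:
from sqlalchemy import select

# order_count and highest_order come from the labels in the subquery
stmt = select(Customer).where(Customer.order_count > 5)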
用于映射表列的备用属性名称¶
Alternate Attribute Names for Mapping Table Columns
显式命名声明性映射列 部分说明了如何使用 mapped_column()
为生成的 Column
对象提供单独于其映射的属性名称的特定名称。
使用命令式表配置时,我们已经有 Column
对象。要将它们映射到备用名称,我们可以将 Column
直接分配给所需的属性:
user_table = Table(
"user",
Base.metadata,
Column("user_id", Integer, primary_key=True),
Column("user_name", String),
)
class User(Base):
__table__ = user_table
id = user_table.c.user_id
name = user_table.c.user_name
上述 User
映射将通过 User.id
和 User.name
属性引用 "user_id"
和 "user_name"
列,与 显式命名声明性映射列 中演示的方式相同。
上述映射的一个警告是,在使用 PEP 484 类型工具时,直接内联链接到 Column 的属性无法获得正确的类型标注。一种解决此问题的策略是将 Column 对象包装在 column_property() 函数中;虽然 Mapper 已经自动为其内部使用生成了此属性对象,但通过在类声明中命名它,类型工具将能够将属性与 Mapped 注释匹配:
from sqlalchemy.orm import column_property
from sqlalchemy.orm import Mapped
class User(Base):
__table__ = user_table
id: Mapped[int] = column_property(user_table.c.user_id)
name: Mapped[str] = column_property(user_table.c.user_name)
参见
显式命名声明性映射列 - 应用到声明式表(Declarative Table)
The section 显式命名声明性映射列 illustrated how to
use mapped_column()
to provide a specific name for the generated
Column
object separate from the attribute name under which
it is mapped.
When using Imperative Table configuration, we already have
Column
objects present. To map these to alternate names
we may assign the Column
to the desired attributes
directly:
user_table = Table(
"user",
Base.metadata,
Column("user_id", Integer, primary_key=True),
Column("user_name", String),
)
class User(Base):
__table__ = user_table
id = user_table.c.user_id
name = user_table.c.user_name
The User
mapping above will refer to the "user_id"
and "user_name"
columns via the User.id
and User.name
attributes, in the same
way as demonstrated at 显式命名声明性映射列.
One caveat to the above mapping is that the direct inline link to
Column
will not be typed correctly when using
PEP 484 typing tools. A strategy to resolve this is to apply the
Column
objects within the column_property()
function; while the Mapper
already generates this property
object for its internal use automatically, by naming it in the class
declaration, typing tools will be able to match the attribute to the
Mapped
annotation:
from sqlalchemy.orm import column_property
from sqlalchemy.orm import Mapped
class User(Base):
__table__ = user_table
id: Mapped[int] = column_property(user_table.c.user_id)
name: Mapped[str] = column_property(user_table.c.user_name)
参见
显式命名声明性映射列 - applies to Declarative Table
为命令式表应用加载、持久性和映射选项列¶
Applying Load, Persistence and Mapping Options for Imperative Table Columns
为声明性映射列设置加载和持久性选项 部分回顾了在使用 mapped_column()
构造和声明式表配置时如何设置加载和持久化选项。使用命令式表配置时,我们已经有现有的 Column
对象被映射。为了将这些 Column
对象与特定于ORM映射的其他参数一起映射,我们可以使用 column_property()
和 deferred()
构造来关联列的其他参数。选项包括:
延迟列加载 - deferred() 函数是调用 column_property() 并将 column_property.deferred 参数设置为 True 的简写;此构造默认使用 延迟列加载 建立 Column 。在下面的示例中, User.bio 列默认不会加载,而是仅在访问时加载:
from sqlalchemy.orm import deferred

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("bio", Text),
)

class User(Base):
    __table__ = user_table

    bio = deferred(user_table.c.bio)
参见
使用【列延迟】限制加载哪些列 - 延迟列加载的完整描述
活动历史 - column_property.active_history 确保在更改属性值时,先前的值将已加载并成为在检查属性历史时的 AttributeState.history 集合的一部分。这可能会产生额外的SQL语句:
from sqlalchemy.orm import column_property

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("important_identifier", String),
)

class User(Base):
    __table__ = user_table

    important_identifier = column_property(
        user_table.c.important_identifier, active_history=True
    )
参见
column_property()
构造对于将类映射到如连接和选择等替代FROM子句的情况也很重要。有关这些情况的更多背景信息,请参见:
对于使用 mapped_column()
的声明式表配置,大多数选项可以直接使用;有关示例,请参见 为声明性映射列设置加载和持久性选项 部分。
The section 为声明性映射列设置加载和持久性选项 reviewed how to set load
and persistence options when using the mapped_column()
construct
with Declarative Table configuration. When using Imperative Table configuration,
we already have existing Column
objects that are mapped.
In order to map these Column
objects along with additional
parameters that are specific to the ORM mapping, we may use the
column_property()
and deferred()
constructs in order to
associate additional parameters with the column. Options include:
deferred column loading - The deferred() function is shorthand for invoking column_property() with the column_property.deferred parameter set to True; this construct establishes the Column using deferred column loading by default. In the example below, the User.bio column will not be loaded by default, but only when accessed:
from sqlalchemy.orm import deferred

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("bio", Text),
)

class User(Base):
    __table__ = user_table

    bio = deferred(user_table.c.bio)
参见
使用【列延迟】限制加载哪些列 - full description of deferred column loading
active history - The column_property.active_history ensures that upon change of value for the attribute, the previous value will have been loaded and made part of the AttributeState.history collection when inspecting the history of the attribute. This may incur additional SQL statements:
from sqlalchemy.orm import column_property

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("important_identifier", String),
)

class User(Base):
    __table__ = user_table

    important_identifier = column_property(
        user_table.c.important_identifier, active_history=True
    )
参见
The column_property()
construct is also important for cases
where classes are mapped to alternative FROM clauses such as joins and
selects. More background on these cases is at:
For Declarative Table configuration with mapped_column()
,
most options are available directly; see the section
为声明性映射列设置加载和持久性选项 for examples.
使用反射表进行声明式映射¶
Mapping Declaratively with Reflected Tables
有几种模式可以提供针对从数据库中内省的一系列 Table
对象生成映射类的方法,使用在 反射数据库对象 中描述的反射过程。
将类映射到从数据库反射的表的简单方法是使用声明式混合映射,将 Table.autoload_with
参数传递给 Table
的构造函数:
from sqlalchemy import create_engine
from sqlalchemy import Table
from sqlalchemy.orm import DeclarativeBase
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
class Base(DeclarativeBase):
pass
class MyClass(Base):
__table__ = Table(
"mytable",
Base.metadata,
autoload_with=engine,
)
上述模式的一个变体是使用 MetaData.reflect()
方法一次反射一组 Table
对象,然后从 MetaData
中引用它们:
from sqlalchemy import create_engine
from sqlalchemy import Table
from sqlalchemy.orm import DeclarativeBase
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
class Base(DeclarativeBase):
pass
Base.metadata.reflect(engine)
class MyClass(Base):
__table__ = Base.metadata.tables["mytable"]
使用 __table__
方法的一个警告是,映射类不能在表被反射之前声明,这需要在声明应用程序类时存在数据库连接源;通常类在应用程序模块被导入时声明,但在应用程序开始运行代码以便消耗配置信息并创建引擎之前,数据库连接不可用。当前有两种方法可以解决这个问题,在接下来的两个部分中描述。
There are several patterns available which provide for producing mapped
classes against a series of Table
objects that were
introspected from the database, using the reflection process described at
反射数据库对象.
A simple way to map a class to a table reflected from the database is to
use a declarative hybrid mapping, passing the
Table.autoload_with
parameter to the constructor for
Table
:
from sqlalchemy import create_engine
from sqlalchemy import Table
from sqlalchemy.orm import DeclarativeBase
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
class Base(DeclarativeBase):
pass
class MyClass(Base):
__table__ = Table(
"mytable",
Base.metadata,
autoload_with=engine,
)
A variant on the above pattern that scales for many tables is to use the
MetaData.reflect()
method to reflect a full set of Table
objects at once, then refer to them from the MetaData
:
from sqlalchemy import create_engine
from sqlalchemy import Table
from sqlalchemy.orm import DeclarativeBase
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
class Base(DeclarativeBase):
pass
Base.metadata.reflect(engine)
class MyClass(Base):
__table__ = Base.metadata.tables["mytable"]
One caveat to the approach of using __table__
is that the mapped classes cannot
be declared until the tables have been reflected, which requires the database
connectivity source to be present while the application classes are being
declared; it’s typical that classes are declared as the modules of an
application are being imported, but database connectivity isn’t available
until the application starts running code so that it can consume configuration
information and create an engine. There are currently two approaches
to working around this, described in the next two sections.
使用 DeferredReflection¶
Using DeferredReflection
为了适应在表元数据反射完成之后才声明映射类的用例,可以使用一个名为 DeferredReflection 的简单混入扩展,它会将声明式映射过程推迟到调用一个特殊的类级别方法 DeferredReflection.prepare() 之后;该方法将对目标数据库执行反射过程,并将结果与声明式表映射过程(即使用 __tablename__ 属性的类)集成:
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
class Reflected(DeferredReflection):
__abstract__ = True
class Foo(Reflected, Base):
__tablename__ = "foo"
bars = relationship("Bar")
class Bar(Reflected, Base):
__tablename__ = "bar"
foo_id = mapped_column(Integer, ForeignKey("foo.id"))
在上面,我们创建了一个混入类 Reflected
,它将作为我们声明层次结构中类的基础,当调用 Reflected.prepare
方法时,这些类应该成为映射的。上述映射在给定 Engine
之前是不完整的:
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
Reflected.prepare(engine)
Reflected 类的目的是定义应该进行反射映射的类的范围。插件将在调用 .prepare() 的目标类的子类树中搜索,并反射所有由声明类命名的表;目标数据库中既不属于映射、也未通过外键约束与目标表关联的表将不会被反射。
To accommodate the use case of declaring mapped classes where reflection of
table metadata can occur afterwards, a simple extension called the
DeferredReflection
mixin is available, which alters the declarative
mapping process to be delayed until a special class-level
DeferredReflection.prepare()
method is called, which will perform
the reflection process against a target database, and will integrate the
results with the declarative table mapping process, that is, classes which
use the __tablename__
attribute:
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
class Reflected(DeferredReflection):
__abstract__ = True
class Foo(Reflected, Base):
__tablename__ = "foo"
bars = relationship("Bar")
class Bar(Reflected, Base):
__tablename__ = "bar"
foo_id = mapped_column(Integer, ForeignKey("foo.id"))
Above, we create a mixin class Reflected
that will serve as a base
for classes in our declarative hierarchy that should become mapped when
the Reflected.prepare
method is called. The above mapping is not
complete until we do so, given an Engine
:
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
Reflected.prepare(engine)
The purpose of the Reflected
class is to define the scope at which
classes should be reflectively mapped. The plugin will search among the
subclass tree of the target against which .prepare()
is called and reflect
all tables which are named by declared classes; tables in the target database
that are not part of mappings and are not related to the target tables
via foreign key constraint will not be reflected.
使用 Automap¶
Using Automap
对映射现有数据库使用表反射的更自动化解决方案是使用 自动映射 扩展。此扩展将从数据库模式生成完整的映射类,包括基于观察到的外键约束的类之间的关系。虽然它包含用于自定义的钩子,例如允许自定义类命名和关系命名方案的钩子,但automap面向的是一种快速的零配置工作方式。如果应用程序希望拥有使用表反射的完全显式模型,那么 DeferredReflection 类可能因其自动化程度较低而更为合适。
参见
A more automated solution to mapping against an existing database where table reflection is to be used is to use the 自动映射 extension. This extension will generate entire mapped classes from a database schema, including relationships between classes based on observed foreign key constraints. While it includes hooks for customization, such as hooks that allow custom class naming and relationship naming schemes, automap is oriented towards an expedient zero-configuration style of working. If an application wishes to have a fully explicit model that makes use of table reflection, the DeferredReflection class may be preferable for its less automated approach.
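A rough sketch of the zero-configuration automap style described above; the database URL and the user table name are placeholders:
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

Base = automap_base()
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")

# reflect the schema and generate mapped classes, including relationships
# derived from foreign key constraints
Base.prepare(autoload_with=engine)

# generated classes are available by table name
User = Base.classes.user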
参见
从反射表自动执行列命名方案¶
Automating Column Naming Schemes from Reflected Tables
使用任何前述的反射技术时,我们可以选择更改列的映射命名方案。Column
对象包含一个参数 Column.key
,这是一个字符串名称,用于确定此 Column
在 Table.c
集合中以何种名称存在,与列的SQL名称无关。如果没有通过其他方式提供,例如在 用于映射表列的备用属性名称 中说明的那样,此键也将由 Mapper
用作映射 Column
的属性名称。
使用表反射时,我们可以拦截将用于 Column
的参数,并使用 DDLEvents.column_reflect()
事件应用所需的任何更改,包括 .key
属性以及数据类型。
事件钩子最容易与使用中的 MetaData
对象关联,如下所示:
from sqlalchemy import event
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
@event.listens_for(Base.metadata, "column_reflect")
def column_reflect(inspector, table, column_info):
# 设置 column.key = "attr_<lower_case_name>"
column_info["key"] = "attr_%s" % column_info["name"].lower()
使用上述事件,反射 Column
对象时将使用我们的事件拦截,并添加一个新的 .key
元素,如下所示:
class MyClass(Base):
__table__ = Table("some_table", Base.metadata, autoload_with=some_engine)
该方法还适用于 DeferredReflection
基类以及 自动映射 扩展。对于automap的详细信息,请参见 拦截列用法 部分。
When using any of the previous reflection techniques, we have the option
to change the naming scheme by which columns are mapped. The
Column
object includes a parameter
Column.key
which is a string name that determines
under what name
this Column
will be present in the Table.c
collection, independently of the SQL name of the column. This key is also
used by Mapper
as the attribute name under which the
Column
will be mapped, if not supplied through other
means such as that illustrated at 用于映射表列的备用属性名称.
When working with table reflection, we can intercept the parameters that
will be used for Column
as they are received using
the DDLEvents.column_reflect()
event and apply whatever
changes we need, including the .key
attribute but also things like
datatypes.
The event hook is most easily
associated with the MetaData
object that’s in use
as illustrated below:
from sqlalchemy import event
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
@event.listens_for(Base.metadata, "column_reflect")
def column_reflect(inspector, table, column_info):
# set column.key = "attr_<lower_case_name>"
column_info["key"] = "attr_%s" % column_info["name"].lower()
With the above event, the reflection of Column
objects will be intercepted
with our event that adds a new “.key” element, such as in a mapping as below:
class MyClass(Base):
__table__ = Table("some_table", Base.metadata, autoload_with=some_engine)
The approach also works with both the DeferredReflection
base class
as well as with the 自动映射 extension. For automap
specifically, see the section 拦截列用法 for
background.
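As a hypothetical follow-up sketch, assuming some_table contains columns named id and data, the remapped keys become the ORM attribute names used in queries:
from sqlalchemy import select

# columns "id" and "data" are mapped as .attr_id / .attr_data by the event above
stmt = select(MyClass).where(MyClass.attr_data == "some value")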
映射到一组显式主键列¶
Mapping to an Explicit Set of Primary Key Columns
Mapper
构造在成功映射表时,总是需要至少一个列被标识为该可选择的“主键”。这是为了在加载或持久化ORM对象时,可以使用适当的 identity key 将其放置在 identity map 中。
在被映射的反射表不包含主键约束的情况下,以及在一般的 映射任意可选择的 场景中可能不存在主键列的情况下,可以使用 Mapper.primary_key 参数,将任意一组列配置为就ORM映射而言的表"主键"。
给定以下针对现有 Table 对象的命令式表映射示例,其中表没有声明任何主键(如在反射场景中可能发生的那样),我们可以按如下方式映射这样的表:
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import UniqueConstraint
from sqlalchemy.orm import DeclarativeBase
metadata = MetaData()
group_users = Table(
"group_users",
metadata,
Column("user_id", String(40), nullable=False),
Column("group_id", String(40), nullable=False),
UniqueConstraint("user_id", "group_id"),
)
class Base(DeclarativeBase):
pass
class GroupUsers(Base):
__table__ = group_users
__mapper_args__ = {"primary_key": [group_users.c.user_id, group_users.c.group_id]}
在上面, group_users
表是某种类型的关联表,具有字符串列 user_id
和 group_id
,但没有设置主键;相反,只有一个 UniqueConstraint
确立了这两列代表唯一键。Mapper
不会自动检查唯一约束是否为主键;相反,我们使用 Mapper.primary_key
参数,传递 [group_users.c.user_id, group_users.c.group_id]
的集合,表明这两列应被用于构建 GroupUsers
类实例的标识键。
The Mapper
construct in order to successfully map a table always
requires that at least one column be identified as the “primary key” for
that selectable. This is so that when an ORM object is loaded or persisted,
it can be placed in the identity map with an appropriate
identity key.
In those cases where a reflected table to be mapped does not include
a primary key constraint, as well as in the general case for
mapping against arbitrary selectables
where primary key columns might not be present, the
Mapper.primary_key
parameter is provided so that any set of
columns may be configured as the “primary key” for the table, as far as
ORM mapping is concerned.
Given the following example of an Imperative Table
mapping against an existing Table
object where the table does not
have any declared primary key (as may occur in reflection scenarios), we may
map such a table as in the following example:
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import UniqueConstraint
from sqlalchemy.orm import DeclarativeBase
metadata = MetaData()
group_users = Table(
"group_users",
metadata,
Column("user_id", String(40), nullable=False),
Column("group_id", String(40), nullable=False),
UniqueConstraint("user_id", "group_id"),
)
class Base(DeclarativeBase):
pass
class GroupUsers(Base):
__table__ = group_users
__mapper_args__ = {"primary_key": [group_users.c.user_id, group_users.c.group_id]}
Above, the group_users
table is an association table of some kind
with string columns user_id
and group_id
, but no primary key is set up;
instead, there is only a UniqueConstraint
establishing that the
two columns represent a unique key. The Mapper
does not automatically
inspect unique constraints for primary keys; instead, we make use of the
Mapper.primary_key
parameter, passing a collection of
[group_users.c.user_id, group_users.c.group_id]
, indicating that these two
columns should be used in order to construct the identity key for instances
of the GroupUsers
class.
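A short usage sketch for the mapping above; because the two configured columns make up the identity key, Session.get() takes them as a tuple (the engine URL and the key values are placeholders):
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")

with Session(engine) as session:
    # composite "primary key" values are passed as a tuple, in column order
    row = session.get(GroupUsers, ("some_user_id", "some_group_id"))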
映射表列的子集¶
Mapping a Subset of Table Columns
有时表反射可能会提供一个包含许多不重要列的 Table
,这些列可以安全地忽略。对于具有许多不需要在应用程序中引用的列的表,Mapper.include_properties
或 Mapper.exclude_properties
参数可以指示要映射的列子集,目标 Table
中的其他列将不会被ORM以任何方式考虑。示例:
class User(Base):
__table__ = user_table
__mapper_args__ = {"include_properties": ["user_id", "user_name"]}
在上述示例中, User
类将映射到 user_table
表,仅包括 user_id
和 user_name
列 - 其余的不被引用。
类似地:
class Address(Base):
__table__ = address_table
__mapper_args__ = {"exclude_properties": ["street", "city", "state", "zip"]}
将 Address
类映射到 address_table
表,包括所有存在的列,除了 street
、 city
、 state
和 zip
。
如两个示例中所示,列可以通过字符串名称引用,也可以直接引用 Column
对象。直接引用对象可能有助于明确性以及在映射到可能具有重复名称的多表构造时解决歧义:
class User(Base):
__table__ = user_table
__mapper_args__ = {
"include_properties": [user_table.c.user_id, user_table.c.user_name]
}
当列不包含在映射中时,这些列不会在执行 select()
或遗留 Query
对象时发出的SELECT语句中引用,也不会在映射类上有任何表示该列的映射属性;分配该名称的属性除了正常的Python属性分配外不会有任何效果。
然而,需要注意的是,即使这些列被排除在ORM映射之外,对于包含默认值的那些 Column 对象, 架构级别的列默认值仍将有效 。
“架构级别的列默认值”是指在 列 INSERT/UPDATE 默认值 中描述的默认值,包括由 Column.default
、Column.onupdate
、Column.server_default
和 Column.server_onupdate
参数配置的默认值。这些构造继续具有正常效果,因为在 Column.default
和 Column.onupdate
的情况下,Column
对象仍然存在于底层 Table
上,从而允许在ORM发出INSERT或UPDATE时进行默认函数,并且在 Column.server_default
和 Column.server_onupdate
的情况下,关系数据库本身会以服务器端行为发出这些默认值。
Sometimes table reflection may provide a Table
with many
columns that are not important for our needs and may be safely ignored.
For such a table that has lots of columns that don’t need to be referenced
in the application, the Mapper.include_properties
or Mapper.exclude_properties
parameters can indicate
a subset of columns to be mapped, where other columns from the
target Table
will not be considered by the ORM in any
way. Example:
class User(Base):
__table__ = user_table
__mapper_args__ = {"include_properties": ["user_id", "user_name"]}
In the above example, the User
class will map to the user_table
table, only
including the user_id
and user_name
columns - the rest are not referenced.
Similarly:
class Address(Base):
__table__ = address_table
__mapper_args__ = {"exclude_properties": ["street", "city", "state", "zip"]}
will map the Address
class to the address_table
table, including
all columns present except street
, city
, state
, and zip
.
As indicated in the two examples, columns may be referenced either
by string name or by referring to the Column
object
directly. Referring to the object directly may be useful for explicitness as
well as to resolve ambiguities when
mapping to multi-table constructs that might have repeated names:
class User(Base):
__table__ = user_table
__mapper_args__ = {
"include_properties": [user_table.c.user_id, user_table.c.user_name]
}
When columns are not included in a mapping, these columns will not be
referenced in any SELECT statements emitted when executing select()
or legacy Query
objects, nor will there be any mapped attribute
on the mapped class which represents the column; assigning an attribute of that
name will have no effect beyond that of a normal Python attribute assignment.
However, it is important to note that schema level column defaults WILL
still be in effect for those Column
objects that include them,
even though they may be excluded from the ORM mapping.
“Schema level column defaults” refers to the defaults described at
列 INSERT/UPDATE 默认值 including those configured by the
Column.default
, Column.onupdate
,
Column.server_default
and
Column.server_onupdate
parameters. These constructs
continue to have normal effects because in the case of
Column.default
and Column.onupdate
, the
Column
object is still present on the underlying
Table
, thus allowing the default functions to take place when
the ORM emits an INSERT or UPDATE, and in the case of
Column.server_default
and
Column.server_onupdate
, the relational database itself
emits these defaults as a server side behavior.
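A hedged sketch of that last point; the created_at column and its server default are hypothetical additions, not part of the examples above:
from sqlalchemy import Column, DateTime, Integer, String, Table, func
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

user_table = Table(
    "user",
    Base.metadata,
    Column("user_id", Integer, primary_key=True),
    Column("user_name", String(50)),
    Column("created_at", DateTime, server_default=func.now()),
)

class User(Base):
    __table__ = user_table
    __mapper_args__ = {"exclude_properties": ["created_at"]}

# User has no .created_at attribute, but an ORM INSERT simply omits the
# column, so the database still applies its server-side default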