configure
configure(connection: Optional[Connection] = None, url: Optional[str] = None, dialect_name: Optional[str] = None, dialect_opts: Optional[dict] = None, transactional_ddl: Optional[bool] = None, transaction_per_migration: bool = False, output_buffer: Optional[TextIO] = None, starting_rev: Optional[str] = None, tag: Optional[str] = None, template_args: Optional[dict] = None, render_as_batch: bool = False, target_metadata: Optional[MetaData] = None, include_name: Optional[Callable] = None, include_object: Optional[Callable] = None, include_schemas: bool = False, process_revision_directives: Optional[Callable] = None, compare_type: bool = False, compare_server_default: bool = False, render_item: Optional[Callable] = None, literal_binds: bool = False, upgrade_token: str = 'upgrades', downgrade_token: str = 'downgrades', alembic_module_prefix: str = 'op.', sqlalchemy_module_prefix: str = 'sa.', user_module_prefix: Optional[str] = None, on_version_apply: Optional[Callable] = None, **kw) → None
Configure a MigrationContext within this EnvironmentContext which will provide database connectivity and other configuration to a series of migration scripts.
Many methods on EnvironmentContext require that this method has been called in order to function, as they ultimately need to have database access or at least access to the dialect in use. Those which do are documented as such.
The important thing needed by configure() is a means to determine what kind of database dialect is in use. An actual connection to that database is needed only if the MigrationContext is to be used in “online” mode.
If the is_offline_mode() function returns True, then no connection is needed here. Otherwise, the connection parameter should be present as an instance of sqlalchemy.engine.Connection.
This function is typically called from the env.py script within a migration environment. It can be called multiple times for an invocation. The most recent Connection for which it was called is the one that will be operated upon by the next call to run_migrations().
General parameters:
- connection – a Connection to use for SQL execution in “online” mode. When present, it is also used to determine the type of dialect in use.
- url – a string database URL, or a sqlalchemy.engine.url.URL object. The type of dialect to be used will be derived from this if connection is not passed.
- dialect_name – string name of a dialect, such as “postgresql”, “mssql”, etc. The type of dialect to be used will be derived from this if connection and url are not passed.
- dialect_opts – dictionary of options to be passed to the dialect constructor.
New in version 1.0.12.
- transactional_ddl – Force the usage of “transactional” DDL on or off; this otherwise defaults to whether or not the dialect in use supports it.
- transaction_per_migration – if True, run each migration script within its own transaction, rather than running the full series of migrations within a single transaction.
- output_buffer – a file-like object that will be used for textual output when the --sql option is used to generate SQL scripts. Defaults to sys.stdout if not passed here and also not present on the Config object. The value here overrides that of the Config object.
- output_encoding – when using --sql to generate SQL scripts, apply this encoding to the string output.
- literal_binds – when using --sql to generate SQL scripts, pass through the literal_binds flag to the compiler so that any literal values that would ordinarily be bound parameters are converted to plain strings.
  Warning: Dialects can typically only handle simple datatypes like strings and numbers for auto-literal generation. Datatypes like dates, intervals, and others may still require manual formatting, typically using Operations.inline_literal().
  Note: the literal_binds flag is ignored on SQLAlchemy versions prior to 0.8, where this feature is not supported.
  See also: Operations.inline_literal()
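The flag corresponds to the SQLAlchemy compiler flag of the same name. As a rough illustration of its effect outside of Alembic (a sketch assuming SQLAlchemy 1.4 or later):

```python
from sqlalchemy import Integer, column, select

x = column("x", Integer)
stmt = select(x).where(x == 5)

# With literal_binds, the bound parameter is rendered inline as a
# plain literal, which is what offline --sql scripts require.
print(stmt.compile(compile_kwargs={"literal_binds": True}))
# renders roughly: SELECT x WHERE x = 5
```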
- starting_rev – Override the “starting revision” argument when using --sql mode.
- tag – a string tag for usage by custom env.py scripts. Set via the --tag option; can be overridden here.
- template_args – dictionary of template arguments which will be added to the template argument environment when running the “revision” command. Note that the script environment is only run within the “revision” command if the --autogenerate option is used, or if the option revision_environment=true is present in the alembic.ini file.
- version_table – The name of the Alembic version table. The default is 'alembic_version'.
- version_table_schema – Optional schema to place the version table within.
- version_table_pk – boolean, whether the Alembic version table should use a primary key constraint for the “value” column; this only takes effect when the table is first created. Defaults to True; setting to False should not be necessary and is here for backwards compatibility reasons.
- on_version_apply – a callable or collection of callables to be run for each migration step. The callables will be run in the order they are given, once for each migration step, after the respective operation has been applied but before its transaction is finalized. Each callable accepts no positional arguments and the following keyword arguments:
  - ctx: the MigrationContext running the migration,
  - step: a MigrationInfo representing the step currently being applied,
  - heads: a collection of version strings representing the current heads,
  - run_args: the **kwargs passed to run_migrations().
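For instance, a hypothetical hook that logs each step might look like this (the keyword names follow the list above; accepting extra **kw keeps the hook tolerant of keyword arguments added in later versions):

```python
import logging

log = logging.getLogger("alembic.env")


def log_version_apply(*, ctx=None, step=None, heads=None, run_args=None, **kw):
    # step is a MigrationInfo; here it is simply logged as-is.
    log.info("migration step %r applied; heads now %r", step, heads)


# context.configure(
#     ...,
#     on_version_apply=[log_version_apply],  # a single callable also works
# )
```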
Parameters specific to the autogenerate feature, when alembic revision is run with the --autogenerate option:
- target_metadata – a sqlalchemy.schema.MetaData object, or a sequence of MetaData objects, that will be consulted during autogeneration. The tables present in each MetaData will be compared against what is locally available on the target Connection to produce candidate upgrade/downgrade operations.
- compare_type – Indicates type comparison behavior during an autogenerate operation. Defaults to False, which disables type comparison. Set to True to turn on default type comparison, which has varied accuracy depending on backend. See Comparing Types for an example as well as information on other type comparison options.
  See also: Comparing Types
- compare_server_default – Indicates server default comparison behavior during an autogenerate operation. Defaults to False, which disables server default comparison. Set to True to turn on server default comparison, which has varied accuracy depending on backend.
  To customize server default comparison behavior, a callable may be specified which can filter server default comparisons during an autogenerate operation. The format of this callable is:

```python
def my_compare_server_default(
    context,
    inspected_column,
    metadata_column,
    inspected_default,
    metadata_default,
    rendered_metadata_default,
):
    # return True if the defaults are different,
    # False if not, or None to allow the default implementation
    # to compare these defaults
    return None

context.configure(
    # ...
    compare_server_default=my_compare_server_default,
)
```

  inspected_column is a dictionary structure as returned by sqlalchemy.engine.reflection.Inspector.get_columns(), whereas metadata_column is a sqlalchemy.schema.Column from the local model environment.
  A return value of None indicates that default server default comparison should proceed. Note that some backends such as PostgreSQL actually execute the two defaults on the database side to compare for equivalence.
- include_name – A callable function which is given the chance to return True or False for any database reflected object based on its name, including database schema names when the EnvironmentContext.configure.include_schemas flag is set to True.
  The function accepts the following positional arguments:
  - name: the name of the object, such as schema name or table name. Will be None when indicating the default schema name of the database connection.
  - type: a string describing the type of object; currently "schema", "table", "column", "index", "unique_constraint", or "foreign_key_constraint"
  - parent_names: a dictionary of “parent” object names, that are relative to the name being given. Keys in this dictionary may include: "schema_name", "table_name".
  E.g.:

```python
def include_name(name, type_, parent_names):
    if type_ == "schema":
        return name in ["schema_one", "schema_two"]
    else:
        return True

context.configure(
    # ...
    include_schemas=True,
    include_name=include_name,
)
```

  New in version 1.5.
  See also: Controlling What to be Autogenerated
- include_object – A callable function which is given the chance to return True or False for any object, indicating if the given object should be considered in the autogenerate sweep.
  The function accepts the following positional arguments:
  - object: a SchemaItem object such as a Table, Column, Index, UniqueConstraint, or ForeignKeyConstraint object
  - name: the name of the object. This is typically available via object.name.
  - type: a string describing the type of object; currently "table", "column", "index", "unique_constraint", or "foreign_key_constraint"
  - reflected: True if the given object was produced based on table reflection, False if it’s from a local MetaData object.
  - compare_to: the object being compared against, if available, else None.
  E.g.:

```python
def include_object(object, name, type_, reflected, compare_to):
    if (
        type_ == "column"
        and not reflected
        and object.info.get("skip_autogenerate", False)
    ):
        return False
    else:
        return True

context.configure(
    # ...
    include_object=include_object,
)
```

  For the use case of omitting specific schemas from a target database when EnvironmentContext.configure.include_schemas is set to True, the schema attribute can be checked for each Table object passed to the hook; however, it is much more efficient to filter on schemas before reflection of objects takes place, using the EnvironmentContext.configure.include_name hook.
  See also: Controlling What to be Autogenerated
- render_as_batch – if True, commands which alter elements within a table will be placed under a with batch_alter_table(): directive, so that batch migrations will take place.
  See also: Running “Batch” Migrations for SQLite and Other Databases
- include_schemas – If True, autogenerate will scan across all schemas located by the SQLAlchemy get_schema_names() method, and include all differences in tables found across all those schemas. When using this option, you may want to also use the EnvironmentContext.configure.include_name parameter to specify a callable which can filter the tables/schemas that get included.
  See also: Controlling What to be Autogenerated
- render_item – Callable that can be used to override how any schema item, i.e. column, constraint, type, etc., is rendered for autogenerate. The callable receives a string describing the type of object, the object, and the autogen context. If it returns False, the default rendering method will be used. If it returns None, the item will not be rendered in the context of a Table construct; that is, it can be used to skip columns or constraints within op.create_table():

```python
def my_render_column(type_, col, autogen_context):
    if type_ == "column" and isinstance(col, MySpecialCol):
        return repr(col)
    else:
        return False

context.configure(
    # ...
    render_item=my_render_column,
)
```

  Available values for the type string include: "column", "primary_key", "foreign_key", "unique", "check", "type", "server_default".
- upgrade_token – When autogenerate completes, the text of the candidate upgrade operations will be present in this template variable when script.py.mako is rendered. Defaults to upgrades.
- downgrade_token – When autogenerate completes, the text of the candidate downgrade operations will be present in this template variable when script.py.mako is rendered. Defaults to downgrades.
- alembic_module_prefix – When autogenerate refers to Alembic alembic.operations constructs, this prefix will be used (i.e. op.create_table). Defaults to “op.”. Can be None to indicate no prefix.
- sqlalchemy_module_prefix – When autogenerate refers to SQLAlchemy Column or type classes, this prefix will be used (i.e. sa.Column("somename", sa.Integer)). Defaults to “sa.”. Can be None to indicate no prefix. Note that when dialect-specific types are rendered, autogenerate will render them using the dialect module name, i.e. mssql.BIT(), postgresql.UUID().
- user_module_prefix – When autogenerate refers to a SQLAlchemy type (e.g. TypeEngine) where the module name is not under the sqlalchemy namespace, this prefix will be used within autogenerate. If left at its default of None, the __module__ attribute of the type is used to render the import module. It’s a good practice to set this and to have all custom types be available from a fixed module space, in order to future-proof migration files against reorganizations in modules.
  See also: Controlling the Module Prefix
- process_revision_directives – a callable function that will be passed a structure representing the end result of an autogenerate or plain “revision” operation, which can be manipulated to affect how the alembic revision command ultimately outputs new revision scripts. The structure of the callable is:

```python
def process_revision_directives(context, revision, directives):
    pass
```

  The directives parameter is a Python list containing a single MigrationScript directive, which represents the revision file to be generated. This list, as well as its contents, may be freely modified to produce any set of commands. The section Customizing Revision Generation shows an example of doing this. The context parameter is the MigrationContext in use, and revision is a tuple of revision identifiers representing the current revision of the database.
  The callable is invoked at all times when the --autogenerate option is passed to alembic revision. If --autogenerate is not passed, the callable is invoked only if the revision_environment variable is set to True in the Alembic configuration, in which case the given directives collection will contain empty UpgradeOps and DowngradeOps collections for .upgrade_ops and .downgrade_ops. The --autogenerate option itself can be inferred by inspecting context.config.cmd_opts.autogenerate.
  The callable function may optionally be an instance of a Rewriter object. This is a helper object that assists in the production of autogenerate-stream rewriter functions.
See also: Customizing Revision Generation
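As a concrete sketch in the spirit of the cookbook-style “no empty migrations” hook: the callable can inspect the single MigrationScript and clear the directives list when autogenerate found nothing, suppressing generation of an empty revision file (attribute names follow the description above):

```python
def skip_empty_revisions(context, revision, directives):
    # Only act when the revision was produced by --autogenerate,
    # inferred from context.config.cmd_opts as described above.
    cmd_opts = getattr(context.config, "cmd_opts", None)
    if cmd_opts is not None and getattr(cmd_opts, "autogenerate", False):
        script = directives[0]
        if script.upgrade_ops.is_empty():
            # Emptying the list suppresses generation of the file.
            directives[:] = []


# context.configure(
#     ...,
#     process_revision_directives=skip_empty_revisions,
# )
```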
Parameters specific to individual backends:
- mssql_batch_separator – The “batch separator” which will be placed between each statement when generating offline SQL Server migrations. Defaults to GO. Note this is in addition to the customary semicolon (;) at the end of each statement; SQL Server considers the “batch separator” to denote the end of an individual statement execution, and cannot group certain dependent operations in one step.
- oracle_batch_separator – The “batch separator” which will be placed between each statement when generating offline Oracle migrations. Defaults to /. Oracle doesn’t add a semicolon between statements like most other backends.