Operation Reference

This file provides documentation on Alembic migration directives.

The directives here are used within user-defined migration files, within the upgrade() and downgrade() functions, as well as any functions further invoked by those.

All directives exist as methods on a class called Operations. When migration scripts are run, this object is made available to the script via the alembic.op data member, which is a proxy to an actual instance of Operations. Currently, alembic.op is a real Python module, populated with individual proxies for each method on Operations, so symbols can be imported safely from the alembic.op namespace.

The Operations system is also fully extensible. See Operation Plugins for details on this.

A key design philosophy of the operation directives is that, to the greatest degree possible, they internally generate the appropriate SQLAlchemy metadata, typically involving Table and Constraint objects. This is so that migration instructions can be given in terms of just the string names and/or flags involved. The exceptions to this rule include the add_column() and create_table() directives, which require full Column objects, though the table metadata is still generated internally.

The functions here all require that a MigrationContext has been configured within the env.py script first, which typically occurs via EnvironmentContext.configure(). Under normal circumstances they are called from an actual migration script, which itself would be invoked by the EnvironmentContext.run_migrations() method.

class alembic.operations.Operations(migration_context: MigrationContext, impl: Optional[BatchOperationsImpl] = None)

Define high level migration operations.

Each operation corresponds to some schema migration operation, executed against a particular MigrationContext which in turn represents connectivity to a database, or a file output stream.

While Operations is normally configured as part of the EnvironmentContext.run_migrations() method called from an env.py script, a standalone Operations instance can be made for use cases external to regular Alembic migrations by passing in a MigrationContext:

from alembic.migration import MigrationContext
from alembic.operations import Operations

conn = myengine.connect()
ctx = MigrationContext.configure(conn)
op = Operations(ctx)

op.alter_column("t", "c", nullable=True)

Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.

Construct a new Operations object.

Parameters

migration_context – a MigrationContext instance.

add_column(table_name: str, column: Column, schema: Optional[str] = None) Optional[Table]

Issue an “add column” instruction using the current migration context.

e.g.:

from alembic import op
from sqlalchemy import Column, String

op.add_column('organization',
    Column('name', String())
)

The provided Column object can also specify a ForeignKey, referencing a remote table name. Alembic will automatically generate a stub “referenced” table and emit a second ALTER statement in order to add the constraint separately:

from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey

op.add_column('organization',
    Column('account_id', INTEGER, ForeignKey('accounts.id'))
)

Note that this statement uses the Column construct as is from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, TIMESTAMP, func

# specify "DEFAULT NOW" along with the column add
op.add_column('account',
    Column('timestamp', TIMESTAMP, server_default=func.now())
)
Parameters
  • table_name – String name of the parent table.

  • column – a sqlalchemy.schema.Column object representing the new column.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

alter_column(table_name: str, column_name: str, nullable: Optional[bool] = None, comment: Optional[Union[str, bool]] = False, server_default: Any = False, new_column_name: Optional[str] = None, type_: Optional[Union[TypeEngine, Type[TypeEngine]]] = None, existing_type: Optional[Union[TypeEngine, Type[TypeEngine]]] = None, existing_server_default: Optional[Union[str, bool, Identity, Computed]] = False, existing_nullable: Optional[bool] = None, existing_comment: Optional[str] = None, schema: Optional[str] = None, **kw) Optional[Table]

Issue an “alter column” instruction using the current migration context.

Generally, only that aspect of the column which is being changed, i.e. name, type, nullability, or default, needs to be specified. Multiple changes can also be specified at once and the backend should “do the right thing”, emitting each change either separately or together as the backend allows.

MySQL has special requirements here, since MySQL cannot ALTER a column without a full specification. When producing MySQL-compatible migration files, it is recommended that the existing_type, existing_server_default, and existing_nullable parameters be present for those attributes that are not being altered.

Type changes which are against the SQLAlchemy “schema” types Boolean and Enum may also add or drop constraints which accompany those types on backends that don’t support them natively. The existing_type argument is used in this case to identify and remove a previous constraint that was bound to the type object.

Parameters
  • table_name – string name of the target table.

  • column_name – string name of the target column, as it exists before the operation begins.

  • nullable – Optional; specify True or False to alter the column’s nullability.

  • server_default – Optional; specify a string SQL expression, text(), or DefaultClause to indicate an alteration to the column’s default value. Set to None to have the default removed.

  • comment

    optional string text of a new comment to add to the column.

    New in version 1.0.6.

  • new_column_name – Optional; specify a string name here to indicate the new name within a column rename operation.

  • type_ – Optional; a TypeEngine type object to specify a change to the column’s type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is also generated.

  • autoincrement – set the AUTO_INCREMENT flag of the column; currently understood by the MySQL dialect.

  • existing_type – Optional; a TypeEngine type object to specify the previous type. This is required for all MySQL column alter operations that don’t otherwise specify a new type, as well as for when nullability is being changed on a SQL Server column. It is also used if the type is a so-called SQLAlchemy “schema” type which may define a constraint (i.e. Boolean, Enum), so that the constraint can be dropped.

  • existing_server_default – Optional; The existing default value of the column. Required on MySQL if an existing default is not being changed; else MySQL removes the default.

  • existing_nullable – Optional; the existing nullability of the column. Required on MySQL if the existing nullability is not being changed; else MySQL sets this to NULL.

  • existing_autoincrement – Optional; the existing autoincrement of the column. Used for MySQL’s system of altering a column that specifies AUTO_INCREMENT.

  • existing_comment

    string text of the existing comment on the column to be maintained. Required on MySQL if the existing comment on the column is not being changed.

    New in version 1.0.6.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

  • postgresql_using – String argument which will indicate a SQL expression to render within the Postgresql-specific USING clause within ALTER COLUMN. This string is taken directly as raw SQL which must explicitly include any necessary quoting or escaping of tokens within the expression.

batch_alter_table(table_name, schema=None, recreate='auto', partial_reordering=None, copy_from=None, table_args=(), table_kwargs={}, reflect_args=(), reflect_kwargs={}, naming_convention=None)

Invoke a series of per-table migrations in batch.

Batch mode allows a series of operations specific to a table to be syntactically grouped together, and allows for alternate modes of table migration, in particular the “recreate” style of migration required by SQLite.

“recreate” style is as follows:

  1. A new table is created with the new specification, based on the migration directives within the batch, using a temporary name.

  2. The data is copied from the existing table to the new table.

  3. The existing table is dropped.

  4. The new table is renamed to the existing table name.

The directive by default will only use “recreate” style on the SQLite backend, and only if directives are present which require this form, e.g. anything other than add_column(). The batch operation on other backends will proceed using standard ALTER TABLE operations.

The method is used as a context manager, which returns an instance of BatchOperations; this object is the same as Operations except that table names and schema names are omitted. E.g.:

with op.batch_alter_table("some_table") as batch_op:
    batch_op.add_column(Column('foo', Integer))
    batch_op.drop_column('bar')

The operations within the context manager are invoked at once when the context is ended. When run against SQLite, if the migrations include operations not supported by SQLite’s ALTER TABLE, the entire table will be copied to a new one with the new specification, moving all data across as well.

The copy operation by default uses reflection to retrieve the current structure of the table, and therefore batch_alter_table() in this mode requires that the migration is run in “online” mode. The copy_from parameter may be passed which refers to an existing Table object, which will bypass this reflection step.

Note

The table copy operation will currently not copy CHECK constraints, and may not copy UNIQUE constraints that are unnamed, as is possible on SQLite. See the section Dealing with Constraints for workarounds.

Parameters
  • table_name – name of table

  • schema – optional schema name.

  • recreate – under what circumstances the table should be recreated. At its default of "auto", the SQLite dialect will recreate the table if any operations other than add_column(), create_index(), or drop_index() are present. Other options include "always" and "never".

  • copy_from

    optional Table object that will act as the structure of the table being copied. If omitted, table reflection is used to retrieve the structure of the table.

  • reflect_args – a sequence of additional positional arguments that will be applied to the table structure being reflected / copied; this may be used to pass column and constraint overrides to the table that will be reflected, in lieu of passing the whole Table using copy_from.

  • reflect_kwargs – a dictionary of additional keyword arguments that will be applied to the table structure being copied; this may be used to pass additional table and reflection options to the table that will be reflected, in lieu of passing the whole Table using copy_from.

  • table_args – a sequence of additional positional arguments that will be applied to the new Table when created, in addition to those copied from the source table. This may be used to provide additional constraints such as CHECK constraints that may not be reflected.

  • table_kwargs – a dictionary of additional keyword arguments that will be applied to the new Table when created, in addition to those copied from the source table. This may be used to provide for additional table options that may not be reflected.

  • naming_convention

    a naming convention dictionary of the form described at Integration of Naming Conventions into Operations, Autogenerate which will be applied to the MetaData during the reflection process. This is typically required if one wants to drop SQLite constraints, as these constraints will not have names when reflected on this backend. Requires SQLAlchemy 0.9.4 or greater.

  • partial_reordering

    a list of tuples, each suggesting a desired ordering of two or more columns in the newly created table. Requires that batch_alter_table.recreate is set to "always". Examples, given a table with columns “a”, “b”, “c”, and “d”:

    Specify the order of all columns:

    with op.batch_alter_table(
            "some_table", recreate="always",
            partial_reordering=[("c", "d", "a", "b")]
    ) as batch_op:
        pass
    

    Ensure “d” appears before “c”, and “b”, appears before “a”:

    with op.batch_alter_table(
            "some_table", recreate="always",
            partial_reordering=[("d", "c"), ("b", "a")]
    ) as batch_op:
        pass
    

    The ordering of columns not included in the partial_reordering set is undefined, so it is best to specify the complete ordering of all columns.

    New in version 1.4.0.

Note

batch mode requires SQLAlchemy 0.8 or above.

bulk_insert(table: Union[Table, TableClause], rows: List[dict], multiinsert: bool = True) None

Issue a “bulk insert” operation using the current migration context.

This provides a means of representing an INSERT of multiple rows which works equally well in the context of executing on a live connection as well as that of generating a SQL script. In the case of a SQL script, the values are rendered inline into the statement.

e.g.:

from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date

# Create an ad-hoc table to use for the insert statement.
accounts_table = table('account',
    column('id', Integer),
    column('name', String),
    column('create_date', Date)
)

op.bulk_insert(accounts_table,
    [
        {'id':1, 'name':'John Smith',
                'create_date':date(2010, 10, 5)},
        {'id':2, 'name':'Ed Williams',
                'create_date':date(2007, 5, 27)},
        {'id':3, 'name':'Wendy Jones',
                'create_date':date(2008, 8, 15)},
    ]
)

When using --sql mode, some datatypes may not render inline automatically, such as dates and other special types. When this issue is present, Operations.inline_literal() may be used:

op.bulk_insert(accounts_table,
    [
        {'id':1, 'name':'John Smith',
                'create_date':op.inline_literal("2010-10-05")},
        {'id':2, 'name':'Ed Williams',
                'create_date':op.inline_literal("2007-05-27")},
        {'id':3, 'name':'Wendy Jones',
                'create_date':op.inline_literal("2008-08-15")},
    ],
    multiinsert=False
)

When using Operations.inline_literal() in conjunction with Operations.bulk_insert(), the multiinsert flag should be set to False in order for the statement to work in “online” (i.e. non --sql) mode. This has the effect of individual INSERT statements being emitted to the database, each with a distinct VALUES clause, so that the “inline” values can still be rendered, rather than attempting to pass the values as bound parameters.

Parameters
  • table – a table object which represents the target of the INSERT.

  • rows – a list of dictionaries indicating rows.

  • multiinsert – when at its default of True and --sql mode is not enabled, the INSERT statement will be executed using “executemany()” style, where all elements in the list of dictionaries are passed as bound parameters in a single list. Setting this to False results in individual INSERT statements being emitted per parameter set, and is needed in those cases where non-literal values are present in the parameter sets.

create_check_constraint(constraint_name: Optional[str], table_name: str, condition: Union[str, BinaryExpression], schema: Optional[str] = None, **kw) Optional[Table]

Issue a “create check constraint” instruction using the current migration context.

e.g.:

from alembic import op
from sqlalchemy.sql import column, func

op.create_check_constraint(
    "ck_user_name_len",
    "user",
    func.len(column('name')) > 5
)

CHECK constraints are usually against a SQL expression, so ad-hoc table metadata is usually needed. The function will convert the given arguments into a sqlalchemy.schema.CheckConstraint bound to an anonymous table in order to emit the CREATE statement.

Parameters
  • constraint_name – Name of the check constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, the name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.

  • table_name – String name of the source table.

  • condition – SQL expression that’s the condition of the constraint. Can be a string or SQLAlchemy expression language structure.

  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

create_exclude_constraint(constraint_name: str, table_name: str, *elements: Any, **kw: Any) Optional[Table]

Issue an alter to create an EXCLUDE constraint using the current migration context.

Note

This method is Postgresql specific, and additionally requires at least SQLAlchemy 1.0.

e.g.:

from alembic import op

op.create_exclude_constraint(
    "user_excl",
    "user",

    ("period", '&&'),
    ("group", '='),
    where=("group != 'some group'")

)

Note that the expressions work the same way as that of the ExcludeConstraint object itself; if plain strings are passed, quoting rules must be applied manually.

Parameters
  • constraint_name – Name of the constraint.

  • table_name – String name of the source table.

  • elements – exclude conditions.

  • where – SQL expression or SQL string with optional WHERE clause.

  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

  • schema – Optional schema name to operate within.

create_foreign_key(constraint_name: Optional[str], source_table: str, referent_table: str, local_cols: List[str], remote_cols: List[str], onupdate: Optional[str] = None, ondelete: Optional[str] = None, deferrable: Optional[bool] = None, initially: Optional[str] = None, match: Optional[str] = None, source_schema: Optional[str] = None, referent_schema: Optional[str] = None, **dialect_kw) Optional[Table]

Issue a “create foreign key” instruction using the current migration context.

e.g.:

from alembic import op
op.create_foreign_key(
            "fk_user_address", "address",
            "user", ["user_id"], ["id"])

This internally generates a Table object containing the necessary columns, then generates a new ForeignKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters
  • constraint_name – Name of the foreign key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.

  • source_table – String name of the source table.

  • referent_table – String name of the destination table.

  • local_cols – a list of string column names in the source table.

  • remote_cols – a list of string column names in the remote table.

  • onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, SET NULL and RESTRICT.

  • ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, SET NULL and RESTRICT.

  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

  • source_schema – Optional schema name of the source table.

  • referent_schema – Optional schema name of the destination table.

create_index(index_name: str, table_name: str, columns: Sequence[Union[str, TextClause, Function]], schema: Optional[str] = None, unique: bool = False, **kw) Optional[Table]

Issue a “create index” instruction using the current migration context.

e.g.:

from alembic import op
op.create_index('ik_test', 't1', ['foo', 'bar'])

Functional indexes can be produced by using the sqlalchemy.sql.expression.text() construct:

from alembic import op
from sqlalchemy import text
op.create_index('ik_test', 't1', [text('lower(foo)')])
Parameters
  • index_name – name of the index.

  • table_name – name of the owning table.

  • columns – a list consisting of string column names and/or text() constructs.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

  • unique – If True, create a unique index.

  • quote – Force quoting of this column’s name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.

  • **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

create_primary_key(constraint_name: Optional[str], table_name: str, columns: List[str], schema: Optional[str] = None) Optional[Table]

Issue a “create primary key” instruction using the current migration context.

e.g.:

from alembic import op
op.create_primary_key(
            "pk_my_table", "my_table",
            ["id", "version"]
        )

This internally generates a Table object containing the necessary columns, then generates a new PrimaryKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters
  • constraint_name – Name of the primary key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.

  • table_name – String name of the target table.

  • columns – a list of string column names to be applied to the primary key constraint.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

create_table(table_name: str, *columns, **kw) Optional[Table]

Issue a “create table” instruction using the current migration context.

This directive receives an argument list similar to that of the traditional sqlalchemy.schema.Table construct, but without the metadata:

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

Note that create_table() accepts Column constructs directly from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, INTEGER, TIMESTAMP, func

# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table('account',
    Column('id', INTEGER, primary_key=True),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

The function also returns a newly created Table object, corresponding to the table specification given, which is suitable for immediate SQL operations, in particular Operations.bulk_insert():

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

account_table = op.create_table(
    'account',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200)),
    Column('timestamp', TIMESTAMP, server_default=func.now())
)

op.bulk_insert(
    account_table,
    [
        {"name": "A1", "description": "account 1"},
        {"name": "A2", "description": "account 2"},
    ]
)
Parameters
  • table_name – Name of the table

  • *columns – collection of Column objects within the table, as well as optional Constraint objects and Index objects.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

  • **kw – Other keyword arguments are passed to the underlying sqlalchemy.schema.Table object created for the command.

Returns

the Table object corresponding to the parameters given.

create_table_comment(table_name: str, comment: Optional[str], existing_comment: None = None, schema: Optional[str] = None) Optional[Table]

Emit a COMMENT ON operation to set the comment for a table.

New in version 1.0.6.

Parameters
  • table_name – string name of the target table.

  • comment – string value of the comment being registered against the specified table.

  • existing_comment – String value of a comment already registered on the specified table, used within autogenerate so that the operation is reversible, but not required for direct use.

create_unique_constraint(constraint_name: Optional[str], table_name: str, columns: Sequence[str], schema: Optional[str] = None, **kw) Any

Issue a “create unique constraint” instruction using the current migration context.

e.g.:

from alembic import op
op.create_unique_constraint("uq_user_name", "user", ["name"])

This internally generates a Table object containing the necessary columns, then generates a new UniqueConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters
  • constraint_name – Name of the unique constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, the name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.

  • table_name – String name of the source table.

  • columns – a list of string column names in the source table.

  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

drop_column(table_name: str, column_name: str, schema: Optional[str] = None, **kw) Optional[Table]

Issue a “drop column” instruction using the current migration context.

e.g.:

op.drop_column('organization', 'account_id')
Parameters
  • table_name – name of table

  • column_name – name of column

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

  • mssql_drop_check – Optional boolean. When True, on Microsoft SQL Server only, first drop the CHECK constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.check_constraints, then exec’s a separate DROP CONSTRAINT for that constraint.

  • mssql_drop_default – Optional boolean. When True, on Microsoft SQL Server only, first drop the DEFAULT constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.default_constraints, then exec’s a separate DROP CONSTRAINT for that default.

  • mssql_drop_foreign_key – Optional boolean. When True, on Microsoft SQL Server only, first drop a single FOREIGN KEY constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.foreign_keys/sys.foreign_key_columns, then exec’s a separate DROP CONSTRAINT for that constraint. Only works if the column has exactly one FK constraint which refers to it, at the moment.

drop_constraint(constraint_name: str, table_name: str, type_: Optional[str] = None, schema: Optional[str] = None) Optional[Table]

Drop a constraint of the given name, typically via DROP CONSTRAINT.

Parameters
  • constraint_name – name of the constraint.

  • table_name – table name.

  • type_ – optional, required on MySQL; can be ‘foreignkey’, ‘primary’, ‘unique’, or ‘check’.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

drop_index(index_name: str, table_name: Optional[str] = None, schema: Optional[str] = None, **kw) Optional[Table]

Issue a “drop index” instruction using the current migration context.

e.g.:

op.drop_index("accounts")
Parameters
  • index_name – name of the index.

  • table_name – name of the owning table. Some backends such as Microsoft SQL Server require this.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

  • **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

drop_table(table_name: str, schema: Optional[str] = None, **kw: Any) None

Issue a “drop table” instruction using the current migration context.

e.g.:

op.drop_table("accounts")
Parameters
  • table_name – Name of the table

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

  • **kw – Other keyword arguments are passed to the underlying sqlalchemy.schema.Table object created for the command.

drop_table_comment(table_name: str, existing_comment: Optional[str] = None, schema: Optional[str] = None) Optional[Table]

Issue a “drop table comment” operation to remove an existing comment set on a table.

New in version 1.0.6.

Parameters
  • table_name – string name of the target table.

  • existing_comment – An optional string value of a comment already registered on the specified table.

execute(sqltext: Union[str, TextClause, Update], execution_options: None = None) Optional[Table]

Execute the given SQL using the current migration context.

The given SQL can be a plain string, e.g.:

op.execute("INSERT INTO table (foo) VALUES ('some value')")

Or it can be any kind of Core SQL Expression construct, such as below where we use an update construct:

from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op

account = table('account',
    column('name', String)
)
op.execute(
    account.update().\
        where(account.c.name == op.inline_literal('account 1')).\
        values({'name': op.inline_literal('account 2')})
)

Above, we made use of the SQLAlchemy sqlalchemy.sql.expression.table() and sqlalchemy.sql.expression.column() constructs to make a brief, ad-hoc table construct just for our UPDATE statement. A full Table construct of course works perfectly fine as well, though note it’s a recommended practice to at least ensure the definition of a table is self-contained within the migration script, rather than imported from a module that may break compatibility with older migrations.

In a SQL script context, the statement is emitted directly to the output stream. There is no return result, however, as this function is oriented towards generating a change script that can run in “offline” mode. Additionally, parameterized statements are discouraged here, as they will not work in offline mode. Above, we use inline_literal() where parameters are to be used.

For full interaction with a connected database where parameters can also be used normally, use the “bind” available from the context:

from alembic import op
connection = op.get_bind()

connection.execute(
    account.update().where(account.c.name=='account 1').
    values({"name": "account 2"})
)

Additionally, when passing the statement as a plain string, it is first coerced into a sqlalchemy.sql.expression.text() construct before being passed along. In the less likely case that the literal SQL string contains a colon, it must be escaped with a backslash, as:

op.execute("INSERT INTO table (foo) VALUES ('\:colon_value')")
Parameters
  • sqltext –

    Any legal SQLAlchemy expression, such as a plain string, a sqlalchemy.sql.expression.text() construct, or an update() construct, per the signature above.

    Note

    when passing a plain string, the statement is coerced into a sqlalchemy.sql.expression.text() construct. This construct considers symbols with colons, e.g. :foo to be bound parameters. To avoid this, ensure that colon symbols are escaped, e.g. \:foo.

  • execution_options – Optional dictionary of execution options, will be passed to sqlalchemy.engine.Connection.execution_options().

f(name: str) sqlalchemy.sql.elements.conv

Indicate a string name that has already had a naming convention applied to it.

This feature combines with the SQLAlchemy naming_convention feature to disambiguate constraint names that have already had naming conventions applied to them, versus those that have not. This is necessary in the case that the "%(constraint_name)s" token is used within a naming convention, so that it can be identified that this particular name should remain fixed.

If Operations.f() is used on the constraint name, the naming convention will not take effect:

op.add_column('t', Column('x', Boolean(name=op.f('ck_bool_t_x'))))

Above, the CHECK constraint generated will have the name ck_bool_t_x regardless of whether or not a naming convention is in use.

Alternatively, if a naming convention is in use and Operations.f() is not used, names will be converted along conventions. If the target_metadata contains the naming convention {"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}, then the output of the following:

op.add_column('t', Column('x', Boolean(name='x')))

will be:

CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))

The function is rendered in the output of autogenerate when a particular constraint name is already converted.
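The token substitution a naming convention performs can be illustrated with plain string formatting (a simplification; in practice SQLAlchemy expands the convention when the constraint is associated with its table):

```python
# The "%(...)s" tokens in a naming convention are filled in from the
# table name and the constraint's given name.
convention = "ck_bool_%(table_name)s_%(constraint_name)s"
name = convention % {"table_name": "t", "constraint_name": "x"}
print(name)  # ck_bool_t_x
```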

get_bind() Connection

Return the current ‘bind’.

Under normal circumstances, this is the Connection currently being used to emit SQL to the database.

In a SQL script context, this value is None.
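A hedged migration-script sketch (table name hypothetical) that guards direct connection use so the script still renders in offline mode, where no bind is available:

```python
from alembic import op
import sqlalchemy as sa


def upgrade():
    bind = op.get_bind()
    if bind is not None:
        # Online mode: the database can be queried directly,
        # e.g. to drive a data migration.
        count = bind.execute(
            sa.text("SELECT COUNT(*) FROM accounts")
        ).scalar()
```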

get_context()

Return the MigrationContext object that’s currently in use.

classmethod implementation_for(op_cls: Any) Callable

Register an implementation for a given MigrateOperation.

This is part of the operation extensibility API.

See also

Operation Plugins - example of use

inline_literal(value: Union[str, int], type_: None = None) _literal_bindparam

Produce an ‘inline literal’ expression, suitable for using in an INSERT, UPDATE, or DELETE statement.

When using Alembic in “offline” mode, CRUD operations aren’t compatible with SQLAlchemy’s default behavior surrounding literal values, which is that they are converted into bound values and passed separately into the execute() method of the DBAPI cursor. An offline SQL script needs to have these rendered inline. While it should always be noted that inline literal values are an enormous security hole in an application that handles untrusted input, a schema migration is not run in this context, so literals are safe to render inline, with the caveat that advanced types like dates may not be supported directly by SQLAlchemy.

See execute() for an example usage of inline_literal().

The environment can also be configured to attempt to render “literal” values inline automatically, for those simple types that are supported by the dialect; see EnvironmentContext.configure.literal_binds for this more recently added feature.

Parameters
  • value – The value to render. Strings, integers, and simple numerics should be supported. Other types like boolean, dates, etc. may or may not be supported yet by various backends.

  • type_ – optional - a sqlalchemy.types.TypeEngine subclass stating the type of this value. In SQLAlchemy expressions, this is usually derived automatically from the Python type of the value itself, as well as based on the context in which the value is used.

invoke(operation: MigrateOperation) Any

Given a MigrateOperation, invoke it in terms of this Operations instance.

classmethod register_operation(name: str, sourcename: Optional[str] = None) Callable

Register a new operation for this class.

This method is normally used to add new operations to the Operations class, and possibly the BatchOperations class as well. All Alembic migration operations are implemented via this system, however the system is also available as a public API to facilitate adding custom operations.

rename_table(old_table_name: str, new_table_name: str, schema: Optional[str] = None) Optional[Table]

Emit an ALTER TABLE to rename a table.

Parameters
  • old_table_name – old name.

  • new_table_name – new name.

  • schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.

class alembic.operations.BatchOperations(migration_context: MigrationContext, impl: Optional[BatchOperationsImpl] = None)

Modifies the interface of the Operations class for batch mode.

This basically omits the table_name and schema parameters from associated methods, as these are a given when running under batch mode.

Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.

Construct a new Operations

Parameters

migration_context – a MigrationContext instance.

add_column(column: Column, insert_before: Optional[str] = None, insert_after: Optional[str] = None) Optional[Table]

Issue an “add column” instruction using the current batch migration context.

alter_column(column_name: str, nullable: Optional[bool] = None, comment: bool = False, server_default: Union[Function, bool] = False, new_column_name: Optional[str] = None, type_: Optional[Union[TypeEngine, Type[TypeEngine]]] = None, existing_type: Optional[Union[TypeEngine, Type[TypeEngine]]] = None, existing_server_default: bool = False, existing_nullable: None = None, existing_comment: None = None, insert_before: None = None, insert_after: None = None, **kw) Optional[Table]

Issue an “alter column” instruction using the current batch migration context.

Parameters are the same as that of Operations.alter_column(), as well as the following option(s):

Parameters
  • insert_before

    String name of an existing column which this column should be placed before, when creating the new table.

    New in version 1.4.0.

  • insert_after

    String name of an existing column which this column should be placed after, when creating the new table. If both BatchOperations.alter_column.insert_before and BatchOperations.alter_column.insert_after are omitted, the column is inserted after the last existing column in the table.

    New in version 1.4.0.
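For instance (table and column names hypothetical), positioning the altered column when the table is recreated under batch mode:

```python
from alembic import op
import sqlalchemy as sa


def upgrade():
    with op.batch_alter_table("accounts") as batch_op:
        # Rename the column and place it directly after "id" in the
        # recreated table.
        batch_op.alter_column(
            "name",
            new_column_name="full_name",
            existing_type=sa.String(50),
            insert_after="id",
        )
```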

create_check_constraint(constraint_name: str, condition: TextClause, **kw) Optional[Table]

Issue a “create check constraint” instruction using the current batch migration context.

The batch form of this call omits the source and schema arguments from the call.

create_exclude_constraint(constraint_name, *elements, **kw)

Issue a “create exclude constraint” instruction using the current batch migration context.

Note

This method is PostgreSQL specific, and additionally requires at least SQLAlchemy 1.0.

create_foreign_key(constraint_name: str, referent_table: str, local_cols: List[str], remote_cols: List[str], referent_schema: Optional[str] = None, onupdate: None = None, ondelete: None = None, deferrable: None = None, initially: None = None, match: None = None, **dialect_kw) None

Issue a “create foreign key” instruction using the current batch migration context.

The batch form of this call omits the source and source_schema arguments from the call.

e.g.:

with batch_alter_table("address") as batch_op:
    batch_op.create_foreign_key(
                "fk_user_address",
                "user", ["user_id"], ["id"])

create_index(index_name: str, columns: List[str], **kw) Optional[Table]

Issue a “create index” instruction using the current batch migration context.

create_primary_key(constraint_name: str, columns: List[str]) None

Issue a “create primary key” instruction using the current batch migration context.

The batch form of this call omits the table_name and schema arguments from the call.

create_table_comment(comment, existing_comment=None)

Emit a COMMENT ON operation to set the comment for a table using the current batch migration context.

New in version 1.6.0.

Parameters
  • comment – string value of the comment being registered against the specified table.

  • existing_comment – String value of a comment already registered on the specified table, used within autogenerate so that the operation is reversible, but not required for direct use.

create_unique_constraint(constraint_name: str, columns: Sequence[str], **kw) Any

Issue a “create unique constraint” instruction using the current batch migration context.

The batch form of this call omits the source and schema arguments from the call.

drop_column(column_name: str, **kw) Optional[Table]

Issue a “drop column” instruction using the current batch migration context.

drop_constraint(constraint_name: str, type_: Optional[str] = None) None

Issue a “drop constraint” instruction using the current batch migration context.

The batch form of this call omits the table_name and schema arguments from the call.

drop_index(index_name: str, **kw) Optional[Table]

Issue a “drop index” instruction using the current batch migration context.

drop_table_comment(existing_comment=None)

Issue a “drop table comment” operation to remove an existing comment set on a table using the current batch operations context.

New in version 1.6.0.

Parameters

existing_comment – An optional string value of a comment already registered on the specified table.