dbname
    Specifies the name of the database to be dumped. If this is not specified, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.

-a, --data-only
    Dump only the data, not the schema (data definitions). Table data, large objects, and sequence values are dumped. This option is similar to, but for historical reasons not identical to, specifying --section=data.

-b, --blobs
    Include large objects in the dump. This is the default behavior except when --schema, --table, or --schema-only is specified. The -b switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that large objects are considered data and will therefore be included when --data-only is used, but not when --schema-only is.

-B, --no-blobs
    Exclude large objects in the dump. When both -b and -B are given, the behavior is to output large objects when data is being dumped; see the -b documentation.

-c, --clean
    Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.) This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.

-C, --create
    Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it. With --create, the output also includes the database's comment, if any, and any configuration variable settings that are specific to this database. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.

-e pattern, --extension=pattern
    Dump only extensions matching pattern. When this option is not specified, all non-system extensions in the target database will be dumped. Multiple extensions can be selected by writing multiple -e switches. Any configuration relation registered by pg_extension_config_dump is included in the dump if its extension is specified by --extension.
    Note: When -e is specified, pg_dump makes no attempt to dump any other database objects that the selected extension(s) might depend upon. Therefore, there is no guarantee that the results of a specific-extension dump can be successfully restored by themselves into a clean database.

-E encoding, --encoding=encoding
    Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the PGCLIENTENCODING environment variable to the desired dump encoding.)

-f filename, --file=filename
    Send output to the specified file. This parameter can be omitted for file-based output formats, in which case the standard output is used. It must be given for the directory output format however, where it specifies the target directory instead of a file. In this case the directory is created by pg_dump and must not exist before.

-F format, --format=format
    Selects the format of the output. format can be one of the following:

    p, plain
        Output a plain-text SQL script file (the default).

    c, custom
        Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.

    d, directory
        Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory-format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip tool. This format is compressed by default and also supports parallel dumps.

    t, tar
        Output a tar-format archive suitable for input into pg_restore. The tar format does not support compression, and the relative order of table data items cannot be changed during restore.

-j njobs, --jobs=njobs
    Run the dump in parallel by dumping njobs tables simultaneously. This option may reduce the time needed to perform the dump, but it also increases the load on the database server. You can only use this option with the directory output format, because this is the only output format where multiple processes can write their data at the same time. pg_dump will open njobs + 1 connections to the database, so make sure your max_connections setting is high enough to accommodate all connections.
    Requesting exclusive locks on database objects while running a parallel dump could cause the dump to fail. The reason is that the pg_dump leader process requests shared locks (ACCESS SHARE) on the objects that the worker processes are going to dump later, in order to make sure that nobody deletes them while the dump is running. If another client then requests an exclusive lock on a table, that lock will not be granted but will be queued waiting for the shared lock of the leader process to be released. Consequently, any other access to the table will not be granted either and will queue after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions this would be a classic deadlock situation. To detect this conflict, the pg_dump worker process requests another shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime, and there is no way to continue with the dump, so pg_dump has no choice but to abort it.

    For a consistent backup, the database server needs to support synchronized snapshots, a feature that was introduced in PostgreSQL 9.2 for primary servers and 10 for standbys. With this feature, database clients can ensure they see the same data set even though they use different connections. If you want to run a parallel dump of a pre-9.2 server, you need to make sure that the database content doesn't change between the time the leader connects to the database and the time the last worker job has connected. The easiest way to do this is to halt any data-modifying processes (DDL and DML) accessing the database before starting the backup. You also need to specify the --no-synchronized-snapshots parameter when running pg_dump -j against a pre-9.2 server.

-n pattern, --schema=pattern
    Dump only schemas matching pattern; this selects both the schema itself and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple -n switches.
    Note: When -n is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.
    Note: Non-schema objects such as blobs are not dumped when -n is specified. You can add blobs back to the dump with the --blobs switch.

-N pattern, --exclude-schema=pattern
    Do not dump any schemas matching pattern. When both -n and -N are given, the behavior is to dump just the schemas that match at least one -n switch but no -N switches. If -N appears without -n, then schemas matching -N are excluded from what is otherwise a normal dump.

-O, --no-owner
    Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.

-R, --no-reconnect
    This option is obsolete but still accepted for backwards compatibility.

-s, --schema-only
    Dump only the object definitions (schema), not data. This option is the inverse of --data-only. (Do not confuse this with the --schema option, which uses the word "schema" in a different meaning.) To exclude table data for only a subset of tables in the database, see --exclude-table-data.

-S username, --superuser=username
    Specify the superuser user name to use when disabling triggers. This is relevant only if --disable-triggers is used.

-t pattern, --table=pattern
    Dump only tables with names matching pattern. Multiple tables can be selected by writing multiple -t switches. As well as tables, this option can be used to dump the definition of matching views, materialized views, foreign tables, and sequences. It will not dump the contents of views or materialized views, and the contents of foreign tables will only be dumped if the corresponding foreign server is specified with --include-foreign-data.
    The -n and -N switches have no effect when -t is used, because tables selected by -t will be dumped regardless of those switches, and non-table objects will not be dumped.
    Note: When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.

-T pattern, --exclude-table=pattern
    Do not dump any tables matching pattern. When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.

-v, --verbose
    Specifies verbose mode. This will cause pg_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error. Repeating the option causes additional debug-level messages to appear on standard error.

-V, --version
    Print the pg_dump version and exit.

-x, --no-privileges, --no-acl
    Prevent dumping of access privileges (grant/revoke commands).

-Z 0..9, --compress=0..9
    Specify the compression level to use. Zero means no compression. For the custom and directory archive formats, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.

--binary-upgrade
    This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.

--column-inserts, --attribute-inserts
    Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases.

--disable-dollar-quoting
    This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.

--disable-triggers
    This option is relevant only when creating a data-only dump. It instructs pg_dump to include commands to temporarily disable triggers on the target tables while the data is restored. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data restore. Presently, the commands emitted for --disable-triggers must be done as superuser. So you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.

--enable-row-security
    This option is relevant only when dumping the contents of a table which has row security. By default, pg_dump will set row_security to off, to ensure that all data is dumped from the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs pg_dump to set row_security to on instead, allowing the user to dump the parts of the contents of the table that they have access to. Note that if you use this option, you probably also want the dump to be in INSERT format, as COPY FROM during restore does not support row security.

--exclude-table-data=pattern
    Do not dump data for any tables matching pattern. The pattern is interpreted according to the same rules as for -t. This option is useful when you need the definition of a particular table even though you do not need the data in it. To exclude data for all tables in the database, see --schema-only.

--extra-float-digits=ndigits
    Use the specified value of extra_float_digits when dumping floating-point data, instead of the maximum available precision. Routine dumps made for backup purposes should not use this option.

--if-exists
    Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.

--include-foreign-data=foreignserver
    Dump the data for any foreign table with a foreign server matching the foreignserver pattern. Multiple foreign servers can be selected by writing multiple --include-foreign-data switches.
    Note: When --include-foreign-data is specified, pg_dump does not check that the foreign table is writable. Therefore, there is no guarantee that the results of a foreign table dump can be successfully restored.

--inserts
    Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. Any error during restoring will cause only rows that are part of the problematic INSERT to be lost, rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.

--load-via-partition-root
    When dumping data for a table partition, make the COPY or INSERT statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. It is best not to use parallelism when restoring from an archive made with this option, because pg_restore will not know exactly which partition(s) a given archive data item will load data into. This could result in inefficiency due to lock conflicts between parallel jobs, or perhaps even restore failures due to foreign key constraints being set up before all the relevant data is loaded.

--lock-wait-timeout=timeout
    Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout.

--no-comments
    Do not dump comments.

--no-publications
    Do not dump publications.

--no-security-labels
    Do not dump security labels.

--no-subscriptions
    Do not dump subscriptions.

--no-sync
    By default, pg_dump will wait for all files to be written safely to disk. This option causes pg_dump to return without waiting, which is faster, but means that a subsequent operating system crash can leave the dump corrupt. Generally, this option is useful for testing but should not be used when dumping data from a production installation.

--no-synchronized-snapshots
    This option allows running pg_dump -j against a pre-9.2 server; see the documentation of the -j parameter for more details.

--no-tablespaces
    Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore.

--no-toast-compression
    Do not output commands to set TOAST compression methods. With this option, all columns will be restored with the default compression setting.

--no-unlogged-table-data
    Do not dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data. Data in unlogged tables is always excluded when dumping from a standby server.

--on-conflict-do-nothing
    Add ON CONFLICT DO NOTHING to INSERT commands. This option is not valid unless --inserts, --column-inserts, or --rows-per-insert is also specified.

--quote-all-identifiers
    Force quoting of all identifiers. This option is recommended when dumping a database from a server whose PostgreSQL major version is different from pg_dump's, or when the output is intended to be loaded into a server of a different major version. By default, pg_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words. Using --quote-all-identifiers prevents such issues, at the price of a harder-to-read dump script.

--rows-per-insert=nrows
    Dump data as INSERT commands (rather than COPY), with at most nrows rows per INSERT command. The value specified must be a number greater than zero. Any error during restoring will cause only rows that are part of the problematic INSERT to be lost, rather than the entire table contents.

--section=sectionname
    Only dump the named section. The section name can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections. The data section contains actual table data, large-object contents, and sequence values. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items include all other data definition items.

--serializable-deferrable
    Use a serializable transaction for the dump, to ensure that the snapshot used is consistent with later database states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a serialization failure. This option is not beneficial for a dump which is intended only for disaster recovery. It could be useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it, the dump may reflect a state which is not consistent with any serial execution of the transactions eventually committed. For example, if batch processing techniques are used, a batch may show as closed in the dump without all of the items which are in the batch appearing. This option will make no difference if there are no read-write transactions active when pg_dump is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time. Once running, performance with or without the switch is the same.

--snapshot=snapshotname
    Use the specified synchronized snapshot when making a dump of the database (see Table 9.90 for more details). This option is useful when needing to synchronize the dump with a logical replication slot (see Chapter 49) or with a concurrent session. In the case of a parallel dump, the snapshot name defined by this option is used rather than taking a new snapshot.
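As a hedged sketch of the section options described above (the database name mydb and the file names are placeholders, and a running server is assumed), a dump can be split into the three sections so that data is loaded before indexes and constraints are built:

```shell
# Hypothetical example: dump each section of database "mydb" separately.
# Names are placeholders; requires a reachable PostgreSQL server.
pg_dump --section=pre-data  --file=pre.sql  mydb
pg_dump --section=data      --file=data.sql mydb
pg_dump --section=post-data --file=post.sql mydb
# Restore in the same order: object definitions first, then table data,
# then indexes, triggers, rules, and constraints.
```

Loading the post-data section last is what makes this split useful: bulk-loading into tables without indexes is typically much faster.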
--strict-names
    Require that each extension (-e/--extension), schema (-n/--schema), and table (-t/--table) pattern match at least one extension, schema, or table in the database to be dumped. Note that if none of the patterns find matches, pg_dump will generate an error even without --strict-names. This option has no effect on -N/--exclude-schema, -T/--exclude-table, or --exclude-table-data. An exclude pattern failing to match any objects is not considered an error.

--use-set-session-authorization
    Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, it might not restore properly. Also, a dump using SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.

-?, --help
    Show help about pg_dump command line arguments, and exit.

The following command-line options control the database connection parameters.
dbname
    Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string; if so, connection string parameters will override any conflicting command line options.

-h host, --host=host
    Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set.

-p port, --port=port
    Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.

-U username, --username=username
    User name to connect as.

-w, --no-password
    Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

-W, --password
    Force pg_dump to prompt for a password before connecting to a database. This option is never essential, since pg_dump will automatically prompt for a password if the server demands password authentication. However, pg_dump will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.

--role=rolename
    Specifies a role name to be used to create the dump. This option causes pg_dump to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
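The connection options above can be combined in a single invocation. A minimal sketch, assuming a hypothetical host, port, user, role, and database (all names are placeholders):

```shell
# Hypothetical example: connect to a remote server as user "backup",
# switch to role "dump_owner", and write a custom-format archive.
# --no-password assumes credentials are supplied via a .pgpass file.
pg_dump --host=db.example.com --port=5432 --username=backup \
        --role=dump_owner --no-password \
        --format=custom --file=app.dump app
```

Using --no-password here is the usual choice for unattended backup jobs, since no user is present to answer a prompt.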
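An end-to-end sketch tying several of the options together (database names and file names are placeholders, and a running server is assumed): dump a database to a custom-format archive, then restore it in parallel into a new database.

```shell
# Hypothetical example: back up "mydb" and restore it into "newdb".
# The custom format allows pg_restore to select and reorder items.
pg_dump --format=custom --file=mydb.dump mydb

# Create the target database, then restore with four parallel jobs.
createdb newdb
pg_restore --dbname=newdb --jobs=4 mydb.dump
```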