pg_dump man page on UnixWare

PG_DUMP(1)		PostgreSQL Client Applications		    PG_DUMP(1)

NAME
       pg_dump - extract a PostgreSQL database into a script file or other ar‐
       chive file

SYNOPSIS
       pg_dump [ option... ]  [ dbname ]

DESCRIPTION
       pg_dump is a utility for backing up a  PostgreSQL  database.  It	 makes
       consistent  backups  even  if  the database is being used concurrently.
       pg_dump does not block other users accessing the database  (readers  or
       writers).

       Dumps can be output in script or archive file formats. Script dumps are
       plain-text files containing the SQL commands  required  to  reconstruct
       the  database  to  the  state  it  was  in at the time it was saved. To
       restore from such a script, feed it to psql(1).	Script	files  can  be
       used  to	 reconstruct  the  database  even  on other machines and other
       architectures; with some modifications even on other SQL database prod‐
       ucts.

       The alternative archive file formats must be used with pg_restore(1) to
       rebuild the database. They allow pg_restore to be selective about  what
       is restored, or even to reorder the items prior to being restored.  The
       archive file formats are designed to be portable across architectures.

       When used with one of  the  archive  file  formats  and	combined  with
       pg_restore,  pg_dump  provides  a flexible archival and transfer mecha‐
       nism. pg_dump can be used to back up an entire database, then pg_restore
       can  be	used  to  examine the archive and/or select which parts of the
       database are to be restored. The most flexible output  file  format  is
       the  ``custom'' format (-Fc). It allows for selection and reordering of
       all archived items, and is compressed by default. The tar format	 (-Ft)
       is  not compressed and it is not possible to reorder data when loading,
       but it is otherwise quite flexible; moreover,  it  can  be  manipulated
       with standard Unix tools such as tar.
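
       For example (reusing the mydb database and db.dump file shown in the
       EXAMPLES section below), a custom-format archive can be written and its
       table of contents examined with pg_restore's -l option before deciding
       what to restore:

       $ pg_dump -Fc mydb > db.dump
       $ pg_restore -l db.dump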

       While  running  pg_dump, one should examine the output for any warnings
       (printed on standard error), especially in  light  of  the  limitations
       listed below.

OPTIONS
       The  following  command-line  options control the content and format of
       the output.

       dbname Specifies the name of the database to be dumped. If this is  not
	      specified,  the environment variable PGDATABASE is used. If that
	      is not set, the user name specified for the connection is used.

       -a

       --data-only
	      Dump only the data, not the schema (data definitions).

	      This option is only meaningful for the  plain-text  format.  For
	      the  archive  formats,  you may specify the option when you call
	      pg_restore.
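
              For example, to produce a plain-text dump of the database mydb
              containing only its data:

              $ pg_dump -a mydb > db.sql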

       -b

       --blobs
	      Include large objects in the dump. This is the default  behavior
	      except when --schema, --table, or --schema-only is specified, so
	      the -b switch is only useful to add large objects	 to  selective
	      dumps.

       -c

       --clean
	      Output  commands	to clean (drop) database objects prior to (the
	      commands for) creating them.

	      This option is only meaningful for the  plain-text  format.  For
	      the  archive  formats,  you may specify the option when you call
	      pg_restore.

       -C

       --create
	      Begin the output with a command to create	 the  database	itself
	      and  reconnect  to  the created database. (With a script of this
	      form, it doesn't matter which database  you  connect  to	before
	      running the script.)

	      This  option  is	only meaningful for the plain-text format. For
	      the archive formats, you may specify the option  when  you  call
	      pg_restore.
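
              For example, a script produced with -C can be restored while
              connected to any existing database, such as template1:

              $ pg_dump -C mydb > db.sql
              $ psql -d template1 -f db.sql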

       -d

       --inserts
	      Dump  data as INSERT commands (rather than COPY). This will make
	      restoration very slow; it is mainly useful for making dumps that
	      can  be  loaded into non-PostgreSQL databases.  Also, since this
	      option generates a separate command for each row,	 an  error  in
	      reloading	 a row causes only that row to be lost rather than the
	      entire table contents.  Note that the  restore  may  fail	 alto‐
	      gether  if  you  have rearranged column order.  The -D option is
	      safe against column order changes, though even slower.

       -D

       --column-inserts

       --attribute-inserts
	      Dump data as INSERT commands with explicit column names  (INSERT
	      INTO table (column, ...) VALUES ...). This will make restoration
	      very slow; it is mainly useful for  making  dumps	 that  can  be
	      loaded  into  non-PostgreSQL databases.  Also, since this option
	      generates a separate command for each row, an error in reloading
	      a row causes only that row to be lost rather than the entire ta‐
	      ble contents.
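
              For example, to dump mydb as one INSERT command per row with
              the column names spelled out:

              $ pg_dump -D mydb > db.sql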

       -E encoding

       --encoding=encoding
	      Create the dump in the  specified	 character  set	 encoding.  By
	      default,	the dump is created in the database encoding. (Another
	      way to get the same result is to set the PGCLIENTENCODING	 envi‐
	      ronment variable to the desired dump encoding.)
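
              For example, to create a dump of mydb in the UTF8 encoding, or
              equivalently by setting the environment variable:

              $ pg_dump -E UTF8 mydb > db.sql
              $ PGCLIENTENCODING=UTF8 pg_dump mydb > db.sql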

       -f file

       --file=file
	      Send output to the specified file. If this is omitted, the stan‐
	      dard output is used.

       -F format

       --format=format
	      Selects the format of the output.	 format can be one of the fol‐
	      lowing:

	      p

	      plain  Output a plain-text SQL script file (the default).

	      c

	      custom Output   a	  custom   archive  suitable  for  input  into
		     pg_restore. This is the most flexible format in  that  it
		     allows reordering of loading data as well as object defi‐
		     nitions. This format is also compressed by default.

	      t

	      tar    Output a tar archive suitable for input into  pg_restore.
		     Using this archive format allows reordering and/or exclu‐
		     sion of database objects at  the  time  the  database  is
		     restored.	It  is	also  possible	to limit which data is
		     reloaded at restore time.
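
              For example, to write a tar-format archive (the file name
              db.tar is only illustrative) and list its members with the
              standard tar tool:

              # db.tar is an illustrative file name
              $ pg_dump -Ft mydb > db.tar
              $ tar -tf db.tar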

       -i

       --ignore-version
	      Ignore version mismatch between pg_dump and the database server.

	      pg_dump can dump from servers running previous releases of Post‐
	      greSQL,  but  very  old versions are not supported anymore (cur‐
	      rently, those prior to 7.0).  Dumping from a server  newer  than
	      pg_dump  is  likely  not to work at all.	Use this option if you
	      need to override the version check (and if pg_dump  then	fails,
	      don't say you weren't warned).

       -n schema

       --schema=schema
	      Dump  only schemas matching schema; this selects both the schema
	      itself, and all its contained objects. When this option  is  not
	      specified, all non-system schemas in the target database will be
	      dumped. Multiple schemas can be selected by writing multiple  -n
	      switches. Also, the schema parameter is interpreted as a pattern
	      according to the same rules used by psql's \d commands (see Pat‐
	      terns  [psql(1)]),  so  multiple schemas can also be selected by
	      writing wildcard characters in the  pattern.  When  using	 wild‐
	      cards,  be careful to quote the pattern if needed to prevent the
	      shell from expanding the wildcards.

	      Note: When -n is specified, pg_dump makes no attempt to dump any
	      other  database  objects	that the selected schema(s) may depend
	      upon. Therefore, there is no guarantee that  the	results	 of  a
	      specific-schema  dump can be successfully restored by themselves
	      into a clean database.

	      Note: Non-schema objects such as blobs are not dumped when -n is
	      specified.  You  can add blobs back to the dump with the --blobs
	      switch.
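
              For example, to dump only the detroit schema (also used in the
              EXAMPLES section) and include large objects as well:

              $ pg_dump -n detroit -b mydb > db.sql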

       -N schema

       --exclude-schema=schema
	      Do not dump any schemas matching the schema pattern. The pattern
	      is interpreted according to the same rules as for -n.  -N can be
	      given more than once to exclude schemas matching any of  several
	      patterns.

	      When  both -n and -N are given, the behavior is to dump just the
	      schemas that match at least one -n switch but no -N switches. If
	      -N  appears  without  -n,	 then schemas matching -N are excluded
	      from what is otherwise a normal dump.

       -o

       --oids Dump object identifiers (OIDs) as part of the data for every ta‐
	      ble. Use this option if your application references the OID col‐
	      umns in some way (e.g., in a foreign  key	 constraint).	Other‐
	      wise, this option should not be used.

       -O

       --no-owner
	      Do  not output commands to set ownership of objects to match the
	      original database.  By default, pg_dump issues  ALTER  OWNER  or
	      SET SESSION AUTHORIZATION statements to set ownership of created
	      database objects.	 These statements will fail when the script is
	      run  unless  it is started by a superuser (or the same user that
	      owns all of the objects in the script).  To make a  script  that
	      can  be  restored by any user, but will give that user ownership
	      of all the objects, specify -O.

	      This option is only meaningful for the  plain-text  format.  For
	      the  archive  formats,  you may specify the option when you call
	      pg_restore.
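
              For example, to produce a script that can be restored by any
              user, who then owns all of the restored objects:

              $ pg_dump -O mydb > db.sql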

       -R

       --no-reconnect
	      This option is obsolete but still accepted for backwards compat‐
	      ibility.

       -s

       --schema-only
	      Dump only the object definitions (schema), not data.

       -S username

       --superuser=username
	      Specify  the superuser user name to use when disabling triggers.
	      This is only relevant if --disable-triggers is used.   (Usually,
	      it's  better  to leave this out, and instead start the resulting
	      script as superuser.)

       -t table

       --table=table
	      Dump only tables (or views or sequences) matching table.	Multi‐
	      ple  tables  can	be  selected  by writing multiple -t switches.
	      Also, the table parameter is interpreted as a pattern  according
	      to  the  same  rules  used  by  psql's \d commands (see Patterns
	      [psql(1)]), so multiple tables can also be selected  by  writing
	      wildcard	characters  in	the  pattern. When using wildcards, be
	      careful to quote the pattern if needed to prevent the shell from
	      expanding the wildcards.

	      The  -n  and -N switches have no effect when -t is used, because
	      tables selected  by  -t  will  be	 dumped	 regardless  of	 those
	      switches, and non-table objects will not be dumped.

	      Note: When -t is specified, pg_dump makes no attempt to dump any
	      other database objects that the  selected	 table(s)  may	depend
	      upon.  Therefore,	 there	is  no guarantee that the results of a
	      specific-table dump can be successfully restored	by  themselves
	      into a clean database.

	      Note:  The behavior of the -t switch is not entirely upward com‐
	      patible with pre-8.2 PostgreSQL versions. Formerly,  writing  -t
	      tab  would  dump	all  tables  named  tab, but now it just dumps
	      whichever one is visible in your default search path. To get the
	      old  behavior  you  can  write  -t '*.tab'. Also, you must write
	      something like -t sch.tab to select  a  table  in	 a  particular
	      schema, rather than the old locution of -n sch -t tab.
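
              For example, to dump every table named mytab regardless of
              which schema it is in:

              $ pg_dump -t '*.mytab' mydb > db.sql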

       -T table

       --exclude-table=table
	      Do  not  dump any tables matching the table pattern. The pattern
	      is interpreted according to the same rules as for -t.  -T can be
	      given  more  than once to exclude tables matching any of several
	      patterns.

	      When both -t and -T are given, the behavior is to dump just  the
	      tables  that match at least one -t switch but no -T switches. If
	      -T appears without -t, then tables matching -T are excluded from
	      what is otherwise a normal dump.

       -v

       --verbose
	      Specifies	 verbose  mode.	 This  will  cause  pg_dump  to output
	      detailed object comments and start/stop times to the dump	 file,
	      and progress messages to standard error.

       -x

       --no-privileges

       --no-acl
	      Prevent dumping of access privileges (grant/revoke commands).

       --disable-dollar-quoting
	      This option disables the use of dollar quoting for function bod‐
	      ies, and forces them to be quoted using SQL standard string syn‐
	      tax.

       --disable-triggers
	      This option is only relevant when creating a data-only dump.  It
	      instructs pg_dump to include  commands  to  temporarily  disable
	      triggers	on  the	 target tables while the data is reloaded. Use
	      this if you have referential integrity checks or other  triggers
	      on the tables that you do not want to invoke during data reload.

	      Presently,  the  commands emitted for --disable-triggers must be
	      done as superuser. So, you should also specify a superuser  name
	      with  -S, or preferably be careful to start the resulting script
	      as a superuser.

	      This option is only meaningful for the  plain-text  format.  For
	      the  archive  formats,  you may specify the option when you call
	      pg_restore.
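
              For example, to take a data-only dump with trigger-disabling
              commands, assuming the superuser is named postgres (adjust the
              name for your installation):

              # "postgres" here is an assumed superuser name
              $ pg_dump -a --disable-triggers -S postgres mydb > db.sql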

       --use-set-session-authorization
	      Output SQL-standard SET SESSION AUTHORIZATION  commands  instead
	      of  ALTER	 OWNER	commands  to  determine object ownership. This
	      makes the dump more standards compatible, but depending  on  the
	      history  of  the	objects in the dump, may not restore properly.
	      Also, a dump using  SET  SESSION	AUTHORIZATION  will  certainly
	      require superuser privileges to restore correctly, whereas ALTER
	      OWNER requires lesser privileges.

       -Z 0..9

       --compress=0..9
	      Specify the compression level to use  in	archive	 formats  that
	      support  compression.  (Currently only the custom archive format
	      supports compression.)
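
              For example, to produce a custom-format archive compressed at
              the maximum level:

              $ pg_dump -Fc -Z 9 mydb > db.dump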

       The following command-line  options  control  the  database  connection
       parameters.

       -h host

       --host=host
	      Specifies	 the  host  name of the machine on which the server is
	      running. If the value begins with a slash, it  is	 used  as  the
	      directory	 for the Unix domain socket. The default is taken from
	      the PGHOST environment variable, if  set,	 else  a  Unix	domain
	      socket connection is attempted.

       -p port

       --port=port
	      Specifies	 the  TCP port or local Unix domain socket file exten‐
	      sion on which the server is listening for connections.  Defaults
	      to  the  PGPORT  environment  variable, if set, or a compiled-in
	      default.

       -U username
              Connect as the given user.

       -W     Force a password prompt. This should happen automatically if the
	      server requires password authentication.
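
       For example, to dump mydb from a server on another machine while
       forcing a password prompt (the host name myhost and user name myuser
       are placeholders; 5432 is the default port):

       # myhost and myuser are placeholder values
       $ pg_dump -h myhost -p 5432 -U myuser -W mydb > db.sql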

ENVIRONMENT
       PGDATABASE

       PGHOST

       PGPORT

       PGUSER Default connection parameters.

       This utility, like most other PostgreSQL utilities, also uses the
       environment variables supported by libpq (see the libpq documentation).

DIAGNOSTICS
       pg_dump internally executes SELECT statements.  If  you	have  problems
       running	pg_dump, make sure you are able to select information from the
       database using, for example, psql(1). Also, any default connection set‐
       tings  and  environment	variables  used by the libpq front-end library
       will apply.

NOTES
       If your database cluster has any local additions to the template1 data‐
       base,  be  careful  to restore the output of pg_dump into a truly empty
       database; otherwise you are likely to get errors due to duplicate defi‐
       nitions	of  the	 added	objects. To make an empty database without any
       local additions, copy from template0 not template1, for example:

       CREATE DATABASE foo WITH TEMPLATE template0;

       pg_dump has a few limitations:

       · When a data-only dump is chosen and the option --disable-triggers  is
	 used,	pg_dump	 emits	commands  to  disable  triggers on user tables
	 before inserting the data and commands to re-enable  them  after  the
	 data  has been inserted. If the restore is stopped in the middle, the
	 system catalogs may be left in the wrong state.

       Members of tar archives are limited to a size less than 8 GB.  (This is
       an  inherent  limitation of the tar file format.) Therefore this format
       cannot be used if the textual representation of any one	table  exceeds
       that  size. The total size of a tar archive and any of the other output
       formats is not limited, except possibly by the operating system.

       The dump file produced by pg_dump does not contain the statistics  used
       by  the	optimizer  to  make query planning decisions. Therefore, it is
       wise to run ANALYZE after restoring from a dump	file  to  ensure  good
       performance.
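
       For example, statistics can be regenerated after restoring into the
       newdb database with:

       $ psql -d newdb -c 'ANALYZE'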

       Because	pg_dump	 is  used  to transfer data to newer versions of Post‐
       greSQL, the output of pg_dump can be loaded into newer PostgreSQL data‐
       bases. It also can read older PostgreSQL databases. However, it usually
       cannot read newer PostgreSQL databases or produce dump output that  can
       be  loaded  into older database versions. To do this, manual editing of
       the dump file might be required.

EXAMPLES
       To dump a database called mydb into a SQL-script file:

       $ pg_dump mydb > db.sql

       To reload such a script into a (freshly created) database named newdb:

       $ psql -d newdb -f db.sql

       To dump a database into a custom-format archive file:

       $ pg_dump -Fc mydb > db.dump

       To reload an archive file  into	a  (freshly  created)  database	 named
       newdb:

       $ pg_restore -d newdb db.dump

       To dump a single table named mytab:

       $ pg_dump -t mytab mydb > db.sql

       To  dump	 all  tables whose names start with emp in the detroit schema,
       except for the table named employee_log:

       $ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

       To dump all schemas whose names start with east or west and end in gsm,
       excluding any schemas whose names contain the word test:

       $ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

       The   same,  using  regular  expression	notation  to  consolidate  the
       switches:

       $ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

       To dump all database objects except for tables whose names  begin  with
       ts_:

       $ pg_dump -T 'ts_*' mydb > db.sql

       To specify an upper-case or mixed-case name in -t and related switches,
       you need to double-quote the name; else it will be folded to lower case
       (see  Patterns  [psql(1)]). But double quotes are special to the shell,
       so in turn they must be quoted.	Thus, to dump a single	table  with  a
       mixed-case name, you need something like

       $ pg_dump -t '"MixedCaseName"' mydb > mytab.sql

HISTORY
       The pg_dump utility first appeared in Postgres95 release 0.02. The non-
       plain-text output formats were introduced in PostgreSQL release 7.1.

SEE ALSO
       pg_dumpall(1), pg_restore(1), psql(1)

Application			  2008-01-03			    PG_DUMP(1)