diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index fce0a5b..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,21 +0,0 @@
-MIT License
-
-Copyright (c) 2017 Ignacio Elizaga
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/README.md b/README.md
deleted file mode 100644
index c2a3acd..0000000
--- a/README.md
+++ /dev/null
@@ -1,180 +0,0 @@
-# Mock-data [![Go Version](https://img.shields.io/badge/go-v1.7.4-green.svg?style=flat-square)](https://golang.org/dl/)
-
-    Here are my tables
-    Load them [with data] for me
-    I don't care how
-
-Mock-data is the result of a Pivotal internal hackathon in July 2017. The original idea behind it is to allow users to test database queries with sets of fake data in any pre-defined table.
-
-With Mock-data, users can have their own tables defined with any particular datatypes. They only need to provide the target table(s) and the number of rows of randomly generated data to insert.
-
-An ideal environment for Mock-data to work without any errors would be
-+ Tables with no constraints
-+ No custom datatypes
-
-In a second iteration, work has been done to ensure proper functioning with database constraints such as primary keys, unique keys, and foreign keys. However, please **DO MAKE SURE TO TAKE A BACKUP** of your database before you mock data in it, as it has not been tested extensively.
-
-Check the "Known Issues" section below for more information about currently identified bugs.
-
-# Important information and disclaimer
-
-Mock-data is meant to generate fake data in a new test cluster, and it is **NOT TO BE USED IN PRODUCTION ENVIRONMENTS**. Please ensure you have a backup of your database before running Mock-data in an environment you can't afford to lose.
-
-# Supported database engines
-
-+ PostgreSQL
-+ Greenplum Database
-+ HAWQ/HDB
-+ MySQL (coming soon?)
-+ Oracle (coming soon?)
-
-# Supported datatypes
-
-+ All datatypes that are listed on the [postgres datatype](https://www.postgresql.org/docs/9.6/static/datatype.html) website are supported
-+ As Greenplum and HAWQ are both based on Postgres, the supported Postgres datatypes apply to them as well
-
-# Dependencies
-
-+ [Data Faker](https://github.com/icrowley/fake) by icrowley
-+ [Progress bar](https://github.com/vbauerster/mpb) by vbauerster, pinned to v3.0.4, since the latest versions are no longer compatible
-+ [Postgres Driver](https://github.com/lib/pq) by lib/pq
-+ [Go Logger](https://github.com/op/go-logging) by op/go-logging
-
-# How it works
-
-+ PARSES the CLI arguments
-+ CHECKS if the database connection can be established
-+ IF the all-tables flag is set, extracts all the tables in the database
-+ ELSE IF tables are specified, uses only the target tables
-+ CREATES a backup of all constraints (PK, UK, CK, FK) and unique indexes (needed due to the cascading nature of dropping constraints)
-+ STORES this constraint/unique index information in memory
-+ REMOVES all the constraints on the table
-+ STARTS loading random data based on the columns' datatypes
-+ READS all the constraint information from memory
-+ FIXES PK and UK violations first
-+ FIXES FK violations
-+ CHECK constraints are ignored (coming soon?)
-+ LOADS the constraints that it had backed up (Mock-data can fail at this stage if it's not able to fix the constraint violations)
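The constraint steps above amount to a backup/drop/load/fix/restore round trip. The sketch below shows the kind of SQL mockd effectively executes along the way; the table and constraint names are hypothetical, and the real DDL is read from the system catalogs and written to a timestamped backup file before anything is dropped.

```
-- 1. The constraint DDL is saved to a backup file so it can be replayed later:
--      ALTER TABLE public.users ADD CONSTRAINT users_pkey PRIMARY KEY (id);

-- 2. The constraint is dropped so random data can be loaded without violations
ALTER TABLE public.users DROP CONSTRAINT users_pkey CASCADE;

-- 3. Mock rows are inserted, then duplicate or dangling key values are rewritten

-- 4. The backup file is replayed to restore the original constraint
ALTER TABLE public.users ADD CONSTRAINT users_pkey PRIMARY KEY (id);
```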
-# Usage
-
-```
-USAGE: mockd <DATABASE ENGINE> <OPTIONS>
-DATABASE ENGINE:
-    postgres        Postgres database
-    greenplum       Greenplum database
-    hdb             Hawq Database
-    help            Show help
-OPTIONS:
-    Execute "mockd <database engine> -h" for all the database specific options
-```
-
-# How to use it
-
-### Users
-
-[Download](https://github.com/pivotal/mock-data/releases) the latest release and you're ready to go!
-
-**NOTE:** if you have the datatype UUID defined on the table, make sure you have the "uuidgen" executable installed on the OS.
-
-### Developers
-
-+ Clone the github repo
-
-```
-git clone https://github.com/pivotal/mock-data.git
-```
-
-or use "go get" to download the source after setting the GOPATH
-
-```
-go get github.com/pivotal/mock-data
-```
-
-+ Download all the dependencies
-
-```
-go get github.com/icrowley/fake
-go get github.com/vbauerster/mpb
-go get github.com/lib/pq
-go get github.com/op/go-logging
-```
-
-+ You can modify the code and run it with the command below before creating a build
-
-```
-go run *.go
-```
-
-+ To build binaries for a different OS, you can use, for example
-
-```
-env GOOS=linux GOARCH=amd64 go build
-```
-
-# Command Reference
-
-+ For PostgreSQL / Greenplum Database / HAWQ
-
-```
-XXXXX:bin XXXXX ./mockd-mac postgres -help
-2017-07-16 10:58:43.609:INFO > Parsing all the command line arguments
-Usage of postgres:
-  -d string
-        The database name where the table resides (default "postgres")
-  -h string
-        The hostname that can be used to connect to the database (default "localhost")
-  -i    Ignore checking and fixing constraint issues
-  -n int
-        The total number of mocked rows that is needed (default 1)
-  -p int
-        The port that is used by the database engine (default 5432)
-  -t string
-        The table name to be filled in with mock data
-  -u string
-        The username that can be used to connect to the database (default "postgres")
-  -w string
-        The password for the user that can be used to connect to the database
-  -x    Mock all the tables in the database
-```
-
-# Examples
-
-+ Mock one table with random data
-
-```
-bin/mockd-mac <database engine> -n <total rows> -u <username> -d <database> -t <table name>
-```
-
-![single table](https://github.com/pivotal/mock-data/blob/master/img/singletable.gif)
-
-+ Mock multiple tables with random data
-
-```
-bin/mockd-mac <database engine> -n <total rows> -u <username> -d <database> -t <table1>,<table2>,....
-```
-
-![multiple table](https://github.com/pivotal/mock-data/blob/master/img/multipletable.gif)
-
-+ Mock an entire database
-
-```
-bin/mockd-mac <database engine> -n <total rows> -u <username> -d <database> -x
-```
-
-![All Database](https://github.com/pivotal/mock-data/blob/master/img/alldb.gif)
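Putting the documented flags together, a typical invocation might look like the following; the host, credentials, database, and table names here are purely illustrative.

```
./mockd-mac postgres -h localhost -p 5432 -u postgres -w secret -d demo -t public.customers -n 1000
```

This loads 1,000 rows of random data into public.customers. Adding `-i` skips the constraint checking-and-fixing pass, and `-x` (in place of `-t`) mocks every table in the database.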
-# Known Issues
-
-1. If you have a unique index on a foreign key column, there is a chance that recreating the constraint will fail, since mockd doesn't pick unique values for foreign key columns; it picks random values from the reference table.
-2. Check constraints are still problematic; the only check that is known to work is "COLUMN > 0".
-3. On Greenplum Database/HAWQ, partition tables are not supported (due to the check constraint issues described above).
-4. Custom datatypes are not supported.
-
-# Collaborate
-
-You can submit issues or pull requests via [github](https://github.com/pivotal/mock-data) and we will try our best to fix them.
-
-# Authors
-
-[![Ignacio](https://img.shields.io/badge/github-Ignacio_Elizaga-green.svg?style=social)](https://github.com/ielizaga) [![Aitor](https://img.shields.io/badge/github-Aitor_Cedres-green.svg?style=social)](https://github.com/Zerpet) [![Juan](https://img.shields.io/badge/github-Juan_Ramos-green.svg?style=social)](https://github.com/jujoramos) [![Faisal](https://img.shields.io/badge/github-Faisal_Ali-green.svg?style=social)](https://github.com/faisaltheparttimecoder) [![Adam](https://img.shields.io/badge/github-Adam_Clevy-green.svg?style=social)](https://github.com/adamclevy)
diff --git a/argParser.go b/argParser.go
deleted file mode 100644
index 64837fe..0000000
--- a/argParser.go
+++ /dev/null
@@ -1,121 +0,0 @@
-package main
-
-import (
-	"flag"
-	"fmt"
-	"os"
-	"strings"
-
-	"github.com/pivotal/mock-data/core"
-)
-
-// Connector struct
-type connector struct {
-	Engine                              string
-	Db, Username, Password, Host, Table string
-	Port, RowCount                      int
-	AllTables, IgnoreConstraints, Debug bool
-}
-
-// The connector
-var Connector connector
-
-// Program usage
-func ShowHelp() {
-	fmt.Print(`
-USAGE: mockd <DATABASE ENGINE> <OPTIONS>
-DATABASE ENGINE:
-	postgres	Postgres database
-	greenplum	Greenplum database
-	hdb		Hawq Database
-OPTIONS:
-	Execute "mockd <database engine> -h" for all the database specific options
-OTHERS:
-	"mockd version" for the version of the mockd application
-	"mockd help" for reprinting this help menu
-
-`)
-	os.Exit(0)
-}
-
-// Parse the arguments passed via the OS command line
-func ArgPaser() {
-
-	// Postgres/Greenplum/Hawq(HDB) command parser
-	postgresFlag := flag.NewFlagSet("postgres", flag.ExitOnError)
-	postgresPortFlag := postgresFlag.Int("p", 5432, "The port that is used by the database engine")
-	postgresDBFlag := postgresFlag.String("d", "postgres", "The database name where the table resides")
-	postgresUsernameFlag := postgresFlag.String("u", *postgresDBFlag, "The username that can be used to connect to the database")
-	postgresPasswordFlag := postgresFlag.String("w", "", "The password for the user that can be used to connect to the database")
-	postgresHostFlag := postgresFlag.String("h", "localhost", "The hostname that can be used to connect to the database")
-	postgresTotalRowsFlag := postgresFlag.Int("n", 1, "The total number of mocked rows that is needed")
-	postgresTableFlag := postgresFlag.String("t", "", "The table name to be filled in with mock data")
-	postgresAllDBFlag := postgresFlag.Bool("x", false, "Mock all the tables in the database")
-	postgresIgnoreConstrFlag := postgresFlag.Bool("i", false, "Ignore checking and fixing constraint issues")
-	postgresDebugFlag := postgresFlag.Bool("debug", false, "Print debug information")
-	flag.Parse()
-
-	// If no COMMAND keyword is provided, show the help menu
-	if len(os.Args) == 1 {
-		log.Errorf("Missing Database engine parameters ...")
-		ShowHelp()
-	}
-
-	// Greenplum and HDB are built on top of Postgres, so they use the same mocking logic
-	var engineArgs = os.Args[1]
-	// Postgres
-	var postgresEngines = []string{"postgres", "greenplum", "hawq"}
-
-	// If a command keyword is provided, check what it is and parse the appropriate options
-	switch {
-	// MockD Version
-	case engineArgs == "version":
-		fmt.Printf("MockD Version: %s\n", version)
-		os.Exit(0)
-	// Help Menu
-	case engineArgs == "help":
-		ShowHelp()
-	// Postgres command parser
-	case core.StringContains(engineArgs, postgresEngines):
-		postgresFlag.Parse(os.Args[2:])
-	// If it is not in the list of supported engines, error out
-	default:
-		log.Errorf("%q is not a valid database engine ...", os.Args[1])
-		ShowHelp()
-	}
-
-	// All checks passed, let's parse the command line arguments
-	log.Info("Parsing all the command line arguments")
-
-	// Parse the command line arguments
-	// Postgres database engine
-	if postgresFlag.Parsed() {
-
-		// Store all connector information
-		DBEngine = "postgres"
-		Connector.Engine = engineArgs
-		Connector.Db = *postgresDBFlag
-		Connector.Username = *postgresUsernameFlag
-		Connector.Password = *postgresPasswordFlag
-		Connector.Table = *postgresTableFlag
-		Connector.Port = *postgresPortFlag
-		Connector.Host = *postgresHostFlag
-		Connector.RowCount = *postgresTotalRowsFlag
-		Connector.AllTables = *postgresAllDBFlag
-		Connector.IgnoreConstraints = *postgresIgnoreConstrFlag
-		Connector.Debug = *postgresDebugFlag
-
-		// If both -t and -x are provided, error out
-		if Connector.AllTables && strings.TrimSpace(Connector.Table) != "" {
-			log.Error("Cannot have both the table (-t) and all tables (-x) flags together, choose one.\n")
-			fmt.Printf("Usage of engine: %s\n", Connector.Engine)
-			postgresFlag.PrintDefaults()
-			os.Exit(1)
-		} else if !Connector.AllTables && strings.TrimSpace(Connector.Table) == "" { // if -t is empty
-			log.Error("Provide the list of tables (-t) to mock, or -x to mock the whole database.\n")
-			fmt.Printf("Usage of engine: %s\n", Connector.Engine)
-			postgresFlag.PrintDefaults()
-			os.Exit(1)
-		}
-	}
-
-}
diff --git a/core/array_generator.go b/core/array_generator.go
deleted file mode 100644
index 189863f..0000000
--- a/core/array_generator.go
+++ /dev/null
@@ -1,169 +0,0 @@
-package core
-
-import (
-	"strconv"
-	"strings"
-
-	"github.com/icrowley/fake"
-)
-
-// Array function argument catcher
-var ArrayArgs = make(map[string]interface{})
-
-// Random array generator for array datatypes
-func ArrayGenerator(dt string) (string, error) {
-
-	// Get the values of the iterators
-	maxValues, _ := RandomInt(0, 6)
-	maxIteration, _ := RandomInt(0, 3)
-
-	// Collectors
-	var value interface{}
-	var resultArrayCollector []string
-
-	for i := 0; i < maxIteration; i++ { // Max number of arrays
-		var resultArray []string
-		for j := 0; j < maxValues; j++ { // max number of values in an array
- - // Call appropriate function to generate a array - switch dt { - - case "string": // strings - value = RandomString(ArrayArgs["strlen"].(int)) - - case "int": // int - intvalue, err := RandomInt(ArrayArgs["intmin"].(int), ArrayArgs["intmax"].(int)) - if err != nil { - return "", err - } - value = strconv.Itoa(intvalue) - - case "float": // float - floatvalue, err := RandomFloat(ArrayArgs["floatmin"].(int), ArrayArgs["floatmax"].(int), ArrayArgs["floatprecision"].(int)) - if err != nil { - return "", err - } - value = TruncateFloat(floatvalue, ArrayArgs["floatmax"].(int), ArrayArgs["floatprecision"].(int)) - value = strconv.FormatFloat(value.(float64), 'f', ArrayArgs["floatprecision"].(int), 64) - - case "bit": // bit - value = RandomBit(ArrayArgs["bitlen"].(int)) - - case "text": // text - value = fake.WordsN(1) - - case "date": // date - dvalue, err := RandomDate(ArrayArgs["fromyear"].(int), ArrayArgs["toyear"].(int)) - if err != nil { - return "", err - } - value = dvalue - - case "time": // timestamp - tvalue, err := RandomTime(ArrayArgs["fromyear"].(int), ArrayArgs["toyear"].(int)) - if err != nil { - return "", err - } - value = tvalue - - case "timetz": // timestamp - ttzvalue, err := RandomTimetz(ArrayArgs["fromyear"].(int), ArrayArgs["toyear"].(int)) - if err != nil { - return "", err - } - value = ttzvalue - - case "timestamp": // timestamp - tsvalue, err := RandomTimestamp(ArrayArgs["fromyear"].(int), ArrayArgs["toyear"].(int)) - if err != nil { - return "", err - } - value = tsvalue - - case "timestamptz": // timestamp - tstzvalue, err := RandomTimestamptz(ArrayArgs["fromyear"].(int), ArrayArgs["toyear"].(int)) - if err != nil { - return "", err - } - value = tstzvalue - - case "bool": // bool - if RandomBoolean() { - value = "true" - } else { - value = "false" - } - - case "IP": // IP Address - value = RandomIP() - - case "macaddr": // Mac Address - value = RandomMacAddress() - - case "uuid": // UUID - uvalue, err := RandomUUID() - if err != nil { - return "", err - } - value = uvalue - - case "txid_snapshot": // txid snapshot - value = RandomTXID() - - case "pg_lsn": // pg lsn - value = RandomLSN() - - case "tsquery": // TS Query - value = RandomTSQuery() - - case "tsvector": // TS Vector - value = RandomTSVector() - - } - resultArray = append(resultArray, value.(string)) - } - resultArrayCollector = append(resultArrayCollector, "{" + strings.Join(resultArray, ",") + "}") - } - return "{" + strings.Join(resultArrayCollector, ",") + "}", nil -} - -// Random geometric array generators -func GeometricArrayGenerator(maxInt int, geometryType string) string { - - // Getting the value of iterators - maxIterations,_ := RandomInt(0, 6) - var resultArray []string - - if geometryType == "box" { - value := RandomGeometricData(maxInt, geometryType, false) - resultArray = append(resultArray, value) - } else { - for i := 0; i < maxIterations; i++ { // Max number of arrays - value := RandomGeometricData(maxInt, geometryType, true) - resultArray = append(resultArray, value) - } - } - - return "{" + strings.Join(resultArray, ",") + "}" -} - -// Random XML & Json array generators. 
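// Each element comes from RandomJson/RandomXML with its inner quotes escaped,
// so the result is a Postgres array literal of the form {"...","..."} holding
// a small random number of documents.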
-func JsonXmlArrayGenerator(dt string) string { - // Getting the value of iterators - maxIterations,_ := RandomInt(0, 6) - var resultArray []string - var value string - for i := 0; i < maxIterations; i++ { // Max number of arrays - - switch dt { // Choose the appropriate random data generators - case "json": - value = "\"" + RandomJson(true) + "\"" - case "xml": - value = "\"" + RandomXML(true) + "\"" - } - - resultArray = append(resultArray, value) - } - return "{" + strings.Join(resultArray, ",") + "}" -} \ No newline at end of file diff --git a/core/datatype_mapper.go b/core/datatype_mapper.go deleted file mode 100644 index d03fe62..0000000 --- a/core/datatype_mapper.go +++ /dev/null @@ -1,364 +0,0 @@ -package core - -import ( - "fmt" - "strings" - "regexp" - "strconv" - "math/rand" -) - -// Data Generator -// It provided random data based on datatypes. -func BuildData(dt string) (interface{}, error) { - // ranges of dates - var fromyear = -10 - var toyear = 10 - - // Time datatypes - var Intervalkeywords = []string{"interval", "time without time zone"} - - // Networking datatypes - var ipkeywords = []string{"inet", "cidr"} - - // Integer datatypes - var intkeywords = []string{"smallint", "integer", "bigint"} - var intranges = map[string]int{"smallint": 2767, "integer": 7483647, "bigint": 372036854775807} - - // Decimal datatypes - var floatkeywords = []string{"double precision", "real", "money"} - - // Geometry datatypes - var geoDataTypekeywords = []string{"path", "polygon", "line", "lseg", "box", "circle", "point"} - - switch { - - // Generate Random Integer - case StringHasPrefix(dt, intkeywords): - if strings.HasSuffix(dt, "[]") { // Its requesting for a array of data - nonArraydt := strings.Replace(dt, "[]", "", 1) - ArrayArgs = map[string]interface{}{"intmin": -intranges[nonArraydt], "intmax": intranges[nonArraydt]} - value, err := ArrayGenerator("int") - if err != nil { - return "", fmt.Errorf("Build Integer Array: %v", err) - } - return value, nil - } else { // Not a array, but a single entry request - value, err := RandomInt(-intranges[dt], intranges[dt]) - if err != nil { - return "", fmt.Errorf("Build Integer: %v", err) - } - return value, nil - } - - // Generate Random characters - case strings.HasPrefix(dt, "character"): - l, err := CharLen(dt) - if err != nil { - return "", fmt.Errorf("Getting Character Length: %v", err) - } - if strings.HasSuffix(dt, "[]") { - ArrayArgs["strlen"] = l - value, _ := ArrayGenerator("string") - return value, nil - } else { - value := RandomString(l) - return value, nil - } - - // Generate Random date - case strings.HasPrefix(dt, "date"): - if strings.HasSuffix(dt, "[]") { - ArrayArgs = map[string]interface{}{"fromyear": fromyear, "toyear": toyear} - value, err := ArrayGenerator("date") - if err != nil { - return "", fmt.Errorf("Build Date Array: %v", err) - } - return value, nil - } else { - value, err := RandomDate(fromyear, toyear) - if err != nil { - return "", fmt.Errorf("Build Date: %v", err) - } - return value, nil - } - - // Generate Random timestamp without timezone - case strings.HasPrefix(dt, "timestamp without time zone"): - if strings.HasSuffix(dt, "[]") { - ArrayArgs = map[string]interface{}{"fromyear": fromyear, "toyear": toyear} - value, err := ArrayGenerator("timestamp") - if err != nil { - return "", fmt.Errorf("Build Timestamp without timezone Array: %v", err) - } - return value, nil - } else { - value, err := RandomTimestamp(fromyear, toyear) - if err != nil { - return "", fmt.Errorf("Build Timestamp without timezone: %v", 
err) - } - return value, nil - } - - /* - === Updated at 2019-06-26 === - Added below code to handle data type from timestamp(0) to timestampe(6) with/without time zone - The data type can be find in this doc: https://gpdb.docs.pivotal.io/5200/ref_guide/data_types.html - */ - case regexp.MustCompile(`timestamp\([0-6]\) without time zone`).MatchString(dt), regexp.MustCompile(`timestamp\([0-6]\) with time zone`).MatchString(dt): - value, err := RandomTimestamp(fromyear, toyear) // get a random timestamp with format like: 2018-04-10 01:19:22 - if err != nil { - return "", fmt.Errorf("Build Timestamp[p] without timezone: %v", err) - } - ts_reg := regexp.MustCompile(`\([0-6]\)`) - decimal, _ := strconv.Atoi( strings.Split(ts_reg.FindString(dt),"")[1] ) // capture the decimal in timestamp[x] - var timestamp_decimal string - for i := 0; i < decimal; i++ { - timestamp_decimal = timestamp_decimal + strconv.Itoa(rand.Intn(9)) // use rand() to generate random decimal in timestamp - } - if len(timestamp_decimal) > 0 { - value = value + "." + timestamp_decimal - } - return value,nil - /* End of Updated */ - - // Generate Random timestamp with timezone - case strings.HasPrefix(dt, "timestamp with time zone"): - if strings.HasSuffix(dt, "[]") { - ArrayArgs = map[string]interface{}{"fromyear": fromyear, "toyear": toyear} - value, err := ArrayGenerator("timestamptz") - if err != nil { - return "", fmt.Errorf("Build Timestamp with timezone Array: %v", err) - } - return value, nil - } else { - value, err := RandomTimestamptz(fromyear, toyear) - if err != nil { - return "", fmt.Errorf("Build Timestamp with timezone: %v", err) - } - return value, nil - } - - // Generate Random time without timezone - case StringHasPrefix(dt, Intervalkeywords): - if strings.HasSuffix(dt, "[]") { - ArrayArgs = map[string]interface{}{"fromyear": fromyear, "toyear": toyear} - value, err := ArrayGenerator("time") - if err != nil { - return "", fmt.Errorf("Build Array Time without timezone: %v", err) - } - return value, nil - } else { - value, err := RandomTime(fromyear, toyear) - if err != nil { - return "", fmt.Errorf("Build Time without timezone: %v", err) - } - return value, nil - } - - // Generate Random time with timezone - case strings.HasPrefix(dt, "time with time zone"): - if strings.HasSuffix(dt, "[]") { - ArrayArgs = map[string]interface{}{"fromyear": fromyear, "toyear": toyear} - value, err := ArrayGenerator("timetz") - if err != nil { - return "", fmt.Errorf("Build Time with timezone array: %v", err) - } - return value, nil - } else { - value, err := RandomTimetz(fromyear, toyear) - if err != nil { - return "", fmt.Errorf("Build Time with timezone: %v", err) - } - return value, nil - } - - // Generate Random ips - case StringHasPrefix(dt, ipkeywords): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("IP") - return value, nil - } else { - return RandomIP(), nil - } - - // Generate Random boolean - case strings.HasPrefix(dt, "boolean"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("bool") - return value, nil - } else { - return RandomBoolean(), nil - } - - // Generate Random text - case strings.HasPrefix(dt, "text"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("text") - return value, nil - } else { - return RandomParagraphs(), nil - } - - // Generate Random text & bytea - case strings.EqualFold(dt, "bytea"): - return RandomBytea(1024 * 1024), nil - - // Generate Random float values - case StringHasPrefix(dt, floatkeywords): - if strings.HasSuffix(dt, "[]") { // Float 
array - ArrayArgs = map[string]interface{}{"floatmin": 1, "floatmax": intranges["smallint"], "floatprecision": 3} - value, err := ArrayGenerator("float") - if err != nil { - return "", fmt.Errorf("Build Float Array: %v", err) - } - return value, nil - } else { // non float array - value, err := RandomFloat(1, intranges["smallint"], 3) - if err != nil { - return "", fmt.Errorf("Build Float: %v", err) - } - return value, nil - } - - // Generate Random numeric values with precision - case strings.HasPrefix(dt, "numeric"): - max, precision, err := FloatPrecision(dt) - if err != nil { - return "", fmt.Errorf("Build Numeric: %v", err) - } - if strings.HasSuffix(dt, "[]") { // Numeric Array - ArrayArgs = map[string]interface{}{"floatmin": 0, "floatmax": max, "floatprecision": precision} - value, err := ArrayGenerator("float") - if err != nil { - return "", fmt.Errorf("Build Numeric Float Array: %v", err) - } - return value, nil - } else { // Non numeric array - value, err := RandomFloat(0, max, precision) - if err != nil { - return "", fmt.Errorf("Build Numeric Float Array: %v", err) - } - value = TruncateFloat(value, max, precision) - return value, nil - } - - // Random bit generator - case strings.HasPrefix(dt, "bit"): - l, err := CharLen(dt) - if err != nil { - return "", fmt.Errorf("Build bit: %v", err) - } - if strings.HasSuffix(dt, "[]") { - ArrayArgs["bitlen"] = l - value, err := ArrayGenerator("bit") - if err != nil { - return "", fmt.Errorf("Build bit array: %v", err) - } - return value, nil - } else { - value := RandomBit(l) - return value, nil - } - - // Random UUID generator - case strings.HasPrefix(dt, "uuid"): - if strings.HasSuffix(dt, "[]") { - uuid, err := ArrayGenerator("uuid") - if err != nil { - return "", fmt.Errorf("Build UUID Array: %v", err) - } - return uuid, nil - } else { - uuid, err := RandomUUID() - if err != nil { - return "", fmt.Errorf("Build UUID: %v", err) - } - return uuid, nil - } - - // Random MacAddr Generator - case strings.HasPrefix(dt, "macaddr"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("macaddr") - return value, nil - } else { - return RandomMacAddress(), nil - } - - // Random Json - case strings.HasPrefix(dt, "json"): - if strings.HasSuffix(dt, "[]") { - return JsonXmlArrayGenerator("json"), nil - } else { - return RandomJson(false), nil - } - - // Random XML - case strings.HasPrefix(dt, "xml"): - if strings.HasSuffix(dt, "[]") { - return JsonXmlArrayGenerator("xml"), nil - } else { - return RandomXML(false), nil - } - - // Random Text Search Query - case strings.HasPrefix(dt, "tsquery"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("tsquery") - return value, nil - } else { - return RandomTSQuery(), nil - } - - // Random Text Search Vector - case strings.HasPrefix(dt, "tsvector"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("tsquery") - return value, nil - } else { - return RandomTSVector(), nil - } - - // Random Log Sequence number - case strings.HasPrefix(dt, "pg_lsn"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("pg_lsn") - return value, nil - } else { - return RandomLSN(), nil - } - - // Random Log Sequence number - case strings.HasPrefix(dt, "txid_snapshot"): - if strings.HasSuffix(dt, "[]") { - value, _ := ArrayGenerator("txid_snapshot") - return value, nil - } else { - return RandomTXID(), nil - } - - // Random GeoMetric data - case StringHasPrefix(dt, geoDataTypekeywords): - var randomInt int - if dt == "path" || dt == "polygon" { - randomInt, _ = RandomInt(1, 5) - 
} else { - randomInt, _ = RandomInt(1, 2) - } - if strings.HasSuffix(dt, "[]") { - dtype := strings.Replace(dt, "[]", "", 1) - value := GeometricArrayGenerator(randomInt, dtype) - return value, nil - } else { - return RandomGeometricData(randomInt, dt, false), nil - } - - - // If there is no datatype found then send the below message - default: - return "", fmt.Errorf("Unsupported datatypes found: %v", dt) - } - - return "", nil -} diff --git a/core/helper.go b/core/helper.go deleted file mode 100644 index 4025528..0000000 --- a/core/helper.go +++ /dev/null @@ -1,185 +0,0 @@ -package core - -import ( - "regexp" - "fmt" - "strconv" - "strings" - "math" - "os" - "time" - "path/filepath" - "log" - "bufio" -) - - -// Extract the current time now. -func TimeNow() string { - return time.Now().Format("20060102150405") -} - -// Create a file ( if not exists ), append the content and then close the file -func WriteToFile(filename string, message string) error { - - // open files r, w mode - file, err := os.OpenFile(filename, os.O_CREATE|os.O_APPEND|os.O_WRONLY,0600) - if err != nil { - return err - } - - // Close the file - defer file.Close() - - // Append the message or content to be written - if _, err = file.WriteString(message); err != nil { - return err - } - - return nil -} - -// List all the backup sql file to recreate the constraints -func ListFile(dir, suffix string) ([]string, error) { - return filepath.Glob(filepath.Join(dir, suffix)) -} - -// Read the file content and send it across -func ReadFile(filename string) ([]string, error) { - - var contentSaver []string - - // Open th file - file, err := os.Open(filename) - if err != nil { - log.Fatal(err) - } - defer file.Close() - - // Read the file line by line - scanner := bufio.NewScanner(file) - for scanner.Scan() { - contentSaver = append(contentSaver, scanner.Text()) - } - - if err := scanner.Err(); err != nil { - return contentSaver, err - } - return contentSaver, nil -} - -// Is the value string or integer -func IsIntorString(v string) bool { - _, err := strconv.Atoi(v) - if err != nil { - return false - } - return true -} - -// Ignore Error strings matches -func IgnoreErrorString(errmsg string, ignoreErr []string) bool { - for _, ignore := range ignoreErr { - if strings.HasSuffix(errmsg, ignore) || strings.HasPrefix(errmsg, ignore) { - return true - } - } - return false -} - -// Built a method to find if the values exits with a slice -func StringContains(item string, slice []string) bool { - set := make(map[string]struct{}, len(slice)) - for _, s := range slice { - set[s] = struct{}{} - } - _, ok := set[item] - return ok -} - -// Build a method to find if the value starts with specific word within a slice -func StringHasPrefix(item string, slice []string) bool { - set := make(map[string]struct{}, len(slice)) - for _, s := range slice { - if strings.HasPrefix(item, s) { - set[item] = struct{}{} - } - } - _, ok := set[item] - return ok -} - -// Extract total characters that the datatype char can store. 
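// For example (illustrative), CharLen("character varying(25)") returns 25,
// while a bare "character varying" with no length defined falls back to 1.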
-func CharLen(dt string) (int, error) { - var rgx = regexp.MustCompile(`\((.*?)\)`) - var returnValue int - var err error - rs := rgx.FindStringSubmatch(dt) - if len(rs) > 0 { // If the datatypes has number of value defined - returnValue, err = strconv.Atoi(rs[1]) - if err != nil { - return 0, err - } - } else { - returnValue = 1 - } - return returnValue, nil -} - -// Column Extractor from the provided constraint key -func ColExtractor(conkey,regExp string) (string, error) { - var rgx = regexp.MustCompile(regExp) - rs := rgx.FindStringSubmatch(conkey) - if len(rs) > 0 { - return rs[0], nil - } else { - return "", fmt.Errorf("Unable to extract the columns from the constraint key") - } - return "", nil -} - -// If given a datatype see if it has a bracket or not. -func BracketsExists(dt string) bool { - var rgx = regexp.MustCompile(`\(.*\)`) - rs := rgx.FindStringSubmatch(dt) - if len(rs) > 0 { - return true - } else { - return false - } -} - -// Extract Float precision from the float datatypes -func FloatPrecision(dt string) (int, int, error) { - - // check if brackets exists, if it doesn't then add some virtual values - if !BracketsExists(dt) && strings.HasSuffix(dt, "[]") { - dt = strings.Replace(dt, "[]", "", 1) + "(5,3)[]" - } else if !BracketsExists(dt) && !strings.HasSuffix(dt, "[]") { - dt = dt + "(5,3)" - } - // Get the ranges in the brackets - var rgx = regexp.MustCompile(`\((.*?)\)`) - rs := rgx.FindStringSubmatch(dt) - split := strings.Split(rs[1], ",") - m, err := strconv.Atoi(split[0]) - if err != nil { - return 0, 0, fmt.Errorf("Float Precision (min): %v", err) - } - p, err := strconv.Atoi(split[1]) - if err != nil { - return 0, 0, fmt.Errorf("Float Precision (precision): %v", err) - } - return m, p, nil -} - -// If the random value of numeric datatype is greater than specifed, it ends up with -// i.e error "numeric field overflow" -// The below helper helps to reduce the size of the value -func TruncateFloat(f float64, max, precision int) float64 { - stringFloat := strconv.FormatFloat(f, 'f', precision, 64) - if len(stringFloat) > max { - f = math.Log10(f) - } - return f -} \ No newline at end of file diff --git a/core/progressbar.go b/core/progressbar.go deleted file mode 100644 index adf9121..0000000 --- a/core/progressbar.go +++ /dev/null @@ -1,51 +0,0 @@ -package core - -import ( - "time" - "github.com/vbauerster/mpb" - "github.com/vbauerster/mpb/decor" -) - -// Progress bar for the app. 
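// Typical usage (hypothetical call sites): ProgressBar(totalSteps, "mocking table foo")
// starts a new bar, IncrementBar() advances it by one step after each unit of work,
// and CloseProgressBar() stops it once all steps are done.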
-var (
-	bar *mpb.Bar
-	p   *mpb.Progress
-)
-
-func ProgressBar(steps int, progressMsg string) {
-
-	// Start a new bar
-	p = mpb.New(
-		mpb.WithWidth(100),
-		mpb.WithRefreshRate(120*time.Millisecond),
-	)
-
-	// Total steps to take and the message of this bar
-	total := steps
-	name := " " + progressMsg
-
-	// Add a bar
-	bar = p.AddBar(int64(total),
-
-		// Prepending decorators
-		mpb.PrependDecorators(
-			decor.Elapsed(4, decor.DSyncSpace),
-		),
-
-		// Appending decorators
-		mpb.AppendDecorators(
-			decor.Percentage(5, 0),
-			decor.StaticName(name, len(name), 0),
-		),
-	)
-}
-
-// Increment Progress bar
-func IncrementBar() {
-	bar.Incr(1)
-}
-
-// Close progress bar
-func CloseProgressBar() {
-	p.Stop()
-}
diff --git a/core/random_data_generator.go b/core/random_data_generator.go
deleted file mode 100644
index 4a3f1b2..0000000
--- a/core/random_data_generator.go
+++ /dev/null
@@ -1,348 +0,0 @@
-package core
-
-import (
-	"errors"
-	"fmt"
-	"math"
-	"math/rand"
-	"os/exec"
-	"strconv"
-	"strings"
-	"time"
-
-	"github.com/icrowley/fake"
-)
-
-// Shared, time-seeded random source used by the generators below
-var r *rand.Rand
-
-func init() {
-	r = rand.New(rand.NewSource(time.Now().UnixNano()))
-}
-
-// Random Bytea data
-func RandomBytea(maxlen int) []byte {
-	result := make([]byte, r.Intn(maxlen)+1)
-	for i := range result {
-		result[i] = byte(r.Intn(256))
-	}
-	return result
-}
-
-// Random text generator based on the length of string needed
-func RandomString(strlen int) string {
-	const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
-	result := make([]byte, strlen)
-	for i := range result {
-		result[i] = chars[r.Intn(len(chars))]
-	}
-	return string(result)
-}
-
-// Random number generator within the min and max range specified
-func RandomInt(min, max int) (int, error) {
-	if min >= max {
-		return 0, errors.New("Min value is greater than or equal to Max value, cannot generate data within this range")
-	}
-	rand.Seed(time.Now().UnixNano())
-	return rand.Intn(max-min) + min, nil
-}
-
-// Round a float to the nearest integer
-func round(num float64) int {
-	return int(num + math.Copysign(0.5, num))
-}
-
-// Random Float generator based on the precision specified
-func RandomFloat(min, max, precision int) (float64, error) {
-	output := math.Pow(10, float64(precision))
-	randNumber, err := RandomInt(min, max)
-	if err != nil {
-		return 0.0, err
-	}
-	return float64(round(float64(randNumber)/rand.Float64()*output)) / output, nil
-}
-
-// Random calendar date time generator
-func RandomCalenderDateTime(fromyear, toyear int) (time.Time, error) {
-	if fromyear > toyear {
-		return time.Now(), errors.New("Number of years behind is greater than number of years in the future")
-	}
-	min := time.Now().AddDate(fromyear, 0, 0).Unix()
-	max := time.Now().AddDate(toyear, 0, 0).Unix()
-	delta := max - min
-	sec := rand.Int63n(delta) + min
-	return time.Unix(sec, 0), nil
-}
-
-// Random date
-func RandomDate(fromyear, toyear int) (string, error) {
-	timestamp, err := RandomCalenderDateTime(fromyear, toyear)
-	if err != nil {
-		return "", err
-	}
-	return timestamp.Format("2006-01-02"), nil
-}
-
-// Random Timestamp without time zone
-func RandomTimestamp(fromyear, toyear int) (string, error) {
-	timestamp, err := RandomCalenderDateTime(fromyear, toyear)
-	if err != nil {
-		return "", err
-	}
-	return timestamp.Format("2006-01-02 15:04:05"), nil
-}
-
-// Random Timestamp with time zone
-func RandomTimestamptz(fromyear, toyear int) (string, error) {
-	timestamp, err := RandomCalenderDateTime(fromyear, toyear)
-	if err != nil {
-		return "", err
-	}
return timestamp.Format("2006-01-02 15:04:05.000000"), nil -} - -// Random Time without time zone -func RandomTime(fromyear, toyear int) (string, error) { - timestamp, err := RandomCalenderDateTime(fromyear, toyear) - if err != nil { - return "", err - } - return timestamp.Format("15:04:05"), nil -} - -// Random Timestamp without time zone -func RandomTimetz(fromyear, toyear int) (string, error) { - timestamp, err := RandomCalenderDateTime(fromyear, toyear) - if err != nil { - return "", err - } - return timestamp.Format("15:04:05.000000"), nil -} - -// Random bool generator based on if number is even or not -func RandomBoolean() bool { - number, _ := RandomInt(1, 9999) - if number%2 == 0 { - return true - } else { - return false - } -} - -// Random Paragraphs -func RandomParagraphs() string { - n, _ := strconv.Atoi(fake.DigitsN(1)) - return fake.ParagraphsN(n) -} - -// Random IPv6 & IPv4 Address -func RandomIP() string { - number, _ := RandomInt(1, 9999) - if number%2 == 0 { - return fake.IPv4() - } else { - return fake.IPv6() - } -} - -// Random bit -func RandomBit(max int) string { - var bitValue string - for i := 0; i < max; i++ { - if RandomBoolean() { - bitValue = bitValue + "1" - } else { - bitValue = bitValue + "0" - } - } - return bitValue -} - -// Random UUID -func RandomUUID() (string, error) { - // To generate random UUID, we will use unix tool "uuidgen" (unix utility) - uuidString, err := exec.Command("uuidgen").Output() - if err != nil { - return "", fmt.Errorf("Unable to run uuidgen to generate UUID data: %v", err) - } - return strings.TrimSpace(string(uuidString)), nil -} - -// Random Mac Address -func RandomMacAddress() string { - return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x", - RandomString(1), RandomString(1), - RandomString(1), RandomString(1), - RandomString(1), RandomString(1)) -} - -// Random Text Search Query -func RandomTSQuery() string { - number, _ := RandomInt(1, 9999) - number = number % 5 - if number == 0 { - return fake.WordsN(1) + " & " + fake.WordsN(1) - } else if number == 1 { - return fake.WordsN(1) + " | " + fake.WordsN(1) - } else if number == 2 { - return " ! " + fake.WordsN(1) + " & " + fake.WordsN(1) - } else if number == 3 { - return fake.WordsN(1) + " & " + fake.WordsN(1) + " & ! 
" + fake.WordsN(1) - } else { - return fake.WordsN(1) + " & ( " + fake.WordsN(1) + " | " + fake.WordsN(1) + " )" - } - return "" -} - -// Random Text Search Query -func RandomTSVector() string { - return fake.SentencesN(fake.Day()) -} - -// Random Geometric data -func RandomGeometricData(randomInt int, GeoMetry string, IsItArray bool) string { - var geometry []string - if GeoMetry == "point" { // Syntax for point datatype - if IsItArray { // If Array - return "\"(" + fake.DigitsN(2) + "," + fake.DigitsN(3) + ")\"" - } else { - return "(" + fake.DigitsN(2) + "," + fake.DigitsN(3) + ")" - } - } else if GeoMetry == "circle" { // Syntax for circle datatype - if IsItArray { // If Array - return "\"(" + fake.DigitsN(2) + "," + fake.DigitsN(3) + ")," + fake.DigitsN(2) + ")\"" - } else { - return "(" + fake.DigitsN(2) + "," + fake.DigitsN(3) + ")," + fake.DigitsN(2) + ")" - } - - } else { // Syntax for the rest of geometry datatype - for i := 0; i < randomInt; i++ { - x, _ := RandomFloat(1, 10, 2) - y, _ := RandomFloat(1, 10, 2) - geometry = append(geometry, "("+fmt.Sprintf("%v", x)+","+fmt.Sprintf("%v", y)+")") - } - if IsItArray { // If Array - return "\"(" + strings.Join(geometry, ",") + ")\"" - } else { - return "(" + strings.Join(geometry, ",") + ")" - } - } - return "" -} - - -// Random Log Sequence Number -func RandomLSN() string { - return fmt.Sprintf("%02x/%02x", - RandomString(1), RandomString(4)) -} - -// Random transaction XID -func RandomTXID() string { - x, _ := strconv.Atoi(fake.DigitsN(8)) - y, _ := strconv.Atoi(fake.DigitsN(8)) - if x > y { // left side of ":" should be always less than right side - return fmt.Sprintf("%v:%v:", y, x) - } else { - return fmt.Sprintf("%v:%v:", x, y) - } - return "" -} - -// Random JSON generator -func RandomJson(IsItArray bool) string { - jsonData := "{" + - " \"_id\": \"" + RandomString(24) + "\"," + - " \"index\": \"" + fake.DigitsN(10) + "\"," + - " \"guid\": \"" + RandomString(8) + "-" + RandomString(4) + "-" + RandomString(4) + "-" + RandomString(4) + "-" + RandomString(12) + "\"," + - " \"isActive\": \"" + strconv.FormatBool(RandomBoolean()) + "\"," + - " \"balance\": \"$" + fake.Digits() + "." + fake.DigitsN(2) + "\"," + - " \"website\": \"https://" + fake.DomainName() + "/" + fake.WordsN(1) + "\"," + - " \"age\": \"" + fake.DigitsN(2) + "\"," + - " \"username\": \"" + fake.UserName() + "\"," + - " \"eyeColor\": \"" + fake.Color() + "\"," + - " \"name\": \"" + fake.FullName() + "\"," + - " \"gender\": \"" + fake.Gender() + "\"," + - " \"company\": \"" + fake.Company() + "\"," + - " \"email\": \"" + fake.EmailAddress() + "\"," + - " \"phone\": \"" + fake.Phone() + "\"," + - " \"address\": \"" + fake.StreetAddress() + "\"," + - " \"zipcode\": \"" + fake.Zip() + "\"," + - " \"state\": \"" + fake.State() + "\"," + - " \"country\": \"" + fake.Country() + "\"," + - " \"about\": \"" + fake.WordsN(12) + "\"," + - " \"Machine IP\": \"" + RandomIP() + "\"," + - " \"job title\": \"" + fake.JobTitle() + "\"," + - " \"registered\": \"" + strconv.Itoa(fake.Year(2000, 2050)) + "-" + strconv.Itoa(fake.MonthNum()) + "-" + strconv.Itoa(fake.Day()) + "T" + fake.DigitsN(2) + ":" + fake.DigitsN(2) + ":" + fake.DigitsN(2) + " -" + fake.DigitsN(1) + ":" + fake.DigitsN(2) + "\"," + - " \"latitude\": \"" + fake.DigitsN(2) + "." + fake.DigitsN(6) + "\"," + - " \"longitude\": \"" + fake.DigitsN(2) + "." 
+ fake.DigitsN(6) + "\"," + - " \"tags\": [" + - " \"" + fake.WordsN(1) + "\"," + - " \"" + fake.WordsN(1) + "\"," + - " \"" + fake.WordsN(1) + "\"," + - " \"" + fake.WordsN(1) + "\"," + - " \"" + fake.WordsN(1) + "\"," + - " \"" + fake.WordsN(1) + "\"," + - " \"" + fake.WordsN(1) + "\"" + - " ]," + - " \"friends\": [" + - " {" + - " \"id\": \"" + fake.DigitsN(2) + "\"," + - " \"name\": \"" + fake.FullName() + "\"" + - " }," + - " {" + - " \"id\": \"" + fake.DigitsN(2) + "\"," + - " \"name\": \"" + fake.FullName() + "\"" + - " }," + - " {" + - " \"id\": \"" + fake.DigitsN(2) + "\"," + - " \"name\": \"" + fake.FullName() + "\"" + - " }" + - " ]," + - " \"greeting\": \"" + fake.Sentence() + "\"," + - " \"favoriteBrand\": \"" + fake.Brand() + "\"" + - " }" - - if IsItArray { - return strings.Replace(jsonData, "\"", "\\\"", -1 ) - } else { - return jsonData - } -} - -// Random XML Generator -func RandomXML(IsItArray bool) string { - xmlData := "" + - "" + - " " + fake.FullName() + "" + - " " + - " " + fake.FullName() + "" + - "
" + fake.StreetAddress() + "
" + - " " + fake.City() + "" + - " " + fake.Country() + "" + - " " + fake.EmailAddress() + "" + - " " + fake.Phone() + "" + - "
" + - " " + - " " + fake.Title() + "" + - " " + fake.Sentences() + "" + - " " + fake.Digits() + "" + - " " + fake.Color() + "" + - " " + fake.Digits() + "." + fake.DigitsN(2) + "" + - " " + - " " + - " " + fake.Title() + "" + - " " + fake.Digits() + "" + - " " + fake.Digits() + "." + fake.DigitsN(2) + "" + - " " + - "
" - - if IsItArray { - return strings.Replace(xmlData, "\"", "\\\"", -1 ) - } else { - return xmlData - } -} \ No newline at end of file diff --git a/db/postgres/backup_constraints.go b/db/postgres/backup_constraints.go deleted file mode 100644 index 99d37c5..0000000 --- a/db/postgres/backup_constraints.go +++ /dev/null @@ -1,169 +0,0 @@ -package postgres - -import ( - "database/sql" - "fmt" - "strings" - - "github.com/pivotal/mock-data/core" -) - -type constraint struct { - table, column string -} - -var ( - savedConstraints = map[string][]constraint{"PRIMARY": {}, "CHECK": {}, "UNIQUE": {}, "FOREIGN": {}} - constraints = []string{"p", "f", "u", "c"} - ignoreErr = []string{ - "pq: multiple primary keys for table", - "already exists"} -) - -// Backup DDL of objects which are going to drop to -// allow faster and smooth transition of inputting data. -func BackupDDL(db *sql.DB, timestamp string) error { - - // Constraints - err := backupConstraints(db, timestamp) - if err != nil { - return err - } - - // Unique Index - err = backupIndexes(db, timestamp) - if err != nil { - return err - } - - return nil -} - -// Backup all the constraints -func backupConstraints(db *sql.DB, timestamp string) error { - - for _, constr := range constraints { - filename := "mockd_constriant_backup_" + constr + "_" + timestamp + ".sql" - rows, err := db.Query(GetPGConstraintDDL(constr)) - for rows.Next() { - var table, conname, conkey string - - // Scan and store the rows - err = rows.Scan(&table, &conname, &conkey) - if err != nil { - return fmt.Errorf("Error extracting the rows of the list of constraint DDL: %v", err) - } - - // Generate the DDL command - message := fmt.Sprintf("ALTER TABLE %s ADD CONSTRAINT %s %s;\n", table, conname, conkey) - - // write this to the file - err = core.WriteToFile(filename, message) - if err != nil { - return fmt.Errorf("Error in saving the constraints DDL to the file: %v", err) - } - - // Before dropping the constraints ensure we have the information - // saved about the state of what constraints this table had - ctype, err := constraintFinder(constr) - if err != nil { - return err - } - savedConstraints[ctype] = append(savedConstraints[ctype], constraint{table: table, column: conkey}) - } - } - - return nil -} - -// Backup all the unique index -func backupIndexes(db *sql.DB, timestamp string) error { - - filename := "mockd_index_backup_u_" + timestamp + ".sql" - rows, err := db.Query(GetPGIndexDDL()) - for rows.Next() { - var table, index string - // Scan and store the rows - err = rows.Scan(&table, &index) - if err != nil { - return fmt.Errorf("Error extracting the rows of the list of Index DDL: %v", err) - } - - // Generate the DDL command - message := fmt.Sprintf("%s;\n", index) - - // write this to the file - err = core.WriteToFile(filename, message) - if err != nil { - return fmt.Errorf("Error in saving the index DDL to the file: %v", err) - } - - // Save all the index information - savedConstraints["UNIQUE"] = append(savedConstraints["UNIQUE"], constraint{table: table, column: index}) - } - - return nil -} - -func RemoveConstraints(db *sql.DB, table string) error { - - var statment string - - // Obtain all the constraints on the table that we going to load data to - rows, err := db.Query(GetConstraintsPertab(table)) - if err != nil { - return err - } - - // scan through the rows and generate the drop command - for rows.Next() { - var tab, conname, concol, contype string - - // Scan and store the rows - err = rows.Scan(&tab, &conname, &concol, &contype) - if err != nil { 
- return fmt.Errorf("Error extracting all the constriant list on the table: %v", err) - } - - // Generate the DROP DDL command - if contype == "index" { // if the constriant is a index - statment = fmt.Sprintf("DROP INDEX %s CASCADE;", conname) - } else { // if the constraint is a constraint - statment = fmt.Sprintf("ALTER TABLE %s DROP CONSTRAINT %s CASCADE;", table, conname) - } - - // Execute the statement - _, err = db.Exec(statment) - if err != nil { - // Ignore does not exist error eg.s the primary key is dropped - // then the index also goes along with it , so no need to panic here - errmsg := fmt.Sprintf("%s", err) - if !strings.HasSuffix(errmsg, "does not exist") { - return err - } - } - } - - return nil -} - -// What is the type of constraints is it -func constraintFinder(contype string) (string, error) { - switch { - // Check constraint - case strings.Contains(contype, "c"): - return "CHECK", nil - // Primary constraint - case strings.Contains(contype, "p"): - return "PRIMARY", nil - // Foreign constraint - case strings.Contains(contype, "f"): - return "FOREIGN", nil - // Unique constraint - case strings.Contains(contype, "u"): - return "UNIQUE", nil - default: - return "", fmt.Errorf("Cannot understand the type of constraints") - } - return "", nil -} diff --git a/db/postgres/recreate_check_constraints.go b/db/postgres/recreate_check_constraints.go deleted file mode 100644 index ae564a9..0000000 --- a/db/postgres/recreate_check_constraints.go +++ /dev/null @@ -1,50 +0,0 @@ -package postgres - -import ( - "fmt" - "strings" - "database/sql" - "github.com/pivotal/mock-data/core" -) - -// -// Currently there is issue with check constraints, so this check is just -// a dummy, will work on it once we have a proper idea -// - - -// fix Check constraints -func fixCheck(db *sql.DB, ck constraint) error { - - var TotalViolators int - - // Extract the key columns - keys, err := core.ColExtractor(ck.column, `\(.*?\)`) - if err != nil { - return fmt.Errorf("Unable to determine CK violators key columns: %v", err) - } - cols := strings.Trim(keys, "()") - - // Extract the column name from it - colname, err := core.ColExtractor(cols, `^(.*?)\s`) - if err != nil { - return fmt.Errorf("Unable to determine column name from keys: %v", err) - } - - // Check if this table is violating any check constraints - rows, err := db.Query(GetTotalCKViolator(ck.table, colname, cols)) - if err != nil { - return fmt.Errorf("Unable to get the total primary key violators: %v", err) - } - for rows.Next() { - err = rows.Scan(&TotalViolators) - if err != nil { - return fmt.Errorf("Unable to scan the total primary key violators: %v", err) - } - - } - - log.Info(TotalViolators) - - return nil -} \ No newline at end of file diff --git a/db/postgres/recreate_constraints.go b/db/postgres/recreate_constraints.go deleted file mode 100644 index dba4b76..0000000 --- a/db/postgres/recreate_constraints.go +++ /dev/null @@ -1,105 +0,0 @@ -package postgres - -import ( - "database/sql" - "fmt" - "github.com/op/go-logging" - "github.com/pivotal/mock-data/core" -) - -var ( - log = logging.MustGetLogger("mockd") -) - -// fix the data loaded so that we can reenable the constraints -func FixConstraints(db *sql.DB, timestamp string, debug bool) error { - - // Fix the constraints in this order - //var constr = []string{"PRIMARY", "UNIQUE", "CHECK", "FOREIGN"} - var constr = []string{"PRIMARY", "UNIQUE", "FOREIGN"} - for _, v := range constr { - log.Infof("Checking for any %s KEYS, fixing them if there is any violations", v) - for _, con 
:= range savedConstraints[v] { - switch { - case v == "PRIMARY": - err := fixPKey(db, con, v, debug) - if err != nil { - return err - } - case v == "UNIQUE": // Run the same logic as primary key - err := fixPKey(db, con, v, debug) - if err != nil { - return err - } - case v == "CHECK": - err := fixCheck(db, con) - if err != nil { - return err - } - case v == "FOREIGN": - err := fixFKey(db, con, debug) - if err != nil { - return err - } - } - } - } - - // Recreate constraints - failureDetected, err := recreateAllConstraints(db, timestamp, debug) - if failureDetected || err != nil { - return err - } - - return nil -} - -// Recreate all the constraints of the database ( in case we have dropped any ) -func recreateAllConstraints(db *sql.DB, timestamp string, debug bool) (bool, error) { - - var AnyErrors bool - log.Info("Starting to recreating all the constraints of the table ...") - - // list the backup files collected. - for _, con := range constraints { - backupFile, err := core.ListFile(".", "*_"+con+"_"+timestamp+".sql") - if err != nil { - return AnyErrors, fmt.Errorf("Error in getting the list of backup files: %v", err) - } - - // run it only if we do find the backup file - if len(backupFile) > 0 { - contents, err := core.ReadFile(backupFile[0]) - if err != nil { - return AnyErrors, fmt.Errorf("Error in reading the backup files: %v", err) - } - - // Recreate all constraints one by one, if we can't create it then display the message - // on the screen and continue with the rest, since we don't want it to fail if we cannot - // recreate constraint of a single table. - for _, content := range contents { - - // Print the constraints that we are going to fix. - if debug { - log.Debugf("Recreating the constraint: \"%v\"", content) - } - - _, err = db.Exec(content) - if err != nil && !core.IgnoreErrorString(fmt.Sprintf("%s", err), ignoreErr) { - AnyErrors = true - log.Errorf("Failed to create constraints: \"%v\"", content) - } else if debug { // if debug is on, tell user that we are able to successful create it - log.Debugf("Successful in creating constraint: \"%v\"", content) - } - } - } - } - - // If any error detected, tell the user about it - if AnyErrors { - return AnyErrors, fmt.Errorf("Detected failure in creating constraints... ") - } else { // else we are all good. - return AnyErrors, nil - } - -} diff --git a/db/postgres/recreate_foreign_constraints.go b/db/postgres/recreate_foreign_constraints.go deleted file mode 100644 index d5cb7ae..0000000 --- a/db/postgres/recreate_foreign_constraints.go +++ /dev/null @@ -1,167 +0,0 @@ -package postgres - -import ( - "fmt" - "strings" - "database/sql" - "github.com/pivotal/mock-data/core" -) - -// Fix Foreign Key -func fixFKey(db *sql.DB, fk constraint, debug bool) error { - - // The objects involved in this foriegn key clause - fkeyObjects, err := getFKeyObjects(fk) - if err != nil { - return fmt.Errorf("Unable to scan the total primary key violators: %v", err) - } - - // Time to fix the foriegn key issues - err = UpdateFKViolationRecord(db, fkeyObjects, debug) - if err != nil { - return err - } - return nil -} - -// Get Foriegn key objects -type foreignKey struct { - table, column, reftable, refcolumn string -} - - -// Functions to extract the columns names from the provided -// command output. 
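// For example (hypothetical), a backed-up clause like
//   FOREIGN KEY (dept_id) REFERENCES public.dept(id)
// is parsed into foreignKey{table, "dept_id", "public.dept", "id"}.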
-func getFKeyObjects(fk constraint) (foreignKey, error) {
-
-	var foreignClause foreignKey
-
-	// Extract the reference clause from the value
-	refClause, err := core.ColExtractor(fk.column, `REFERENCES[ \t]*([^\n\r]*\))`)
-	if err != nil {
-		return foreignClause, fmt.Errorf("Unable to extract reference key clause from fk clause: %v", err)
-	}
-
-	// Extract the fk column from the clause
-	fkCol, err := core.ColExtractor(strings.Replace(fk.column, refClause, "", -1), `\(.*?\)`)
-	if err != nil {
-		return foreignClause, fmt.Errorf("Unable to extract foreign key column from fk clause: %v", err)
-	}
-	fkCol = strings.Trim(fkCol, "()")
-
-	// Extract the reference column from the clause
-	refCol, err := core.ColExtractor(refClause, `\(.*?\)`)
-	if err != nil {
-		return foreignClause, fmt.Errorf("Unable to extract reference key column from fk clause: %v", err)
-	}
-
-	// Extract the reference table from the clause
-	refTab := strings.Replace(refClause, refCol, "", -1)
-	refTab = strings.Replace(refTab, "REFERENCES ", "", -1)
-	refCol = strings.Trim(refCol, "()")
-
-	foreignClause = foreignKey{fk.table, fkCol, refTab, refCol}
-
-	return foreignClause, nil
-}
-
-// Update the foreign key violation tables.
-func UpdateFKViolationRecord(db *sql.DB, fkObjects foreignKey, debug bool) error {
-
-	var TotalViolators int = 1
-
-	// Get the total number of records in the reference table
-	totalRow, err := totalRows(db, fkObjects.reftable)
-	if err != nil {
-		return err
-	}
-
-	if debug {
-		log.Debugf("Checking / Fixing FOREIGN KEY Violation table: %s, column: %s, reference: %s(%s)", fkObjects.table, fkObjects.column, fkObjects.reftable, fkObjects.refcolumn)
-	}
-
-	// Loop till there are no violators left
-	for TotalViolators > 0 {
-
-		// Total foreign key violators
-		TotalViolators, err = totalFKViolators(db, fkObjects)
-		if err != nil {
-			return err
-		}
-
-		// Run the fix only if there are violations
-		if TotalViolators > 0 {
-			err := updateFKViolators(db, fkObjects, totalRow)
-			if err != nil {
-				return err
-			}
-		}
-	}
-
-	return nil
-}
-
-// Total rows of the table
-func totalRows(db *sql.DB, table string) (string, error) {
-
-	var TotalRow string
-
-	// Total rows in the reference table
-	rows, err := db.Query(TotalRows(table))
-	if err != nil {
-		return TotalRow, fmt.Errorf("Error in getting total rows from the table: %v", err)
-	}
-	for rows.Next() {
-		err = rows.Scan(&TotalRow)
-		if err != nil {
-			return TotalRow, fmt.Errorf("Error in scanning total rows from the table: %v", err)
-		}
-	}
-
-	return TotalRow, nil
-}
-
-// Total foreign key violators
-func totalFKViolators(db *sql.DB, fkObjects foreignKey) (int, error) {
-
-	var TotalViolators int
-
-	// Get the total number of rows that violate the FK constraint
-	rows, err := db.Query(GetTotalFKViolators(fkObjects.table, fkObjects.column, fkObjects.reftable, fkObjects.refcolumn))
-	if err != nil {
-		return TotalViolators, fmt.Errorf("Error in getting total violators of foreign keys: %v", err)
-	}
-	for rows.Next() {
-		err = rows.Scan(&TotalViolators)
-		if err != nil {
-			return TotalViolators, fmt.Errorf("Error in scanning total violators of foreign keys: %v", err)
-		}
-	}
-
-	return TotalViolators, nil
-}
-
-// Update foreign key violators
-func updateFKViolators(db *sql.DB, fkObjects foreignKey, totalRows string) error {
-
-	var violatorKey string
-
-	// Get all the rows that violate the FK constraint
-	rows, err := db.Query(GetFKViolators(fkObjects.table, fkObjects.column, fkObjects.reftable, fkObjects.refcolumn))
-	if err != nil {
-		return fmt.Errorf("Error in retrieving violator values of foreign keys: %v", err)
-	}
-	for rows.Next() {
-		err = rows.Scan(&violatorKey)
-		if err != nil {
-			return fmt.Errorf("Error in scanning violator values of foreign keys: %v", err)
-		}
-		_, err = db.Exec(UpdateFKeys(fkObjects.table, fkObjects.column, fkObjects.reftable, fkObjects.refcolumn, violatorKey, totalRows))
-		if err != nil {
-			return fmt.Errorf("Error in updating violators of foreign keys: %v", err)
-		}
-	}
-
-	return nil
-}
diff --git a/db/postgres/recreate_primary_constraints.go b/db/postgres/recreate_primary_constraints.go
deleted file mode 100644
index f9e80ec..0000000
--- a/db/postgres/recreate_primary_constraints.go
+++ /dev/null
@@ -1,140 +0,0 @@
-package postgres
-
-import (
-	"strings"
-	"fmt"
-	"database/sql"
-	"github.com/pivotal/mock-data/core"
-)
-
-// Fix Primary Key
-func fixPKey(db *sql.DB, pk constraint, fixtype string, debug bool) error {
-
-	// Start with 1 violation to begin the loop
-	var TotalViolators int = 1
-
-	// Extract the columns from the list that was collected during backup
-	keys, err := core.ColExtractor(pk.column, `\(.*?\)`)
-	if err != nil {
-		return fmt.Errorf("Unable to determine PK violators key columns: %v", err)
-	}
-	cols := strings.Trim(keys, "()")
-
-	// If logging is turned on then print this message on the screen.
-	if debug {
-		log.Debugf("Checking / Fixing %s KEY Violation table: %s, column: %s", fixtype, pk.table, cols)
-	}
-
-	// Loop till we get a 0 value (i.e. 0 violations)
-	for TotalViolators > 0 {
-
-		// How many violations do we have; if zero, the loop breaks
-		TotalViolators, err = getTotalViolation(db, GetTotalPKViolator(pk.table, cols))
-		if err != nil {
-			return err
-		}
-
-		// Perform the below action only if the violator count is greater than 0
-		if TotalViolators > 0 {
-
-			// If two or more columns form a PK or UK,
-			// fix them column by column.
-			totalColumns := strings.Split(cols, ",")
-
-			// Get the datatype associated with each column
-			dtypes, err := obtainDataType(db, GetDatatype(pk.table, totalColumns))
-			if err != nil {
-				return err
-			}
-
-			// Fix the primary constraints by picking the columns from the
-			// array, i.e. we update the columns one by one.
-			for _, v := range dtypes {
-				column := strings.Split(v, ":")[0]
-				dtype := strings.Split(v, ":")[1]
-				err = fixPKViolator(db, pk.table, column, dtype)
-				if err != nil {
-					return err
-				}
-			}
-		}
-	}
-
-	return nil
-}
-
-// Get the datatype of each of the columns provided.
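-// Each entry in the result is a "column:datatype" pair, e.g. for the demo
-// actor table something like "first_name:character varying(15)".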
-func obtainDataType(db *sql.DB, query string) ([]string, error) {
-	var colname, dtype string
-	var dtypes []string
-
-	rows, err := db.Query(query)
-	if err != nil {
-		return dtypes, fmt.Errorf("Unable to get the datatype of key violators: %v", err)
-	}
-	for rows.Next() {
-		err = rows.Scan(&colname, &dtype)
-		if err != nil {
-			return dtypes, fmt.Errorf("Unable to scan the datatype of key violators: %v", err)
-		}
-		dtypes = append(dtypes, colname+":"+dtype)
-	}
-
-	return dtypes, nil
-}
-
-
-// Get the total violations of the columns that are part of the PK
-func getTotalViolation(db *sql.DB, query string) (int, error) {
-
-	var TotalViolators int
-
-	// Check if this table is violating any primary constraints
-	rows, err := db.Query(query)
-	if err != nil {
-		return TotalViolators, fmt.Errorf("Unable to get the total key violators: %v", err)
-	}
-	for rows.Next() {
-		err = rows.Scan(&TotalViolators)
-		if err != nil {
-			return TotalViolators, fmt.Errorf("Unable to scan the total key violators: %v", err)
-		}
-	}
-
-	return TotalViolators, nil
-}
-
-
-// Fix primary key string violators.
-func fixPKViolator(db *sql.DB, tab, col, dttype string) error {
-
-	// Get all the values that violate the primary key constraint
-	rows, err := db.Query(GetPKViolator(tab, col))
-	if err != nil {
-		return fmt.Errorf("Error in getting rows of PK violators: %v", err)
-	}
-	for rows.Next() {
-
-		// Generate random data based on the datatype
-		newdata, err := core.BuildData(dttype)
-		if err != nil {
-			return err
-		}
-
-		// Update the column
-		var duplicateRow string
-		err = rows.Scan(&duplicateRow)
-		if err != nil {
-			return fmt.Errorf("Error in scanning rows of PK violators: %v", err)
-		}
-		_, err = db.Exec(UpdatePKey(tab, col, duplicateRow, fmt.Sprintf("%v", newdata)))
-		if err != nil {
-			return fmt.Errorf("Error in fixing the rows of PK violators: %v", err)
-		}
-	}
-
-	return nil
-}
-
diff --git a/db/postgres/sql.go b/db/postgres/sql.go
deleted file mode 100644
index 5002cc9..0000000
--- a/db/postgres/sql.go
+++ /dev/null
@@ -1,215 +0,0 @@
-package postgres
-
-import "strings"
-
-// Postgres version
-func PGVersion() string {
-	return "select version()"
-}
-
-// Query to list all tables
-// Postgres 9 and above
-func PGAllTablesQry1() string {
-	return "SELECT n.nspname || '.' || c.relname " +
-		"FROM pg_catalog.pg_class c " +
-		"  LEFT JOIN pg_catalog.pg_namespace n " +
-		"    ON n.oid = c.relnamespace " +
-		"WHERE c.relkind IN ( 'r', '' ) " +
-		"  AND n.nspname <> 'pg_catalog' " +
-		"  AND n.nspname <> 'information_schema' " +
-		"  AND n.nspname !~ '^pg_toast' " +
-		"  AND n.nspname !~ '^gp_toolkit' " +
-		"  AND c.relkind = 'r' " +
-		"ORDER BY 1 "
-}
-
-// Greenplum, HDB, postgres 8.3 etc
-func PGAllTablesQry2() string {
-	return "SELECT n.nspname || '.' || c.relname " +
-		"FROM pg_catalog.pg_class c " +
-		"  LEFT JOIN pg_catalog.pg_namespace n " +
-		"    ON n.oid = c.relnamespace " +
-		"WHERE c.relkind IN ( 'r', '' ) " +
-		"  AND n.nspname <> 'pg_catalog' " +
-		"  AND n.nspname <> 'information_schema' " +
-		"  AND n.nspname !~ '^pg_toast' " +
-		"  AND n.nspname <> 'gp_toolkit' " +
-		"  AND c.relkind = 'r' " +
-		"  AND c.relstorage IN ('a', 'h') " +
-		"ORDER BY 1 "
-}
-
-// Get all columns of the table
-// Postgres 9 and above
-func PGColumnQry1(table string) string {
-	return "SELECT a.attname, " +
-		"  pg_catalog.Format_type(a.atttypid, a.atttypmod), " +
-		"  COALESCE((SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128) " +
-		"    FROM pg_catalog.pg_attrdef d " +
-		"    WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef), '') " +
-		"FROM pg_catalog.pg_attribute a " +
-		"WHERE a.attrelid = '" + table + "'::regclass " +
-		"AND a.attnum > 0 " +
-		"AND NOT a.attisdropped " +
-		"ORDER BY a.attnum "
-}
-
-// Postgres 8.3, GPDB, HDB etc
-func PGColumnQry2(table string) string {
-	return "SELECT a.attname, " +
-		"  pg_catalog.Format_type(a.atttypid, a.atttypmod), " +
-		"  COALESCE((SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128) " +
-		"    FROM pg_catalog.pg_attrdef d " +
-		"    WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef), '') " +
-		"FROM pg_catalog.pg_attribute a " +
-		"LEFT OUTER JOIN pg_catalog.pg_attribute_encoding e " +
-		"  ON e.attrelid = a.attrelid " +
-		"  AND e.attnum = a.attnum " +
-		"WHERE a.attrelid = '" + table + "'::regclass " +
-		"AND a.attnum > 0 " +
-		"AND NOT a.attisdropped " +
-		"ORDER BY a.attnum"
-}
-
-// Save the DDL of all constraints of the given type (PK(p), FK(f), CK(c), UK(u))
-func GetPGConstraintDDL(conntype string) string {
-	return " SELECT n.nspname || '.' || c.relname tablename, " +
-		"   con.conname constraint_name," +
-		"   pg_catalog.pg_get_constraintdef(con.oid, true) constraint_col" +
-		" FROM pg_catalog.pg_class c," +
-		"   pg_catalog.pg_constraint con," +
-		"   pg_namespace n" +
-		" WHERE conrelid = c.oid" +
-		" AND n.oid = c.relnamespace" +
-		" AND contype = '" + conntype + "'" +
-		" ORDER BY tablename "
-}
-
-// Get all the unique indexes from the database
-func GetPGIndexDDL() string {
-	return "SELECT schemaname ||'.'|| tablename, " +
-		"indexdef " +
-		"FROM pg_indexes " +
-		"WHERE schemaname IN (SELECT nspname " +
-		"FROM pg_namespace " +
-		"WHERE nspname NOT IN ( " +
-		"'pg_catalog', " +
-		"'information_schema'," +
-		"'pg_aoseg'," +
-		"'gp_toolkit'," +
-		"'pg_toast', 'pg_bitmapindex' )) " +
-		"AND indexdef LIKE 'CREATE UNIQUE%'"
-}
-
-
-// All the constraints and unique indexes of the given table
-func GetConstraintsPertab(tabname string) string {
-	return "SELECT * FROM ( " +
-		" SELECT n.nspname || '.' || c.relname tablename, " +
-		"   con.conname conname, " +
-		"   pg_catalog.pg_get_constraintdef(con.oid, true) concol," +
-		"   'constraint' contype " +
-		" FROM pg_catalog.pg_class c, " +
-		"   pg_catalog.pg_constraint con, " +
-		"   pg_namespace n " +
-		" WHERE c.oid = '" + tabname + "'::regclass " +
-		" AND conrelid = c.oid " +
-		" AND n.oid = c.relnamespace " +
-		" AND contype IN ('u','f','c','p') " +
-		" UNION " +
-		" SELECT schemaname || '.' || tablename tablename, " +
-		"   indexname conname, " +
-		"   indexdef concol, " +
-		"   'index' contype " +
-		" FROM pg_indexes " +
-		" WHERE schemaname IN (SELECT nspname " +
-		"   FROM pg_namespace " +
-		"   WHERE nspname NOT IN ( " +
-		"     'pg_catalog', " +
-		"     'information_schema', " +
-		"     'pg_aoseg', " +
-		"     'gp_toolkit', " +
-		"     'pg_toast', 'pg_bitmapindex' )) " +
-		" AND indexdef LIKE 'CREATE UNIQUE%' " +
-		" AND schemaname || '.' || tablename = '" + tabname + "' " +
-		") a ORDER BY contype" // Ensuring the constraints sort before the indexes
-}
-
-// Get the datatype of the given columns
-func GetDatatype(tab string, columns []string) string {
-	whereClause := strings.Join(columns, "' or attname = '")
-	whereClause = strings.Replace(whereClause, "attname = ' ", "attname = '", -1)
-	query := "SELECT attname, pg_catalog.Format_type(atttypid, atttypmod) " +
-		"FROM pg_attribute WHERE (attname = " +
-		"'" + whereClause + "') AND attrelid = '" + tab + "'::regclass"
-	return query
-}
-
-
-// Count of primary key violators
-func GetTotalPKViolator(tab, cols string) string {
-	return "SELECT COUNT(*) FROM ( " +
-		GetPKViolator(tab, cols) +
-		") a "
-}
-
-// List the primary key violators
-func GetPKViolator(tab, cols string) string {
-	return " SELECT " + cols +
-		" FROM " + tab +
-		" GROUP BY " + cols +
-		" HAVING COUNT(*) > 1 "
-}
-
-// Fix integer PK violators.
-func UpdateIntPKey(tab, col, dt string) string {
-	return " UPDATE " + tab +
-		" SET " + col + " = " + col + "+" + "trunc(random() * 100 + 1)::" + dt +
-		" WHERE " + col + " IN ( " + GetPKViolator(tab, col) + " )"
-}
-
-// Fix string PK violators
-func UpdatePKey(tab, col, whichrow, newdata string) string {
-	return " UPDATE " + tab +
-		" SET " + col + " = '" + newdata + "'" +
-		" WHERE ctid = ( SELECT ctid FROM " + tab +
-		" WHERE " + col + " = '" + whichrow + "' LIMIT 1 )"
-}
-
-
-// Get the foreign key violators
-func GetFKViolators(tab, col, reftab, refcol string) string {
-	return "SELECT " + col + " FROM " + tab + " WHERE " + col + " NOT IN ( SELECT " + refcol + " FROM " + reftab + " )"
-}
-
-// Count of foreign key violators
-func GetTotalFKViolators(tab, col, reftab, refcol string) string {
-	return "SELECT COUNT(*) FROM (" + GetFKViolators(tab, col, reftab, refcol) + ") a"
-}
-
-// Total rows of the table
-func TotalRows(tab string) string {
-	return "SELECT COUNT(*) FROM " + tab
-}
-
-// Update FK violators with a random value from the reference table
-func UpdateFKeys(fktab, fkcol, reftab, refcol, whichrow, totalRows string) string {
-	return "UPDATE " + fktab + " SET " + fkcol +
-		"=(SELECT " + refcol + " FROM " + reftab +
-		" OFFSET floor(random()*" + totalRows + ") LIMIT 1)" +
-		" WHERE " + fkcol + "='" + whichrow + "'"
-}
-
-// Count of check constraint violators
-func GetTotalCKViolator(tab, column, ckconstraint string) string {
-	return "SELECT COUNT(*) FROM ( " +
-		GetCKViolator(tab, column, ckconstraint) +
-		") a "
-}
-
-// Check constraint violators
-func GetCKViolator(tab, column, ckconstraint string) string {
-	return "SELECT " + column +
-		" FROM " + tab + " WHERE NOT (" + ckconstraint + ")"
-}
\ No newline at end of file
diff --git a/demo/postgres/demo-db.sql b/demo/postgres/demo-db.sql
deleted file mode 100644
index 58a1772..0000000
--- a/demo/postgres/demo-db.sql
+++ /dev/null
@@ -1,1463 +0,0 @@
--- Sample database downloaded from
--- http://www.postgresqltutorial.com/postgresql-sample-database/
---
--- NOTE:
---
--- File paths need to be edited. Search for $$PATH$$ and
--- replace it with the path to the directory containing
--- the extracted data files.
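--- For example, with GNU sed (the path below is only an illustration;
--- substitute the directory where you extracted the data files):
---   sed -e 's|\$\$PATH\$\$|/home/user/dvdrental-data|g' demo-db.sql > demo-db.local.sql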
--- --- --- PostgreSQL database dump --- - -SET statement_timeout = 0; -SET client_encoding = 'UTF8'; -SET standard_conforming_strings = on; -SET check_function_bodies = false; -SET client_min_messages = warning; - -SET search_path = public, pg_catalog; - -ALTER TABLE ONLY public.store DROP CONSTRAINT store_manager_staff_id_fkey; -ALTER TABLE ONLY public.store DROP CONSTRAINT store_address_id_fkey; -ALTER TABLE ONLY public.staff DROP CONSTRAINT staff_address_id_fkey; -ALTER TABLE ONLY public.rental DROP CONSTRAINT rental_staff_id_key; -ALTER TABLE ONLY public.rental DROP CONSTRAINT rental_inventory_id_fkey; -ALTER TABLE ONLY public.rental DROP CONSTRAINT rental_customer_id_fkey; -ALTER TABLE ONLY public.payment DROP CONSTRAINT payment_staff_id_fkey; -ALTER TABLE ONLY public.payment DROP CONSTRAINT payment_rental_id_fkey; -ALTER TABLE ONLY public.payment DROP CONSTRAINT payment_customer_id_fkey; -ALTER TABLE ONLY public.inventory DROP CONSTRAINT inventory_film_id_fkey; -ALTER TABLE ONLY public.city DROP CONSTRAINT fk_city; -ALTER TABLE ONLY public.address DROP CONSTRAINT fk_address_city; -ALTER TABLE ONLY public.film DROP CONSTRAINT film_language_id_fkey; -ALTER TABLE ONLY public.film_category DROP CONSTRAINT film_category_film_id_fkey; -ALTER TABLE ONLY public.film_category DROP CONSTRAINT film_category_category_id_fkey; -ALTER TABLE ONLY public.film_actor DROP CONSTRAINT film_actor_film_id_fkey; -ALTER TABLE ONLY public.film_actor DROP CONSTRAINT film_actor_actor_id_fkey; -ALTER TABLE ONLY public.customer DROP CONSTRAINT customer_address_id_fkey; -DROP TRIGGER last_updated ON public.store; -DROP TRIGGER last_updated ON public.staff; -DROP TRIGGER last_updated ON public.rental; -DROP TRIGGER last_updated ON public.language; -DROP TRIGGER last_updated ON public.inventory; -DROP TRIGGER last_updated ON public.film_category; -DROP TRIGGER last_updated ON public.film_actor; -DROP TRIGGER last_updated ON public.film; -DROP TRIGGER last_updated ON public.customer; -DROP TRIGGER last_updated ON public.country; -DROP TRIGGER last_updated ON public.city; -DROP TRIGGER last_updated ON public.category; -DROP TRIGGER last_updated ON public.address; -DROP TRIGGER last_updated ON public.actor; -DROP TRIGGER film_fulltext_trigger ON public.film; -DROP INDEX public.idx_unq_rental_rental_date_inventory_id_customer_id; -DROP INDEX public.idx_unq_manager_staff_id; -DROP INDEX public.idx_title; -DROP INDEX public.idx_store_id_film_id; -DROP INDEX public.idx_last_name; -DROP INDEX public.idx_fk_store_id; -DROP INDEX public.idx_fk_staff_id; -DROP INDEX public.idx_fk_rental_id; -DROP INDEX public.idx_fk_language_id; -DROP INDEX public.idx_fk_inventory_id; -DROP INDEX public.idx_fk_film_id; -DROP INDEX public.idx_fk_customer_id; -DROP INDEX public.idx_fk_country_id; -DROP INDEX public.idx_fk_city_id; -DROP INDEX public.idx_fk_address_id; -DROP INDEX public.idx_actor_last_name; -DROP INDEX public.film_fulltext_idx; -ALTER TABLE ONLY public.store DROP CONSTRAINT store_pkey; -ALTER TABLE ONLY public.staff DROP CONSTRAINT staff_pkey; -ALTER TABLE ONLY public.rental DROP CONSTRAINT rental_pkey; -ALTER TABLE ONLY public.payment DROP CONSTRAINT payment_pkey; -ALTER TABLE ONLY public.language DROP CONSTRAINT language_pkey; -ALTER TABLE ONLY public.inventory DROP CONSTRAINT inventory_pkey; -ALTER TABLE ONLY public.film DROP CONSTRAINT film_pkey; -ALTER TABLE ONLY public.film_category DROP CONSTRAINT film_category_pkey; -ALTER TABLE ONLY public.film_actor DROP CONSTRAINT film_actor_pkey; -ALTER TABLE ONLY public.customer 
DROP CONSTRAINT customer_pkey; -ALTER TABLE ONLY public.country DROP CONSTRAINT country_pkey; -ALTER TABLE ONLY public.city DROP CONSTRAINT city_pkey; -ALTER TABLE ONLY public.category DROP CONSTRAINT category_pkey; -ALTER TABLE ONLY public.address DROP CONSTRAINT address_pkey; -ALTER TABLE ONLY public.actor DROP CONSTRAINT actor_pkey; -DROP VIEW public.staff_list; -DROP VIEW public.sales_by_store; -DROP TABLE public.store; -DROP SEQUENCE public.store_store_id_seq; -DROP TABLE public.staff; -DROP SEQUENCE public.staff_staff_id_seq; -DROP VIEW public.sales_by_film_category; -DROP TABLE public.rental; -DROP SEQUENCE public.rental_rental_id_seq; -DROP TABLE public.payment; -DROP SEQUENCE public.payment_payment_id_seq; -DROP VIEW public.nicer_but_slower_film_list; -DROP TABLE public.language; -DROP SEQUENCE public.language_language_id_seq; -DROP TABLE public.inventory; -DROP SEQUENCE public.inventory_inventory_id_seq; -DROP VIEW public.film_list; -DROP VIEW public.customer_list; -DROP TABLE public.country; -DROP SEQUENCE public.country_country_id_seq; -DROP TABLE public.city; -DROP SEQUENCE public.city_city_id_seq; -DROP TABLE public.address; -DROP SEQUENCE public.address_address_id_seq; -DROP VIEW public.actor_info; -DROP TABLE public.film_category; -DROP TABLE public.film_actor; -DROP TABLE public.film; -DROP SEQUENCE public.film_film_id_seq; -DROP TABLE public.category; -DROP SEQUENCE public.category_category_id_seq; -DROP TABLE public.actor; -DROP SEQUENCE public.actor_actor_id_seq; -DROP AGGREGATE public.group_concat(text); -DROP FUNCTION public.rewards_report(min_monthly_purchases integer, min_dollar_amount_purchased numeric); -DROP TABLE public.customer; -DROP SEQUENCE public.customer_customer_id_seq; -DROP FUNCTION public.last_updated(); -DROP FUNCTION public.last_day(timestamp without time zone); -DROP FUNCTION public.inventory_in_stock(p_inventory_id integer); -DROP FUNCTION public.inventory_held_by_customer(p_inventory_id integer); -DROP FUNCTION public.get_customer_balance(p_customer_id integer, p_effective_date timestamp without time zone); -DROP FUNCTION public.film_not_in_stock(p_film_id integer, p_store_id integer, OUT p_film_count integer); -DROP FUNCTION public.film_in_stock(p_film_id integer, p_store_id integer, OUT p_film_count integer); -DROP FUNCTION public._group_concat(text, text); -DROP DOMAIN public.year; -DROP TYPE public.mpaa_rating; -DROP EXTENSION plpgsql; -DROP SCHEMA public; --- --- Name: public; Type: SCHEMA; Schema: -; Owner: postgres --- - -CREATE SCHEMA public; - - -ALTER SCHEMA public OWNER TO postgres; - --- --- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres --- - -COMMENT ON SCHEMA public IS 'Standard public schema'; - - --- --- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: --- - -CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog; - - --- --- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: --- - -COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; - - -SET search_path = public, pg_catalog; - --- --- Name: mpaa_rating; Type: TYPE; Schema: public; Owner: postgres --- - -CREATE TYPE mpaa_rating AS ENUM ( - 'G', - 'PG', - 'PG-13', - 'R', - 'NC-17' -); - - -ALTER TYPE public.mpaa_rating OWNER TO postgres; - --- --- Name: year; Type: DOMAIN; Schema: public; Owner: postgres --- - -CREATE DOMAIN year AS integer - CONSTRAINT year_check CHECK (((VALUE >= 1901) AND (VALUE <= 2155))); - - -ALTER DOMAIN public.year OWNER TO postgres; - --- --- Name: _group_concat(text, text); Type: FUNCTION; Schema: public; Owner: 
postgres --- - -CREATE FUNCTION _group_concat(text, text) RETURNS text - LANGUAGE sql IMMUTABLE - AS $_$ -SELECT CASE - WHEN $2 IS NULL THEN $1 - WHEN $1 IS NULL THEN $2 - ELSE $1 || ', ' || $2 -END -$_$; - - -ALTER FUNCTION public._group_concat(text, text) OWNER TO postgres; - --- --- Name: film_in_stock(integer, integer); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION film_in_stock(p_film_id integer, p_store_id integer, OUT p_film_count integer) RETURNS SETOF integer - LANGUAGE sql - AS $_$ - SELECT inventory_id - FROM inventory - WHERE film_id = $1 - AND store_id = $2 - AND inventory_in_stock(inventory_id); -$_$; - - -ALTER FUNCTION public.film_in_stock(p_film_id integer, p_store_id integer, OUT p_film_count integer) OWNER TO postgres; - --- --- Name: film_not_in_stock(integer, integer); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION film_not_in_stock(p_film_id integer, p_store_id integer, OUT p_film_count integer) RETURNS SETOF integer - LANGUAGE sql - AS $_$ - SELECT inventory_id - FROM inventory - WHERE film_id = $1 - AND store_id = $2 - AND NOT inventory_in_stock(inventory_id); -$_$; - - -ALTER FUNCTION public.film_not_in_stock(p_film_id integer, p_store_id integer, OUT p_film_count integer) OWNER TO postgres; - --- --- Name: get_customer_balance(integer, timestamp without time zone); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION get_customer_balance(p_customer_id integer, p_effective_date timestamp without time zone) RETURNS numeric - LANGUAGE plpgsql - AS $$ - --#OK, WE NEED TO CALCULATE THE CURRENT BALANCE GIVEN A CUSTOMER_ID AND A DATE - --#THAT WE WANT THE BALANCE TO BE EFFECTIVE FOR. THE BALANCE IS: - --# 1) RENTAL FEES FOR ALL PREVIOUS RENTALS - --# 2) ONE DOLLAR FOR EVERY DAY THE PREVIOUS RENTALS ARE OVERDUE - --# 3) IF A FILM IS MORE THAN RENTAL_DURATION * 2 OVERDUE, CHARGE THE REPLACEMENT_COST - --# 4) SUBTRACT ALL PAYMENTS MADE BEFORE THE DATE SPECIFIED -DECLARE - v_rentfees DECIMAL(5,2); --#FEES PAID TO RENT THE VIDEOS INITIALLY - v_overfees INTEGER; --#LATE FEES FOR PRIOR RENTALS - v_payments DECIMAL(5,2); --#SUM OF PAYMENTS MADE PREVIOUSLY -BEGIN - SELECT COALESCE(SUM(film.rental_rate),0) INTO v_rentfees - FROM film, inventory, rental - WHERE film.film_id = inventory.film_id - AND inventory.inventory_id = rental.inventory_id - AND rental.rental_date <= p_effective_date - AND rental.customer_id = p_customer_id; - - SELECT COALESCE(SUM(IF((rental.return_date - rental.rental_date) > (film.rental_duration * '1 day'::interval), - ((rental.return_date - rental.rental_date) - (film.rental_duration * '1 day'::interval)),0)),0) INTO v_overfees - FROM rental, inventory, film - WHERE film.film_id = inventory.film_id - AND inventory.inventory_id = rental.inventory_id - AND rental.rental_date <= p_effective_date - AND rental.customer_id = p_customer_id; - - SELECT COALESCE(SUM(payment.amount),0) INTO v_payments - FROM payment - WHERE payment.payment_date <= p_effective_date - AND payment.customer_id = p_customer_id; - - RETURN v_rentfees + v_overfees - v_payments; -END -$$; - - -ALTER FUNCTION public.get_customer_balance(p_customer_id integer, p_effective_date timestamp without time zone) OWNER TO postgres; - --- --- Name: inventory_held_by_customer(integer); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION inventory_held_by_customer(p_inventory_id integer) RETURNS integer - LANGUAGE plpgsql - AS $$ -DECLARE - v_customer_id INTEGER; -BEGIN - - SELECT customer_id INTO v_customer_id - FROM 
rental - WHERE return_date IS NULL - AND inventory_id = p_inventory_id; - - RETURN v_customer_id; -END $$; - - -ALTER FUNCTION public.inventory_held_by_customer(p_inventory_id integer) OWNER TO postgres; - --- --- Name: inventory_in_stock(integer); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION inventory_in_stock(p_inventory_id integer) RETURNS boolean - LANGUAGE plpgsql - AS $$ -DECLARE - v_rentals INTEGER; - v_out INTEGER; -BEGIN - -- AN ITEM IS IN-STOCK IF THERE ARE EITHER NO ROWS IN THE rental TABLE - -- FOR THE ITEM OR ALL ROWS HAVE return_date POPULATED - - SELECT count(*) INTO v_rentals - FROM rental - WHERE inventory_id = p_inventory_id; - - IF v_rentals = 0 THEN - RETURN TRUE; - END IF; - - SELECT COUNT(rental_id) INTO v_out - FROM inventory LEFT JOIN rental USING(inventory_id) - WHERE inventory.inventory_id = p_inventory_id - AND rental.return_date IS NULL; - - IF v_out > 0 THEN - RETURN FALSE; - ELSE - RETURN TRUE; - END IF; -END $$; - - -ALTER FUNCTION public.inventory_in_stock(p_inventory_id integer) OWNER TO postgres; - --- --- Name: last_day(timestamp without time zone); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION last_day(timestamp without time zone) RETURNS date - LANGUAGE sql IMMUTABLE STRICT - AS $_$ - SELECT CASE - WHEN EXTRACT(MONTH FROM $1) = 12 THEN - (((EXTRACT(YEAR FROM $1) + 1) operator(pg_catalog.||) '-01-01')::date - INTERVAL '1 day')::date - ELSE - ((EXTRACT(YEAR FROM $1) operator(pg_catalog.||) '-' operator(pg_catalog.||) (EXTRACT(MONTH FROM $1) + 1) operator(pg_catalog.||) '-01')::date - INTERVAL '1 day')::date - END -$_$; - - -ALTER FUNCTION public.last_day(timestamp without time zone) OWNER TO postgres; - --- --- Name: last_updated(); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION last_updated() RETURNS trigger - LANGUAGE plpgsql - AS $$ -BEGIN - NEW.last_update = CURRENT_TIMESTAMP; - RETURN NEW; -END $$; - - -ALTER FUNCTION public.last_updated() OWNER TO postgres; - --- --- Name: customer_customer_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE customer_customer_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.customer_customer_id_seq OWNER TO postgres; - -SET default_tablespace = ''; - -SET default_with_oids = false; - --- --- Name: customer; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE customer ( - customer_id integer DEFAULT nextval('customer_customer_id_seq'::regclass) NOT NULL, - store_id smallint NOT NULL, - first_name character varying(45) NOT NULL, - last_name character varying(45) NOT NULL, - email character varying(50), - address_id smallint NOT NULL, - activebool boolean DEFAULT true NOT NULL, - create_date date DEFAULT ('now'::text)::date NOT NULL, - last_update timestamp without time zone DEFAULT now(), - active integer -); - - -ALTER TABLE public.customer OWNER TO postgres; - --- --- Name: rewards_report(integer, numeric); Type: FUNCTION; Schema: public; Owner: postgres --- - -CREATE FUNCTION rewards_report(min_monthly_purchases integer, min_dollar_amount_purchased numeric) RETURNS SETOF customer - LANGUAGE plpgsql SECURITY DEFINER - AS $_$ -DECLARE - last_month_start DATE; - last_month_end DATE; -rr RECORD; -tmpSQL TEXT; -BEGIN - - /* Some sanity checks... 
*/ - IF min_monthly_purchases = 0 THEN - RAISE EXCEPTION 'Minimum monthly purchases parameter must be > 0'; - END IF; - IF min_dollar_amount_purchased = 0.00 THEN - RAISE EXCEPTION 'Minimum monthly dollar amount purchased parameter must be > $0.00'; - END IF; - - last_month_start := CURRENT_DATE - '3 month'::interval; - last_month_start := to_date((extract(YEAR FROM last_month_start) || '-' || extract(MONTH FROM last_month_start) || '-01'),'YYYY-MM-DD'); - last_month_end := LAST_DAY(last_month_start); - - /* - Create a temporary storage area for Customer IDs. - */ - CREATE TEMPORARY TABLE tmpCustomer (customer_id INTEGER NOT NULL PRIMARY KEY); - - /* - Find all customers meeting the monthly purchase requirements - */ - - tmpSQL := 'INSERT INTO tmpCustomer (customer_id) - SELECT p.customer_id - FROM payment AS p - WHERE DATE(p.payment_date) BETWEEN '||quote_literal(last_month_start) ||' AND '|| quote_literal(last_month_end) || ' - GROUP BY customer_id - HAVING SUM(p.amount) > '|| min_dollar_amount_purchased || ' - AND COUNT(customer_id) > ' ||min_monthly_purchases ; - - EXECUTE tmpSQL; - - /* - Output ALL customer information of matching rewardees. - Customize output as needed. - */ - FOR rr IN EXECUTE 'SELECT c.* FROM tmpCustomer AS t INNER JOIN customer AS c ON t.customer_id = c.customer_id' LOOP - RETURN NEXT rr; - END LOOP; - - /* Clean up */ - tmpSQL := 'DROP TABLE tmpCustomer'; - EXECUTE tmpSQL; - -RETURN; -END -$_$; - - -ALTER FUNCTION public.rewards_report(min_monthly_purchases integer, min_dollar_amount_purchased numeric) OWNER TO postgres; - --- --- Name: group_concat(text); Type: AGGREGATE; Schema: public; Owner: postgres --- - -CREATE AGGREGATE group_concat(text) ( - SFUNC = _group_concat, - STYPE = text -); - - -ALTER AGGREGATE public.group_concat(text) OWNER TO postgres; - --- --- Name: actor_actor_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE actor_actor_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.actor_actor_id_seq OWNER TO postgres; - --- --- Name: actor; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE actor ( - actor_id integer DEFAULT nextval('actor_actor_id_seq'::regclass) NOT NULL, - first_name character varying(15) NOT NULL, - last_name character varying(15) NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL, - email char(20) unique, - gender char(1), - rate integer -); - - -ALTER TABLE public.actor OWNER TO postgres; - --- --- Name: category_category_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE category_category_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.category_category_id_seq OWNER TO postgres; - --- --- Name: category; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE category ( - category_id integer DEFAULT nextval('category_category_id_seq'::regclass) NOT NULL, - name character varying(25) NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.category OWNER TO postgres; - --- --- Name: film_film_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE film_film_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.film_film_id_seq OWNER TO postgres; - --- --- Name: film; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE film ( - film_id integer DEFAULT 
nextval('film_film_id_seq'::regclass) NOT NULL, - title character varying(255) NOT NULL, - description text, - release_year date, - language_id smallint NOT NULL, - rental_duration smallint DEFAULT 3 NOT NULL, - rental_rate numeric(4,2) DEFAULT 4.99 NOT NULL, - length smallint, - replacement_cost numeric(5,2) DEFAULT 19.99 NOT NULL, - rating char(1) DEFAULT 'G', - last_update timestamp without time zone DEFAULT now() NOT NULL, - special_features text, - fulltext tsvector NOT NULL -); - - -ALTER TABLE public.film OWNER TO postgres; - --- --- Name: film_actor; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE film_actor ( - actor_id smallint NOT NULL, - film_id smallint NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.film_actor OWNER TO postgres; - --- --- Name: film_category; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE film_category ( - film_id smallint NOT NULL, - category_id smallint NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.film_category OWNER TO postgres; - --- --- Name: actor_info; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW actor_info AS - SELECT a.actor_id, a.first_name, a.last_name, group_concat(DISTINCT (((c.name)::text || ': '::text) || (SELECT group_concat((f.title)::text) AS group_concat FROM ((film f JOIN film_category fc ON ((f.film_id = fc.film_id))) JOIN film_actor fa ON ((f.film_id = fa.film_id))) WHERE ((fc.category_id = c.category_id) AND (fa.actor_id = a.actor_id)) GROUP BY fa.actor_id))) AS film_info FROM (((actor a LEFT JOIN film_actor fa ON ((a.actor_id = fa.actor_id))) LEFT JOIN film_category fc ON ((fa.film_id = fc.film_id))) LEFT JOIN category c ON ((fc.category_id = c.category_id))) GROUP BY a.actor_id, a.first_name, a.last_name; - - -ALTER TABLE public.actor_info OWNER TO postgres; - --- --- Name: address_address_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE address_address_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.address_address_id_seq OWNER TO postgres; - --- --- Name: address; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE address ( - address_id integer DEFAULT nextval('address_address_id_seq'::regclass) NOT NULL, - address character varying(50) NOT NULL, - address2 character varying(50), - district character varying(20) NOT NULL, - city_id smallint NOT NULL, - postal_code character varying(10), - phone character varying(20) NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.address OWNER TO postgres; - --- --- Name: city_city_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE city_city_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.city_city_id_seq OWNER TO postgres; - --- --- Name: city; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE city ( - city_id integer DEFAULT nextval('city_city_id_seq'::regclass) NOT NULL, - city character varying(50) NOT NULL, - country_id smallint NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.city OWNER TO postgres; - --- --- Name: country_country_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE country_country_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO 
MAXVALUE - CACHE 1; - - -ALTER TABLE public.country_country_id_seq OWNER TO postgres; - --- --- Name: country; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE country ( - country_id integer DEFAULT nextval('country_country_id_seq'::regclass) NOT NULL, - country character varying(50) NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.country OWNER TO postgres; - --- --- Name: customer_list; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW customer_list AS - SELECT cu.customer_id AS id, (((cu.first_name)::text || ' '::text) || (cu.last_name)::text) AS name, a.address, a.postal_code AS "zip code", a.phone, city.city, country.country, CASE WHEN cu.activebool THEN 'active'::text ELSE ''::text END AS notes, cu.store_id AS sid FROM (((customer cu JOIN address a ON ((cu.address_id = a.address_id))) JOIN city ON ((a.city_id = city.city_id))) JOIN country ON ((city.country_id = country.country_id))); - - -ALTER TABLE public.customer_list OWNER TO postgres; - --- --- Name: film_list; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW film_list AS - SELECT film.film_id AS fid, film.title, film.description, category.name AS category, film.rental_rate AS price, film.length, film.rating, group_concat((((actor.first_name)::text || ' '::text) || (actor.last_name)::text)) AS actors FROM ((((category LEFT JOIN film_category ON ((category.category_id = film_category.category_id))) LEFT JOIN film ON ((film_category.film_id = film.film_id))) JOIN film_actor ON ((film.film_id = film_actor.film_id))) JOIN actor ON ((film_actor.actor_id = actor.actor_id))) GROUP BY film.film_id, film.title, film.description, category.name, film.rental_rate, film.length, film.rating; - - -ALTER TABLE public.film_list OWNER TO postgres; - --- --- Name: inventory_inventory_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE inventory_inventory_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.inventory_inventory_id_seq OWNER TO postgres; - --- --- Name: inventory; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE inventory ( - inventory_id integer DEFAULT nextval('inventory_inventory_id_seq'::regclass) NOT NULL, - film_id smallint NOT NULL, - store_id smallint NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.inventory OWNER TO postgres; - --- --- Name: language_language_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE language_language_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.language_language_id_seq OWNER TO postgres; - --- --- Name: language; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE language ( - language_id integer DEFAULT nextval('language_language_id_seq'::regclass) NOT NULL, - name character(20) NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.language OWNER TO postgres; - --- --- Name: nicer_but_slower_film_list; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW nicer_but_slower_film_list AS - SELECT film.film_id AS fid, film.title, film.description, category.name AS category, film.rental_rate AS price, film.length, film.rating, group_concat((((upper("substring"((actor.first_name)::text, 1, 1)) || lower("substring"((actor.first_name)::text, 2))) || 
upper("substring"((actor.last_name)::text, 1, 1))) || lower("substring"((actor.last_name)::text, 2)))) AS actors FROM ((((category LEFT JOIN film_category ON ((category.category_id = film_category.category_id))) LEFT JOIN film ON ((film_category.film_id = film.film_id))) JOIN film_actor ON ((film.film_id = film_actor.film_id))) JOIN actor ON ((film_actor.actor_id = actor.actor_id))) GROUP BY film.film_id, film.title, film.description, category.name, film.rental_rate, film.length, film.rating; - - -ALTER TABLE public.nicer_but_slower_film_list OWNER TO postgres; - --- --- Name: payment_payment_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE payment_payment_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.payment_payment_id_seq OWNER TO postgres; - --- --- Name: payment; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE payment ( - payment_id integer DEFAULT nextval('payment_payment_id_seq'::regclass) NOT NULL, - customer_id smallint NOT NULL, - staff_id smallint NOT NULL, - rental_id integer NOT NULL, - amount numeric(5,2) NOT NULL, - payment_date timestamp without time zone NOT NULL -); - - -ALTER TABLE public.payment OWNER TO postgres; - --- --- Name: rental_rental_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE rental_rental_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.rental_rental_id_seq OWNER TO postgres; - --- --- Name: rental; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE rental ( - rental_id integer DEFAULT nextval('rental_rental_id_seq'::regclass) NOT NULL, - rental_date timestamp without time zone NOT NULL, - inventory_id integer NOT NULL, - customer_id smallint NOT NULL, - return_date timestamp without time zone, - staff_id smallint NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.rental OWNER TO postgres; - --- --- Name: sales_by_film_category; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW sales_by_film_category AS - SELECT c.name AS category, sum(p.amount) AS total_sales FROM (((((payment p JOIN rental r ON ((p.rental_id = r.rental_id))) JOIN inventory i ON ((r.inventory_id = i.inventory_id))) JOIN film f ON ((i.film_id = f.film_id))) JOIN film_category fc ON ((f.film_id = fc.film_id))) JOIN category c ON ((fc.category_id = c.category_id))) GROUP BY c.name ORDER BY sum(p.amount) DESC; - - -ALTER TABLE public.sales_by_film_category OWNER TO postgres; - --- --- Name: staff_staff_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres --- - -CREATE SEQUENCE staff_staff_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.staff_staff_id_seq OWNER TO postgres; - --- --- Name: staff; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE staff ( - staff_id integer DEFAULT nextval('staff_staff_id_seq'::regclass) NOT NULL, - first_name character varying(45) NOT NULL, - last_name character varying(45) NOT NULL, - address_id smallint NOT NULL, - email character varying(50), - store_id smallint NOT NULL, - active boolean DEFAULT true NOT NULL, - username character varying(16) NOT NULL, - password character varying(40), - last_update timestamp without time zone DEFAULT now() NOT NULL, - picture bytea -); - - -ALTER TABLE public.staff OWNER TO postgres; - --- --- Name: store_store_id_seq; Type: SEQUENCE; Schema: public; 
Owner: postgres --- - -CREATE SEQUENCE store_store_id_seq - START WITH 1 - INCREMENT BY 1 - NO MINVALUE - NO MAXVALUE - CACHE 1; - - -ALTER TABLE public.store_store_id_seq OWNER TO postgres; - --- --- Name: store; Type: TABLE; Schema: public; Owner: postgres; Tablespace: --- - -CREATE TABLE store ( - store_id integer DEFAULT nextval('store_store_id_seq'::regclass) NOT NULL, - manager_staff_id smallint NOT NULL, - address_id smallint NOT NULL, - last_update timestamp without time zone DEFAULT now() NOT NULL -); - - -ALTER TABLE public.store OWNER TO postgres; - --- --- Name: sales_by_store; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW sales_by_store AS - SELECT (((c.city)::text || ','::text) || (cy.country)::text) AS store, (((m.first_name)::text || ' '::text) || (m.last_name)::text) AS manager, sum(p.amount) AS total_sales FROM (((((((payment p JOIN rental r ON ((p.rental_id = r.rental_id))) JOIN inventory i ON ((r.inventory_id = i.inventory_id))) JOIN store s ON ((i.store_id = s.store_id))) JOIN address a ON ((s.address_id = a.address_id))) JOIN city c ON ((a.city_id = c.city_id))) JOIN country cy ON ((c.country_id = cy.country_id))) JOIN staff m ON ((s.manager_staff_id = m.staff_id))) GROUP BY cy.country, c.city, s.store_id, m.first_name, m.last_name ORDER BY cy.country, c.city; - - -ALTER TABLE public.sales_by_store OWNER TO postgres; - --- --- Name: staff_list; Type: VIEW; Schema: public; Owner: postgres --- - -CREATE VIEW staff_list AS - SELECT s.staff_id AS id, (((s.first_name)::text || ' '::text) || (s.last_name)::text) AS name, a.address, a.postal_code AS "zip code", a.phone, city.city, country.country, s.store_id AS sid FROM (((staff s JOIN address a ON ((s.address_id = a.address_id))) JOIN city ON ((a.city_id = city.city_id))) JOIN country ON ((city.country_id = country.country_id))); - - -ALTER TABLE public.staff_list OWNER TO postgres; - - --- --- Name: actor_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY actor - ADD CONSTRAINT actor_pkey PRIMARY KEY (actor_id); - - -ALTER TABLE ONLY actor - ADD CONSTRAINT actor_ukey UNIQUE (first_name, last_name); - --- --- Name: actor_gender_ckey; Type: CK CONSTRAINT; Schema: public; Owner: actor --- - -ALTER TABLE ONLY actor - ADD CONSTRAINT actor_rate_ckey CHECK (rate > 0); - --- --- Name: address_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY address - ADD CONSTRAINT address_pkey PRIMARY KEY (address_id); - - --- --- Name: category_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY category - ADD CONSTRAINT category_pkey PRIMARY KEY (category_id); - - --- --- Name: city_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY city - ADD CONSTRAINT city_pkey PRIMARY KEY (city_id); - - --- --- Name: country_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY country - ADD CONSTRAINT country_pkey PRIMARY KEY (country_id); - - --- --- Name: customer_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY customer - ADD CONSTRAINT customer_pkey PRIMARY KEY (customer_id); - - --- --- Name: film_actor_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY film_actor - ADD CONSTRAINT film_actor_pkey PRIMARY KEY (actor_id, film_id); - - --- --- Name: film_category_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE 
ONLY film_category - ADD CONSTRAINT film_category_pkey PRIMARY KEY (film_id, category_id); - - --- --- Name: film_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY film - ADD CONSTRAINT film_pkey PRIMARY KEY (film_id); - - --- --- Name: inventory_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY inventory - ADD CONSTRAINT inventory_pkey PRIMARY KEY (inventory_id); - - --- --- Name: language_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY language - ADD CONSTRAINT language_pkey PRIMARY KEY (language_id); - - --- --- Name: payment_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY payment - ADD CONSTRAINT payment_pkey PRIMARY KEY (payment_id); - - --- --- Name: rental_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY rental - ADD CONSTRAINT rental_pkey PRIMARY KEY (rental_id); - - --- --- Name: staff_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY staff - ADD CONSTRAINT staff_pkey PRIMARY KEY (staff_id); - - --- --- Name: store_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: --- - -ALTER TABLE ONLY store - ADD CONSTRAINT store_pkey PRIMARY KEY (store_id); - - --- --- Name: film_fulltext_idx; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX film_fulltext_idx ON film USING gist (fulltext); - - --- --- Name: idx_actor_last_name; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_actor_last_name ON actor USING btree (last_name); - - --- --- Name: idx_fk_address_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_address_id ON customer USING btree (address_id); - - --- --- Name: idx_fk_city_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_city_id ON address USING btree (city_id); - - --- --- Name: idx_fk_country_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_country_id ON city USING btree (country_id); - - --- --- Name: idx_fk_customer_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_customer_id ON payment USING btree (customer_id); - - --- --- Name: idx_fk_film_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_film_id ON film_actor USING btree (film_id); - - --- --- Name: idx_fk_inventory_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_inventory_id ON rental USING btree (inventory_id); - - --- --- Name: idx_fk_language_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_language_id ON film USING btree (language_id); - - --- --- Name: idx_fk_rental_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_rental_id ON payment USING btree (rental_id); - - --- --- Name: idx_fk_staff_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_staff_id ON payment USING btree (staff_id); - - --- --- Name: idx_fk_store_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_fk_store_id ON customer USING btree (store_id); - - --- --- Name: idx_last_name; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_last_name ON customer USING btree (last_name); - - --- --- Name: 
idx_store_id_film_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_store_id_film_id ON inventory USING btree (store_id, film_id); - - --- --- Name: idx_title; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE INDEX idx_title ON film USING btree (title); - - --- --- Name: idx_unq_manager_staff_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE UNIQUE INDEX idx_unq_manager_staff_id ON store USING btree (manager_staff_id); - - --- --- Name: idx_unq_rental_rental_date_inventory_id_customer_id; Type: INDEX; Schema: public; Owner: postgres; Tablespace: --- - -CREATE UNIQUE INDEX idx_unq_rental_rental_date_inventory_id_customer_id ON rental USING btree (rental_date, inventory_id, customer_id); - - --- --- Name: film_fulltext_trigger; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER film_fulltext_trigger BEFORE INSERT OR UPDATE ON film FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('fulltext', 'pg_catalog.english', 'title', 'description'); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON actor FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON address FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON category FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON city FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON country FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON customer FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON film FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON film_actor FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON film_category FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON inventory FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON language FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON rental FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON staff FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: last_updated; Type: TRIGGER; Schema: public; Owner: 
postgres --- - -CREATE TRIGGER last_updated BEFORE UPDATE ON store FOR EACH ROW EXECUTE PROCEDURE last_updated(); - - --- --- Name: customer_address_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY customer - ADD CONSTRAINT customer_address_id_fkey FOREIGN KEY (address_id) REFERENCES address(address_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: film_actor_actor_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY film_actor - ADD CONSTRAINT film_actor_actor_id_fkey FOREIGN KEY (actor_id) REFERENCES actor(actor_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: film_actor_film_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY film_actor - ADD CONSTRAINT film_actor_film_id_fkey FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: film_category_category_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY film_category - ADD CONSTRAINT film_category_category_id_fkey FOREIGN KEY (category_id) REFERENCES category(category_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: film_category_film_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY film_category - ADD CONSTRAINT film_category_film_id_fkey FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: film_language_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY film - ADD CONSTRAINT film_language_id_fkey FOREIGN KEY (language_id) REFERENCES language(language_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: fk_address_city; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY address - ADD CONSTRAINT fk_address_city FOREIGN KEY (city_id) REFERENCES city(city_id); - - --- --- Name: fk_city; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY city - ADD CONSTRAINT fk_city FOREIGN KEY (country_id) REFERENCES country(country_id); - - --- --- Name: inventory_film_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY inventory - ADD CONSTRAINT inventory_film_id_fkey FOREIGN KEY (film_id) REFERENCES film(film_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: payment_customer_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY payment - ADD CONSTRAINT payment_customer_id_fkey FOREIGN KEY (customer_id) REFERENCES customer(customer_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: payment_rental_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY payment - ADD CONSTRAINT payment_rental_id_fkey FOREIGN KEY (rental_id) REFERENCES rental(rental_id) ON UPDATE CASCADE ON DELETE SET NULL; - - --- --- Name: payment_staff_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY payment - ADD CONSTRAINT payment_staff_id_fkey FOREIGN KEY (staff_id) REFERENCES staff(staff_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: rental_customer_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY rental - ADD CONSTRAINT rental_customer_id_fkey FOREIGN KEY (customer_id) REFERENCES customer(customer_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: rental_inventory_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY rental - ADD CONSTRAINT 
rental_inventory_id_fkey FOREIGN KEY (inventory_id) REFERENCES inventory(inventory_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: rental_staff_id_key; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY rental - ADD CONSTRAINT rental_staff_id_key FOREIGN KEY (staff_id) REFERENCES staff(staff_id); - - --- --- Name: staff_address_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY staff - ADD CONSTRAINT staff_address_id_fkey FOREIGN KEY (address_id) REFERENCES address(address_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: store_address_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY store - ADD CONSTRAINT store_address_id_fkey FOREIGN KEY (address_id) REFERENCES address(address_id) ON UPDATE CASCADE ON DELETE RESTRICT; - - --- --- Name: store_manager_staff_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: postgres --- - -ALTER TABLE ONLY store - ADD CONSTRAINT store_manager_staff_id_fkey FOREIGN KEY (manager_staff_id) REFERENCES staff(staff_id) ON UPDATE CASCADE ON DELETE RESTRICT; - --- --- Name: public; Type: ACL; Schema: -; Owner: postgres --- - -REVOKE ALL ON SCHEMA public FROM PUBLIC; -REVOKE ALL ON SCHEMA public FROM postgres; -GRANT ALL ON SCHEMA public TO postgres; -GRANT ALL ON SCHEMA public TO PUBLIC; - - --- --- PostgreSQL database dump complete --- - diff --git a/demo/postgres/sample-table.sql b/demo/postgres/sample-table.sql deleted file mode 100644 index 6e52275..0000000 --- a/demo/postgres/sample-table.sql +++ /dev/null @@ -1,258 +0,0 @@ -DROP TABLE IF EXISTS sample1; -CREATE TABLE sample1 ( - sample_col_1 smallint NOT NULL, - sample_col_2 integer NOT NULL, - sample_col_3 integer NOT NULL, - sample_col_4 date NOT NULL, - sample_col_5 date NOT NULL, - sample_col_6 character varying(1) NOT NULL, - sample_col_7 timestamp without time zone NOT NULL, - sample_col_8 date, - sample_col_9 smallint, - sample_col_10 smallint, - sample_col_11 date, - sample_col_12 numeric(13,2), - sample_col_13 numeric(13,2), - sample_col_14 smallint, - sample_col_15 date, - sample_col_16 date, - sample_col_17 smallint, - sample_col_18 timestamp without time zone NOT NULL, - sample_col_19 smallint, - sample_col_20 smallint, - sample_col_21 numeric(13,2), - sample_col_22 numeric(5,2), - sample_col_23 character varying(1), - sample_col_24 date, - sample_col_25 time without time zone NOT NULL, - sample_col_26 character varying(1), - sample_col_27 numeric(11,2), - sample_col_28 character varying(1), - sample_col_29 character varying(4) NOT NULL, - sample_col_30 date NOT NULL, - sample_col_31 time without time zone NOT NULL, - sample_col_32 character varying(8) NOT NULL, - sample_col_33 character varying(1) NOT NULL, - sample_col_34 numeric(6,4) NOT NULL, - sample_col_35 smallint NOT NULL, - sample_col_36 smallint, - sample_col_37 character varying(1) NOT NULL, - sample_col_38 integer NOT NULL, - sample_col_39 integer NOT NULL, - sample_col_40 integer NOT NULL, - sample_col_41 integer NOT NULL, - sample_col_42 numeric(11,2), - sample_col_43 character varying(1), - sample_col_44 integer NOT NULL, - sample_col_45 integer NOT NULL, - sample_col_46 integer NOT NULL, - sample_col_47 character varying(30) NOT NULL, - sample_col_48 character varying(10) NOT NULL, - sample_col_49 character varying(8) NOT NULL, - sample_col_50 character varying(1) NOT NULL, - sample_col_51 character varying(20) NOT NULL, - sample_col_52 character varying(45) NOT NULL, - sample_col_53 character varying(1), - 
sample_col_54 character varying(1), - sample_col_55 integer, - sample_col_56 character varying(20) NOT NULL, - sample_col_57 character varying(30) NOT NULL, - sample_col_58 character varying(10) NOT NULL, - sample_col_59 character varying(8) NOT NULL, - sample_col_60 character varying(30) NOT NULL, - sample_col_61 character varying(10) NOT NULL, - sample_col_62 character varying(8) NOT NULL, - sample_col_63 character varying(20) NOT NULL, - sample_col_64 character varying(10), - sample_col_65 time without time zone NOT NULL, - sample_col_66 character varying(3), - sample_col_67 character varying(1), - sample_col_68 character varying(1) NOT NULL, - sample_col_69 integer, - sample_col_70 numeric(13,2) NOT NULL, - sample_col_71 character varying(1), - sample_col_72 character varying(1), - sample_col_73 smallint, - sample_col_74 smallint, - sample_col_75 character varying(1) NOT NULL, - sample_col_76 integer NOT NULL, - sample_col_77 integer NOT NULL, - sample_col_78 character varying(2) NOT NULL, - sample_col_79 character varying(1), - sample_col_80 character varying(1) NOT NULL, - sample_col_81 numeric(11,2) NOT NULL, - sample_col_82 character varying(1), - sample_col_83 numeric(5,2) NOT NULL, - sample_col_84 character varying(6) NOT NULL, - sample_col_85 character varying(15) NOT NULL, - sample_col_86 character varying(3) NOT NULL, - sample_col_87 character varying(1) NOT NULL, - sample_col_88 smallint NOT NULL, - sample_col_89 smallint NOT NULL, - sample_col_90 character varying(1) NOT NULL, - sample_col_91 character varying(1) NOT NULL, - sample_col_92 character varying(1), - sample_col_93 integer, - sample_col_94 character varying(6) NOT NULL, - sample_col_95 character varying(1), - sample_col_96 character varying(1) NOT NULL, - sample_col_97 character varying(25) NOT NULL, - sample_col_98 character varying(25) NOT NULL, - sample_col_99 numeric(11,2), - sample_col_100 character varying(1), - sample_col_101 character varying(1), - sample_col_102 smallint, - sample_col_103 smallint, - sample_col_104 smallint, - sample_col_105 date NOT NULL, - sample_col_106 date NOT NULL, - sample_col_107 character varying(1), - sample_col_108 character varying(1) NOT NULL, - sample_col_109 character varying(1) NOT NULL, - sample_col_110 smallint, - sample_col_111 date, - sample_col_112 character varying(1) NOT NULL, - sample_col_113 bigint, - sample_col_114 character varying(1), - sample_col_115 character varying(1) NOT NULL, - sample_col_116 character varying(1) NOT NULL, - sample_col_117 smallint NOT NULL, - sample_col_118 smallint NOT NULL, - sample_col_119 smallint NOT NULL, - sample_col_120 character varying(1) NOT NULL, - sample_col_121 character varying(1), - sample_col_122 integer, - sample_col_123 integer, - sample_col_124 smallint NOT NULL, - sample_col_125 character varying(1) NOT NULL, - sample_col_126 numeric(5,2) NOT NULL, - sample_col_127 smallint, - sample_col_128 character varying(1) NOT NULL, - sample_col_129 character varying(25) NOT NULL, - sample_col_130 character varying(1), - sample_col_131 character varying(11) NOT NULL, - sample_col_132 character varying(1) NOT NULL, - sample_col_133 character varying(1), - sample_col_134 character varying(1), - sample_col_135 character varying(1) NOT NULL, - sample_col_136 numeric(10,2) NOT NULL, - sample_col_137 numeric(10,2) NOT NULL, - sample_col_138 numeric(13,2), - sample_col_139 numeric(10,2) NOT NULL, - sample_col_140 character varying(1) NOT NULL, - sample_col_141 character varying(4), - sample_col_142 character varying(1), - 
sample_col_143 character varying(1) NOT NULL, - sample_col_144 smallint, - sample_col_145 character varying(1), - sample_col_146 smallint NOT NULL, - sample_col_147 character varying(1) NOT NULL, - sample_col_148 character varying(1) NOT NULL, - sample_col_149 character varying(1), - sample_col_150 character varying(5) NOT NULL, - sample_col_151 character varying(5) NOT NULL, - sample_col_152 character varying(1) NOT NULL, - sample_col_153 character varying(1) NOT NULL, - sample_col_154 character varying(1), - sample_col_155 character varying(1), - sample_col_156 smallint NOT NULL, - sample_col_157 date, - sample_col_158 character varying(2) NOT NULL, - sample_col_159 character varying(1) NOT NULL, - sample_col_160 character varying(1), - sample_col_161 numeric(13,2), - sample_col_162 smallint NOT NULL, - sample_col_163 character varying(1), - sample_col_164 character varying(2) NOT NULL, - sample_col_165 smallint NOT NULL, - sample_col_166 character varying(1) NOT NULL, - sample_col_167 character varying(1) NOT NULL, - sample_col_168 character varying(4), - sample_col_169 character varying(2), - sample_col_170 character varying(1) NOT NULL, - sample_col_171 character varying(1) NOT NULL, - sample_col_172 numeric(6,4) NOT NULL, - sample_col_173 smallint NOT NULL, - sample_col_174 character varying(8), - sample_col_175 character varying(7), - sample_col_176 smallint NOT NULL, - sample_col_177 character varying(15) NOT NULL, - sample_col_178 character varying(1) NOT NULL, - sample_col_179 numeric(5,2) NOT NULL, - sample_col_180 character varying(4) NOT NULL, - sample_col_181 smallint NOT NULL, - sample_col_182 character varying(6) NOT NULL, - sample_col_183 integer NOT NULL, - sample_col_184 character varying(1), - sample_col_185 bigint, - sample_col_186 numeric(11,2), - sample_col_187 character varying(1) NOT NULL, - sample_col_188 character varying(1) NOT NULL, - sample_col_189 character varying(1) NOT NULL, - sample_col_190 character varying(12) NOT NULL, - sample_col_191 character varying(2) NOT NULL, - sample_col_192 character varying(1) NOT NULL, - sample_col_193 character varying(1), - sample_col_194 smallint, - sample_col_195 character varying(1), - sample_col_196 smallint NOT NULL, - sample_col_197 character varying(1), - sample_col_198 character varying(36), - sample_col_199 character varying(3), - sample_col_200 bigint, - sample_col_201 bigint NOT NULL, - sample_col_202 bigint NOT NULL, - sample_col_203 bigint NOT NULL, - sample_col_204 character varying(1) NOT NULL, - sample_col_205 character varying(1) NOT NULL, - sample_col_206 character varying(20), - sample_col_207 character varying(1) NOT NULL, - sample_col_208 smallint, - sample_col_209 character varying(1) NOT NULL, - sample_col_210 date, - sample_col_211 smallint, - sample_col_212 smallint, - sample_col_213 numeric(13,2), - sample_col_214 smallint, - sample_col_215 numeric(13,2), - sample_col_216 smallint, - sample_col_217 smallint, - sample_col_218 character varying(1), - sample_col_219 numeric(5,2), - sample_col_220 smallint, - sample_col_221 numeric(13,2), - sample_col_222 smallint, - sample_col_223 smallint, - sample_col_224 date, - sample_col_225 smallint NOT NULL, - sample_col_226 smallint, - sample_col_227 smallint, - sample_col_228 integer, - sample_col_229 date, - sample_col_230 integer, - sample_col_231 date, - sample_col_232 integer, - sample_col_233 date, - sample_col_234 date, - sample_col_235 timestamp without time zone, - sample_col_236 date, - sample_col_237 timestamp without time zone, - sample_col_238 
integer, - sample_col_239 date, - sample_col_240 integer, - sample_col_241 date, - sample_col_242 integer, - sample_col_243 date, - sample_col_244 integer, - sample_col_245 date, - sample_col_246 integer, - sample_col_247 date, - sample_col_248 integer, - sample_col_249 date, - sample_col_250 integer, - sample_col_251 date, - sample_col_252 integer, - sample_col_253 date, - sample_col_254 integer, - sample_col_255 date -); \ No newline at end of file diff --git a/demo/postgres/supported-datatypes.sql b/demo/postgres/supported-datatypes.sql deleted file mode 100644 index b7c986a..0000000 --- a/demo/postgres/supported-datatypes.sql +++ /dev/null @@ -1,120 +0,0 @@ --- Supports all datatypes of PostgreSQL (https://www.postgresql.org/docs/9.6/static/datatype.html) --- We don't support any user-defined custom datatypes. - -DROP TABLE supported_datatypes; -CREATE TABLE supported_datatypes ( - - -- Integer type - col_bigint int8, - col_bigserial bigserial, - col_integer int, - col_smallint smallint, - col_smallserial smallserial, - col_serial serial, - - -- Float type - col_real float4, - col_double_precision float8, - col_numeric numeric(4,2), - - -- Bit String Type - col_bit bit, - col_varbit varbit(4), - - -- Boolean type - col_boolean bool, - - -- Character type - col_character char(10), - col_character_varying varchar(10), - col_text text, - - -- Network Address Type - col_inet inet, - col_macaddr macaddr, - col_cidr cidr, - - -- Date / Time type - col_interval interval, - col_date date, - col_time time without time zone, - col_time_tz time with time zone, - col_timestamp timestamp without time zone, - col_timestamp_tz timestamp with time zone, - - -- Monetary Types - col_money money, - - -- JSON Type - col_json json, - col_jsonb jsonb, - - -- XML Type - col_xml xml, - - -- Text Search Type - col_tsquery tsquery, - col_tsvector tsvector, - - -- Geometric Type - col_box box, - col_circle circle, - col_line line, - col_lseg lseg, - col_path path, - col_polygon polygon, - col_point point, - - -- Bytea / blob type - col_bytea bytea, - - -- Log Sequence Number - col_pg_lsn pg_lsn, - - -- txid snapshot - col_txid_snapshot txid_snapshot, - - -- UUID Type - col_uuid uuid, - - -- Array Datatypes - col_smallint_array smallint[], - col_int_array int[], - col_bigint_array bigint[], - col_character_array char(10)[], - col_char_varying_array varchar(10)[], - col_bit_array bit(10)[], - col_varbit_array varbit(4)[], - col_numeric_array numeric[], - col_numeric_range_array numeric(5,3)[], - col_double_precision_array float8[], - col_real_array float4[], - col_money_array money[], - col_time_array time without time zone[], - col_interval_array interval[], - col_date_array date[], - col_time_tz_array time with time zone[], - col_timestamp_array timestamp without time zone[], - col_timestamp_tz_array timestamp with time zone[], - col_text_array text[], - col_bool_array bool[], - col_inet_array inet[], - col_macaddr_array macaddr[], - col_cidr_array cidr[], - col_uuid_array uuid[], - col_txid_snapshot_array txid_snapshot[], - col_pg_lsn_array pg_lsn[], - col_tsquery_array tsquery[], - col_tsvector_array tsvector[], - col_box_array box[], - col_circle_array circle[], - col_line_array line[], - col_lseg_array lseg[], - col_path_array path[], - col_polygon_array polygon[], - col_point_array point[], - col_json_array json[], - col_jsonb_array jsonb[], - col_xml_array xml[] - -); diff --git a/docs/ISSUE_TEMPLATE.md b/docs/ISSUE_TEMPLATE.md deleted file mode 100644 index 16ccb6f..0000000 --- a/docs/ISSUE_TEMPLATE.md +++ 
/dev/null @@ -1,32 +0,0 @@ - -## Command - - - -## What occurred - - - -## What you expected to occur - - - -## Table or Database structure - - - -## MockD Version - - - -## Database Details - - - -## Platform Details - - - -## Any other relevant information - - diff --git a/docs/PULL_REQUEST_TEMPLATE.md b/docs/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index a10e64b..0000000 --- a/docs/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,10 +0,0 @@ -## What's this PR do? -## Where should the reviewer start? -## How should this be manually tested? -## Any background context you want to provide? -## What are the relevant tickets? -## Screenshots (if appropriate) -## Questions: -- Is there a blog post? -- Does the knowledge base need an update (e.g. README)? -- Does this add new (Go) dependencies which need to be added? diff --git a/images/combine-all-option-of-custom.gif b/images/combine-all-option-of-custom.gif new file mode 100644 index 0000000..548f278 Binary files /dev/null and b/images/combine-all-option-of-custom.gif differ diff --git a/images/create-demo-database.gif b/images/create-demo-database.gif new file mode 100644 index 0000000..eb4ef28 Binary files /dev/null and b/images/create-demo-database.gif differ diff --git a/images/create-demo-table-load-data.gif b/images/create-demo-table-load-data.gif new file mode 100644 index 0000000..7fbd4ce Binary files /dev/null and b/images/create-demo-table-load-data.gif differ diff --git a/images/create-user-data-plan.gif b/images/create-user-data-plan.gif new file mode 100644 index 0000000..9b191ae Binary files /dev/null and b/images/create-user-data-plan.gif differ diff --git a/images/creating-100-tables.gif b/images/creating-100-tables.gif new file mode 100644 index 0000000..3b25a09 Binary files /dev/null and b/images/creating-100-tables.gif differ diff --git a/images/faking-data-to-single-table.gif b/images/faking-data-to-single-table.gif new file mode 100644 index 0000000..6b91341 Binary files /dev/null and b/images/faking-data-to-single-table.gif differ diff --git a/images/faking-tables-from-multiple-schema.gif b/images/faking-tables-from-multiple-schema.gif new file mode 100644 index 0000000..81a0080 Binary files /dev/null and b/images/faking-tables-from-multiple-schema.gif differ diff --git a/images/faking-tables-via-schema.gif b/images/faking-tables-via-schema.gif new file mode 100644 index 0000000..bb34f01 Binary files /dev/null and b/images/faking-tables-via-schema.gif differ diff --git a/images/load-full-database-with-100-rows.gif b/images/load-full-database-with-100-rows.gif new file mode 100644 index 0000000..397a3c9 Binary files /dev/null and b/images/load-full-database-with-100-rows.gif differ diff --git a/images/mock-full-database.gif b/images/mock-full-database.gif new file mode 100644 index 0000000..c20e1b0 Binary files /dev/null and b/images/mock-full-database.gif differ diff --git a/images/releastic-data-load.gif b/images/releastic-data-load.gif new file mode 100644 index 0000000..bb3d611 Binary files /dev/null and b/images/releastic-data-load.gif differ diff --git a/images/schema-mocker.gif b/images/schema-mocker.gif new file mode 100644 index 0000000..8d0b249 Binary files /dev/null and b/images/schema-mocker.gif differ diff --git a/images/user-data-loading.gif b/images/user-data-loading.gif new file mode 100644 index 0000000..fc8a3fc Binary files /dev/null and b/images/user-data-loading.gif differ diff --git a/img/alldb.gif b/img/alldb.gif deleted file mode 100644 index 57169c0..0000000 Binary files a/img/alldb.gif and 
/dev/null differ diff --git a/img/multipletable.gif b/img/multipletable.gif deleted file mode 100644 index 96c5e4a..0000000 Binary files a/img/multipletable.gif and /dev/null differ diff --git a/img/singletable.gif b/img/singletable.gif deleted file mode 100644 index 09e977e..0000000 Binary files a/img/singletable.gif and /dev/null differ diff --git a/mockd.go b/mockd.go deleted file mode 100644 index ddfd4c7..0000000 --- a/mockd.go +++ /dev/null @@ -1,74 +0,0 @@ -package main - -import ( - "os" - - _ "github.com/lib/pq" - "github.com/op/go-logging" - "github.com/pivotal/mock-data/core" -) - -// Version of Mock-data -var version = "1.1" - -// All global variables -var ( - DBEngine string -) - -// Define the logging format used in the project -var ( - log = logging.MustGetLogger("mockd") - format = logging.MustStringFormatter( - `%{color}%{time:2006-01-02 15:04:05.000}:%{level:s} > %{color:reset}%{message}`, - ) -) - -// Timestamp of this execution, used to tag the backup files -var ExecutionTimestamp = core.TimeNow() - -// An Engine is an implementation of a database -// engine like PostgreSQL, MySQL or Greenplum -type Engine struct { - name, version string - port int -} - -// A Table is a representation of a database table with its set of columns and datatypes -type Table struct { - tabname string - columns map[string]string -} - -// Main block -func main() { - - // Logger for the go-logging package - // create a backend for os.Stderr, set the format and register the backend the logger should use - backend := logging.NewLogBackend(os.Stderr, "", 0) - backendFormatter := logging.NewBackendFormatter(backend, format) - logging.SetBackend(backendFormatter) - - // Parse the arguments that have been passed in on the command line - ArgPaser() - - // Log this execution's timestamp - log.Infof("Timestamp of this mockd execution: %s", ExecutionTimestamp) - - // Determine which database engine needs to be used and - // call the appropriate program specific to that database engine - if DBEngine == "postgres" { - err := MockPostgres() - if err != nil { - log.Error(err) - log.Info("mockd program has completed with errors") - os.Exit(1) - } - } else { // Unsupported database engine. 
- log.Errorf("mockd application doesn't support the database: %s", DBEngine) - os.Exit(1) - } - - log.Info("mockd program has successfully completed") - -} diff --git a/postgres.go b/postgres.go deleted file mode 100644 index c34b67a..0000000 --- a/postgres.go +++ /dev/null @@ -1,349 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "strings" - - "github.com/lib/pq" - "github.com/pivotal/mock-data/core" - "github.com/pivotal/mock-data/db/postgres" -) - -// Global Variables -var ( - skippedTab []string - db *sql.DB - stmt *sql.Stmt -) - -// Establish the Postgres database connection -func dbConn() error { - dbconn, err := sql.Open(DBEngine, fmt.Sprintf("user=%v password=%v host=%v port=%v dbname=%v sslmode=disable", Connector.Username, Connector.Password, Connector.Host, Connector.Port, Connector.Db)) - if err != nil { - return fmt.Errorf("Cannot establish a database connection: %v\n", err) - } - db = dbconn - return nil -} - -// Check if we can run the query and extract the version of the database -func dbVersion() error { - - log.Infof("Obtaining the version of the DB Engine: \"%s\"", Connector.Engine) - var version string - - // Obtain the version of the database - rows, err := db.Query(postgres.PGVersion()) - if err != nil { - return fmt.Errorf("Cannot extract the version, error from the database: %v", err) - } - - // Store the version information into a variable - for rows.Next() { - err = rows.Scan(&version) - if err != nil { - return fmt.Errorf("Error scanning the rows from the version query: %v", err) - } - } - - // Print the version of the database on the logs - log.Infof("Version of the DB Engine \"%s\": %v", Connector.Engine, version) - - return nil - -} - -// Extract all the tables in the database -func dbExtractTables() ([]string, error) { - - log.Infof("Extracting all the tables in the database: \"%s\"", Connector.Db) - var tableString []string - var rows *sql.Rows - var err error - - // Obtain all the tables in the database - if Connector.Engine == "postgres" { // Use postgres specific query - rows, err = db.Query(postgres.PGAllTablesQry1()) - } else { // Use greenplum, hdb query to extract the columns - rows, err = db.Query(postgres.PGAllTablesQry2()) - } - - if err != nil { - return tableString, fmt.Errorf("Cannot extract all the tables, error from the database: %v", err) - } - - // Loop through the rows and store the table names. 
- for rows.Next() { - var table string - err = rows.Scan(&table) - if err != nil { - return tableString, fmt.Errorf("Error extracting the rows of the list of tables: %v", err) - } - tableString = append(tableString, table) - } - - return tableString, nil -} - -// Get all the columns and their datatypes from the query -func dbColDataType() ([]Table, error) { - - log.Info("Checking for the existence of the table provided to the application, if it exists extract all the column and datatype information") - var table []Table - var rows *sql.Rows - var err error - - // Loop through the table list provided and collect the columns and datatypes - for _, v := range strings.Split(Connector.Table, ",") { - var tab Table - if DBEngine == "postgres" { // Use postgres specific query - rows, err = db.Query(postgres.PGColumnQry1(v)) - } else { // Use greenplum, hdb query to extract the columns - rows, err = db.Query(postgres.PGColumnQry2(v)) - } - if err != nil { - return table, fmt.Errorf("Cannot extract the column info, error from the database: %v", err) - } - for rows.Next() { - - var col string - var datatype string - var seqCol string = "" - - // Scan and store the rows - err = rows.Scan(&col, &datatype, &seqCol) - if err != nil { - return table, fmt.Errorf("Error extracting the rows of the list of columns: %v", err) - } - - // Ignore columns with a sequence; since they are auto-loaded there is no need to randomize them - if !strings.HasPrefix(seqCol, "nextval") { - tab.tabname = v - if tab.columns == nil { - tab.columns = make(map[string]string) - } - tab.columns[col] = datatype - } - } - - // If there are no columns, then ignore that table - if len(tab.columns) > 0 { - table = append(table, tab) - } - - } - - return table, nil -} - -// Extract the table & columns and request to load data -func extractor(table_info []Table) error { - - // Before we begin, let's take a backup of all the PK, UK, FK and CK - // constraints (unless the user says to ignore them), since when we drop - // constraints with CASCADE we are not sure which other constraints get - // dropped along with them. So it is easier to take a backup of all the - // constraints and then execute that DDL script at the end, after we fix all - // the constraint issues. - // THEORY: constraints that already exist will fail to re-create, and those not available will be created. 
- if !Connector.IgnoreConstraints { - log.Infof("Backing up all the constraints in the database: \"%s\"", Connector.Db) - err := postgres.BackupDDL(db, ExecutionTimestamp) - if err != nil { - return err - } - } - - // Loop through all the tables available and start to load data - // based on the columns' datatypes - log.Info("Separating the input into tables, columns & datatypes and attempting to mock data into the table") - for _, v := range table_info { - err := splitter(v.columns, v.tabname) - if err != nil { - return err - } - } - - return nil -} - -// Segregate tables, columns & datatypes to load data -func splitter(columns map[string]string, tabname string) error { - - var schema string - var colkey, coldatatypes []string - - // Collect the columns and datatypes - for key, dt := range columns { - colkey = append(colkey, key) - coldatatypes = append(coldatatypes, dt) - } - - // Ensure all the constraints are removed from the table, - // and also store them to ensure all the constraint conditions - // are met when we re-enable them - err := postgres.RemoveConstraints(db, tabname) - if err != nil { - return err - } - - // Split the table into schema and tablename - tab := strings.Split(tabname, ".") - if len(tab) == 1 { // if no schema is provided then use the default postgres schema "public" - schema = "public" - } else { // else use what is provided by the user - schema = tab[0] - tabname = tab[1] - } - - // Start the progress bar - progressMsg := "(Mocking Table: " + schema + "." + tabname + ")" - core.ProgressBar(Connector.RowCount, progressMsg) - - // Commit the data to the database - err = commitData(schema, tabname, colkey, coldatatypes) - if err != nil { - return err - } - - // Close the Progress bar - core.CloseProgressBar() - - return nil -} - -// Start a transaction block and commit the data -func commitData(schema, tabname string, colkey, dtkeys []string) error { - - // Start a transaction - txn, err := db.Begin() - if err != nil { - return fmt.Errorf("Error in starting a transaction: %v", err) - } - - // Prepare the copy statement - stmt, err = txn.Prepare(pq.CopyInSchema(schema, tabname, colkey...)) - if err != nil { - return fmt.Errorf("Error in preparing the transaction statement: %v", err) - } - - // Iterate through the connector row count and build data for each datatype -DataTypePickerLoop: // Label the loop so we can break out if there is a datatype that we don't support - for i := 0; i < Connector.RowCount; i++ { - - // data collector - var data []interface{} - - // Generate data based on the column's datatype - for _, v := range dtkeys { - dataoutput, err := core.BuildData(v) - if err != nil { - if strings.HasPrefix(fmt.Sprint(err), "Unsupported datatypes found") { - log.Errorf("Skipping table \"%s\" due to error \"%v\"", tabname, err) - skippedTab = append(skippedTab, tabname) - break DataTypePickerLoop // break the loop - } else { - return err - } - - } - data = append(data, dataoutput) - } - - // Execute the statement - _, err = stmt.Exec(data...) 
- if err != nil { - return err - } - - // Increment the progress bar - core.IncrementBar() - } - - // Close the statement - err = stmt.Close() - if err != nil { - return fmt.Errorf("Error in closing the transaction statement: %v", err) - } - - // Commit the transaction - err = txn.Commit() - if err != nil { - return fmt.Errorf("Error in committing the transaction statement: %v", err) - } - - return nil - -} - -// Main postgres data mocker -func MockPostgres() error { - - var table []Table - log.Infof("Attempting to establish a connection to the %s database", DBEngine) - - // Establish a connection to the database - err := dbConn() - if err != nil { - return err - } - - // Check if we can query the database and get the version of the database in the meantime - err = dbVersion() - if err != nil { - return err - } - - // If the request is to load all tables, then extract all the tables - // and pass them on to the connector table argument. - if Connector.AllTables { - tableList, err := dbExtractTables() - if err != nil { - return err - } - Connector.Table = strings.Join(tableList, ",") - } - - // Extract the columns and datatypes from the tables defined in the connector table variable. - if Connector.Table != "" { // if there are tables in the connector table variable - table, err = dbColDataType() - if err != nil { - return err - } - } - - // Build data for all the columns and datatypes & then commit the data - if len(table) > 0 { // if there are tables found, then proceed - err = extractor(table) - if err != nil { - // TODO: need to fix constraints here as well. - log.Error("Unexpected error encountered by MockD..") - return err - } - - // Recreate all the constraints of the table unless the user wants to ignore them - if !Connector.IgnoreConstraints { - err = postgres.FixConstraints(db, ExecutionTimestamp, Connector.Debug) - if err != nil { - backupFiles, _ := core.ListFile(".", "*_"+ExecutionTimestamp+".sql") - log.Errorf("Some constraint creations failed (highlighted above); your intervention will be needed to fix those constraints") - log.Errorf("All the DDL statements are saved in the files: \n%v", strings.Join(backupFiles, "\n")) - return err - } - } - - } else { // We didn't obtain any tables from the database (e.g. a fresh DB, or the user gave a view name, etc.) - log.Warning("No tables available to load the mock data, closing the program") - } - - // If there are tables that were skipped, report them to the user. - if len(skippedTab) > 0 { - log.Warning("These tables (below) were skipped, since they contain unsupported datatypes") - log.Warningf("%s", strings.Join(skippedTab, ",")) - } - - // Close the database connection - defer db.Close() - - return nil -} diff --git a/vagrant/greenplum/README.md b/vagrant/greenplum/README.md deleted file mode 100644 index 0a4e38a..0000000 --- a/vagrant/greenplum/README.md +++ /dev/null @@ -1 +0,0 @@ -You may use the Greenplum vagrant setup from the [repo](https://github.com/ielizaga/piv-go-gpdb) \ No newline at end of file diff --git a/vagrant/postgres/Vagrantfile b/vagrant/postgres/Vagrantfile deleted file mode 100644 index 6fe7086..0000000 --- a/vagrant/postgres/Vagrantfile +++ /dev/null @@ -1,91 +0,0 @@ -# -*- mode: ruby -*- -# vi: set ft=ruby : - -# All Vagrant configuration is done below. The "2" in Vagrant.configure -# configures the configuration version (we support older styles for -# backwards compatibility). Please don't change it unless you know what -# you're doing. -Vagrant.configure("2") do |config| - # The most common configuration options are documented and commented below. 
- # For a complete reference, please see the online documentation at - # https://docs.vagrantup.com. - - # Every Vagrant development environment requires a box. You can search for - # boxes at https://atlas.hashicorp.com/search. - config.vm.box = "centos/7" - - # Disable automatic box update checking. If you disable this, then - # boxes will only be checked for updates when the user runs - # `vagrant box outdated`. This is not recommended. - # config.vm.box_check_update = false - - # Create a forwarded port mapping which allows access to a specific port - # within the machine from a port on the host machine. In the example below, - # accessing "localhost:8080" will access port 80 on the guest machine. - # NOTE: This will enable public access to the opened port - # config.vm.network "forwarded_port", guest: 80, host: 8080 - config.vm.network "forwarded_port", guest: 5432, host: 5432 - - # Create a forwarded port mapping which allows access to a specific port - # within the machine from a port on the host machine and only allow access - # via 127.0.0.1 to disable public access - # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1" - - # Create a private network, which allows host-only access to the machine - # using a specific IP. - # config.vm.network "private_network", ip: "192.168.33.10" - - # Create a public network, which generally matched to bridged network. - # Bridged networks make the machine appear as another physical device on - # your network. - # config.vm.network "public_network" - - # Share an additional folder to the guest VM. The first argument is - # the path on the host to the actual folder. The second argument is - # the path on the guest to mount the folder. And the optional third - # argument is a set of non-required options. - # config.vm.synced_folder "../data", "/vagrant_data" - - # Provider-specific configuration so you can fine-tune various - # backing providers for Vagrant. These expose provider-specific options. - # Example for VirtualBox: - # - config.vm.provider "virtualbox" do |vb| - # Display the VirtualBox GUI when booting the machine - # vb.gui = true - - # Customize the amount of memory on the VM: - vb.memory = "2048" - end - # - # View the documentation for the provider you are using for more - # information on available options. - - # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies - # such as FTP and Heroku are also available. See the documentation at - # https://docs.vagrantup.com/v2/push/atlas.html for more information. - # config.push.define "atlas" do |push| - # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME" - # end - - # Enable provisioning with a shell script. Additional provisioners such as - # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the - # documentation for more information about their specific syntax and use. - config.vm.provision "shell", inline: <<-SHELL - setenforce 0 - sed -i 's/enforcing/disabled/' /etc/selinux/config - yum makecache fast - yum install -y postgresql-server - yum clean all - systemctl enable postgresql.service - sudo -u postgres initdb -D /var/lib/pgsql/data - sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /var/lib/pgsql/data/postgresql.conf - echo "host all all 0.0.0.0/0 trust" >> /var/lib/pgsql/data/pg_hba.conf - systemctl start postgresql.service - SHELL -end - - -# Once done use -# psql -h localhost -d postgres -U postgres -# to use postgres database
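Taken together, the Vagrantfile above and the demo DDL earlier in this diff give a complete local test loop. Below is a minimal sketch, assuming the mockd binary was built or downloaded as `./mockd-mac`, the commands run from the repository root, and the box's trust authentication from the `pg_hba.conf` line above; adjust the binary name and paths to your environment.

```
# Bring up the CentOS box with Postgres forwarded to localhost:5432
cd vagrant/postgres && vagrant up && cd ../..

# Create the demo table (demo/postgres/sample-table.sql defines table sample1)
psql -h localhost -p 5432 -U postgres -d postgres -f demo/postgres/sample-table.sql

# Mock 100 rows of random data into the demo table
./mockd-mac postgres -n 100 -u postgres -d postgres -t sample1
```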