Create a cdm_reference object from a sparklyr connection.

Usage

cdmFromSpark(
  con,
  cdmSchema,
  writeSchema,
  cohortTables = NULL,
  cdmVersion = NULL,
  cdmName = NULL,
  achillesSchema = NULL,
  .softValidation = FALSE,
  writePrefix = NULL
)

Arguments

con

A Spark connection created with sparklyr::spark_connect().

cdmSchema

Schema where the OMOP standard tables are located. The schema is defined with a named character list/vector; allowed names are: 'catalog', 'schema' and 'prefix'.

writeSchema

Schema with writing permissions. The schema is defined with a named character list/vector; allowed names are: 'catalog', 'schema' and 'prefix'.

cohortTables

Names of cohort tables to be read from writeSchema.

cdmVersion

The version of the CDM (either "5.3" or "5.4"). If NULL, cdm_source$cdm_version will be used instead.

cdmName

The name of the cdm object. If NULL, cdm_source$cdm_source_name will be used instead.

achillesSchema

Schema where the Achilles tables are located. The schema is defined with a named character list/vector; allowed names are: 'catalog', 'schema' and 'prefix'.

.softValidation

Whether to use soft validation. This is not recommended, as analysis pipelines assume the cdm fulfills the validation criteria.

writePrefix

A prefix that will be added to all tables created in the write_schema. This can be used to create a namespace in your database write_schema for your tables.

Value

A cdm_reference object.

Examples

if (FALSE) { # \dontrun{
con <- sparklyr::spark_connect(...)
cdmFromSpark(
  con = con,
  cdmSchema = c(catalog = "...", schema = "...", prefix = "..."),
  writeSchema = list() # use `list()`/`c()`/`NULL` to use temporary tables
)
} # }
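
As a fuller sketch, the snippet below shows how the schema arguments, writePrefix and cohortTables might be combined. It is untested: the local master, schema names, prefix and cohort table name are illustrative assumptions, not values prescribed by this function, and a real deployment would connect to a remote Spark cluster instead.

if (FALSE) { # \dontrun{
library(sparklyr)

# Assumption: a local Spark session for illustration only; in practice
# pass your cluster's connection details to spark_connect().
con <- spark_connect(master = "local")

cdm <- cdmFromSpark(
  con = con,
  # Hypothetical catalog/schema names used for illustration
  cdmSchema = c(catalog = "omop", schema = "cdm"),
  writeSchema = c(catalog = "omop", schema = "results"),
  writePrefix = "study1_",       # tables created in writeSchema get this prefix
  cohortTables = "my_cohort",    # existing cohort table read from writeSchema
  cdmName = "my_cdm"
)
} # }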