Converts data for faster analysis into either a DuckDB database file or into parquet files in a hive-style directory structure. Running analysis on these files is sometimes up to 100 times faster than working with raw CSV files, especially when those are inside gzip archives. To connect to the converted data, use 'mydata <- spod_connect(data_path = path_returned_by_spod_convert)', passing the path to where the data was saved. The connected mydata can be analysed using dplyr verbs such as select, filter, mutate, group_by, summarise, etc. At the end of any such sequence of commands you will need to add collect to execute the whole chain of data manipulations and load the results into memory as an R data.frame/tibble. For more in-depth usage of such data, please refer to the DuckDB documentation and examples at https://duckdb.org/docs/api/r#dbplyr . Some more useful examples can be found at https://arrow-user2022.netlify.app/data-wrangling#combining-arrow-with-duckdb . You may also use the arrow package to work with parquet files: https://arrow.apache.org/docs/r/.
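The typical convert-connect-analyse workflow might be sketched as follows, assuming the spanishoddata package is installed and the requested dates are available for download; the column names date and n_trips are illustrative assumptions, so check the actual schema in the codebooks via spod_codebook():

```r
library(spanishoddata)
library(dplyr)

spod_set_data_dir(tempdir())

# convert a few days of v1 number-of-trips data to a DuckDB database
db_path <- spod_convert(
  type = "number_of_trips",
  zones = "distr",
  dates = c(start = "2020-02-14", end = "2020-02-16")
)

# connect and build a lazy dplyr pipeline
nt <- spod_connect(db_path)
daily <- nt |>
  group_by(date) |>
  summarise(total_trips = sum(n_trips), .groups = "drop") |>
  collect() # executes the chain and returns an in-memory tibble

spod_disconnect(nt)
```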
Usage
spod_convert(
type = c("od", "origin-destination", "os", "overnight_stays", "nt", "number_of_trips"),
zones = c("districts", "dist", "distr", "distritos", "municipalities", "muni",
"municip", "municipios"),
dates = NULL,
save_format = "duckdb",
save_path = NULL,
overwrite = FALSE,
data_dir = spod_get_data_dir(),
quiet = FALSE,
max_mem_gb = NULL,
max_n_cpu = max(1, parallelly::availableCores() - 1),
max_download_size_gb = 1,
ignore_missing_dates = FALSE
)
Arguments
- type
The type of data to download. Can be "origin-destination" (or just "od"), or "number_of_trips" (or just "nt") for v1 data. For v2 data, "overnight_stays" (or just "os") is also available. More data types will be supported in the future. See the codebooks for v1 and v2 data in the vignettes with spod_codebook(1) and spod_codebook(2).
- zones
The zones for which to download the data. Can be "districts" (or "dist", "distr", or the original Spanish "distritos") or "municipalities" (or "muni", "municip", or the original Spanish "municipios") for both data versions. Additionally, these can be "large_urban_areas" (or "lua", or the original Spanish "grandes_areas_urbanas", or "gau") for v2 data (2022 onwards).
- dates
A character or Date vector of dates to process. Kindly keep in mind that v1 and v2 data follow different data collection methodologies and may not be directly comparable. Therefore, do not try to request data from both versions for the same date range. If you need to compare data from both versions, please refer to the respective codebooks and methodology documents. The v1 data covers the period from 2020-02-14 to 2021-05-09, and the v2 data covers the period from 2022-01-01 to the present until further notice. The true date range is checked against the available data for each version on every function run. The possible values can be any of the following:
- For the spod_get() and spod_convert() functions, dates can be set to "cached_v1" or "cached_v2" to request data from cached (already previously downloaded) v1 (2020-2021) or v2 (2022 onwards) data. In this case, the function will identify and use all data files that have been downloaded and cached locally (e.g. by an explicit run of spod_download(), or by any data requests made using the spod_get() or spod_convert() functions).
- A single date in ISO (YYYY-MM-DD) or YYYYMMDD format, as a character or Date object.
- A vector of dates in ISO (YYYY-MM-DD) or YYYYMMDD format, as a character or Date vector. Can be any non-consecutive sequence of dates.
- A date range. Either a character or Date object of length 2 with clearly named elements start and end in ISO (YYYY-MM-DD) or YYYYMMDD format, e.g. c(start = "2020-02-15", end = "2020-02-17"); or a character object of the form YYYY-MM-DD_YYYY-MM-DD or YYYYMMDD_YYYYMMDD, for example "2020-02-15_2020-02-17" or "20200215_20200217".
- A regular expression to match dates in the format YYYYMMDD, as a character object. For example, "^202002" will match all dates in February 2020.
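For illustration, the accepted dates formats could be passed as follows (a sketch; each call assumes the corresponding data is available for the chosen type and zones):

```r
library(spanishoddata)

# a single date
spod_convert(type = "nt", zones = "distr", dates = "2020-02-14")
# a vector of non-consecutive dates
spod_convert(type = "nt", zones = "distr", dates = c("2020-02-14", "2020-03-01"))
# a named date range
spod_convert(type = "nt", zones = "distr", dates = c(start = "2020-02-15", end = "2020-02-17"))
# a date range as a single string
spod_convert(type = "nt", zones = "distr", dates = "2020-02-15_2020-02-17")
# a regular expression: all dates in February 2020
spod_convert(type = "nt", zones = "distr", dates = "^202002")
# all previously downloaded v1 data
spod_convert(type = "nt", zones = "distr", dates = "cached_v1")
```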
- save_format
A character vector of length 1 with values "duckdb" or "parquet". Defaults to "duckdb". If NULL, it is automatically inferred from the save_path argument. If only save_format is provided, save_path will be set to the default location under the data directory set in the SPANISH_OD_DATA_DIR environment variable using spod_set_data_dir(path = 'path/to/your/cache/dir'). So for v1 data that path would be <data_dir>/clean_data/v1/tabular/duckdb/ or <data_dir>/clean_data/v1/tabular/parquet/.
You can also set save_path. If it ends with ".duckdb", the data will be saved in DuckDB database format; if save_path does not end with ".duckdb", the data will be saved in parquet format, and save_path will be treated as a path to a folder, not a file, with the necessary hive-style subdirectories created in that folder. Hive style looks like year=2020/month=2/day=14, and inside each such directory there will be a data_0.parquet file that contains the data for that day.
- save_path
A character vector of length 1. The full (not relative) path to a DuckDB database file or parquet folder.
If save_path ends with .duckdb, it will be saved as a DuckDB database file. The format argument will be automatically set to save_format = 'duckdb'.
If save_path ends with a folder name (e.g. /data_dir/clean_data/v1/tabular/parquet/od_distr for origin-destination data at the district level), the data will be saved as a collection of parquet files in a hive-style directory structure. So the subfolders of od_distr will be year=2020/month=2/day=14, and inside each of these folders a single parquet file will be placed containing the data for that day.
If NULL, uses the default location in data_dir (set by the SPANISH_OD_DATA_DIR environment variable using spod_set_data_dir(path = 'path/to/your/cache/dir')). Therefore, the default relative path for DuckDB is <data_dir>/clean_data/v1/tabular/duckdb/<type>_<zones>.duckdb and for parquet files it is <data_dir>/clean_data/v1/tabular/parquet/<type>_<zones>/, where type is the type of data (e.g. 'od', 'os', 'nt', which correspond to 'origin-destination', 'overnight-stays', 'number-of-trips') and zones is the name of the geographic zones (e.g. 'distr', 'muni'). See also the details in the save_format argument description above.
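As a sketch of the parquet path handling described above (the folder path below is only an example constructed from the documented defaults):

```r
library(spanishoddata)

# save as hive-partitioned parquet files; save_path has no ".duckdb"
# suffix, so it is treated as a folder and the parquet format is used
parquet_path <- spod_convert(
  type = "od",
  zones = "distr",
  dates = c(start = "2020-02-14", end = "2020-02-16"),
  save_path = file.path(
    spod_get_data_dir(),
    "clean_data/v1/tabular/parquet/od_distr"
  )
)
# resulting layout: od_distr/year=2020/month=2/day=14/data_0.parquet, ...
```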
- overwrite
A logical or a character vector of length 1. If TRUE, overwrites existing DuckDB or parquet files. Defaults to FALSE. For parquet files it can also be set to 'update', so that parquet files are only created for the dates that have not yet been converted.
- data_dir
The directory where the data is stored. Defaults to the value returned by spod_get_data_dir(), which returns the value of the environment variable SPANISH_OD_DATA_DIR, or a temporary directory if the variable is not set. To set the data directory, use spod_set_data_dir().
- quiet
A logical value indicating whether to suppress messages. Default is FALSE.
- max_mem_gb
An integer value of the maximum operating memory to use in GB. NULL by default, which delegates the choice to the DuckDB engine, which usually sets it to 80% of available memory. Caution: in HPC use, the amount of memory available to your job may be determined incorrectly by the DuckDB engine, so it is recommended to set this parameter explicitly according to your job's memory limits.
- max_n_cpu
The maximum number of threads to use. Defaults to the number of available cores minus 1.
- max_download_size_gb
The maximum download size in gigabytes. Defaults to 1.
- ignore_missing_dates
Logical. If TRUE, the function will not raise an error if some of the specified dates are missing. Any missing dates will be skipped, but the data for all valid dates will be acquired. Defaults to FALSE.
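For example (a sketch; 2021-05-10 lies just outside the documented v1 date range, so it would be skipped rather than cause an error):

```r
library(spanishoddata)

db_path <- spod_convert(
  type = "nt",
  zones = "distr",
  dates = c("2021-05-09", "2021-05-10"), # the second date does not exist in v1
  ignore_missing_dates = TRUE
)
```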
Value
Path to the saved DuckDB database file, or to the folder with parquet files in a hive-style directory structure.
Examples
if (FALSE) { # interactive()
# \donttest{
# Set data dir for file downloads
spod_set_data_dir(tempdir())
# download and convert data
dates_1 <- c(start = "2020-02-17", end = "2020-02-18")
db_2 <- spod_convert(
type = "number_of_trips",
zones = "distr",
dates = dates_1,
overwrite = TRUE
)
# now connect to the converted data
my_od_data_2 <- spod_connect(db_2)
# disconnect from the database
spod_disconnect(my_od_data_2)
# }
}
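As a complementary sketch (not part of the package's shipped examples), parquet output produced by spod_convert() can also be opened directly with the arrow package; the partition columns year, month and day come from the hive-style directory structure described above:

```r
library(arrow)
library(dplyr)

# 'parquet_path' is assumed to be the folder returned by an earlier
# call to spod_convert(..., save_format = "parquet")
ds <- open_dataset(parquet_path)
ds |>
  filter(year == 2020, month == 2) |>
  collect()
```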
