4  Building analytic pipelines for a data model

In the previous chapters we’ve seen that, after connecting to a database, we can create references to the tables we’re interested in and write bespoke analytic code to query them. However, if we are working with the same database again and again, we will likely want to build some tooling for the tasks we perform most often.

To see how we can develop a data model with associated methods and functions we’ll use the Lahman baseball data. We can see below how the data is stored across various related tables.

4.1 Defining a data model

library(dplyr)
library(dbplyr)
library(tidyr)
library(duckdb)
library(DBI)
library(Lahman)

db <- dbConnect(duckdb(), dbdir = ":memory:")
copy_lahman(db)
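
As a quick check (a small sketch, not part of the original text), we can list the tables that copy_lahman() has made available in our in-memory DuckDB database:

# List the tables now available in the database
dbListTables(db)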

Instead of manually creating references to tables of interest as we go, we will write a function to create a single reference to the Lahman data.

lahmanFromCon <- function(con) {
  tables <- c(
    "AllstarFull", "Appearances", "AwardsManagers", "AwardsPlayers", "AwardsShareManagers",
    "AwardsSharePlayers", "Batting", "BattingPost", "CollegePlaying", "Fielding",
    "FieldingOF", "FieldingOFsplit", "FieldingPost", "HallOfFame", "HomeGames",
    "LahmanData", "Managers", "ManagersHalf", "Parks", "People", "Pitching",
    "PitchingPost", "Salaries", "Schools", "SeriesPost", "Teams", "TeamsFranchises",
    "TeamsHalf"
  )
  lahmanRef <- purrr::set_names(tables, tables) |> 
               purrr::map(~ tbl(con, .))
  class(lahmanRef) <- c("lahman_ref", class(lahmanRef))
  lahmanRef
}

With our lahmanFromCon() function we can now easily get references to all of the Lahman tables in one go.

lahman <- lahmanFromCon(db)

lahman$People |>
  glimpse()
Rows: ??
Columns: 26
Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
$ playerID     <chr> "aardsda01", "aaronha01", "aaronto01", "aasedo01", "abada…
$ birthYear    <int> 1981, 1934, 1939, 1954, 1972, 1985, 1850, 1877, 1869, 186…
$ birthMonth   <int> 12, 2, 8, 9, 8, 12, 11, 4, 11, 10, 9, 3, 10, 2, 8, 9, 6, …
$ birthDay     <int> 27, 5, 5, 8, 25, 17, 4, 15, 11, 14, 20, 16, 22, 16, 17, 1…
$ birthCountry <chr> "USA", "USA", "USA", "USA", "USA", "D.R.", "USA", "USA", …
$ birthState   <chr> "CO", "AL", "AL", "CA", "FL", "La Romana", "PA", "PA", "V…
$ birthCity    <chr> "Denver", "Mobile", "Mobile", "Orange", "Palm Beach", "La…
$ deathYear    <int> NA, 2021, 1984, NA, NA, NA, 1905, 1957, 1962, 1926, NA, 1…
$ deathMonth   <int> NA, 1, 8, NA, NA, NA, 5, 1, 6, 4, NA, 2, 6, NA, NA, NA, N…
$ deathDay     <int> NA, 22, 16, NA, NA, NA, 17, 6, 11, 27, NA, 13, 11, NA, NA…
$ deathCountry <chr> NA, "USA", "USA", NA, NA, NA, "USA", "USA", "USA", "USA",…
$ deathState   <chr> NA, "GA", "GA", NA, NA, NA, "NJ", "FL", "VT", "CA", NA, "…
$ deathCity    <chr> NA, "Atlanta", "Atlanta", NA, NA, NA, "Pemberton", "Fort …
$ nameFirst    <chr> "David", "Hank", "Tommie", "Don", "Andy", "Fernando", "Jo…
$ nameLast     <chr> "Aardsma", "Aaron", "Aaron", "Aase", "Abad", "Abad", "Aba…
$ nameGiven    <chr> "David Allan", "Henry Louis", "Tommie Lee", "Donald Willi…
$ weight       <int> 215, 180, 190, 190, 184, 235, 192, 170, 175, 169, 220, 19…
$ height       <int> 75, 72, 75, 75, 73, 74, 72, 71, 71, 68, 74, 71, 70, 78, 7…
$ bats         <fct> R, R, R, R, L, L, R, R, R, L, R, R, R, R, R, L, R, L, L, …
$ throws       <fct> R, R, R, R, L, L, R, R, R, L, R, R, R, R, L, L, R, L, R, …
$ debut        <chr> "2004-04-06", "1954-04-13", "1962-04-10", "1977-07-26", "…
$ finalGame    <chr> "2015-08-23", "1976-10-03", "1971-09-26", "1990-10-03", "…
$ retroID      <chr> "aardd001", "aaroh101", "aarot101", "aased001", "abada001…
$ bbrefID      <chr> "aardsda01", "aaronha01", "aaronto01", "aasedo01", "abada…
$ deathDate    <date> NA, 2021-01-22, 1984-08-16, NA, NA, NA, 1905-05-17, 1957…
$ birthDate    <date> 1981-12-27, 1934-02-05, 1939-08-05, 1954-09-08, 1972-08-…

In this chapter we will create a bespoke data model for our database. This approach can be further extended using the dm package, which also provides various helpful functions for creating a data model and working with it.

Similar to above, we can use dm to create a single object to access our database tables.

library(dm)
lahman_dm <- dm(batting = tbl(db, "Batting"), 
                people = tbl(db, "People"))
lahman_dm
── Table source ────────────────────────────────────────────────────────────────
src:  DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
── Metadata ────────────────────────────────────────────────────────────────────
Tables: `batting`, `people`
Columns: 48
Primary keys: 0
Foreign keys: 0

Using this approach, we can make use of various utility functions. For example, here we specify primary and foreign keys and then check that these key constraints are satisfied.

lahman_dm <- lahman_dm %>%
  dm_add_pk(people, playerID) %>%
  dm_add_fk(batting, playerID, people) 

lahman_dm
── Table source ────────────────────────────────────────────────────────────────
src:  DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
── Metadata ────────────────────────────────────────────────────────────────────
Tables: `batting`, `people`
Columns: 48
Primary keys: 1
Foreign keys: 1
dm_examine_constraints(lahman_dm)
ℹ All constraints satisfied.
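
The dm package offers various other helpers too. As an illustrative sketch (not from the original text), dm_draw() draws the tables in the data model along with the keys linking them:

# Draw a diagram of the tables in the dm object and their keys
dm_draw(lahman_dm)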

For more information on the dm package, see https://dm.cynkra.com/index.html.

4.2 Creating functions for the data model

We can also now make various functions specific to our Lahman data model to facilitate data analyses. Given we know the structure of the data, we can build a set of functions that abstract away some of the complexities of working with data in a database.

Let’s start by making a small function to get the teams players have played for. We can see that the code we use follows on from the last couple of chapters.

getTeams <- function(lahman, name = "Barry Bonds") {
  lahman$Batting |>
    dplyr::inner_join(
      lahman$People |>
        dplyr::mutate(full_name = paste0(nameFirst, " ", nameLast)) |>
        dplyr::filter(full_name %in% name) |>
        dplyr::select("playerID"),
      by = dplyr::join_by(playerID)
    ) |>
    dplyr::select(
      "teamID",
      "yearID"
    ) |>
    dplyr::distinct() |>
    dplyr::left_join(lahman$Teams,
      by = dplyr::join_by(teamID, yearID)
    ) |>
    dplyr::select("name") |>
    dplyr::distinct()
}

Now we can easily get the different teams a player represented. We can see how changing the player name changes the SQL that is getting run behind the scenes.

getTeams(lahman, "Babe Ruth")
# Source:   SQL [?? x 1]
# Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
  name            
  <chr>           
1 Boston Braves   
2 New York Yankees
3 Boston Red Sox  
<SQL>
SELECT DISTINCT q01.*
FROM (
  SELECT "name"
  FROM (
    SELECT DISTINCT q01.*
    FROM (
      SELECT teamID, yearID
      FROM Batting
      INNER JOIN (
        SELECT playerID
        FROM (
          SELECT People.*, CONCAT_WS('', nameFirst, ' ', nameLast) AS full_name
          FROM People
        ) q01
        WHERE (full_name IN ('Babe Ruth'))
      ) RHS
        ON (Batting.playerID = RHS.playerID)
    ) q01
  ) LHS
  LEFT JOIN Teams
    ON (LHS.teamID = Teams.teamID AND LHS.yearID = Teams.yearID)
) q01
getTeams(lahman, "Barry Bonds")
# Source:   SQL [?? x 1]
# Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
  name                
  <chr>               
1 San Francisco Giants
2 Pittsburgh Pirates  
<SQL>
SELECT DISTINCT q01.*
FROM (
  SELECT "name"
  FROM (
    SELECT DISTINCT q01.*
    FROM (
      SELECT teamID, yearID
      FROM Batting
      INNER JOIN (
        SELECT playerID
        FROM (
          SELECT People.*, CONCAT_WS('', nameFirst, ' ', nameLast) AS full_name
          FROM People
        ) q01
        WHERE (full_name IN ('Barry Bonds'))
      ) RHS
        ON (Batting.playerID = RHS.playerID)
    ) q01
  ) LHS
  LEFT JOIN Teams
    ON (LHS.teamID = Teams.teamID AND LHS.yearID = Teams.yearID)
) q01

The function collect() brings data out of the database and into R. When working with large datasets, as is often the case when interacting with a database, we typically want to keep as much computation as possible on the database side. Our getTeams() function, for example, does all of its work in the database, so collecting simply brings out the final result: the teams the person played for. In this case we could also use pull() to get our result out as a vector rather than a data frame.

getTeams(lahman, "Barry Bonds") |>
  collect()
# A tibble: 2 × 1
  name                
  <chr>               
1 Pittsburgh Pirates  
2 San Francisco Giants
getTeams(lahman, "Barry Bonds") |>
  pull()
[1] "San Francisco Giants" "Pittsburgh Pirates"  

In other cases, however, we may need to collect data to perform further analysis steps that are not possible in SQL. This might be the case for plotting, or for other analytic steps such as fitting statistical models. In such cases we should try to bring out only the data that we need, as we will likely have much less memory available on our local computer than is available to the database.
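
As a minimal sketch (not from the original text), we could, for example, aggregate inside the database and collect only a small summary table before fitting a model locally:

# Summarise home runs per year on the database side and collect the result
hr_by_year <- lahman$Batting |>
  summarise(total_hr = sum(HR, na.rm = TRUE), .by = yearID) |>
  collect()

# Fit a simple linear model in R on the small collected dataset
lm(total_hr ~ yearID, data = hr_by_year)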

Similarly, we could make a function to add a player’s country of birth to a table.

addBirthCountry <- function(lahmanTbl){
  lahmanTbl |> 
    dplyr::left_join(lahman$People |> 
              dplyr::select("playerID", "birthCountry"),
              dplyr::join_by("playerID"))
}
lahman$Batting |>
  addBirthCountry()
# Source:   SQL [?? x 23]
# Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
   playerID  yearID stint teamID lgID      G    AB     R     H   X2B   X3B    HR
   <chr>      <int> <int> <fct>  <fct> <int> <int> <int> <int> <int> <int> <int>
 1 abercda01   1871     1 TRO    NA        1     4     0     0     0     0     0
 2 addybo01    1871     1 RC1    NA       25   118    30    32     6     0     0
 3 allisar01   1871     1 CL1    NA       29   137    28    40     4     5     0
 4 allisdo01   1871     1 WS3    NA       27   133    28    44    10     2     2
 5 ansonca01   1871     1 RC1    NA       25   120    29    39    11     3     0
 6 armstbo01   1871     1 FW1    NA       12    49     9    11     2     1     0
 7 barkeal01   1871     1 RC1    NA        1     4     0     1     0     0     0
 8 barnero01   1871     1 BS1    NA       31   157    66    63    10     9     0
 9 barrebi01   1871     1 FW1    NA        1     5     1     1     1     0     0
10 barrofr01   1871     1 BS1    NA       18    86    13    13     2     1     0
# ℹ more rows
# ℹ 11 more variables: RBI <int>, SB <int>, CS <int>, BB <int>, SO <int>,
#   IBB <int>, HBP <int>, SH <int>, SF <int>, GIDP <int>, birthCountry <chr>
<SQL>
SELECT Batting.*, birthCountry
FROM Batting
LEFT JOIN People
  ON (Batting.playerID = People.playerID)
lahman$Pitching |>
  addBirthCountry()
# Source:   SQL [?? x 31]
# Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
   playerID  yearID stint teamID lgID      W     L     G    GS    CG   SHO    SV
   <chr>      <int> <int> <fct>  <fct> <int> <int> <int> <int> <int> <int> <int>
 1 bechtge01   1871     1 PH1    NA        1     2     3     3     2     0     0
 2 brainas01   1871     1 WS3    NA       12    15    30    30    30     0     0
 3 fergubo01   1871     1 NY2    NA        0     0     1     0     0     0     0
 4 fishech01   1871     1 RC1    NA        4    16    24    24    22     1     0
 5 fleetfr01   1871     1 NY2    NA        0     1     1     1     1     0     0
 6 flowedi01   1871     1 TRO    NA        0     0     1     0     0     0     0
 7 mackde01    1871     1 RC1    NA        0     1     3     1     1     0     0
 8 mathebo01   1871     1 FW1    NA        6    11    19    19    19     1     0
 9 mcbridi01   1871     1 PH1    NA       18     5    25    25    25     0     0
10 mcmuljo01   1871     1 TRO    NA       12    15    29    29    28     0     0
# ℹ more rows
# ℹ 19 more variables: IPouts <int>, H <int>, ER <int>, HR <int>, BB <int>,
#   SO <int>, BAOpp <dbl>, ERA <dbl>, IBB <int>, WP <int>, HBP <int>, BK <int>,
#   BFP <int>, GF <int>, R <int>, SH <int>, SF <int>, GIDP <int>,
#   birthCountry <chr>
<SQL>
SELECT Pitching.*, birthCountry
FROM Pitching
LEFT JOIN People
  ON (Pitching.playerID = People.playerID)

We could then use our addBirthCountry() function as part of a larger query to summarise the proportion of players from each country over time (based on their presence in the batting table).

plot_data <- lahman$Batting |>
  select(playerID, yearID) |> 
  addBirthCountry() |>
  filter(yearID > 1960) |> 
  mutate(birthCountry = case_when(
    birthCountry == "USA" ~ "USA",
    birthCountry == "D.R." ~ "Dominican Republic",
    birthCountry == "Venezuela" ~ "Venezuela",
    birthCountry == "P.R." ~ "Puerto Rico",
    birthCountry == "Cuba" ~ "Cuba",
    birthCountry == "CAN" ~ "Canada",
    birthCountry == "Mexico" ~ "Mexico",
    .default = "Other"
  )) |> 
  summarise(n = n(), .by = c("yearID", "birthCountry")) |> 
  group_by(yearID) |>
  mutate(percentage = n / sum(n) * 100) |> 
  ungroup() |> 
  collect()
<SQL>
SELECT q01.*, (n / SUM(n) OVER (PARTITION BY yearID)) * 100.0 AS percentage
FROM (
  SELECT yearID, birthCountry, COUNT(*) AS n
  FROM (
    SELECT
      playerID,
      yearID,
      CASE
WHEN (birthCountry = 'USA') THEN 'USA'
WHEN (birthCountry = 'D.R.') THEN 'Dominican Republic'
WHEN (birthCountry = 'Venezuela') THEN 'Venezuela'
WHEN (birthCountry = 'P.R.') THEN 'Puerto Rico'
WHEN (birthCountry = 'Cuba') THEN 'Cuba'
WHEN (birthCountry = 'CAN') THEN 'Canada'
WHEN (birthCountry = 'Mexico') THEN 'Mexico'
ELSE 'Other'
END AS birthCountry
    FROM (
      SELECT Batting.playerID AS playerID, yearID, birthCountry
      FROM Batting
      LEFT JOIN People
        ON (Batting.playerID = People.playerID)
    ) q01
    WHERE (yearID > 1960.0)
  ) q01
  GROUP BY yearID, birthCountry
) q01
library(ggplot2)
plot_data |> 
  ggplot() +
  geom_col(aes(yearID, 
              percentage, 
              fill = birthCountry), width=1) + 
  theme_minimal() + 
  theme(legend.title = element_blank(), 
        legend.position = "top")

In our lahmanFromCon() function we gave our data model object the class “lahman_ref”. Therefore, as well as creating user-facing functions to work with our Lahman data model, we can also define methods for this class.

class(lahman)
[1] "lahman_ref" "list"      

With this we can write methods specific to a “lahman_ref” object. For example, we can define a print method like so:

print.lahman_ref <- function(x, ...) {
  len <- length(names(x))
  cli::cli_h1("# Lahman reference - {len} tables")
  cli::cli_li(paste(
    "{.strong tables:}",
    paste(names(x), collapse = ", ")
  ))
  invisible(x)
}

Now we can see a summary of our Lahman data model when we print the object.

lahman
── # Lahman reference - 28 tables ──────────────────────────────────────────────
• tables: AllstarFull, Appearances, AwardsManagers, AwardsPlayers,
AwardsShareManagers, AwardsSharePlayers, Batting, BattingPost, CollegePlaying,
Fielding, FieldingOF, FieldingOFsplit, FieldingPost, HallOfFame, HomeGames,
LahmanData, Managers, ManagersHalf, Parks, People, Pitching, PitchingPost,
Salaries, Schools, SeriesPost, Teams, TeamsFranchises, TeamsHalf

And we can see that this print is being done by the method we defined.

library(sloop)
s3_dispatch(print(lahman))
=> print.lahman_ref
   print.list
 * print.default
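
Other methods can be defined in the same way. As an illustrative sketch (not from the original text), we could add a summary() method that reports the number of columns in each table; colnames() works on the lazy table references without bringing any data into R.

summary.lahman_ref <- function(object, ...) {
  # count the columns of each lazy table reference
  n_cols <- purrr::map_int(object, ~ length(colnames(.)))
  cli::cli_h1("# Lahman reference - column counts")
  cli::cli_li(paste0(names(n_cols), ": ", n_cols, " columns"))
  invisible(object)
}

summary(lahman)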

4.3 Building efficient analytic pipelines

4.3.1 The risk of “clean” R code

Following on from the above approach, we might think it a good idea to make another function, addBirthYear(). We could then use it along with our addBirthCountry() function to summarise average salary by birth country and birth year.

addBirthYear <- function(lahmanTbl){
  lahmanTbl |> 
    left_join(lahman$People |> 
              select("playerID", "birthYear"),
              join_by("playerID"))
}

lahman$Salaries |> 
  addBirthCountry() |> 
  addBirthYear() |> 
  summarise(average_salary = mean(salary), 
            .by = c("birthCountry", "birthYear"))
# Source:   SQL [?? x 3]
# Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
   birthCountry   birthYear average_salary
   <chr>              <int>          <dbl>
 1 USA                 1953        859391.
 2 D.R.                1958        866051.
 3 USA                 1961        811250.
 4 USA                 1941        679429 
 5 D.R.                1956       1476264.
 6 USA                 1943        372500 
 7 P.R.                1963       1164202.
 8 United Kingdom      1959        492833.
 9 Venezuela           1962        320923.
10 France              1959        221000 
# ℹ more rows

Although the R code looks fine on the face of it, when we look at the SQL we can see that our query joins to the People table twice: one join gets the birth country and the other the birth year.

<SQL>
SELECT birthCountry, birthYear, AVG(salary) AS average_salary
FROM (
  SELECT
    Salaries.*,
    "People...2".birthCountry AS birthCountry,
    "People...3".birthYear AS birthYear
  FROM Salaries
  LEFT JOIN People "People...2"
    ON (Salaries.playerID = "People...2".playerID)
  LEFT JOIN People "People...3"
    ON (Salaries.playerID = "People...3".playerID)
) q01
GROUP BY birthCountry, birthYear

To improve performance, we could instead have a single function that adds both of these variables, birth country and birth year, at the same time.

addCharacteristics <- function(lahmanTbl){
  lahmanTbl |> 
    left_join(lahman$People |> 
              select("playerID", "birthYear", "birthCountry"),
              join_by("playerID"))
}

lahman$Salaries |> 
  addCharacteristics() |> 
  summarise(average_salary = mean(salary), 
            .by = c("birthCountry", "birthYear"))
# Source:   SQL [?? x 3]
# Database: DuckDB v1.1.3 [eburn@Windows 10 x64:R 4.4.0/:memory:]
   birthCountry birthYear average_salary
   <chr>            <int>          <dbl>
 1 USA               1969       1496593.
 2 Cuba              1987       4932700.
 3 D.R.              1984       2924854.
 4 USA               1973       2142680.
 5 D.R.              1972       2722415.
 6 Mexico            1971       2247962.
 7 USA               1964       1390524.
 8 USA               1960       1030321.
 9 USA               1989       1442446.
10 Venezuela         1991        776469.
# ℹ more rows
<SQL>
SELECT birthCountry, birthYear, AVG(salary) AS average_salary
FROM (
  SELECT Salaries.*, birthYear, birthCountry
  FROM Salaries
  LEFT JOIN People
    ON (Salaries.playerID = People.playerID)
) q01
GROUP BY birthCountry, birthYear

This query returns the same result but with a single join rather than two, lowering the computational cost of the analysis. All of this is to show that when working with databases we should keep in mind the SQL that is actually being executed behind the scenes.
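
A simple habit that helps here (a small sketch along the lines of the examples above) is to call show_query() on a pipeline before running it, so we can review the translated SQL without executing anything:

lahman$Salaries |> 
  addCharacteristics() |> 
  summarise(average_salary = mean(salary), 
            .by = c("birthCountry", "birthYear")) |> 
  show_query()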

4.3.2 Piping and SQL

Although piping functions has little impact on performance when working with data in memory in R, when working with a database the SQL generated will differ depending on whether we specify several operations within a single function call or spread them across multiple calls.

For example, a single mutate() call creating two new variables would generate the SQL below.

lahman$People |> 
  mutate(birthDatePlus1 = 
           add_years(birthDate, 1L),
         birthDatePlus10 = 
           add_years(birthDate, 10L)) |> 
  select("playerID", 
         "birthDatePlus1",
         "birthDatePlus10") |> 
  show_query()
<SQL>
SELECT
  playerID,
  DATE_ADD(birthDate, INTERVAL '1 year') AS birthDatePlus1,
  DATE_ADD(birthDate, INTERVAL '10 year') AS birthDatePlus10
FROM People

The SQL is different, however, if the variables are created in separate mutate() calls, with one now being created in a sub-query.

lahman$People |> 
  mutate(birthDatePlus1 = 
           add_years(birthDate, 1L)) |> 
  mutate(birthDatePlus10 = 
           add_years(birthDate, 10L)) |> 
  select("playerID", 
         "birthDatePlus1",
         "birthDatePlus10") |> 
  show_query()
<SQL>
SELECT
  playerID,
  birthDatePlus1,
  DATE_ADD(birthDate, INTERVAL '10 year') AS birthDatePlus10
FROM (
  SELECT People.*, DATE_ADD(birthDate, INTERVAL '1 year') AS birthDatePlus1
  FROM People
) q01

4.3.3 Computing intermediate queries

Let’s say we want to summarise hits from the batting table and strike outs from the pitching table by the college players attended and their birth year. We could do this like so:

players_with_college <- lahman$People |> 
  select(playerID, birthYear) |> 
  inner_join(lahman$CollegePlaying |> 
              filter(!is.na(schoolID)) |> 
              select(playerID, schoolID) |> 
              distinct(),
            by = join_by(playerID))

lahman$Batting |> 
  left_join(players_with_college,
            by = join_by(playerID)) |> 
  summarise(hits = sum(H, na.rm = TRUE), 
                        .by = c(schoolID, birthYear)) |> 
  collect()
# A tibble: 6,206 × 3
   schoolID   birthYear      hits
   <chr>          <int>     <dbl>
 1 <NA>              NA   2822499
 2 manhattan       1862         3
 3 yale            1865        15
 4 nyuroch         1861       248
 5 princeton       1864         1
 6 newyorku        1867       101
 7 michigan        1872         9
 8 johnshpkns      1871        63
 9 brown           1872        15
10 vermont         1873        45
# ℹ 6,196 more rows
lahman$Pitching |> 
  left_join(players_with_college,
            by = join_by(playerID)) |> 
  summarise(strike_outs = sum(SO, na.rm = TRUE), 
                        .by = c(schoolID, birthYear))|> 
  collect()
# A tibble: 3,659 × 3
   schoolID   birthYear strike_outs
   <chr>          <int>       <dbl>
 1 brown           1859          36
 2 washcollmd      1874           0
 3 fordham         1871          38
 4 wvirginia       1870           4
 5 stbonny         1877           3
 6 illinois        1879        1798
 7 notredame       1877         258
 8 tennessee       1877          20
 9 brown           1880         292
10 knoxil          1884          32
# ℹ 3,649 more rows

Looking at the SQL, however, we can see some duplication: each of the full queries re-runs our players_with_college query.

<SQL>
SELECT schoolID, birthYear, SUM(H) AS hits
FROM (
  SELECT Batting.*, birthYear, schoolID
  FROM Batting
  LEFT JOIN (
    SELECT People.playerID AS playerID, birthYear, schoolID
    FROM People
    INNER JOIN (
      SELECT DISTINCT playerID, schoolID
      FROM CollegePlaying
      WHERE (NOT((schoolID IS NULL)))
    ) RHS
      ON (People.playerID = RHS.playerID)
  ) RHS
    ON (Batting.playerID = RHS.playerID)
) q01
GROUP BY schoolID, birthYear
<SQL>
SELECT schoolID, birthYear, SUM(SO) AS strike_outs
FROM (
  SELECT Pitching.*, birthYear, schoolID
  FROM Pitching
  LEFT JOIN (
    SELECT People.playerID AS playerID, birthYear, schoolID
    FROM People
    INNER JOIN (
      SELECT DISTINCT playerID, schoolID
      FROM CollegePlaying
      WHERE (NOT((schoolID IS NULL)))
    ) RHS
      ON (People.playerID = RHS.playerID)
  ) RHS
    ON (Pitching.playerID = RHS.playerID)
) q01
GROUP BY schoolID, birthYear

To avoid this we could instead make use of the compute() function to force the computation of this first, intermediate query into a temporary table in the database.

players_with_college <- players_with_college |> 
  compute()

Now we have a temporary table with the result of our players_with_college query, and we can use this in both of our aggregation queries.

players_with_college |> 
  show_query()
<SQL>
SELECT *
FROM dbplyr_Ao1o8wBQsO
lahman$Batting |> 
  left_join(players_with_college,
            by = join_by(playerID)) |> 
  summarise(hits = sum(H, na.rm = TRUE), 
                        .by = c(schoolID, birthYear)) |> 
  collect()
# A tibble: 6,206 × 3
   schoolID  birthYear      hits
   <chr>         <int>     <dbl>
 1 holycross      1864         4
 2 manhattan      1862         3
 3 brown          1863       806
 4 newyorku       1867       101
 5 syracuse       1863         1
 6 upenn          1872         0
 7 amherstma      1868         0
 8 wake           1869         0
 9 chicago        1874         2
10 norwichvt      1873       156
# ℹ 6,196 more rows
lahman$Pitching |> 
  left_join(players_with_college,
            by = join_by(playerID)) |> 
  summarise(strike_outs = sum(SO, na.rm = TRUE), 
                        .by = c(schoolID, birthYear))|> 
  collect()
# A tibble: 3,659 × 3
   schoolID   birthYear strike_outs
   <chr>          <int>       <dbl>
 1 pennst          1860         920
 2 harvard         1856           9
 3 illinoisst      1869         955
 4 holycross       1870           4
 5 knoxil          1871          23
 6 oberlinoh       1872           1
 7 beloitwi        1872           3
 8 tuftsma         1874           3
 9 maine           1877          12
10 indiana         1877           2
# ℹ 3,649 more rows
<SQL>
SELECT schoolID, birthYear, SUM(H) AS hits
FROM (
  SELECT Batting.*, birthYear, schoolID
  FROM Batting
  LEFT JOIN dbplyr_Ao1o8wBQsO
    ON (Batting.playerID = dbplyr_Ao1o8wBQsO.playerID)
) q01
GROUP BY schoolID, birthYear
<SQL>
SELECT schoolID, birthYear, SUM(SO) AS strike_outs
FROM (
  SELECT Pitching.*, birthYear, schoolID
  FROM Pitching
  LEFT JOIN dbplyr_Ao1o8wBQsO
    ON (Pitching.playerID = dbplyr_Ao1o8wBQsO.playerID)
) q01
GROUP BY schoolID, birthYear

In this case the SQL from our initial approach was not so complicated. You can imagine, however, that without computing intermediate tables the SQL associated with a long series of data manipulations could quickly become unmanageable. Moreover, we can end up with inefficient code that repeatedly computes the same result as part of a larger query. So although we don’t want to overuse computation of intermediate queries, it is often a necessity when creating our analytic pipelines.
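
As a final aside (a sketch, not from the original text), compute() also lets us choose the name of the table it creates and whether it is temporary, which can make intermediate tables easier to keep track of:

# Compute the intermediate query to an explicitly named temporary table
# rather than one with a randomly generated name
players_with_college <- players_with_college |> 
  compute(name = "players_with_college", temporary = TRUE)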