Date Offset Views (Make Old Data Look New)

🤔 The Question

You’ve got a dataset… but all the dates are from the 1970s.

You want to:

  • Keep the original data untouched
  • Preserve the distribution of dates
  • Make everything look like it happened recently (e.g., yesterday, last week)

Is there a clean way to do this without rewriting the table?

💡 The Key Insight

Yes… use a view with a date offset expression.

Instead of modifying the data, you dynamically shift all dates forward so they appear current… while preserving their relative spacing.

🛠️ How It Works

Create Sample Table + Data

Let’s simulate a dataset from the late 1970s:

CREATE TABLE t_sample (
  col1 TEXT,
  col2 TEXT,
  date_col DATE
);

INSERT INTO t_sample (col1, col2, date_col) VALUES
('A', 'X', DATE '1979-12-25'),
('B', 'Y', DATE '1979-12-28'),
('C', 'Z', DATE '1979-12-31');
📌 Intentional Design
  • Includes a clear max-date anchor (1979-12-31)
  • Uses evenly spaced gaps (3 days, 3 days), so preserved spacing is easy to verify
  • Simple enough to visualize, but realistic enough for demos
Option 1: Dynamic Anchor (Auto-adjusts Based on Data)

CREATE VIEW v_sample AS
SELECT
  col1,
  col2,
  date_col
    + (CURRENT_DATE - 1 - MAX(date_col) OVER ()) AS shifted_date
FROM t_sample;
🔍 What’s happening?
  • MAX(date_col) OVER () finds the most recent date in your dataset
  • CURRENT_DATE - 1 anchors that date to yesterday
  • Every row is shifted forward by the same amount
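To see the common offset on its own, you can run the anchor expression directly (a quick sanity check against the sample table above):

```sql
-- The number of days every row is shifted:
-- yesterday minus the latest date in the data.
SELECT CURRENT_DATE - 1 - MAX(date_col) AS offset_days
FROM t_sample;
```

With the sample data, offset_days is the distance from 1979-12-31 to yesterday, and that same value is added to every row.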
Option 2: Fixed Anchor (Simpler + Faster)

If you already know your latest date:

CREATE VIEW v_sample AS
SELECT
  col1,
  col2,
  date_col
    + (CURRENT_DATE - 1 - DATE '1979-12-31') AS shifted_date
FROM t_sample;
⚖️ Dynamic vs Fixed Anchor

Approach                         Pros                                                  Cons
Dynamic (MAX(date_col) OVER ())  Automatically adjusts if the underlying data changes  Slightly more compute due to the window function
Fixed (hardcoded anchor date)    Simpler, deterministic, and a little easier to read   Requires knowing the anchor date in advance

🧪 Live Demo Output (YugabyteDB)

Here’s the actual view definition and output from YugabyteDB using a fixed anchor date of 1979-12-31:

yugabyte=# \d+ v_sample
                            View "public.v_sample"
    Column    | Type | Collation | Nullable | Default | Storage  | Description
--------------+------+-----------+----------+---------+----------+-------------
 col1         | text |           |          |         | extended |
 col2         | text |           |          |         | extended |
 shifted_date | date |           |          |         | plain    |
View definition:
 SELECT t_sample.col1,
    t_sample.col2,
    t_sample.date_col + (CURRENT_DATE - 1 - '1979-12-31'::date) AS shifted_date
   FROM t_sample;

Query the view:

SELECT * FROM v_sample;

Example output:

 col1 | col2 | shifted_date
------+------+--------------
 C    | Z    | 2026-04-15
 B    | Y    | 2026-04-12
 A    | X    | 2026-04-09

📊 Before vs After

🧾 Original Data

 col1 | col2 | date_col
------+------+------------
 A    | X    | 1979-12-25
 B    | Y    | 1979-12-28
 C    | Z    | 1979-12-31

🔄 Shifted View (Both Options)

 col1 | col2 | shifted_date
------+------+--------------
 A    | X    | 2026-04-09
 B    | Y    | 2026-04-12
 C    | Z    | 2026-04-15

The shifted view preserves the original spacing between rows… only the timeline moves forward.
✅ Key Validation
  • Both approaches produce identical results
  • The latest historical date maps to yesterday
  • All rows shift by the same offset
  • The relative spacing is preserved
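One way to confirm the spacing claim is to compare day gaps in the base table and the view side by side (a quick check sketched with the lag() window function):

```sql
-- Day gaps between consecutive rows, before and after shifting.
-- The two gap columns should be identical (3 and 3 for the sample data).
SELECT
  t.date_col - lag(t.date_col) OVER (ORDER BY t.date_col)             AS original_gap,
  v.shifted_date - lag(v.shifted_date) OVER (ORDER BY v.shifted_date) AS shifted_gap
FROM t_sample t
JOIN v_sample v ON v.col1 = t.col1;
```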

🧪 EXPLAIN: What Does This Cost?

EXPLAIN (ANALYZE, DIST) SELECT * FROM v_sample;

Output:

                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Seq Scan on t_sample  (cost=20.00..41.24 rows=3 width=8) (actual time=0.722..0.732 rows=3 loops=1)
   Storage Table Read Requests: 1
   Storage Table Read Execution Time: 0.546 ms
   Storage Table Read Ops: 1
   Storage Table Rows Scanned: 3
 Planning Time: 0.079 ms
 Execution Time: 0.817 ms
 Storage Read Requests: 1
 Storage Read Execution Time: 0.546 ms
 Storage Read Ops: 1
 Storage Rows Scanned: 3
 Catalog Read Requests: 0
 Catalog Read Ops: 0
 Catalog Write Requests: 0
 Storage Write Requests: 0
 Storage Flush Requests: 0
 Storage Execution Time: 0.546 ms
 Peak Memory Usage: 64 kB

🧠 Key Observation + Validation

The view does not force anything exotic… YugabyteDB simply performs a normal sequential scan of the base table and applies the date-shift expression during projection.

  • Only 1 storage read request
  • Only 3 rows scanned
  • Total execution time under 1 ms
  • No writes, no flushes, no extra catalog work
  • Peak memory usage of just 64 kB

So for this kind of pattern, the view is very lightweight. You get “fresh-looking” dates without physically rewriting the table.

⚠️ The Catch: Indexing & Performance on Large Tables
While the view is extremely lightweight for simple reads (as shown in the EXPLAIN), there’s a hidden trap when filtering on the shifted date in large datasets: because shifted_date is a computed expression, predicates on it are not SARGable, so they cannot be matched to an index on the underlying column.
❌ What Looks Innocent… But Isn’t

SELECT *
FROM v_sample
WHERE shifted_date = DATE '2026-04-15';

👉 This cannot use an index on date_col.

Instead, YugabyteDB must:

  • Scan the entire table
  • Compute the date-shift expression for every row
  • Then apply the filter
🚫 Why You Can’t Just Index It
  • The view uses CURRENT_DATE
  • CURRENT_DATE is not immutable (its value changes every day)
  • PostgreSQL/YugabyteDB require immutable expressions in indexes

👉 Result: You cannot create an index on shifted_date
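If you try anyway, the immutability check rejects the expression (a sketch against the sample table; the exact error text may vary by version):

```sql
-- Attempting to index the shifted expression on the base table
-- fails because CURRENT_DATE is not IMMUTABLE, e.g.:
--   ERROR:  functions in index expression must be marked IMMUTABLE
CREATE INDEX idx_shifted
  ON t_sample ((date_col + (CURRENT_DATE - 1 - DATE '1979-12-31')));
```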

🛠️ How to Work Around It

Option               When to Use
Filter on date_col   Best option… use original dates in WHERE, and the view for presentation
Permanent UPDATE     When performance matters more than preserving original data
Materialized view    When you want both: preserved data + indexed performance
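The first workaround can be sketched concretely: instead of adding the offset to every row, subtract it from the target date so the predicate lands on the indexed base column (this assumes the fixed anchor from Option 2; idx_date_col is a hypothetical index name):

```sql
-- A plain index on the base column:
CREATE INDEX idx_date_col ON t_sample (date_col);

-- Translate the "shifted" target date back to the original timeline,
-- so the WHERE clause is SARGable on date_col:
SELECT
  col1,
  col2,
  date_col + (CURRENT_DATE - 1 - DATE '1979-12-31') AS shifted_date
FROM t_sample
WHERE date_col = DATE '2026-04-15' - (CURRENT_DATE - 1 - DATE '1979-12-31');
```

For the materialized-view route, a CREATE MATERIALIZED VIEW over the same SELECT plus an ordinary index on its shifted_date column would work, at the cost of refreshing it (e.g., daily) so the dates stay current.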

🔁 Multiple Date Columns?

Apply the same offset to every date column so the relationships between them survive the shift. For example, with a hypothetical t_events table:

CREATE VIEW v_events AS
SELECT
  start_date + (CURRENT_DATE - 1 - DATE '1979-12-31') AS shifted_start,
  end_date   + (CURRENT_DATE - 1 - DATE '1979-12-31') AS shifted_end
FROM t_events;

✔ Relationships preserved
✔ Durations remain correct

🚀 Want to Make It Permanent?

If you’d rather rewrite the data once (note: this permanently changes date_col, so keep a backup if you still need the original dates):

UPDATE t_sample
SET date_col = date_col + (CURRENT_DATE - 1 - DATE '1979-12-31');

🏁 When (and Why) To Use This

  • Demo environments with stale datasets
  • Replay/testing scenarios
  • Synthetic workloads needing realistic time distribution
  • Avoiding risky bulk updates on production data
💡 Final Takeaway
Use a view when you want flexibility… not a rewrite. You can instantly modernize timestamps while preserving the original data distribution, making old datasets feel current without any risk or maintenance.
🙌 Acknowledgment
Special thanks to Dan Farrell, Senior Pre-Sales Engineer at YugabyteDB, for providing the detailed insights that helped shape this tip.

Have Fun!

Stopped by Ascension Coffee in Texas with my daughter and ordered their slow drip... turns out it’s brewed on a Kyoto-style cold drip coffee tower that literally takes its time. Worth the wait, but you definitely learn patience watching each drop fall. ☕