
Bird’s eye view#

Here, you’ll backtrace file transformations through notebooks, pipelines & app uploads in a research project based on Schmidt22.

It’s a mix of a guide & a demo use case.

Why should I care about data lineage?

Data lineage enables you to trace the origin of biological insights, verify experimental outcomes, meet regulatory standards, and increase the reproducibility & reliability of research.

While tracking data lineage is easier when it’s governed by deterministic pipelines, it becomes hard when it’s governed by interactive, human-driven analyses.

This is where LaminDB fills a gap in the tools space.

Setup#

We need an instance:

!lamin init --storage ./mydata
💡 creating schemas: core==0.45.5 
✅ saved: User(id='DzTjkKse', handle='testuser1', email='testuser1@lamin.ai', name='Test User1', updated_at=2023-08-17 14:15:31)
✅ saved: Storage(id='WsXwNHjM', root='/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata', type='local', updated_at=2023-08-17 14:15:31, created_by_id='DzTjkKse')
✅ loaded instance: testuser1/mydata
💡 did not register local instance on hub (if you want, call `lamin register`)

Import lamindb:

import lamindb as ln
✅ loaded instance: testuser1/mydata (lamindb 0.50.7)

We’ll need toy data:

assert ln.setup.settings.user.handle == "testuser1"
# create a toy Cell Ranger-style run directory plus fastq files in the default storage
bfx_run_output = ln.dev.datasets.dir_scrnaseq_cellranger(
    "perturbseq", basedir=ln.settings.storage, output_only=False
)
# register the raw fastq files under a pipeline transform, as if uploaded from the instrument
ln.track(ln.Transform(name="Chromium 10x upload", type="pipeline"))
ln.File(bfx_run_output.parent / "fastq/perturbseq_R1_001.fastq.gz").save()
ln.File(bfx_run_output.parent / "fastq/perturbseq_R2_001.fastq.gz").save()
# switch to a second user for the rest of the notebook
ln.setup.login("testuser2")
✅ saved: Transform(id='flpCitEbdeV1z8', name='Chromium 10x upload', stem_id='flpCitEbdeV1', version='0', type='pipeline', updated_at=2023-08-17 14:15:32, created_by_id='DzTjkKse')
✅ saved: Run(id='fuyKmlN7vKcdx6fVFa2M', run_at=2023-08-17 14:15:32, transform_id='flpCitEbdeV1z8', created_by_id='DzTjkKse')
💡 file in storage 'mydata' with key 'fastq/perturbseq_R1_001.fastq.gz'
💡 file in storage 'mydata' with key 'fastq/perturbseq_R2_001.fastq.gz'
✅ logged in with email testuser2@lamin.ai and id bKeW4T6E
❗ record with similar name exist! did you mean to load it?
                  id  __ratio__
name
Test User1  DzTjkKse       90.0
✅ saved: User(id='bKeW4T6E', handle='testuser2', email='testuser2@lamin.ai', name='Test User2', updated_at=2023-08-17 14:15:34)

Track a bioinformatics pipeline#

When working with a pipeline, we’ll register it before running it.

This only happens once and can be done by anyone on your team.

transform = ln.Transform(name="Cell Ranger", version="7.2.0", type="pipeline")
ln.User.filter().df()
             handle               email        name           updated_at
id
DzTjkKse  testuser1  testuser1@lamin.ai  Test User1  2023-08-17 14:15:31
bKeW4T6E  testuser2  testuser2@lamin.ai  Test User2  2023-08-17 14:15:34
transform
Transform(id='Z8KOETA6N5UusM', name='Cell Ranger', stem_id='Z8KOETA6N5Uu', version='7.2.0', type='pipeline', created_by_id='bKeW4T6E')
ln.track(transform)
✅ saved: Transform(id='Z8KOETA6N5UusM', name='Cell Ranger', stem_id='Z8KOETA6N5Uu', version='7.2.0', type='pipeline', updated_at=2023-08-17 14:15:34, created_by_id='bKeW4T6E')
✅ saved: Run(id='0zC7OlBjBfeCGrpMjb3w', run_at=2023-08-17 14:15:34, transform_id='Z8KOETA6N5UusM', created_by_id='bKeW4T6E')

Now, let’s stage a few files from an instrument upload:

files = ln.File.filter(key__startswith="fastq/perturbseq").all()
filepaths = [file.stage() for file in files]
💡 adding file d3GLVFuOjJry2qxPbGML as input for run 0zC7OlBjBfeCGrpMjb3w, adding parent transform flpCitEbdeV1z8
💡 adding file bvUrX5brP3YDvFa9pdhk as input for run 0zC7OlBjBfeCGrpMjb3w, adding parent transform flpCitEbdeV1z8

Assume we processed them and obtained 3 output files in a folder 'filtered_feature_bc_matrix':

output_files = ln.File.from_dir("./mydata/perturbseq/filtered_feature_bc_matrix/")
ln.save(output_files)
✅ created 3 files from directory using storage /home/runner/work/lamin-usecases/lamin-usecases/docs/mydata and key = perturbseq/filtered_feature_bc_matrix/
✅ storing file 'quahVY09cfhvjoPljUTQ' at 'perturbseq/filtered_feature_bc_matrix/matrix.mtx.gz'
✅ storing file 'nGghdA1OlZAaWN1JruWR' at 'perturbseq/filtered_feature_bc_matrix/barcodes.tsv.gz'
✅ storing file 'pAuqr2PEkFDSYxbS15WB' at 'perturbseq/filtered_feature_bc_matrix/features.tsv.gz'

Let’s look at the data lineage at this stage:

output_files[0].view_lineage()
(data lineage graph: the Cell Ranger output file traces back through the Cell Ranger run to the uploaded fastq files)

And let’s continue processing the Cell Ranger outputs in the background:

transform = ln.Transform(
    name="Preprocess Cell Ranger outputs", version="2.0", type="pipeline"
)
ln.track(transform)
[f.stage() for f in output_files]
filepath = ln.dev.datasets.schmidt22_perturbseq(basedir=ln.settings.storage)
file = ln.File(filepath, description="perturbseq counts")
file.save()
✅ saved: Transform(id='qhtHmM1xAOcg0b', name='Preprocess Cell Ranger outputs', stem_id='qhtHmM1xAOcg', version='2.0', type='pipeline', updated_at=2023-08-17 14:15:34, created_by_id='bKeW4T6E')
✅ saved: Run(id='XAEgfjgCCEDNLsfJM2bj', run_at=2023-08-17 14:15:34, transform_id='qhtHmM1xAOcg0b', created_by_id='bKeW4T6E')
💡 adding file nGghdA1OlZAaWN1JruWR as input for run XAEgfjgCCEDNLsfJM2bj, adding parent transform Z8KOETA6N5UusM
💡 adding file pAuqr2PEkFDSYxbS15WB as input for run XAEgfjgCCEDNLsfJM2bj, adding parent transform Z8KOETA6N5UusM
💡 adding file quahVY09cfhvjoPljUTQ as input for run XAEgfjgCCEDNLsfJM2bj, adding parent transform Z8KOETA6N5UusM
💡 file in storage 'mydata' with key 'schmidt22_perturbseq.h5ad'
💡 file is AnnDataLike, consider using File.from_anndata() to link var_names and obs.columns as features

Track app upload & analytics#

The hidden cell below simulates additional analytic steps including:

  • uploading phenotypic screen data

  • scRNA-seq analysis

  • analyses of the integrated datasets

# app upload
ln.setup.login("testuser1")
transform = ln.Transform(name="Upload GWS CRISPRa result", type="app")
ln.track(transform)

# upload and analyze the GWS data
filepath = ln.dev.datasets.schmidt22_crispra_gws_IFNG(ln.settings.storage)
file = ln.File(filepath, description="Raw data of schmidt22 crispra GWS")
file.save()
ln.setup.login("testuser2")
transform = ln.Transform(name="GWS CRIPSRa analysis", type="notebook")
ln.track(transform)

file_wgs = ln.File.filter(key="schmidt22-crispra-gws-IFNG.csv").one()
df = file_wgs.load().set_index("id")
hits_df = df[df["pos|fdr"] < 0.01].copy()
file_hits = ln.File(hits_df, description="hits from schmidt22 crispra GWS")
file_hits.save()
✅ logged in with email testuser1@lamin.ai and id DzTjkKse
✅ saved: Transform(id='8GC0s5UrRMwwz8', name='Upload GWS CRISPRa result', stem_id='8GC0s5UrRMww', version='0', type='app', updated_at=2023-08-17 14:15:37, created_by_id='DzTjkKse')
✅ saved: Run(id='Kggz0ibxQPDLQjte0Rw1', run_at=2023-08-17 14:15:37, transform_id='8GC0s5UrRMwwz8', created_by_id='DzTjkKse')
💡 file in storage 'mydata' with key 'schmidt22-crispra-gws-IFNG.csv'
✅ logged in with email testuser2@lamin.ai and id bKeW4T6E
✅ saved: Transform(id='HWWYy67og9YKz8', name='GWS CRIPSRa analysis', stem_id='HWWYy67og9YK', version='0', type='notebook', updated_at=2023-08-17 14:15:40, created_by_id='bKeW4T6E')
✅ saved: Run(id='8yI6SOkXsbgHDVeIay3O', run_at=2023-08-17 14:15:40, transform_id='HWWYy67og9YKz8', created_by_id='bKeW4T6E')
💡 adding file G5giHcBkFpiZhiPUQmhZ as input for run 8yI6SOkXsbgHDVeIay3O, adding parent transform 8GC0s5UrRMwwz8
💡 file will be copied to default storage upon `save()` with key `None` ('.lamindb/X6GsDqJn5qInAxfk6mzk.parquet')
💡 file is a dataframe, consider using File.from_df() to link column names as features
✅ storing file 'X6GsDqJn5qInAxfk6mzk' at '.lamindb/X6GsDqJn5qInAxfk6mzk.parquet'

Let’s see how the data lineage of this looks:

file = ln.File.filter(description="hits from schmidt22 crispra GWS").one()
file.view_lineage()
(data lineage graph: the hits file traces back through the GWS analysis notebook to the app-uploaded raw screen data)

In the background, somebody integrated and analyzed the outputs of the app upload and the Cell Ranger pipeline:

# Let us add analytics on top of the cell ranger pipeline and the phenotypic screening
transform = ln.Transform(
    name="Perform single cell analysis, integrating with CRISPRa screen",
    type="notebook",
)
ln.track(transform)

file_ps = ln.File.filter(description__icontains="perturbseq").one()
adata = file_ps.load()
screen_hits = file_hits.load()
import scanpy as sc

sc.tl.score_genes(adata, adata.var_names.intersection(screen_hits.index).tolist())
filesuffix = "_fig1_score-wgs-hits.png"
sc.pl.umap(adata, color="score", show=False, save=filesuffix)
filepath = f"figures/umap{filesuffix}"
file = ln.File(filepath, key=filepath)
file.save()
filesuffix = "fig2_score-wgs-hits-per-cluster.png"
sc.pl.matrixplot(
    adata, groupby="cluster_name", var_names=["score"], show=False, save=filesuffix
)
filepath = f"figures/matrixplot_{filesuffix}"
file = ln.File(filepath, key=filepath)
file.save()
✅ saved: Transform(id='W1Rlzo9sQzclz8', name='Perform single cell analysis, integrating with CRISPRa screen', stem_id='W1Rlzo9sQzcl', version='0', type='notebook', updated_at=2023-08-17 14:15:40, created_by_id='bKeW4T6E')
✅ saved: Run(id='DfUBUMZ1j3o7gQaT74X8', run_at=2023-08-17 14:15:40, transform_id='W1Rlzo9sQzclz8', created_by_id='bKeW4T6E')
💡 adding file bcY6VQnhaWDaooJuaGVb as input for run DfUBUMZ1j3o7gQaT74X8, adding parent transform qhtHmM1xAOcg0b
💡 adding file X6GsDqJn5qInAxfk6mzk as input for run DfUBUMZ1j3o7gQaT74X8, adding parent transform HWWYy67og9YKz8
WARNING: saving figure to file figures/umap_fig1_score-wgs-hits.png
💡 file will be copied to default storage upon `save()` with key 'figures/umap_fig1_score-wgs-hits.png'
✅ storing file 'xYyPgkoUrBovipr5g52X' at 'figures/umap_fig1_score-wgs-hits.png'
WARNING: saving figure to file figures/matrixplot_fig2_score-wgs-hits-per-cluster.png
💡 file will be copied to default storage upon `save()` with key 'figures/matrixplot_fig2_score-wgs-hits-per-cluster.png'
✅ storing file 'mo53EMhYHr0d6ngGvyq7' at 'figures/matrixplot_fig2_score-wgs-hits-per-cluster.png'

The outcome is a few figures stored as image files. Let’s query one of them and look at its data lineage below.
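For instance, the stored figures could be listed with the same query API used earlier (a minimal sketch; we query one of them in detail further below):

# list all files registered under the "figures/" key prefix
ln.File.filter(key__startswith="figures/").df()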

Track notebooks#

We’d now like to track the current Jupyter notebook to continue the work:

ln.track()
💡 notebook imports: lamindb==0.50.7 scanpy==1.9.3
✅ saved: Transform(id='1LCd8kco9lZUz8', name='Bird's eye view', short_name='birds-eye', stem_id='1LCd8kco9lZU', version='0', type=notebook, updated_at=2023-08-17 14:15:43, created_by_id='bKeW4T6E')
✅ saved: Run(id='Lt6hY37YBVlxOwVTaNyh', run_at=2023-08-17 14:15:43, transform_id='1LCd8kco9lZUz8', created_by_id='bKeW4T6E')

Visualize data lineage#

Let’s load one of the plots:

file = ln.File.filter(key__contains="figures/matrixplot").one()
file.stage()
💡 adding file mo53EMhYHr0d6ngGvyq7 as input for run Lt6hY37YBVlxOwVTaNyh, adding parent transform W1Rlzo9sQzclz8
PosixPath('/home/runner/work/lamin-usecases/lamin-usecases/docs/mydata/figures/matrixplot_fig2_score-wgs-hits-per-cluster.png')

We see that the image file is tracked as an input of the current notebook. In the lineage graph, the input is highlighted and the current notebook appears at the bottom:

file.view_lineage()
(data lineage graph: the matrixplot figure is highlighted as an input; the current notebook appears at the bottom)

Alternatively, we can look solely at the sequence of transforms and ignore the files:

transform = ln.Transform.search("Bird's eye view", return_queryset=True).first()
transform.parents.df()
name short_name stem_id version type reference updated_at created_by_id
id
W1Rlzo9sQzclz8 Perform single cell analysis, integrating with... None W1Rlzo9sQzcl 0 notebook None 2023-08-17 14:15:42 bKeW4T6E
transform.view_parents()
(transform lineage graph: the current notebook and its parent transforms)

Understand runs#

We tracked pipeline and notebook runs through run_context, which stores a Transform and a Run record as a global context.

File objects are the inputs and outputs of runs.
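For example, every File records the Run and Transform that created it, and the runs that consumed it are linked via input_of. A minimal sketch, assuming these related records can be queried like any other registry:

# the count matrix produced by the preprocessing pipeline above
file = ln.File.filter(description="perturbseq counts").one()
file.transform      # the Transform that created this file
file.run            # the Run that created this file
file.input_of.df()  # runs that used this file as an input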

What if I don’t want a global context?

Sometimes, we don’t want to create a global run context but manually pass a run when creating a file:

run = ln.Run(transform=transform)
ln.File(filepath, run=run)
When does a file appear as a run input?

When accessing a file via stage(), load() or backed(), two things happen:

  1. The current run gets added to file.input_of

  2. The transform of that file gets added as a parent of the current transform

You can switch off auto-tracking of run inputs by setting ln.settings.track_run_inputs = False (see also: Can I disable tracking run inputs?).
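For example, a minimal sketch of toggling the setting mentioned above:

# switch off automatic tracking of run inputs globally ...
ln.settings.track_run_inputs = False
# ... and switch it back on
ln.settings.track_run_inputs = True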

You can also track run inputs on a case-by-case basis by passing is_run_input=True, e.g.:

file.load(is_run_input=True)

Query by provenance#

We can query or search for the notebook that created the file:

transform = ln.Transform.search("GWS CRIPSRa analysis", return_queryset=True).first()

And then find all the files created by that notebook:

ln.File.filter(transform=transform).df()
storage_id key suffix accessor description version initial_version_id size hash hash_type transform_id run_id updated_at created_by_id
id
X6GsDqJn5qInAxfk6mzk WsXwNHjM None .parquet DataFrame hits from schmidt22 crispra GWS None None 18368 yw5f-kMLJhaNhdEF-lhxOQ md5 HWWYy67og9YKz8 8yI6SOkXsbgHDVeIay3O 2023-08-17 14:15:40 bKeW4T6E

Which transform ingested a given file?

file = ln.File.filter().first()
file.transform
Transform(id='flpCitEbdeV1z8', name='Chromium 10x upload', stem_id='flpCitEbdeV1', version='0', type='pipeline', updated_at=2023-08-17 14:15:32, created_by_id='DzTjkKse')

And which user?

file.created_by
User(id='DzTjkKse', handle='testuser1', email='testuser1@lamin.ai', name='Test User1', updated_at=2023-08-17 14:15:37)

Which transforms were created by a given user?

users = ln.User.lookup()
ln.Transform.filter(created_by=users.testuser2).df()
name short_name stem_id version type reference updated_at created_by_id
id
Z8KOETA6N5UusM Cell Ranger None Z8KOETA6N5Uu 7.2.0 pipeline None 2023-08-17 14:15:34 bKeW4T6E
qhtHmM1xAOcg0b Preprocess Cell Ranger outputs None qhtHmM1xAOcg 2.0 pipeline None 2023-08-17 14:15:36 bKeW4T6E
HWWYy67og9YKz8 GWS CRIPSRa analysis None HWWYy67og9YK 0 notebook None 2023-08-17 14:15:40 bKeW4T6E
W1Rlzo9sQzclz8 Perform single cell analysis, integrating with... None W1Rlzo9sQzcl 0 notebook None 2023-08-17 14:15:42 bKeW4T6E
1LCd8kco9lZUz8 Bird's eye view birds-eye 1LCd8kco9lZU 0 notebook None 2023-08-17 14:15:43 bKeW4T6E

Which notebooks were created by a given user?

ln.Transform.filter(created_by=users.testuser2, type="notebook").df()
name short_name stem_id version type reference updated_at created_by_id
id
HWWYy67og9YKz8 GWS CRIPSRa analysis None HWWYy67og9YK 0 notebook None 2023-08-17 14:15:40 bKeW4T6E
W1Rlzo9sQzclz8 Perform single cell analysis, integrating with... None W1Rlzo9sQzcl 0 notebook None 2023-08-17 14:15:42 bKeW4T6E
1LCd8kco9lZUz8 Bird's eye view birds-eye 1LCd8kco9lZU 0 notebook None 2023-08-17 14:15:43 bKeW4T6E

We can also view all recent additions to the entire database:

ln.view()
File

storage_id key suffix accessor description version initial_version_id size hash hash_type transform_id run_id updated_at created_by_id
id
mo53EMhYHr0d6ngGvyq7 WsXwNHjM figures/matrixplot_fig2_score-wgs-hits-per-clu... .png None None None None 28814 JYIPcat0YWYVCX3RVd3mww md5 W1Rlzo9sQzclz8 DfUBUMZ1j3o7gQaT74X8 2023-08-17 14:15:42 bKeW4T6E
xYyPgkoUrBovipr5g52X WsXwNHjM figures/umap_fig1_score-wgs-hits.png .png None None None None 118999 laQjVk4gh70YFzaUyzbUNg md5 W1Rlzo9sQzclz8 DfUBUMZ1j3o7gQaT74X8 2023-08-17 14:15:42 bKeW4T6E
X6GsDqJn5qInAxfk6mzk WsXwNHjM None .parquet DataFrame hits from schmidt22 crispra GWS None None 18368 yw5f-kMLJhaNhdEF-lhxOQ md5 HWWYy67og9YKz8 8yI6SOkXsbgHDVeIay3O 2023-08-17 14:15:40 bKeW4T6E
G5giHcBkFpiZhiPUQmhZ WsXwNHjM schmidt22-crispra-gws-IFNG.csv .csv None Raw data of schmidt22 crispra GWS None None 1729685 cUSH0oQ2w-WccO8_ViKRAQ md5 8GC0s5UrRMwwz8 Kggz0ibxQPDLQjte0Rw1 2023-08-17 14:15:38 DzTjkKse
bcY6VQnhaWDaooJuaGVb WsXwNHjM schmidt22_perturbseq.h5ad .h5ad AnnData perturbseq counts None None 20659936 la7EvqEUMDlug9-rpw-udA md5 qhtHmM1xAOcg0b XAEgfjgCCEDNLsfJM2bj 2023-08-17 14:15:36 bKeW4T6E
pAuqr2PEkFDSYxbS15WB WsXwNHjM perturbseq/filtered_feature_bc_matrix/features... .tsv.gz None None None None 6 sEVrI6rE96RQ-BoOcwFFNw md5 Z8KOETA6N5UusM 0zC7OlBjBfeCGrpMjb3w 2023-08-17 14:15:34 bKeW4T6E
nGghdA1OlZAaWN1JruWR WsXwNHjM perturbseq/filtered_feature_bc_matrix/barcodes... .tsv.gz None None None None 6 JqXKNc0vDBAFYYljBr9v_w md5 Z8KOETA6N5UusM 0zC7OlBjBfeCGrpMjb3w 2023-08-17 14:15:34 bKeW4T6E
quahVY09cfhvjoPljUTQ WsXwNHjM perturbseq/filtered_feature_bc_matrix/matrix.m... .mtx.gz None None None None 6 OzFyK080iOXuSo2arSD-qw md5 Z8KOETA6N5UusM 0zC7OlBjBfeCGrpMjb3w 2023-08-17 14:15:34 bKeW4T6E
bvUrX5brP3YDvFa9pdhk WsXwNHjM fastq/perturbseq_R2_001.fastq.gz .fastq.gz None None None None 6 4ptHFNYZgPppTEVJfPCYJw md5 flpCitEbdeV1z8 fuyKmlN7vKcdx6fVFa2M 2023-08-17 14:15:32 DzTjkKse
d3GLVFuOjJry2qxPbGML WsXwNHjM fastq/perturbseq_R1_001.fastq.gz .fastq.gz None None None None 6 dDAq_6_ymUnpFMwgLuEucQ md5 flpCitEbdeV1z8 fuyKmlN7vKcdx6fVFa2M 2023-08-17 14:15:32 DzTjkKse
Run

transform_id run_at created_by_id reference reference_type
id
fuyKmlN7vKcdx6fVFa2M flpCitEbdeV1z8 2023-08-17 14:15:32 DzTjkKse None None
0zC7OlBjBfeCGrpMjb3w Z8KOETA6N5UusM 2023-08-17 14:15:34 bKeW4T6E None None
XAEgfjgCCEDNLsfJM2bj qhtHmM1xAOcg0b 2023-08-17 14:15:34 bKeW4T6E None None
Kggz0ibxQPDLQjte0Rw1 8GC0s5UrRMwwz8 2023-08-17 14:15:37 DzTjkKse None None
8yI6SOkXsbgHDVeIay3O HWWYy67og9YKz8 2023-08-17 14:15:40 bKeW4T6E None None
DfUBUMZ1j3o7gQaT74X8 W1Rlzo9sQzclz8 2023-08-17 14:15:40 bKeW4T6E None None
Lt6hY37YBVlxOwVTaNyh 1LCd8kco9lZUz8 2023-08-17 14:15:43 bKeW4T6E None None
Storage

root type region updated_at created_by_id
id
WsXwNHjM /home/runner/work/lamin-usecases/lamin-usecase... local None 2023-08-17 14:15:31 DzTjkKse
Transform

name short_name stem_id version type reference updated_at created_by_id
id
1LCd8kco9lZUz8 Bird's eye view birds-eye 1LCd8kco9lZU 0 notebook None 2023-08-17 14:15:43 bKeW4T6E
W1Rlzo9sQzclz8 Perform single cell analysis, integrating with... None W1Rlzo9sQzcl 0 notebook None 2023-08-17 14:15:42 bKeW4T6E
HWWYy67og9YKz8 GWS CRIPSRa analysis None HWWYy67og9YK 0 notebook None 2023-08-17 14:15:40 bKeW4T6E
8GC0s5UrRMwwz8 Upload GWS CRISPRa result None 8GC0s5UrRMww 0 app None 2023-08-17 14:15:38 DzTjkKse
qhtHmM1xAOcg0b Preprocess Cell Ranger outputs None qhtHmM1xAOcg 2.0 pipeline None 2023-08-17 14:15:36 bKeW4T6E
Z8KOETA6N5UusM Cell Ranger None Z8KOETA6N5Uu 7.2.0 pipeline None 2023-08-17 14:15:34 bKeW4T6E
flpCitEbdeV1z8 Chromium 10x upload None flpCitEbdeV1 0 pipeline None 2023-08-17 14:15:32 DzTjkKse
User

handle email name updated_at
id
bKeW4T6E testuser2 testuser2@lamin.ai Test User2 2023-08-17 14:15:40
DzTjkKse testuser1 testuser1@lamin.ai Test User1 2023-08-17 14:15:37
!lamin login testuser1
!lamin delete --force mydata
!rm -r ./mydata
✅ logged in with email testuser1@lamin.ai and id DzTjkKse
💡 deleting instance testuser1/mydata
✅     deleted instance settings file: /home/runner/.lamin/instance--testuser1--mydata.env
✅     instance cache deleted
✅     deleted '.lndb' sqlite file
❗     consider manually deleting your stored data: /home/runner/work/lamin-usecases/lamin-usecases/docs/mydata