extended_coda

Authors: Pang Wei Koh, Emma Pierson, Anshul Kundaje

License: MIT

Contributed by: Johnny Israeli

Cite as: https://doi.org/10.1093/bioinformatics/btx243

Type: keras

Postprocessing: None

Trained on: GM12878 (Kasowski et al., 2013), holding out a random 20% subset of the training data for validation.

Source files

Single bp resolution ChIP-seq denoising - https://github.com/kundajelab/coda

Command line

Create a new conda environment with all dependencies installed
kipoi env create extended_coda
source activate kipoi-extended_coda
Test the model
kipoi test extended_coda --source=kipoi
Make a prediction
kipoi get-example extended_coda -o example
kipoi predict extended_coda \
  --dataloader_args='{"batch_size": 4, "input_data_sources": "example/input_data_sources", "intervals_file": "example/intervals_file"}' \
  -o '/tmp/extended_coda.example_pred.tsv'
# check the results
head '/tmp/extended_coda.example_pred.tsv'
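The prediction file can also be inspected from Python. A minimal pandas sketch; the exact column layout is an assumption based on Kipoi's default TSV writer:

import pandas as pd

# Load the tab-separated predictions written by `kipoi predict`.
pred_df = pd.read_csv("/tmp/extended_coda.example_pred.tsv", sep="\t")

# Column names are an assumption: Kipoi's TSV writer typically prefixes
# metadata columns with "metadata/" and prediction columns with "preds/".
print(pred_df.columns.tolist())
print(pred_df.head())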
Python

Create a new conda environment with all dependencies installed
kipoi env create extended_coda
source activate kipoi-extended_coda
Get the model
import kipoi
model = kipoi.get_model('extended_coda')
Make a prediction for example files
pred = model.pipeline.predict_example(batch_size=4)
Use dataloader and model separately
# Download example dataloader kwargs
dl_kwargs = model.default_dataloader.download_example('example')
# Get the dataloader and instantiate it
dl = model.default_dataloader(**dl_kwargs)
# get a batch iterator
batch_iterator = dl.batch_iter(batch_size=4)
for batch in batch_iterator:
    # predict for a batch
    batch_pred = model.predict_on_batch(batch['inputs'])
Make predictions for custom files directly
pred = model.pipeline.predict(dl_kwargs, batch_size=4)
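Putting these pieces together, a self-contained sketch that collects the per-batch predictions into one array. It assumes predict_on_batch returns a single numpy array per batch and that all intervals in the example intervals_file share the same width:

import kipoi
import numpy as np

model = kipoi.get_model('extended_coda')
dl_kwargs = model.default_dataloader.download_example('example')
dl = model.default_dataloader(**dl_kwargs)

all_preds = []
for batch in dl.batch_iter(batch_size=4):
    # Predictions follow the Targets schema below:
    # arrays of shape (batch, interval_length, 1).
    all_preds.append(model.predict_on_batch(batch['inputs']))

# Concatenation assumes fixed-width intervals, so every batch shares
# the same track length (an assumption, not enforced by the dataloader).
preds = np.concatenate(all_preds, axis=0)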
R

Get the model
library(reticulate)
kipoi <- import('kipoi')
model <- kipoi$get_model('extended_coda')
Make a prediction for example files
predictions <- model$pipeline$predict_example()
Use dataloader and model separately
# Download example dataloader kwargs
dl_kwargs <- model$default_dataloader$download_example('example')
# Get the dataloader
dl <- do.call(model$default_dataloader, dl_kwargs)
# get a batch iterator
it <- dl$batch_iter(batch_size=4)
# predict for a batch
batch <- iter_next(it)
model$predict_on_batch(batch$inputs)
Make predictions for custom files directly
pred <- model$pipeline$predict(dl_kwargs, batch_size=4)
Docker

Get the docker image
docker pull kipoi/kipoi-docker:extended_coda-slim
Get the full sized docker image
docker pull kipoi/kipoi-docker:extended_coda
Get the activated conda environment inside the container
docker run -it kipoi/kipoi-docker:extended_coda-slim
Test the model
docker run kipoi/kipoi-docker:extended_coda-slim kipoi test extended_coda --source=kipoi
Make prediction for custom files directly
# Create an example directory containing the data
mkdir -p $PWD/kipoi-example 
# You can replace $PWD/kipoi-example with a different absolute path containing the data 
docker run -v $PWD/kipoi-example:/app/ kipoi/kipoi-docker:extended_coda-slim \
kipoi get-example extended_coda -o /app/example 
docker run -v $PWD/kipoi-example:/app/ kipoi/kipoi-docker:extended_coda-slim \
kipoi predict extended_coda \
--dataloader_args='{"batch_size": 4, "input_data_sources": "/app/example/input_data_sources", "intervals_file": "/app/example/intervals_file"}' \
-o '/app/extended_coda.example_pred.tsv' 
# check the results
head $PWD/kipoi-example/extended_coda.example_pred.tsv
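The same Docker workflow can be scripted from Python. A minimal sketch with subprocess; it assumes docker is on the PATH and reuses the $PWD/kipoi-example mount from above:

import subprocess
from pathlib import Path

workdir = Path.cwd() / "kipoi-example"

# Run the prediction inside the slim container, mounting the data directory.
subprocess.run(
    [
        "docker", "run", "-v", f"{workdir}:/app/",
        "kipoi/kipoi-docker:extended_coda-slim",
        "kipoi", "predict", "extended_coda",
        "--dataloader_args="
        '{"batch_size": 4, '
        '"input_data_sources": "/app/example/input_data_sources", '
        '"intervals_file": "/app/example/intervals_file"}',
        "-o", "/app/extended_coda.example_pred.tsv",
    ],
    check=True,
)

# The output lands in the mounted directory on the host.
print((workdir / "extended_coda.example_pred.tsv").read_text()[:500])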
    
Apptainer

Install apptainer
https://apptainer.org/docs/user/main/quick_start.html#quick-installation-steps
Make prediction for custom files directly
kipoi get-example extended_coda -o example
kipoi predict extended_coda \
--dataloader_args='{"batch_size": 4, "input_data_sources": "example/input_data_sources", "intervals_file": "example/intervals_file"}' \
-o 'extended_coda.example_pred.tsv' \
--singularity 
# check the results
head extended_coda.example_pred.tsv

Schema

Inputs

Dictionary of numpy arrays

Name: H3K27AC_subsampled

    Shape: (None, 1) 

    Doc: Track representing ...


Targets

Dictionary of numpy arrays

Name: H3K27ac

    Shape: (None, 1) 

    Doc: Predicted track...

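The schema translates directly into numpy arrays. A sketch of a dummy batch that conforms to it; the interval width of 1000 bp is an arbitrary illustration, not a model requirement:

import numpy as np

batch_size, interval_width = 4, 1000  # interval_width chosen for illustration

# Inputs: a dict with one array per input name in the schema.
# Shape (None, 1) means variable length per example with one channel,
# so a batch has shape (batch_size, interval_width, 1).
inputs = {"H3K27AC_subsampled": np.random.rand(batch_size, interval_width, 1)}

# The model maps this to output following the Targets schema
# ('H3K27ac', same shape). Assuming `model` from the Python snippet above:
# pred = model.predict_on_batch(inputs)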

Dataloader

Defined as: dataloader.py::BatchGenerator

Doc: DataLoader for single bp resolution ChIP-seq denoising

Authors: Johnny Israeli

Type: BatchGenerator

License: MIT


Arguments

batch_size (optional): batch size. Default: 128

input_data_sources: either `{data_name: <path to genomelake directory>}` or the path to a zipped genomelake directory

intervals_file: tsv file with columns `chrom start end`

target_data_sources (optional): `{data_name: <path to genomelake directory>}`; see the example below

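Concretely, the dataloader arguments form a plain Python dict. A hypothetical example; all paths are placeholders, and the genomelake directories must exist beforehand:

dl_kwargs = {
    "batch_size": 128,                  # optional; defaults to 128
    "intervals_file": "intervals.tsv",  # tsv with columns: chrom start end
    "input_data_sources": {
        # placeholder path to a genomelake directory
        "H3K27AC_subsampled": "data/h3k27ac_subsampled_genomelake/"
    },
    # only needed when targets should be loaded as well:
    "target_data_sources": {
        "H3K27ac": "data/h3k27ac_genomelake/"
    },
}

# An intervals_file is plain tab-separated `chrom start end` lines:
with open("intervals.tsv", "w") as f:
    f.write("chr1\t1000000\t1001000\n")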

Model dependencies
conda:
  • python=3.7
  • pip=20.3.3
  • pysam=0.15.3

pip:
  • tensorflow==1.13.1
  • keras==1.2.2
  • protobuf==3.20

Dataloader dependencies
conda:
  • bioconda::genomelake==0.1.4
  • cython

pip: