SeqVec/embedding
Authors: Michael Heinzinger, Ahmed Elnaggar
License: MIT
Contributed by: Michael Heinzinger, Ahmed Elnaggar
Cite as: https://doi.org/10.1101/614313
Embeddings from Language Models (ELMo) trained on protein sequences. Converts a protein sequence into a vector representation.
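The embeddings are per-residue; following the SeqVec paper, a fixed-length per-protein vector can be derived by summing the three ELMo layers and averaging over the residues. A minimal numpy sketch, assuming the embedding array has shape (3, L, 1024); the exact shape returned by this Kipoi model should be checked against its output schema:

import numpy as np

def per_residue(embedding):
    # Assumes shape (3, L, 1024): sum the three ELMo layers -> (L, 1024)
    return embedding.sum(axis=0)

def per_protein(embedding):
    # Average the per-residue representation over the sequence -> (1024,)
    return per_residue(embedding).mean(axis=0)

# Example with a dummy embedding for a protein of length 42
dummy = np.random.rand(3, 42, 1024)
print(per_protein(dummy).shape)  # (1024,)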
kipoi env create SeqVec/embedding
source activate kipoi-SeqVec__embedding
kipoi test SeqVec/embedding --source=kipoi
kipoi get-example SeqVec/embedding -o example
kipoi predict SeqVec/embedding \
--dataloader_args='{"fasta_file": "example/fasta_file"}' \
-o '/tmp/SeqVec_embedding.example_pred.tsv'
# check the results
head '/tmp/SeqVec_embedding.example_pred.tsv'
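The predictions are written as a tab-separated file, so they can be inspected beyond head, for example with pandas. A small sketch, assuming the output path used above; the column layout (metadata columns followed by prediction columns) depends on the model's output schema:

import pandas as pd

# Load the TSV written by `kipoi predict`; columns depend on the output schema.
pred = pd.read_csv('/tmp/SeqVec_embedding.example_pred.tsv', sep='\t')
print(pred.shape)
print(pred.head())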
kipoi env create SeqVec/embedding
source activate kipoi-SeqVec__embedding
import kipoi
model = kipoi.get_model('SeqVec/embedding')
# Make a prediction for the bundled example files
pred = model.pipeline.predict_example(batch_size=4)
# Alternatively, use the dataloader and model separately:
# Download example dataloader kwargs
dl_kwargs = model.default_dataloader.download_example('example')
# Get the dataloader and instantiate it
dl = model.default_dataloader(**dl_kwargs)
# get a batch iterator
batch_iterator = dl.batch_iter(batch_size=4)
for batch in batch_iterator:
    # predict for a batch
    batch_pred = model.predict_on_batch(batch['inputs'])
# Or make predictions for custom files directly via the pipeline
pred = model.pipeline.predict(dl_kwargs, batch_size=4)
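Before reshaping or aggregating the returned arrays, it can help to inspect the declared input and output specifications on the model object. A short sketch, assuming the standard kipoi Python API where the loaded model exposes a schema with inputs and targets:

import kipoi

model = kipoi.get_model('SeqVec/embedding')
# Array specifications (names, shapes, docs) declared in the model description
print(model.schema.inputs)
print(model.schema.targets)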
library(reticulate)
kipoi <- import('kipoi')
model <- kipoi$get_model('SeqVec/embedding')
# Make a prediction for the bundled example files
predictions <- model$pipeline$predict_example()
# Alternatively, use the dataloader and model separately:
# Download example dataloader kwargs
dl_kwargs <- model$default_dataloader$download_example('example')
# Get the dataloader and instantiate it with the example kwargs
dl <- do.call(model$default_dataloader, dl_kwargs)
# get a batch iterator
it <- dl$batch_iter(batch_size=4L)
# predict for a batch
batch <- iter_next(it)
batch_pred <- model$predict_on_batch(batch$inputs)
# Or make predictions for custom files directly via the pipeline
pred <- model$pipeline$predict(dl_kwargs, batch_size=4L)
docker pull kipoi/kipoi-docker:seqvec-slim
docker pull kipoi/kipoi-docker:seqvec
docker run -it kipoi/kipoi-docker:seqvec-slim
docker run kipoi/kipoi-docker:seqvec-slim kipoi test SeqVec/embedding --source=kipoi
# Create an example directory containing the data
mkdir -p $PWD/kipoi-example
# You can replace $PWD/kipoi-example with a different absolute path containing the data
docker run -v $PWD/kipoi-example:/app/ kipoi/kipoi-docker:seqvec-slim \
kipoi get-example SeqVec/embedding -o /app/example
docker run -v $PWD/kipoi-example:/app/ kipoi/kipoi-docker:seqvec-slim \
kipoi predict SeqVec/embedding \
--dataloader_args='{"fasta_file": "/app/example/fasta_file"}' \
-o '/app/SeqVec_embedding.example_pred.tsv'
# check the results
head $PWD/kipoi-example/SeqVec_embedding.example_pred.tsv
Install Apptainer following https://apptainer.org/docs/user/main/quick_start.html#quick-installation-steps, then run the example through kipoi's --singularity flag:
kipoi get-example SeqVec/embedding -o example
kipoi predict SeqVec/embedding \
--dataloader_args='{"fasta_file": "example/fasta_file"}' \
-o 'SeqVec_embedding.example_pred.tsv' \
--singularity
# check the results
head SeqVec_embedding.example_pred.tsv
Inputs
Single numpy array
Name: None
Doc: Path to file containing protein sequences in FASTA format. Sequences can have different lengths.
Defined as: .
Doc: Data-loader returning protein sequences in the form required by ELMo
Type: Dataset
License: MIT
Arguments
fasta_file: FASTA file containing one or more protein sequences
split_char (optional): character used to split the FASTA header (used together with id_field to extract the protein identifier)
id_field (optional): index of the protein identifier in the FASTA header after splitting on split_char (see the sketch below)
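To illustrate how split_char and id_field interact, consider a typical UniProt-style FASTA header. A hedged sketch of the extraction logic; the dataloader's actual defaults and parsing may differ:

# Hypothetical illustration of the split_char / id_field mechanics.
header = ">sp|P12345|EXAMPLE_HUMAN Example protein"
split_char, id_field = "|", 1

identifier = header.lstrip(">").split(split_char)[id_field]
print(identifier)  # -> "P12345"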
- python=3.6
- conda-forge::allennlp=0.7.2
- pip=9.0.3
- scikit-learn==0.22.2.post1
- overrides=3.1.0
- python=3.6
- conda-forge::allennlp=0.7.2