SeqVec/embedding

Authors: Michael Heinzinger, Ahmed Elnaggar, Yu Wang, Christian Dallago, Dmitrii Nechaev, Florian Matthes, Burkhard Rost

License: MIT

Contributed by: Michael Heinzinger, Ahmed Elnaggar

Cite as: https://doi.org/10.1101/614313

Type: None

Postprocessing: None

Trained on: UniRef50

Source files

Embeddings from Language Models (ELMo) trained on protein sequences. Converts a protein sequence into a vector representation.

Create a new conda environment with all dependencies installed
kipoi env create SeqVec/embedding
source activate kipoi-SeqVec__embedding
Install model dependencies into current environment
kipoi env install SeqVec/embedding
Test the model
kipoi test SeqVec/embedding --source=kipoi
Make a prediction
kipoi get-example SeqVec/embedding -o example
kipoi predict SeqVec/embedding \
  --dataloader_args='{"fasta_file": "example/fasta_file"}' \
  -o '/tmp/SeqVec|embedding.example_pred.tsv'
# check the results
head '/tmp/SeqVec|embedding.example_pred.tsv'
Get the model
import kipoi
model = kipoi.get_model('SeqVec/embedding')
Make a prediction for example files
pred = model.pipeline.predict_example()
Use dataloader and model separately
# Download example dataloader kwargs
dl_kwargs = model.default_dataloader.download_example('example')
# Get the dataloader and instantiate it
dl = model.default_dataloader(**dl_kwargs)
# get a batch iterator
it = dl.batch_iter(batch_size=4)
# predict for a batch
batch = next(it)
model.predict_on_batch(batch['inputs'])
Make predictions for custom files directly
pred = model.pipeline.predict(dl_kwargs, batch_size=4)
Get the model
library(reticulate)
kipoi <- import('kipoi')
model <- kipoi$get_model('SeqVec/embedding')
Make a prediction for example files
predictions <- model$pipeline$predict_example()
Use dataloader and model separately
# Download example dataloader kwargs
dl_kwargs <- model$default_dataloader$download_example('example')
# Get the dataloader
dl <- do.call(model$default_dataloader, dl_kwargs)
# get a batch iterator
it <- dl$batch_iter(batch_size=4)
# predict for a batch
batch <- iter_next(it)
model$predict_on_batch(batch$inputs)
Make predictions for custom files directly
pred <- model$pipeline$predict(dl_kwargs, batch_size=4)
Get the docker image
docker pull haimasree/kipoi-docker:seqvec
Get the activated conda environment inside the container
docker run -it haimasree/kipoi-docker:seqvec
Test the model
docker run haimasree/kipoi-docker:seqvec kipoi test SeqVec/embedding --source=kipoi
Make prediction for custom files directly
# Create an example directory containing the data
mkdir -p $PWD/kipoi-example 
# You can replace $PWD/kipoi-example with a different absolute path containing the data 
docker run -v $PWD/kipoi-example:/app/ haimasree/kipoi-docker:seqvec \
kipoi get-example SeqVec/embedding -o /app/example 
docker run -v $PWD/kipoi-example:/app/ haimasree/kipoi-docker:seqvec \
kipoi predict SeqVec/embedding \
--dataloader_args='{"fasta_file": "/app/example/fasta_file"}' \
-o '/app/SeqVec_embedding.example_pred.tsv' 
# check the results
head $PWD/kipoi-example/SeqVec_embedding.example_pred.tsv

Schema

Inputs

Single numpy array

Name: None

    Shape: (1,) 

    Doc: Path to a file containing protein sequences in FASTA format. Sequences can have different lengths.
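
As an illustration of the expected input, a minimal FASTA file with two sequences of different lengths could be created like this (file name and sequences are invented for the example):

```python
from pathlib import Path

# Hypothetical example: write a tiny FASTA file with two sequences
# of different lengths, suitable as the `fasta_file` dataloader argument.
fasta = Path("example_sequences.fasta")
fasta.write_text(
    ">protein_1 example entry\n"
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ\n"
    ">protein_2 example entry\n"
    "MSEQNNTEMTFQIQRIYTKDISFEAPNAPHVFQKDW\n"
)
print(fasta.read_text().count(">"))  # number of sequences: 2
```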


Targets

List of numpy arrays

Name: seq

    Shape: (1024, None) 

    Doc: Embedding for a protein sequence. Each amino acid in a protein of length L is represented by a vector of length 1024.
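
Since the embedding has shape (1024, L), downstream tasks often need a fixed-size per-protein vector; a common choice is averaging over the length dimension. A minimal NumPy sketch, with a random array standing in for a real SeqVec output:

```python
import numpy as np

# Dummy embedding standing in for a real SeqVec output:
# 1024 features x L residues (here L = 7).
per_residue = np.random.rand(1024, 7)

# Mean-pool over the length axis to get one 1024-d vector per protein.
per_protein = per_residue.mean(axis=1)
print(per_protein.shape)  # (1024,)
```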


Dataloader

Defined as: .

Doc: Data-loader returning protein sequence as required by ELMo

Authors: Michael Heinzinger

Type: Dataset

License: MIT


Arguments

fasta_file : fasta file containing multiple protein sequence(s)

split_char (optional): character used for splitting the header of fasta entries (together with id_field, used to extract the protein identifier)

id_field (optional): index of the field containing the protein identifier after splitting the fasta header on split_char
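
To illustrate how split_char and id_field interact, here is a hypothetical re-implementation of the identifier extraction (a sketch, not the dataloader's actual code):

```python
def extract_id(header: str, split_char: str = " ", id_field: int = 0) -> str:
    """Split a FASTA header on split_char and return the id_field-th field."""
    return header.lstrip(">").split(split_char)[id_field]

# UniProt-style header: split on '|' and take the accession (field 1).
print(extract_id(">sp|P12345|EXAMPLE_HUMAN", split_char="|", id_field=1))  # P12345

# Default behavior: split on whitespace and take the first field.
print(extract_id(">prot1 some description"))  # prot1
```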


Model dependencies
conda:
  • python=3.6
  • conda-forge::allennlp=0.7.2
  • pip=9.0.3

pip:
  • scikit-learn==0.22.2.post1

Dataloader dependencies
conda:
  • python=3.6
  • conda-forge::allennlp

pip: