
emozilla/landmark-llama-7b

This model is an out-of-the-box ready version of the LLaMA-7B variant of Landmark Attention. The code is adapted from the original Landmark GitHub repository, and the weights are taken from here.

As a LLaMA variant, this model may be subject to the LLaMA license.

To use

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("emozilla/landmark-llama-7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

print(pipe("Somebody once told me the world is gonna roll me",
           max_new_tokens=256, temperature=0.8, do_sample=True))
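
The pipeline returns a list of dicts whose generated_text field holds the completion. If you prefer calling the model directly, an equivalent using the standard transformers generate API looks like this (a minimal sketch, no Landmark-specific arguments):

inputs = tokenizer("Somebody once told me the world is gonna roll me",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256,
                         temperature=0.8, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))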

You can configure the Landmark parameters by setting mem_freq, mem_top_k, mem_max_seq_len, and mem_max_cache_size on the model's config, as shown below.

from transformers import AutoConfig

config = AutoConfig.from_pretrained("emozilla/landmark-llama-7b", trust_remote_code=True)
config.mem_top_k = 6
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", config=config)
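
These knobs matter because Landmark Attention inserts a landmark token every mem_freq tokens and, at inference time, retrieves the mem_top_k most relevant blocks per query, which is what lets prompts exceed the base context window. The sketch below illustrates a long-prompt run; the input file is a placeholder, and the exact parameter semantics should be checked against the remote code:

# Hypothetical long-context run; assumes the landmark memory cache
# lets prompts exceed LLaMA's native context window.
long_document = open("book.txt").read()  # placeholder input file
prompt = long_document + "\n\nQuestion: What is the main topic?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))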