Commit ed49f037 authored by Armaan Bhullar, committed by Jeff Wu

Add documentation for help flags (#81)

add description for flags
parent c314ddab
@@ -75,6 +75,11 @@ There are various flags for controlling the samples:
python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
```
To check flag descriptions, use:
```
python3 src/generate_unconditional_samples.py -- --help
```
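The extra `--` before `--help` is needed because these scripts appear to hand their sampling function to python-fire; Fire treats flags after a bare `--` as its own, so this form prints the function's documented arguments instead of passing `--help` through as a value. A minimal sketch of that wiring, with the entry point assumed rather than copied from the repository:
```
import fire

def sample_model(model_name='117M', seed=None, nsamples=0, batch_size=1,
                 length=None, temperature=1, top_k=0):
    """This docstring is what `-- --help` prints."""
    ...

if __name__ == '__main__':
    # Fire exposes keyword arguments as CLI flags such as --top_k and --temperature.
    fire.Fire(sample_model)
```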
### Conditional sample generation
To give the model custom prompts, you can use:
@@ -82,6 +87,11 @@ To give the model custom prompts, you can use:
python3 src/interactive_conditional_samples.py --top_k 40
```
To check flag descriptions, use:
```
python3 src/interactive_conditional_samples.py -- --help
```
## GPT-2 samples
| WARNING: Samples are unfiltered and may contain offensive content. |
......
@@ -17,6 +17,26 @@ def sample_model(
temperature=1,
top_k=0,
):
"""
Run the sample_model
:model_name=117M : String, which model to use
:seed=None : Integer seed for random number generators, fix seed to
reproduce results
:nsamples=0 : Number of samples to return, if 0, continues to
generate samples indefinitely.
:batch_size=1 : Number of samples per batch; the model runs nsamples//batch_size
times, and each batch run is independent of the previous run.
:length=None : Number of tokens in generated text, if None (default), is
determined by model hyperparameters
:temperature=1 : Float value controlling randomness in the Boltzmann
distribution. Lower temperature results in less random completions. As the
temperature approaches zero, the model will become deterministic and
repetitive. Higher temperature results in more random completions.
:top_k=0 : Integer value controlling diversity. 1 means only 1 word is
considered for each step (token), resulting in deterministic completions,
while 40 means 40 words are considered at each step. 0 (default) is a
special setting meaning no restrictions. 40 is generally a good value.
"""
enc = encoder.get_encoder(model_name)
hparams = model.default_hparams()
with open(os.path.join('models', model_name, 'hparams.json')) as f:
......
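The temperature and top_k descriptions above correspond to a simple transformation of the next-token logits. The sampling code in the repository lives elsewhere; the following numpy sketch is illustrative only, not the code behind these scripts:
```
import numpy as np

def adjust_logits(logits, temperature=1.0, top_k=0):
    # Illustrative only: how temperature and top_k reshape next-token logits.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if top_k > 0:
        # Keep the k highest-scoring tokens and mask out everything else.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    # Softmax over the adjusted logits gives the distribution actually sampled from.
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# top_k=1 collapses the distribution onto a single token (deterministic);
# top_k=0 leaves every token available.
print(adjust_logits([3.0, 1.0, 0.2], temperature=0.7, top_k=2))
```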
@@ -12,11 +12,30 @@ def interact_model(
model_name='117M',
seed=None,
nsamples=1,
-batch_size=None,
+batch_size=1,
length=None,
temperature=1,
top_k=0,
):
"""
Interactively run the model
:model_name=117M : String, which model to use
:seed=None : Integer seed for random number generators, fix seed to reproduce
results
:nsamples=1 : Number of samples to return
:batch_size=1 : Number of samples per batch; the model runs nsamples//batch_size
times, and each batch run is independent of the previous run.
:length=None : Number of tokens in generated text, if None (default), is
determined by model hyperparameters
:temperature=1 : Float value controlling randomness in the Boltzmann
distribution. Lower temperature results in less random completions. As the
temperature approaches zero, the model will become deterministic and
repetitive. Higher temperature results in more random completions.
:top_k=0 : Integer value controlling diversity. 1 means only 1 word is
considered for each step (token), resulting in deterministic completions,
while 40 means 40 words are considered at each step. 0 (default) is a
special setting meaning no restrictions. 40 is generally a good value.
"""
if batch_size is None:
batch_size = 1
assert nsamples % batch_size == 0
......
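Because of the assert above, nsamples must be a multiple of batch_size. With Fire mapping keyword arguments to flags, a run that draws four samples in two independent batches might look like the following (illustrative flag values, not an example taken from the repository):
```
python3 src/interactive_conditional_samples.py --nsamples 4 --batch_size 2 --top_k 40
```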