Commit ed49f037
Authored 6 years ago by Armaan Bhullar; committed 6 years ago by Jeff Wu
Add documentation for help flags (#81)
add description for flags
Parent: c314ddab
Showing 3 changed files with 50 additions and 1 deletion:
- README.md (+10, −0)
- src/generate_unconditional_samples.py (+20, −0)
- src/interactive_conditional_samples.py (+20, −1)
README.md (+10, −0)
...
@@ -75,6 +75,11 @@ There are various flags for controlling the samples:
python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
```
To check flag descriptions, use:
```
python3 src/generate_unconditional_samples.py -- --help
```
### Conditional sample generation
To give the model custom prompts, you can use:
...
@@ -82,6 +87,11 @@ To give the model custom prompts, you can use:
python3 src/interactive_conditional_samples.py --top_k 40
```
To check flag descriptions, use:
```
python3 src/interactive_conditional_samples.py -- --help
```
## GPT-2 samples
| WARNING: Samples are unfiltered and may contain offensive content. |
...
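A note on the `-- --help` form added above: assuming these scripts follow the upstream gpt-2 pattern of exposing their entry points through Google's python-fire, arguments before the bare `--` separator are passed to the wrapped function, while arguments after it (such as `--help`) are handled by Fire itself. A minimal sketch of that wiring; the toy `sample_model` below is illustrative, not the repo's actual code:
```
# Sketch of the flag wiring, assuming the upstream gpt-2 pattern of
# exposing a function through Google's python-fire.
import fire

def sample_model(model_name='117M', temperature=1.0, top_k=0):
    """This docstring is part of what `-- --help` prints."""
    print(f"model={model_name} temperature={temperature} top_k={top_k}")

if __name__ == '__main__':
    # Fire maps keyword arguments to --flags, so `--top_k 40` sets top_k,
    # while everything after `--` (e.g. `-- --help`) goes to Fire itself.
    fire.Fire(sample_model)
```
Running `python3 sketch.py --top_k 40` would set `top_k`, while `python3 sketch.py -- --help` prints the usage text Fire generates from the signature and docstring.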
src/generate_unconditional_samples.py (+20, −0)
...
@@ -17,6 +17,26 @@ def sample_model(
    temperature=1,
    top_k=0,
):
    """
    Run the sample_model
    :model_name=117M : String, which model to use
    :seed=None : Integer seed for random number generators, fix seed to
     reproduce results
    :nsamples=0 : Number of samples to return, if 0, continues to
     generate samples indefinitely.
    :batch_size=1 : Number of batches, model runs nsamples//batch_size
     times, each batch run is independent of previous run.
    :length=None : Number of tokens in generated text, if None (default), is
     determined by model hyperparameters
    :temperature=1 : Float value controlling randomness in Boltzmann
     distribution. Lower temperature results in less random completions. As the
     temperature approaches zero, the model will become deterministic and
     repetitive. Higher temperature results in more random completions.
    :top_k=0 : Integer value controlling diversity. 1 means only 1 word is
     considered for each step (token), resulting in deterministic completions,
     while 40 means 40 words are considered at each step. 0 (default) is a
     special setting meaning no restrictions. 40 generally is a good value.
    """
    enc = encoder.get_encoder(model_name)
    hparams = model.default_hparams()
    with open(os.path.join('models', model_name, 'hparams.json')) as f:
...
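The `temperature` and `top_k` descriptions in the docstring above correspond to a standard transformation of the model's output logits before each token is sampled. A hedged sketch in NumPy (illustrative only, not the repo's actual TensorFlow sampler; the raw `logits` array is an assumed input):
```
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=0):
    # Divide logits by the temperature: values near zero sharpen the
    # Boltzmann distribution toward a deterministic argmax.
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    if top_k > 0:
        # Keep only the top_k highest-scoring tokens; top_k=0 means
        # no restriction, matching the docstring's special case.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    # Softmax (shifted by the max for numerical stability), then sample.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```
With `--temperature 0.7 --top_k 40`, as in the README example, only the 40 highest-scoring tokens compete at each step, with their relative probabilities sharpened.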
src/interactive_conditional_samples.py (+20, −1)
...
@@ -12,11 +12,30 @@ def interact_model(
    model_name='117M',
    seed=None,
    nsamples=1,
-   batch_size=None,
+   batch_size=1,
    length=None,
    temperature=1,
    top_k=0,
):
    """
    Interactively run the model
    :model_name=117M : String, which model to use
    :seed=None : Integer seed for random number generators, fix seed to reproduce
     results
    :nsamples=1 : Number of samples to return
    :batch_size=1 : Number of batches, model runs nsamples//batch_size
     times, each batch run is independent of previous run.
    :length=None : Number of tokens in generated text, if None (default), is
     determined by model hyperparameters
    :temperature=1 : Float value controlling randomness in Boltzmann
     distribution. Lower temperature results in less random completions. As the
     temperature approaches zero, the model will become deterministic and
     repetitive. Higher temperature results in more random completions.
    :top_k=0 : Integer value controlling diversity. 1 means only 1 word is
     considered for each step (token), resulting in deterministic completions,
     while 40 means 40 words are considered at each step. 0 (default) is a
     special setting meaning no restrictions. 40 generally is a good value.
    """
    if batch_size is None:
        batch_size = 1
    assert nsamples % batch_size == 0
...
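The `assert nsamples % batch_size == 0` context line above encodes the contract both docstrings describe: the model runs `nsamples // batch_size` times, and each run independently draws `batch_size` samples. A small sketch of that loop; `generate_batch` is a hypothetical stand-in for the repo's actual sampling step:
```
def generate_batch(batch_size):
    # Hypothetical stand-in for one independent sampling run of the model.
    return ["<generated text %d>" % i for i in range(batch_size)]

def generate_all(nsamples=1, batch_size=1):
    # Mirrors the docstring contract: nsamples must divide evenly into
    # batches, and each batch run is independent of the previous one.
    assert nsamples % batch_size == 0
    samples = []
    for _ in range(nsamples // batch_size):
        samples.extend(generate_batch(batch_size))
    return samples
```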