| column | dtype | range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-27 12:28:27 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-27 12:28:17 |
| card | string | length 11 to 1.01M |

Each record below lists its row metadata on one line, followed by the full `card` text.
modelId: Nalla/PDF_To_CSV | author: Nalla | last_modified: 2022-02-23T00:12:40Z | downloads: 0 | likes: 4 | library_name: null | tags: ["region:us"] | pipeline_tag: null | createdAt: 2022-03-02T23:29:04Z | card:
---
title: Pdf Table Extractor To CSV
emoji: ;)
colorFrom: yellow
colorTo: green
sdk: streamlit
app_file: App_For_PDF_To_Dataframe.py
pinned: false
---
# Configuration
`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`

`sdk_version`: _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.
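
The front matter at the top of this card is exactly such a configuration block. As a small illustration only (not part of the Space itself), the header can be parsed and checked with PyYAML, which is assumed to be installed:

```python
import yaml  # PyYAML, assumed available

# The YAML front matter from the top of this card.
README_HEADER = """\
title: Pdf Table Extractor To CSV
emoji: ;)
colorFrom: yellow
colorTo: green
sdk: streamlit
app_file: App_For_PDF_To_Dataframe.py
pinned: false
"""

config = yaml.safe_load(README_HEADER)
documented_keys = {"title", "emoji", "colorFrom", "colorTo",
                   "sdk", "sdk_version", "app_file", "pinned"}
print(config)
print("undocumented keys:", set(config) - documented_keys or "none")
```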
modelId: espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert | author: espnet | last_modified: 2022-02-22T18:44:52Z | downloads: 2 | likes: 0 | library_name: espnet | tags: ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:iemocap", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z | card:
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- iemocap
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert`
This model was trained by Yushi Ueda using the iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert
```
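The recipe above runs the full ESPnet pipeline. For a quick programmatic check, the model can also be loaded directly from the Hub; the following is a minimal Python sketch (not part of the original recipe), assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed and `sample.wav` is a placeholder 16 kHz mono recording. Since the model predicts the transcript together with a sentiment token, both appear in the decoded text.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the pretrained model from the Hugging Face Hub
# (the download is handled via the espnet_model_zoo package).
speech2text = Speech2Text.from_pretrained(
    "espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert"
)

# "sample.wav" is a placeholder for any 16 kHz mono recording.
speech, rate = soundfile.read("sample.wav")

# Each n-best entry is (text, tokens, token_ids, hypothesis); the decoded
# text contains the predicted sentiment label as one of its word tokens.
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```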
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Feb 12 23:11:32 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Conformer-based encoder and Transformer-based decoder with self-supervised learning features (HuBERT), spectral augmentation, and joint transcript and sentiment prediction
- ASR config: [conf/tuning/train_asr_conformer_hubert.yaml](conf/tuning/train_asr_conformer_hubert.yaml)
- token_type: word
- Sentiment Labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)|Weighted F1 (%)|Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|66.5|76.4|75.7|
|decode_asr_model_valid.acc.ave_10best/test|1650|62.0|65.5|65.8|
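
Macro, weighted, and micro F1 refer to the standard averaging modes over the three sentiment classes. As an illustration only (not the recipe's scoring script), scikit-learn computes them as follows, assuming reference and hypothesis sentiment labels have already been extracted from the decoded text:

```python
from sklearn.metrics import f1_score

# Hypothetical label lists extracted from reference and decoded transcripts;
# the sentiment token (Positive / Neutral / Negative) is part of the output text.
refs = ["Positive", "Negative", "Neutral", "Negative", "Positive"]
hyps = ["Positive", "Neutral", "Neutral", "Negative", "Negative"]

for avg in ("macro", "weighted", "micro"):
    print(f"{avg} F1: {100 * f1_score(refs, hyps, average=avg):.1f}%")
```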
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_hubert.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_hubert_sentiment
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
modelId: espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer | author: espnet | last_modified: 2022-02-22T18:25:19Z | downloads: 1 | likes: 0 | library_name: espnet | tags: ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:iemocap", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z | card:
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- iemocap
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer`
This model was trained by Yushi Ueda using the iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Feb 17 11:25:22 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Conformer-based encoder and Transformer-based decoder with spectral augmentation and joint transcript and sentiment prediction
- ASR config: [conf/tuning/train_asr_conformer.yaml](conf/tuning/train_asr_conformer.yaml)
- token_type: word
- labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)|Weighted F1 (%)|Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|53.9|65.7|66.4|
|decode_asr_model_valid.acc.ave_10best/test|1650|50.3|54.5|55.7|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
NeonBohdan/stt-polyglot-it
|
NeonBohdan
| 2022-02-22T17:49:20Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
NeonBohdan/stt-polyglot-de
|
NeonBohdan
| 2022-02-22T17:39:43Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
NeonBohdan/stt-polyglot-pl
|
NeonBohdan
| 2022-02-22T17:27:31Z | 0 | 0 | null |
[
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
keras-io/convmixer
|
keras-io
| 2022-02-22T16:42:59Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"ConvMixer",
"keras-io",
"en",
"dataset:cifar10",
"arxiv:2201.09792",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- ConvMixer
- keras-io
license: apache-2.0
datasets:
- cifar10
---
# ConvMixer model
The ConvMixer model is trained on the CIFAR-10 dataset and is based on [the paper](https://arxiv.org/abs/2201.09792v1) ([github](https://github.com/locuslab/convmixer)).
Disclaimer: This is a demo model for Sayak Paul's keras [example](https://keras.io/examples/vision/convmixer/). Please refrain from using this model for any other purpose.
## Description
The paper uses 'patches' (square groups of pixels) extracted from the image, as is done in other Vision Transformers such as [ViT](https://arxiv.org/abs/2010.11929v2). One notable drawback of such architectures is the quadratic runtime of the self-attention layers, which makes them expensive in time and resources to train to a usable level. The ConvMixer model instead uses convolutions together with an MLP-Mixer-style design to obtain results similar to those of transformers at a fraction of the cost.
### Intended Use
This model is intended to be used as a demo model for keras-io.
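A minimal loading sketch is shown below; it assumes the checkpoint was pushed with the Keras mixin so that `huggingface_hub.from_pretrained_keras` can restore it, and that inputs follow the CIFAR-10 preprocessing of the original keras.io example.
```python
# Hedged sketch: load the ConvMixer demo checkpoint and run a forward pass.
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/convmixer")

# CIFAR-10 images are 32x32 RGB; the exact scaling/normalization should match
# the keras.io ConvMixer example this demo model was trained with.
dummy_batch = np.random.rand(1, 32, 32, 3).astype("float32")
predictions = model.predict(dummy_batch)
print(predictions.shape)  # expected (1, 10): one score per CIFAR-10 class
```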
|
kdo6301/bert-base-uncased-finetuned-cola-2
|
kdo6301
| 2022-02-22T15:19:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6015706950519473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9235
- Matthews Correlation: 0.6016
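As a minimal usage sketch (assuming the standard `transformers` text-classification pipeline; label names such as `LABEL_0`/`LABEL_1` are the auto-generated defaults, not verified here):
```python
# Hedged sketch: grammatical-acceptability (CoLA-style) classification.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kdo6301/bert-base-uncased-finetuned-cola-2",
)
print(classifier("The book was read by the student."))
```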
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4906 | 1.0 | 535 | 0.5046 | 0.5080 |
| 0.2901 | 2.0 | 1070 | 0.5881 | 0.5235 |
| 0.1818 | 3.0 | 1605 | 0.7253 | 0.5584 |
| 0.1177 | 4.0 | 2140 | 0.8316 | 0.5927 |
| 0.0826 | 5.0 | 2675 | 0.9235 | 0.6016 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
spasis/bert-finetuned-ner
|
spasis
| 2022-02-22T13:23:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9214944042132982
- name: Recall
type: recall
value: 0.9422753281723325
- name: F1
type: f1
value: 0.9317690131469462
- name: Accuracy
type: accuracy
value: 0.9849738034967916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0569
- Precision: 0.9215
- Recall: 0.9423
- F1: 0.9318
- Accuracy: 0.9850
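A minimal usage sketch (assuming the standard `transformers` token-classification pipeline):
```python
# Hedged sketch: CoNLL-2003 style named entity recognition.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="spasis/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```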
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 439 | 0.0702 | 0.8847 | 0.9170 | 0.9006 | 0.9795 |
| 0.183 | 2.0 | 878 | 0.0599 | 0.9161 | 0.9391 | 0.9274 | 0.9842 |
| 0.0484 | 3.0 | 1317 | 0.0569 | 0.9215 | 0.9423 | 0.9318 | 0.9850 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic
|
mbeukman
| 2022-02-22T11:42:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-swahili](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and raw performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, and the available data should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) (This model) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-amharic
|
mbeukman
| 2022-02-22T11:32:33Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-ner-amharic
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and raw performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, and the available data should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) (This model) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic
|
mbeukman
| 2022-02-22T11:30:02Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"am",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- am
tags:
- NER
- token-classification
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
---
# xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic
This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-amharic](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Amharic part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32, and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on the test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and to report the resources used, we list here how long the training process took, as well as the minimum resources needed to reproduce it. Fine-tuning each model on the NER dataset took between 10 and 30 minutes and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and raw performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than three words or not contained in the training data. This could bias the models towards not finding, for example, names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes only from publicly available news sources, and the available data should cover only public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are computed on the MasakhaNER test set, whose distribution is similar to that of the training set, so they do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic) (This model) | [amh](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic) | amh | 79.55 | 76.71 | 82.62 | 70.00 | 84.00 | 62.00 | 91.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-amharic) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | amh | 70.34 | 69.72 | 70.97 | 72.00 | 75.00 | 51.00 | 73.00 |
| [xlm-roberta-base-finetuned-ner-amharic](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-amharic) | [base](https://huggingface.co/xlm-roberta-base) | amh | 72.63 | 70.49 | 74.91 | 76.00 | 75.00 | 52.00 | 78.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-amharic-finetuned-ner-amharic'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "ቀዳሚው የሶማሌ ክልል በአወዳይ ከተማ ለተገደሉ የክልሉ ተወላጆች ያከናወነው የቀብር ስነ ስርዓትን የተመለከተ ዘገባ ነው ፡፡"
ner_results = nlp(example)
print(ner_results)
```
|
MahsaShahidi/Persian-Image-Captioning
|
MahsaShahidi
| 2022-02-22T10:49:24Z | 55 | 2 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
name: Persian-Image-Captioning
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Image-Captioning
This model is a fine-tuned version of [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) on coco-flickr-farsi.
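A minimal inference sketch is given below; the processor classes are assumptions, since the card does not state which image feature extractor and decoder tokenizer were used.
```python
# Hedged sketch: caption an image with this VisionEncoderDecoder checkpoint.
from PIL import Image
from transformers import AutoFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel

model_name = "MahsaShahidi/Persian-Image-Captioning"
model = VisionEncoderDecoderModel.from_pretrained(model_name)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)  # assumed to resolve
tokenizer = AutoTokenizer.from_pretrained(model_name)                 # assumed to resolve

image = Image.open("example.jpg").convert("RGB")  # hypothetical local image
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```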
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cammy/bart-large-cnn-finetuned-weaksup-1000-pad
|
cammy
| 2022-02-22T09:29:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-weaksup-1000-pad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4168
- Rouge1: 26.2506
- Rouge2: 10.7802
- Rougel: 19.2236
- Rougelsum: 22.6883
- Gen Len: 68.74
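A minimal usage sketch (assuming the standard `transformers` summarization pipeline):
```python
# Hedged sketch: abstractive summarization with this fine-tuned BART checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cammy/bart-large-cnn-finetuned-weaksup-1000-pad",
)
article = (
    "The city council met on Tuesday to discuss the new transit plan. "
    "After several hours of debate, members voted to fund additional bus routes."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```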
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1434 | 1.0 | 1000 | 0.4168 | 26.2506 | 10.7802 | 19.2236 | 22.6883 | 68.74 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
z3c1f4/distilbert-base-uncased-finetuned-cola
|
z3c1f4
| 2022-02-22T07:48:31Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5320879841803337
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7400
- Matthews Correlation: 0.5321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
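These settings correspond roughly to the `transformers.TrainingArguments` sketch below; the output directory and anything not listed above are assumptions, not taken from the original run.
```python
# Hedged sketch: TrainingArguments approximating the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",  # Adam with default betas/epsilon is the default optimizer
)
```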
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5298 | 1.0 | 535 | 0.5168 | 0.4092 |
| 0.349 | 2.0 | 1070 | 0.4993 | 0.5099 |
| 0.2345 | 3.0 | 1605 | 0.6194 | 0.5046 |
| 0.1731 | 4.0 | 2140 | 0.7400 | 0.5321 |
| 0.1282 | 5.0 | 2675 | 0.8724 | 0.5078 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Fan-s/reddit-tc-bert
|
Fan-s
| 2022-02-22T05:25:39Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-base
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a Reddit-dialogue dataset.
This model can be used for text classification: given two sentences, it predicts whether they are related.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Accuracy: 0.9267
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 320
- eval_batch_size: 80
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.11.0
## Usage (HuggingFace Transformers)
You can use the model like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# label_list
label_list = ['matched', 'unmatched']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("Fan-s/reddit-tc-bert", use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained("Fan-s/reddit-tc-bert")
# Set the input
post = "don't make gravy with asbestos."
response = "i'd expect someone with a culinary background to know that. since we're talking about school dinner ladies, they need to learn this pronto."
# Predict whether the two sentences are matched
def predict(post, response, max_seq_length=128):
with torch.no_grad():
args = (post, response)
input = tokenizer(*args, padding="max_length", max_length=max_seq_length, truncation=True, return_tensors="pt")
output = model(**input)
logits = output.logits
item = torch.argmax(logits, dim=1)
predict_label = label_list[item]
return predict_label, logits
predict_label, logits = predict(post, response)
# Matched
print("predict_label:", predict_label)
```
|
speech-seq2seq/wav2vec2-2-gpt2-no-adapter
|
speech-seq2seq
| 2022-02-22T02:47:55Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1277
- Wer: 1.0334
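A rough inference sketch follows; which feature extractor and tokenizer the repo ships is an assumption here, and given the WER above the transcriptions will be of limited quality.
```python
# Hedged sketch: greedy transcription with a SpeechEncoderDecoder checkpoint.
import torch
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

model_name = "speech-seq2seq/wav2vec2-2-gpt2-no-adapter"
model = SpeechEncoderDecoderModel.from_pretrained(model_name)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)  # assumed to resolve
tokenizer = AutoTokenizer.from_pretrained(model_name)                 # assumed to resolve

waveform = torch.zeros(16000)  # one second of 16 kHz silence as a stand-in input
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
generated_ids = model.generate(inputs.input_values)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```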
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7015 | 0.28 | 500 | 5.3313 | 1.9454 |
| 4.7239 | 0.56 | 1000 | 5.1316 | 1.9288 |
| 4.6686 | 0.84 | 1500 | 4.8812 | 1.9646 |
| 4.0138 | 1.12 | 2000 | 4.8274 | 1.8905 |
| 3.6314 | 1.4 | 2500 | 3.8913 | 1.7298 |
| 1.9511 | 1.68 | 3000 | 2.3486 | 1.3674 |
| 1.212 | 1.96 | 3500 | 1.6223 | 1.1877 |
| 0.8092 | 2.24 | 4000 | 1.3949 | 1.1049 |
| 0.497 | 2.52 | 4500 | 1.2544 | 1.0749 |
| 0.4401 | 2.8 | 5000 | 1.1277 | 1.0334 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-bart-large-no-adapter-frozen-enc
|
speech-seq2seq
| 2022-02-22T01:08:44Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 18.7898
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5396 | 0.28 | 500 | 9.0401 | 1.0120 |
| 5.898 | 0.56 | 1000 | 9.3199 | 1.0 |
| 4.9595 | 0.84 | 1500 | 8.4434 | 1.4563 |
| 5.7082 | 1.12 | 2000 | 15.1805 | 1.0000 |
| 5.4377 | 1.4 | 2500 | 15.7984 | 1.0021 |
| 5.5941 | 1.68 | 3000 | 18.4928 | 1.0 |
| 5.0662 | 1.96 | 3500 | 17.4886 | 1.0000 |
| 4.8363 | 2.24 | 4000 | 18.9458 | 1.0 |
| 4.7908 | 2.52 | 4500 | 18.2794 | 1.0006 |
| 4.679 | 2.8 | 5000 | 18.7898 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pritoms/gpt2-group2
|
pritoms
| 2022-02-21T23:03:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-group2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-group2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6769
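A minimal usage sketch (assuming the standard `transformers` text-generation pipeline):
```python
# Hedged sketch: sample a continuation from this fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/gpt2-group2")
print(generator("The experiment began with", max_length=40, num_return_sequences=1))
```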
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 3.7517 |
| No log | 2.0 | 12 | 3.6951 |
| No log | 3.0 | 18 | 3.6769 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-1
|
CennetOguz
| 2022-02-21T22:10:49Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-recipe-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0641
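A minimal usage sketch (assuming the standard `transformers` fill-mask pipeline and the `[MASK]` token of distilbert-base-uncased):
```python
# Hedged sketch: masked-token prediction with this recipe-domain checkpoint.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="CennetOguz/distilbert-base-uncased-finetuned-recipe-1",
)
print(fill_mask("Preheat the [MASK] to 350 degrees."))
```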
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42
|
anas-awadalla
| 2022-02-21T22:04:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
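A minimal usage sketch (assuming the standard `transformers` question-answering pipeline; given the k=16 few-shot training regime, predictions are expected to be weak, see the results below):
```python
# Hedged sketch: extractive question answering with this few-shot checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-42",
)
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris, France."))
```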
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
{'exact_match': 4.541154210028382, 'f1': 10.04181288563879}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ajrae/bert-base-uncased-finetuned-mrpc
|
ajrae
| 2022-02-21T21:19:06Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.9003436426116839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4520
- Accuracy: 0.8578
- F1: 0.9003
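Since MRPC is a sentence-pair task, the two sentences must be encoded together. Below is a minimal inference sketch; the example sentences are made up, and treating index 1 as the "equivalent" class follows the usual GLUE/MRPC convention rather than anything stated in this card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ajrae/bert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the sentence pair in a single pass
inputs = tokenizer("The car is red.", "The automobile is red.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index 1 is typically the "equivalent" (paraphrase) class for MRPC
```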
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4169 | 0.8039 | 0.8639 |
| No log | 2.0 | 460 | 0.4299 | 0.8137 | 0.875 |
| 0.4242 | 3.0 | 690 | 0.4520 | 0.8578 | 0.9003 |
| 0.4242 | 4.0 | 920 | 0.6323 | 0.8431 | 0.8926 |
| 0.1103 | 5.0 | 1150 | 0.6163 | 0.8578 | 0.8997 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vocab-transformers/dense_encoder-msmarco-bert-base-word2vec256k_emb_updated
|
vocab-transformers
| 2022-02-21T20:13:25Z | 11 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-bert-base-word2vec256k
**Note: Token embeddings were updated!**
This model is based on [msmarco-word2vec256000-bert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-bert-base-uncased) with a 256k sized vocabulary initialized with word2vec.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: (evaluating) (MRR@10)
- TREC-DL 2019: 67.56 (nDCG@10)
- TREC-DL 2020: 71.26 (nDCG@10)
## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-bert-base-word2vec256k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-bert-base-word2vec256k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-bert-base-word2vec256k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15716 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated
|
vocab-transformers
| 2022-02-21T20:13:11Z | 95 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k
**Note: Token embeddings were updated!**
This model is based on [msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased) with a 256k sized vocabulary initialized with word2vec.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: 34.51 (MRR@10)
- TREC-DL 2019: 66.12 (nDCG@10)
- TREC-DL 2020: 68.62 (nDCG@10)
## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated
|
vocab-transformers
| 2022-02-21T20:09:42Z | 98 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k
This model is based on [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_445k](https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_445k) with a 256k sized vocabulary initialized with word2vec that has been trained with MLM for 445k steps. **Note: Token embeddings were updated!**
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
Performance:
- MS MARCO dev: 34.94 (MRR@10)
- TREC-DL 2019: 66.72 (nDCG@10)
- TREC-DL 2020: 69.14 (nDCG@10)
## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k-MLM_445k_emb_updated')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
stellaathena/test-small
|
stellaathena
| 2022-02-21T19:30:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
|
anas-awadalla
| 2022-02-21T19:28:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 40.91769157994324, 'f1': 52.89154394730339}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
|
anas-awadalla
| 2022-02-21T18:31:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
{'exact_match': 3.207190160832545, 'f1': 6.680463956037787}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-bert-large-no-adapter
|
speech-seq2seq
| 2022-02-21T17:49:39Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9251
- Wer: 1.7858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.6487 | 0.28 | 500 | 6.8354 | 1.4719 |
| 6.5662 | 0.56 | 1000 | 6.7877 | 0.9371 |
| 6.4309 | 0.84 | 1500 | 6.7640 | 1.1317 |
| 6.7123 | 1.12 | 2000 | 6.7907 | 1.9354 |
| 6.7547 | 1.4 | 2500 | 6.7830 | 1.8854 |
| 6.6726 | 1.68 | 3000 | 6.8211 | 1.9203 |
| 6.6538 | 1.96 | 3500 | 6.8444 | 1.8235 |
| 6.5693 | 2.24 | 4000 | 6.8873 | 1.8606 |
| 6.7234 | 2.52 | 4500 | 6.8649 | 1.8126 |
| 6.5104 | 2.8 | 5000 | 6.9251 | 1.7858 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
GunnarThor/talromur_f_tacotron2
|
GunnarThor
| 2022-02-21T13:57:35Z | 11 | 0 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:talromur",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:04Z |
---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- talromur
license: cc-by-4.0
---
## ESPnet2 TTS model
### `GunnarThor/talromur_f_tacotron2`
This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 81522029063e42ce807d9d145b64d3f9aca45987
pip install -e .
cd egs2/talromur/tts1
./run.sh --skip_data_prep false --skip_train true --download_model GunnarThor/talromur_f_tacotron2
```
## TTS config
<details><summary>expand</summary>
```
config: ./conf/tuning/train_tacotron2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp_f/tts_train_tacotron2_raw_phn_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 55005
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 5120000
valid_batch_bins: null
train_shape_file:
- exp_f/tts_stats_raw_phn_none/train/text_shape.phn
- exp_f/tts_stats_raw_phn_none/train/speech_shape
valid_shape_file:
- exp_f/tts_stats_raw_phn_none/valid/text_shape.phn
- exp_f/tts_stats_raw_phn_none/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_f_phn/text
- text
- text
- - dump/raw/train_f_phn/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dev_f_phn/text
- text
- text
- - dump/raw/dev_f_phn/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ','
- .
- r
- t
- n
- a0
- s
- I0
- D
- l
- Y0
- m
- v
- h
- k
- E1
- a:1
- E:1
- f
- G
- j
- a1
- T
- p
- c
- au:1
- E0
- i:1
- O:1
- I:1
- I1
- r_0
- t_h
- k_h
- Y1
- ei1
- i0
- ei:1
- ou:1
- u:1
- O1
- N
- l_0
- '91'
- ai0
- au1
- ou0
- ai:1
- n_0
- ei0
- O0
- ou1
- i1
- '9:1'
- ai1
- '90'
- au0
- x
- c_h
- 9i:1
- C
- p_h
- u0
- Y:1
- J
- 9i1
- u1
- 9i0
- N_0
- m_0
- J_0
- Yi0
- Oi1
- Yi1
- Oi0
- au:0
- '9:0'
- E:0
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp_f/tts_stats_raw_phn_none/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
spk_embed_dim: null
use_masking: true
bce_pos_weight: 5.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
osanseviero/BigGAN-deep-128
|
osanseviero
| 2022-02-21T13:55:46Z | 26 | 17 |
generic
|
[
"generic",
"pytorch",
"text-to-image",
"region:us"
] |
text-to-image
| 2022-03-02T23:29:05Z |
---
tags:
- text-to-image
library_name: generic
---
# Image generation using pretrained BigGAN
## Warning: This only works for ImageNet inputs.
List of possible inputs: https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a
GitHub repository: https://github.com/huggingface/pytorch-pretrained-BigGAN
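A short generation sketch, assuming the `pytorch-pretrained-BigGAN` package from the repository above (its `BigGAN`, `one_hot_from_names`, `truncated_noise_sample` and `save_as_images` helpers):

```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

# Load the 128x128 BigGAN-deep generator
model = BigGAN.from_pretrained("biggan-deep-128")

# The class must come from the ImageNet label set linked above
truncation = 0.4
class_vector = torch.from_numpy(one_hot_from_names(["golden retriever"], batch_size=1))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))

with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)

save_as_images(output)  # writes PNG files to the working directory
```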
|
phongdtd/wavLM-VLSP-vi-base
|
phongdtd
| 2022-02-21T13:01:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
model-index:
- name: wavLM-VLSP-vi-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavLM-VLSP-vi-base
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the PHONGDTD/VINDATAVLSP - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0390
- Wer: 0.9995
- Cer: 0.9414
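Given the near-1.0 WER the checkpoint is mainly of diagnostic interest, but for completeness here is a transcription sketch; it assumes the repository ships a matching processor/tokenizer, and the file name is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

# The ASR pipeline pulls the model and its processor from the Hub
asr = pipeline("automatic-speech-recognition", model="phongdtd/wavLM-VLSP-vi-base")

# "sample_vi.wav" is a placeholder path to a 16 kHz mono recording
print(asr("sample_vi.wav")["text"])
```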
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
|
kurianbenoy
| 2022-02-21T11:55:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9303
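A quick classification sketch; the review text is made up, and the label names come from the base SST-2 checkpoint:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb",
)

# Returns a POSITIVE/NEGATIVE label with a confidence score
print(classifier("A surprisingly touching film with a terrific cast."))
```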
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2749 | 1.0 | 3125 | 0.2165 | 0.9303 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
joe5campbell/ROBERTA_Tweet_Sentiment_50_2eps
|
joe5campbell
| 2022-02-21T10:49:04Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: ROBERTA_Tweet_Sentiment_50_2eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ROBERTA_Tweet_Sentiment_50_2eps
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6625
- Train Accuracy: 0.6310
- Validation Loss: 0.8607
- Validation Accuracy: 0.25
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7325 | 0.4762 | 0.7489 | 0.25 | 0 |
| 0.6625 | 0.6310 | 0.8607 | 0.25 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
Ayham/bert_bert_summarization_cnn_dailymail
|
Ayham
| 2022-02-21T08:57:52Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bert_bert_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_bert_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
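A summarization sketch (not from the original card); it assumes the checkpoint is a BERT2BERT `EncoderDecoderModel` whose config carries the generation settings (such as `decoder_start_token_id`), and the article text is a placeholder:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Ayham/bert_bert_summarization_cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "Replace this placeholder with a news article to summarize."
input_ids = tokenizer(article, return_tensors="pt", truncation=True, max_length=512).input_ids

# Greedy generation of a short summary
summary_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```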
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
raynardj/keywords-cangtou-chinese-poetry
|
raynardj
| 2022-02-21T08:32:16Z | 4 | 8 |
transformers
|
[
"transformers",
"pytorch",
"generation",
"poetry",
"zh",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
tags:
- generation
- poetry
widget:
- text: "疆场-思乡-归家-耕织《丘处机》"
---
# Unable to dodge the cliché after all: an acrostic (cangtou) poetry model
> This is a model that generates Chinese poetry from given leading characters and a chosen mood.
## Objectives: this model tries to do two things
* Compose acrostic (cangtou) poems 🎸
* Work the mood of the given keywords into the poem as much as possible 🪁 🌼 ❄️ 🌝
## How it works
The model leans on the core idea of the [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), **"Language Models are Unsupervised Multitask Learners"**: a great many learning tasks can be arranged as text sequences, so that inputs and outputs are handled in one stream. If a model can continue **"the derivative of any constant is 0, the cos of 0 is 1,"** with **"adding four 1s gives 4, and the factorial of 4 is 24"**, it has effectively learned the game of 24. In training the model only ever guesses the next piece of text, yet it picks up the embedded tasks along the way.
This poetry model was built the same way: each training sample is 0~10 keywords + the acrostic title + the number of leading characters + the poem itself, with every leading character replaced by the `[CLS]` token.
```
'忍看-窈窕-孤寝-勾带-嫩-黄昏《粉度》『二』[CLS]堞云齐,[CLS]清笳、愁入暮烟林杪。素艳透春,玉骨凄凉,勾带月痕生早。江天苍莽黄昏後,依然是、粉寒香瘦。动追感、西园嫩约,夜深人悄。记得东风窈窕。曾夜踏横斜,醉携娇小。惆怅旧欢,回首俱非,忍看绿笺红豆。香销纸帐人孤寝,相思恨、花还知否。梦回处,霜飞翠楼已晓。'
```
## The inference path is a little fussy, so just copy the code below
### Otherwise the acrostic characters will not show up
```python
import torch
from transformers import (AutoTokenizer, AutoModelForCausalLM)
tokenizer = AutoTokenizer.from_pretrained('raynardj/keywords-cangtou-chinese-poetry')
model = AutoModelForCausalLM.from_pretrained('raynardj/keywords-cangtou-chinese-poetry')
def inference(lead, keywords = []):
"""
    lead: the acrostic text, e.g. a person's name, 2, 3 or 4 characters
    keywords: the keywords; 0~12 of them works best
"""
leading = f"《{lead}》"
text = "-".join(keywords)+leading
input_ids = tokenizer(text, return_tensors='pt', ).input_ids[:,:-1]
lead_tok = tokenizer(lead, return_tensors='pt', ).input_ids[0,1:-1]
with torch.no_grad():
pred = model.generate(
input_ids,
max_length=256,
num_beams=5,
do_sample=True,
repetition_penalty=2.1,
top_p=.6,
bos_token_id=tokenizer.sep_token_id,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.sep_token_id,
)[0,1:]
    # Swap each [CLS] token (id 101) back to the corresponding acrostic character
mask = (pred==101)
while mask.sum()<len(lead_tok):
lead_tok = lead_tok[:mask.sum()]
while mask.sum()>len(lead_tok):
reversed_lead_tok = lead_tok.flip(0)
lead_tok = torch.cat([
lead_tok, reversed_lead_tok[:mask.sum()-len(lead_tok)]])
pred[mask] = lead_tok
    # Decode token ids back into text
generate = tokenizer.decode(pred, skip_special_tokens=True)
    # Tidy up the generated text
generate = generate.replace("》","》\n").replace("。","。\n").replace(" ","")
return generate
```
Thanks to liangtongt for pointing out a bug that could occur when running the inference code.
## Cherry picking
Once you have downloaded the model, feel free to play with it yourself.
You can also taste a few cherries I picked for you 🍒
```python
>>> inference("上海",["高楼","虹光","灯红酒绿","华厦"])
高楼-虹光-灯红酒绿-华厦《上海》
『二』
上台星月明如昼。
海阁珠帘卷画堂。
>>> inference("刘先生",["妆容","思","落花","空镜"])
妆容-思-落花-空镜《刘先生》
『三』
刘郎何事不相逢,先把金尊酒未空。
生意自知人薄命,多情只有月明中。
```
## Other resources for Classical Chinese poetry and prose
* [Project source code 🌟, stars and PRs welcome](https://github.com/raynardj/yuan)
* [Cross-lingual search 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn)
* [Modern Chinese to Classical Chinese translation model ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient)
* [Classical Chinese to modern Chinese translation model, accepts unpunctuated input 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern)
* [Sentence segmentation / punctuation model 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian)
* [Mood keywords and acrostic poetry generation 🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
|
edbeeching/decision_transformer_atari
|
edbeeching
| 2022-02-21T08:15:44Z | 0 | 2 | null |
[
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2022-03-02T23:29:05Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
---
Find here pretrained model weights for the [Decision Transformer](https://github.com/kzl/decision-transformer).
Weights are available for 4 Atari games: Breakout, Pong, Qbert and Seaquest; they can be found in the `checkpoints` directory.
We share models trained for one seed (123), whereas the paper contained weights for 3 random seeds.
### Usage
```
git clone https://huggingface.co/edbeeching/decision_transformer_atari
cd decision_transformer_atari
conda env create -f conda_env.yml
```
Then, you can use the model like this:
```python
import torch
from decision_transform_atari import GPTConfig, GPT
vocab_size = 4
block_size = 90
model_type = "reward_conditioned"
timesteps = 2654
mconf = GPTConfig(
vocab_size,
block_size,
n_layer=6,
n_head=8,
n_embd=128,
model_type=model_type,
max_timestep=timesteps,
)
model = GPT(mconf)
checkpoint_path = "checkpoints/Breakout_123.pth" # or Pong, Qbert, Seaquest
checkpoint = torch.load(checkpoint_path)
model.load_state_dict(checkpoint)
```
|
Muennighoff/SGPT-125M-weightedmean-nli
|
Muennighoff
| 2022-02-21T06:19:26Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-1.3B-mean-nli
|
Muennighoff
| 2022-02-21T06:17:16Z | 3 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SGPT-1.3B-mean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-1.3B-weightedmean-nli
|
Muennighoff
| 2022-02-21T06:15:32Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-1.3B-weightedmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-125M-mean-nli-linearthenpool5
|
Muennighoff
| 2022-02-21T06:10:51Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-mean-nli-linearthenpool5
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(3): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(4): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(5): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU', 'key_name': 'token_embeddings'})
(6): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-125M-mean-nli-bitfit
|
Muennighoff
| 2022-02-21T06:08:15Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SGPT-125M-mean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-125M-learntmean-nli
|
Muennighoff
| 2022-02-21T06:07:22Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-learntmean-nli
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 880,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 881,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): WeightedMeanPooling()
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-125M-weightedmean-msmarco
|
Muennighoff
| 2022-02-21T06:05:32Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-weightedmean-msmarco
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SGPT-125M-weightedmean-msmarco-specb
|
Muennighoff
| 2022-02-21T06:04:34Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"arxiv:2202.08904",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-125M-weightedmean-msmarco-specb
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Muennighoff/SBERT-base-msmarco-bitfit
|
Muennighoff
| 2022-02-21T05:58:04Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2202.08904",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SBERT-base-msmarco-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
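For illustration, a minimal training sketch under the hyperparameters listed above might look as follows. The BitFit-style freezing of all non-bias parameters, the base checkpoint `bert-base-uncased`, and the toy training pair are assumptions, not the exact training script:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Sketch only: BitFit-style tuning updates bias terms and freezes everything else.
model = SentenceTransformer("bert-base-uncased")
for name, param in model.named_parameters():
    param.requires_grad = "bias" in name

train_examples = [InputExample(texts=["what is a cat", "The cat is a domesticated feline."])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    optimizer_params={"lr": 2e-4},
    weight_decay=0.01,
)
```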
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
EnsarEmirali/distilbert-base-uncased-finetuned-emotion
|
EnsarEmirali
| 2022-02-21T05:53:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9268984054036417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9265
- F1: 0.9269
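For a quick check of the model, a minimal inference sketch with the `transformers` pipeline could look like this (the example sentence is arbitrary, and the label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="EnsarEmirali/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled with how this turned out!"))
# e.g. [{'label': 'joy', 'score': ...}]
```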
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 |
| 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
202015004/wav2vec2-base-timit-demo-colab
|
202015004
| 2022-02-21T03:49:39Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6259
- Wer: 0.3544
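A minimal transcription sketch is shown below; it assumes the repository ships a `Wav2Vec2Processor` alongside the model and that the audio is 16 kHz mono (the file path is a placeholder):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "202015004/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load audio at 16 kHz and greedily decode the CTC logits.
speech, _ = librosa.load("sample.wav", sr=16000)  # placeholder path
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```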
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6744 | 0.5 | 500 | 2.9473 | 1.0 |
| 1.4535 | 1.01 | 1000 | 0.7774 | 0.6254 |
| 0.7376 | 1.51 | 1500 | 0.6923 | 0.5712 |
| 0.5848 | 2.01 | 2000 | 0.5445 | 0.5023 |
| 0.4492 | 2.51 | 2500 | 0.5148 | 0.4958 |
| 0.4006 | 3.02 | 3000 | 0.5283 | 0.4781 |
| 0.3319 | 3.52 | 3500 | 0.5196 | 0.4628 |
| 0.3424 | 4.02 | 4000 | 0.5285 | 0.4551 |
| 0.2772 | 4.52 | 4500 | 0.5060 | 0.4532 |
| 0.2724 | 5.03 | 5000 | 0.5216 | 0.4422 |
| 0.2375 | 5.53 | 5500 | 0.5376 | 0.4443 |
| 0.2279 | 6.03 | 6000 | 0.6051 | 0.4308 |
| 0.2091 | 6.53 | 6500 | 0.5084 | 0.4423 |
| 0.2029 | 7.04 | 7000 | 0.5083 | 0.4242 |
| 0.1784 | 7.54 | 7500 | 0.6123 | 0.4297 |
| 0.1774 | 8.04 | 8000 | 0.5749 | 0.4339 |
| 0.1542 | 8.54 | 8500 | 0.5110 | 0.4033 |
| 0.1638 | 9.05 | 9000 | 0.6324 | 0.4318 |
| 0.1493 | 9.55 | 9500 | 0.6100 | 0.4152 |
| 0.1591 | 10.05 | 10000 | 0.5508 | 0.4022 |
| 0.1304 | 10.55 | 10500 | 0.5090 | 0.4054 |
| 0.1234 | 11.06 | 11000 | 0.6282 | 0.4093 |
| 0.1218 | 11.56 | 11500 | 0.5817 | 0.3941 |
| 0.121 | 12.06 | 12000 | 0.5741 | 0.3999 |
| 0.1073 | 12.56 | 12500 | 0.5818 | 0.4149 |
| 0.104 | 13.07 | 13000 | 0.6492 | 0.3953 |
| 0.0934 | 13.57 | 13500 | 0.5393 | 0.4083 |
| 0.0961 | 14.07 | 14000 | 0.5510 | 0.3919 |
| 0.0965 | 14.57 | 14500 | 0.5896 | 0.3992 |
| 0.0921 | 15.08 | 15000 | 0.5554 | 0.3947 |
| 0.0751 | 15.58 | 15500 | 0.6312 | 0.3934 |
| 0.0805 | 16.08 | 16000 | 0.6732 | 0.3948 |
| 0.0742 | 16.58 | 16500 | 0.5990 | 0.3884 |
| 0.0708 | 17.09 | 17000 | 0.6186 | 0.3869 |
| 0.0679 | 17.59 | 17500 | 0.5837 | 0.3848 |
| 0.072 | 18.09 | 18000 | 0.5831 | 0.3775 |
| 0.0597 | 18.59 | 18500 | 0.6562 | 0.3843 |
| 0.0612 | 19.1 | 19000 | 0.6298 | 0.3756 |
| 0.0514 | 19.6 | 19500 | 0.6746 | 0.3720 |
| 0.061 | 20.1 | 20000 | 0.6236 | 0.3788 |
| 0.054 | 20.6 | 20500 | 0.6012 | 0.3718 |
| 0.0521 | 21.11 | 21000 | 0.6053 | 0.3778 |
| 0.0494 | 21.61 | 21500 | 0.6154 | 0.3772 |
| 0.0468 | 22.11 | 22000 | 0.6052 | 0.3747 |
| 0.0413 | 22.61 | 22500 | 0.5877 | 0.3716 |
| 0.0424 | 23.12 | 23000 | 0.5786 | 0.3658 |
| 0.0403 | 23.62 | 23500 | 0.5828 | 0.3658 |
| 0.0391 | 24.12 | 24000 | 0.5913 | 0.3685 |
| 0.0312 | 24.62 | 24500 | 0.5850 | 0.3625 |
| 0.0316 | 25.13 | 25000 | 0.6029 | 0.3611 |
| 0.0282 | 25.63 | 25500 | 0.6312 | 0.3624 |
| 0.0328 | 26.13 | 26000 | 0.6312 | 0.3621 |
| 0.0258 | 26.63 | 26500 | 0.5891 | 0.3581 |
| 0.0256 | 27.14 | 27000 | 0.6259 | 0.3546 |
| 0.0255 | 27.64 | 27500 | 0.6315 | 0.3587 |
| 0.0249 | 28.14 | 28000 | 0.6547 | 0.3579 |
| 0.025 | 28.64 | 28500 | 0.6237 | 0.3565 |
| 0.0228 | 29.15 | 29000 | 0.6187 | 0.3559 |
| 0.0209 | 29.65 | 29500 | 0.6259 | 0.3544 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hugsao123/XLM-R-fine-tuned-for-ner
|
hugsao123
| 2022-02-21T01:09:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: XLM-R-fine-tuned-for-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.8377982238973259
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-R-fine-tuned-for-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5679
- F1: 0.8378
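For orientation, a minimal inference sketch with the token-classification pipeline (the sentence is illustrative; the entity label set comes from the saved config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hugsao123/XLM-R-fine-tuned-for-ner",
    aggregation_strategy="simple",
)
print(ner("George Washington lived in Virginia."))
```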
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4202 | 1.0 | 2500 | 0.3449 | 0.7963 |
| 0.2887 | 2.0 | 5000 | 0.2756 | 0.8057 |
| 0.2309 | 3.0 | 7500 | 0.2971 | 0.8040 |
| 0.1832 | 4.0 | 10000 | 0.3319 | 0.8167 |
| 0.1461 | 5.0 | 12500 | 0.3958 | 0.8350 |
| 0.114 | 6.0 | 15000 | 0.4087 | 0.8316 |
| 0.0833 | 7.0 | 17500 | 0.4320 | 0.8361 |
| 0.0614 | 8.0 | 20000 | 0.4885 | 0.8353 |
| 0.039 | 9.0 | 22500 | 0.5408 | 0.8390 |
| 0.0251 | 10.0 | 25000 | 0.5679 | 0.8378 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hugsao123/xlm-roberta-base-finetuned-panx-de
|
hugsao123
| 2022-02-20T15:03:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7589617892939993
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3903
- F1: 0.7590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0489 | 1.0 | 50 | 0.5561 | 0.6565 |
| 0.4953 | 2.0 | 100 | 0.4385 | 0.7189 |
| 0.35 | 3.0 | 150 | 0.3903 | 0.7590 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.2+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
joe5campbell/BERT_Tweet_Sentiment_100k_2eps
|
joe5campbell
| 2022-02-20T14:45:01Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_100k_2eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100k_2eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1259
- Train Accuracy: 0.9542
- Validation Loss: 0.6133
- Validation Accuracy: 0.8315
- Epoch: 1
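Since the checkpoint is saved in TensorFlow format, a minimal inference sketch could look as follows (the mapping from class index to sentiment is an assumption and should be checked against the saved `id2label` config):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "joe5campbell/BERT_Tweet_Sentiment_100k_2eps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Loving the new update!", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # class order follows the model's id2label mapping
```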
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3330 | 0.8562 | 0.3847 | 0.8415 | 0 |
| 0.1259 | 0.9542 | 0.6133 | 0.8315 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
huggingtweets/buckyisotope-dril
|
huggingtweets
| 2022-02-20T09:03:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/buckyisotope-dril/1645347820169/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1214363231038263296/6kWmdpPD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Dr. Bucky Isotope, Dice Rolling Expert</div>
<div style="text-align: center; font-size: 14px;">@buckyisotope-dril</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Dr. Bucky Isotope, Dice Rolling Expert.
| Data | wint | Dr. Bucky Isotope, Dice Rolling Expert |
| --- | --- | --- |
| Tweets downloaded | 3229 | 3231 |
| Retweets | 477 | 652 |
| Short tweets | 300 | 361 |
| Tweets kept | 2452 | 2218 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31a3ij74/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @buckyisotope-dril's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bnoz7zgh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bnoz7zgh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/buckyisotope-dril')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ckenlam/nlu_sherlock_model_20220220
|
ckenlam
| 2022-02-20T09:02:06Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: nlu_sherlock_model_20220220
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlu_sherlock_model_20220220
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -955, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kdo6301/bert-base-uncased-finetuned-cola
|
kdo6301
| 2022-02-20T08:16:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5640063794282216
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
- Matthews Correlation: 0.5640
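Since the headline metric here is the Matthews correlation coefficient, a small sketch of how it is computed (with made-up labels) may help interpret the score:
```python
from sklearn.metrics import matthews_corrcoef

# Toy labels only: 1 = grammatically acceptable, 0 = unacceptable.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
print(matthews_corrcoef(y_true, y_pred))  # 1.0 is perfect, 0.0 is chance level
```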
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4864 | 1.0 | 535 | 0.4689 | 0.5232 |
| 0.2864 | 2.0 | 1070 | 0.5835 | 0.5296 |
| 0.1884 | 3.0 | 1605 | 0.6953 | 0.5458 |
| 0.1263 | 4.0 | 2140 | 0.8082 | 0.5625 |
| 0.0832 | 5.0 | 2675 | 0.9089 | 0.5640 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anegi/autonlp-dialogue-summariztion-583416409
|
anegi
| 2022-02-20T06:52:08Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"en",
"dataset:anegi/autonlp-data-dialogue-summariztion",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anegi/autonlp-data-dialogue-summariztion
co2_eq_emissions: 72.26141764997115
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 583416409
- CO2 Emissions (in grams): 72.26141764997115
## Validation Metrics
- Loss: 1.4701834917068481
- Rouge1: 47.7785
- Rouge2: 24.8518
- RougeL: 40.2231
- RougeLsum: 43.9487
- Gen Len: 18.8029
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/anegi/autonlp-dialogue-summariztion-583416409
```
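Alternatively, the checkpoint can be loaded locally with the `transformers` summarization pipeline; the dialogue below is an arbitrary example:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="anegi/autonlp-dialogue-summariztion-583416409",
)
dialogue = "Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring you some tomorrow."
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```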
|
joniponi/bert-finetuned-sem_eval-english
|
joniponi
| 2022-02-20T04:45:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
| Epoch | Training Loss | Validation Loss | F1 | Roc Auc | Accuracy |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:--------:|
| 1 | 0.115400 | 0.099458 | 0.888763 | 0.920410 | 0.731760 |
| 2 | 0.070400 | 0.080343 | 0.911700 | 0.943234 | 0.781116 |
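The F1/ROC-AUC metrics above suggest a multi-label setup; a minimal inference sketch applying a sigmoid and a 0.5 threshold (both assumptions, since the card does not document the label set or decision threshold) might look like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "joniponi/bert-finetuned-sem_eval-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This is a sample sentence.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # assumed multi-label head
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)
```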
|
huggingtweets/wilton_quinn
|
huggingtweets
| 2022-02-20T01:44:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1485887832346357760/JhrQ8-73_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Quinn Wilton</div>
<div style="text-align: center; font-size: 14px;">@wilton_quinn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Quinn Wilton.
| Data | Quinn Wilton |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 72 |
| Short tweets | 135 |
| Tweets kept | 3038 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1789r49b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wilton_quinn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31qiamqk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31qiamqk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wilton_quinn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
joe5campbell/TEST
|
joe5campbell
| 2022-02-19T23:49:12Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEST
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEST
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4904
- Train Accuracy: 0.9375
- Validation Loss: 0.7016
- Validation Accuracy: 0.5
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6954 | 0.5 | 0.7286 | 0.5 | 0 |
| 0.4904 | 0.9375 | 0.7016 | 0.5 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
swcrazyfan/KingJamesify-T5-large-lm-adapt
|
swcrazyfan
| 2022-02-19T20:43:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
pere/multi-sentencefix-byt5
|
pere
| 2022-02-19T11:24:09Z | 0 | 0 | null |
[
"license:cc",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: cc
---
# Multi-Lingual DeUnCaser - Base byT5 Version
The output from Automatic Speech Recognition software is usually uncased and without any punctuation, which does not make for very readable text.
The DeUnCaser is a sequence-to-sequence model that reverses this process. It adds punctuation and capitalises the correct words. In some languages this means adding capital letters at the start of sentences and on all proper nouns; in other languages, like German, it means capitalising the first letter of all nouns. It will also attempt to add hyphens and parentheses where this makes the meaning clearer.
It is based on the multilingual T5 model and is fine-tuned for 100,000 steps. The fine-tuning corpus contains 100,000 training examples from each of the 44 Latin-alphabet languages that are part of both OSCAR and the mT5 training set: Afrikaans, Albanian, Basque, Catalan, Cebuano, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Galician, German, Haitian Creole, Hungarian, Icelandic, Indonesian, Irish, Italian, Kurdish, Latin, Latvian, Lithuanian, Luxembourgish, Malagasy, Malay, Maltese, Norwegian Bokmål, Norwegian Nynorsk, Polish, Portuguese, Romanian, Slovak, Spanish, Sundanese, Swahili, Swedish, Turkish, Uzbek, Vietnamese, Welsh, West Frisian.
A Notebook for creating the training corpus is available [here](https://colab.research.google.com/drive/1bkH94z-0wIQP8Pz0qXFndhoQsokU-78x?usp=sharing).
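If seq2seq weights are published under this repository id (an assumption; only the model card is referenced here), a minimal restoration sketch would be:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumes seq2seq weights and a tokenizer are available under this repository id.
model_id = "pere/multi-sentencefix-byt5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "do you remember the guy from yesterday he called again"
ids = tokenizer(text, return_tensors="pt").input_ids
out = model.generate(ids, max_length=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```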
|
shahukareem/wav2vec2-xls-r-1b-dv-with-lm
|
shahukareem
| 2022-02-19T04:02:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
# wav2vec2-xls-r-1b-dv-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
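A minimal sketch with the automatic-speech-recognition pipeline is shown below; the audio path is a placeholder, 16 kHz audio is assumed, and the "-with-lm" suffix suggests a bundled language model that the pipeline will pick up if it is present in the repository:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="shahukareem/wav2vec2-xls-r-1b-dv-with-lm",
)
print(asr("sample.wav"))  # placeholder path to a 16 kHz Dhivehi recording
```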
|
phongdtd/wavLM-VLSP-vi
|
phongdtd
| 2022-02-19T00:36:24Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
model-index:
- name: wavLM-VLSP-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavLM-VLSP-vi
This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the PHONGDTD/VINDATAVLSP - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 45.8892
- Wer: 0.9999
- Cer: 0.9973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 3.4482 | 9.41 | 40000 | 3.4480 | 0.9999 | 0.9974 |
| 3.4619 | 18.81 | 80000 | 3.4514 | 0.9999 | 0.9974 |
| 3.7961 | 28.22 | 120000 | 3.8732 | 0.9999 | 0.9974 |
| 24.3843 | 37.62 | 160000 | 22.5457 | 0.9999 | 0.9973 |
| 48.5691 | 47.03 | 200000 | 45.8892 | 0.9999 | 0.9973 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
pyf98/librispeech_100h_transformer
|
pyf98
| 2022-02-18T21:43:49Z | 1 | 1 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_100",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech_100
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/librispeech_100h_transformer`
This model was trained by Yifan Peng using librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout f6779876103be2116de158a44757f8979eff0ab0
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/librispeech_100h_transformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Feb 18 16:00:45 EST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `f6779876103be2116de158a44757f8979eff0ab0`
- Commit date: `Fri Feb 18 15:57:13 2022 -0500`
## asr_transformer_win400_hop160_ctc0.3_lr2e-3_warmup15k_timemask5_amp_no-deterministic
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam20_ctc0.3/dev_clean|2703|54402|93.0|6.4|0.5|1.1|8.1|63.1|
|beam20_ctc0.3/dev_other|2864|50948|82.5|15.9|1.6|2.7|20.2|83.8|
|beam20_ctc0.3/test_clean|2620|52576|92.8|6.5|0.7|1.2|8.4|63.3|
|beam20_ctc0.3/test_other|2939|52343|82.1|16.0|1.9|2.6|20.5|84.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam20_ctc0.3/dev_clean|2703|288456|97.5|1.4|1.1|0.9|3.4|63.1|
|beam20_ctc0.3/dev_other|2864|265951|92.1|4.8|3.1|2.4|10.3|83.8|
|beam20_ctc0.3/test_clean|2620|281530|97.4|1.4|1.2|0.9|3.5|63.3|
|beam20_ctc0.3/test_other|2939|272758|92.0|4.7|3.2|2.3|10.2|84.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam20_ctc0.3/dev_clean|2703|69558|89.9|6.1|4.0|0.8|10.9|63.1|
|beam20_ctc0.3/dev_other|2864|64524|78.5|15.3|6.2|2.8|24.3|83.8|
|beam20_ctc0.3/test_clean|2620|66983|90.0|6.2|3.9|0.8|10.9|63.3|
|beam20_ctc0.3/test_other|2939|66650|77.9|15.2|6.9|2.5|24.6|84.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer_win400_hop160_ctc0.3_lr2e-3_warmup15k_timemask5_amp_no-deterministic.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_transformer_win400_hop160_ctc0.3_lr2e-3_warmup15k_timemask5_amp_no-deterministic
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ED
- ▁I
- ▁HE
- ▁WAS
- ▁THAT
- ING
- ▁IT
- ''''
- ▁HIS
- ▁HAD
- ▁WITH
- ▁YOU
- ▁FOR
- T
- ▁AS
- ▁HER
- LY
- ▁NOT
- ▁BUT
- ▁SHE
- ▁BE
- D
- E
- ▁IS
- ▁AT
- ▁ON
- ▁HIM
- ▁THEY
- ▁BY
- ▁HAVE
- Y
- ▁MY
- ▁SO
- ▁ALL
- ▁THIS
- ▁WERE
- ▁WHICH
- ▁ME
- ▁FROM
- ▁ONE
- ▁SAID
- ▁WE
- N
- ER
- ▁NO
- ▁THERE
- ▁WHEN
- ▁AN
- ▁THEIR
- ▁OR
- ▁WOULD
- ▁WHO
- ▁THEM
- R
- ▁IF
- ▁WHAT
- ▁ARE
- ▁BEEN
- ▁OUT
- ▁UP
- M
- ▁WILL
- ▁DO
- ▁MAN
- ▁COULD
- C
- ▁THEN
- ▁INTO
- ▁MORE
- ▁SOME
- ES
- P
- ▁VERY
- ▁NOW
- ▁YOUR
- ▁LITTLE
- ▁TIME
- ▁ABOUT
- ▁DID
- ▁THAN
- ▁LIKE
- ▁HAS
- L
- G
- AL
- IN
- ▁UPON
- ▁CAN
- ▁WELL
- ▁OTHER
- ▁OVER
- US
- ▁TWO
- ▁ONLY
- ▁ANY
- ▁OUR
- O
- EN
- RE
- ▁MADE
- U
- ▁AFTER
- ▁SEE
- ▁S
- ▁DOWN
- ▁BEFORE
- LL
- ST
- B
- ▁OLD
- ▁DAY
- ▁MISS
- ▁GREAT
- ▁US
- ▁KNOW
- OR
- ▁SUCH
- ▁GOOD
- ▁WAY
- A
- ▁THESE
- ▁CAME
- ▁UN
- ▁SHOULD
- ▁HOW
- ▁MISTER
- ▁GO
- ▁MUCH
- ▁WHERE
- ▁MUST
- ▁NEVER
- ▁COME
- ▁BACK
- ION
- 'ON'
- ▁LONG
- F
- ▁AGAIN
- ▁FIRST
- LE
- ▁MEN
- ▁EVEN
- NESS
- ▁MIGHT
- ▁OWN
- ▁MAY
- K
- ▁HIMSELF
- ▁SAY
- ▁JUST
- ▁THROUGH
- ▁RE
- ▁AM
- ▁ITS
- ▁WENT
- ▁THOUGHT
- ▁
- ▁DE
- ▁MAKE
- I
- ▁HAND
- ▁THINK
- ▁HOUSE
- ▁HERE
- IC
- H
- ATION
- ▁LIFE
- IT
- ▁EYES
- ▁MOST
- ▁WITHOUT
- ▁TOO
- ▁THOSE
- ABLE
- ▁EVERY
- ▁DON
- ▁MANY
- ▁AWAY
- ITY
- VE
- W
- ▁STILL
- ▁BEING
- ▁C
- ▁LAST
- ▁NIGHT
- ▁O
- ▁HEAD
- AN
- ▁FOUND
- ▁NOTHING
- ▁YOUNG
- ▁WHILE
- ▁TAKE
- ▁GET
- ▁PEOPLE
- RO
- ▁OFF
- ▁THOUGH
- EST
- ▁YET
- ▁THREE
- TH
- ▁RIGHT
- ▁UNDER
- AR
- ▁FACE
- IES
- ▁ROOM
- ▁NEW
- ▁SAW
- RA
- V
- ▁ASKED
- ▁TELL
- ERS
- ▁SAME
- MENT
- ▁HEART
- LESS
- ▁WORK
- ▁PLACE
- ▁ANOTHER
- ▁EVER
- ▁LEFT
- ▁SHALL
- ▁FATHER
- ▁PUT
- ▁ONCE
- ▁TOOK
- ▁LET
- ▁ALWAYS
- ▁SEEMED
- ▁PART
- IL
- UR
- ▁WHY
- ▁TOLD
- ▁GIVE
- ▁LOVE
- CE
- ▁MIND
- ▁LOOKED
- ▁HEARD
- ▁SOON
- ▁LOOK
- ▁MOTHER
- ▁FAR
- IVE
- ▁BECAUSE
- ▁HOME
- OUS
- ▁T
- EL
- ▁D
- ▁SOMETHING
- ▁SIDE
- ▁KING
- IS
- ATE
- ▁MOMENT
- ENT
- RY
- ▁THINGS
- ▁ST
- ▁LIGHT
- ▁FIND
- ▁GOING
- ▁THING
- ▁WORLD
- IR
- AT
- ▁WATER
- ▁END
- ▁DOOR
- ISH
- ▁KNEW
- ▁WOMAN
- ▁SIR
- ▁EACH
- RI
- ▁HAVING
- ▁AGAINST
- ▁FEW
- ▁E
- ▁BEGAN
- ▁BETTER
- ▁YES
- ▁NAME
- ▁ENOUGH
- ET
- ▁HARD
- ▁VOICE
- ▁YEARS
- ▁GOT
- ▁WHOLE
- ▁WHITE
- ▁WANT
- ▁GIRL
- ▁DONE
- ▁SEEN
- ▁HUNDRED
- ▁CALLED
- ▁BETWEEN
- ▁MORNING
- FUL
- AS
- ▁FELT
- TER
- ▁KIND
- X
- CH
- ▁HERSELF
- ANT
- ▁TOWARD
- ▁HALF
- ▁OH
- ▁AMONG
- ▁HOWEVER
- ▁TURNED
- ▁ALSO
- ▁BOTH
- ▁POOR
- ▁PERHAPS
- ▁REPLIED
- ▁COURSE
- UL
- ▁QUITE
- ▁REST
- ▁DOES
- ▁MYSELF
- NG
- LO
- ANCE
- ▁MA
- ▁SET
- ▁SMALL
- ▁B
- ▁SURE
- ▁F
- ▁GAVE
- ▁PRESENT
- ▁HIGH
- ▁ALMO
- ▁R
- CK
- ▁WHOM
- ▁NEAR
- ▁CARE
- ▁WAR
- ▁GOD
- ▁TOGETHER
- ▁SAT
- ▁SHOW
- TE
- NE
- ▁BEST
- ▁UNTIL
- ▁OPEN
- ▁W
- ▁FOUR
- ▁DEAR
- ▁HANDS
- ▁WORDS
- ▁SINCE
- ▁LAND
- ▁DIS
- MAN
- ▁ANYTHING
- ▁FEET
- ▁NEXT
- ▁GENERAL
- LING
- ▁LAY
- ▁NOR
- ▁STOOD
- ▁BLACK
- ▁POWER
- ▁BROUGHT
- Z
- IE
- ▁ROUND
- ▁BELIEVE
- ▁LARGE
- ▁ALONG
- ▁HELP
- ▁DAYS
- ▁FIVE
- ▁K
- ▁HOPE
- AM
- ▁CO
- ▁KEEP
- ▁FULL
- ▁WALK
- ▁MASTER
- ATED
- ▁NATURE
- ▁JOHN
- ▁POINT
- ▁DUR
- ▁MATTER
- ▁MONEY
- ▁CHILD
- ▁LOOKING
- ▁RATHER
- ▁AIR
- IA
- ▁P
- ▁TWENTY
- ▁FIRE
- OL
- ▁LESS
- ▁SHORT
- ▁PASSED
- ▁INDEED
- TY
- ▁CASE
- ▁WORD
- ▁WISH
- ▁COUNTRY
- LED
- ID
- ▁BOY
- ▁SOUND
- ▁FORM
- ▁CRIED
- LA
- ▁FRIEND
- TON
- ▁FACT
- ▁UNCLE
- ▁TAKEN
- ▁AL
- ▁TEN
- IAN
- ▁GONE
- ▁SEA
- ▁REASON
- TING
- ▁WHOSE
- ▁OTHERS
- AC
- ▁LI
- ▁DEATH
- ▁CERTAIN
- ▁ANSWERED
- ▁THEMSELVES
- ▁LADY
- ▁STATE
- ▁CAR
- ▁WIFE
- ▁THOUSAND
- ▁TRUE
- ▁BEHIND
- AGE
- ▁DOCTOR
- ▁FEAR
- ▁OFTEN
- OM
- ▁TILL
- ▁HA
- IOUS
- ▁AROUND
- IST
- ▁SENT
- ▁SPEAK
- ▁WOMEN
- ▁GROUND
- VER
- ENCE
- NA
- ▁TALK
- ▁CHILDREN
- TION
- CO
- MO
- ▁HEAR
- ▁ORDER
- ▁LEAVE
- ▁PRO
- ▁ALREADY
- ▁LA
- ▁FINE
- SE
- ▁BA
- PP
- ▁THUS
- AD
- ▁NEED
- ▁SIGHT
- ▁CALL
- ▁FELL
- ▁MANNER
- MP
- ▁BECAME
- UM
- ▁WATCH
- OW
- ▁FOOT
- ▁CANNOT
- ▁BODY
- ▁TOWN
- ▁LIVE
- INE
- ▁RETURNED
- ▁WONDER
- MA
- ▁G
- UT
- ▁CLOSE
- UN
- IM
- ▁ALONE
- ▁DIDN
- ▁LORD
- ▁RED
- ARY
- ▁GIVEN
- ▁SIX
- ▁EVERYTHING
- ▁DARK
- ▁DEAD
- ▁STRONG
- ▁SON
- ▁COMING
- URE
- ▁HELD
- ▁ABOVE
- ▁REALLY
- ▁BEAUTIFUL
- ▁SECOND
- ARD
- ▁EVENING
- ▁CON
- ▁HOUR
- ▁FELLOW
- ▁ROSE
- ▁PERSON
- ▁EX
- ▁CH
- ▁FORCE
- ▁MO
- ▁ARM
- ▁CAUSE
- ▁TURN
- ▁CITY
- ▁DOUBT
- ▁QUESTION
- TIC
- ▁DEEP
- ▁HAIR
- ICAL
- ▁MEAN
- ▁DI
- ▁CLEAR
- ▁SOMETIMES
- ▁STRANGE
- ▁FEEL
- ▁HO
- ▁IMP
- WARD
- AUGHT
- ▁CAPTAIN
- ▁USE
- ▁UNDERSTAND
- ▁KEPT
- ▁BR
- ▁WOOD
- ▁PRE
- ▁YEAR
- ▁TI
- ▁LEAST
- ▁BED
- ▁SA
- ▁TABLE
- ▁BECOME
- ▁FREE
- ▁FAMILY
- ME
- ▁EYE
- ▁WHETHER
- ▁MAKING
- ▁WITHIN
- ▁SORT
- ▁ANSWER
- ▁PO
- ▁SAYS
- ▁EARTH
- ▁RETURN
- ▁SUDDENLY
- ▁FRIENDS
- ▁GREEN
- ▁SUN
- ▁FAIR
- ▁TH
- ▁FALL
- ▁EITHER
- ▁BO
- ▁PRINCE
- ▁THOU
- ▁ITSELF
- ▁CHURCH
- ▁BIG
- ▁ABLE
- ▁DIFFERENT
- ▁SEVERAL
- ▁DAUGHTER
- ▁WON
- ▁WIND
- ▁BAD
- ▁LOST
- ▁READ
- ▁STORY
- ▁APPEARED
- DE
- ▁NUMBER
- ▁SP
- ▁LOW
- ▁ROAD
- ▁POSSIBLE
- ▁HUMAN
- ▁RIVER
- ▁STREET
- ▁GA
- ▁COLD
- ▁MET
- ▁ACT
- ▁BROTHER
- ▁AGE
- ▁KNOWN
- ▁CONTINUED
- ▁BRING
- ▁ILL
- ▁RUN
- ▁LAW
- ▁SUBJECT
- ▁CUT
- J
- PER
- ▁PA
- ▁TROUBLE
- ▁GLAD
- HE
- ▁SLEEP
- MEN
- ▁LATE
- ▁MEANS
- ▁ASK
- ▁REACHED
- ▁RAN
- AK
- ▁HORSE
- ▁USED
- WAY
- OP
- ▁WINDOW
- ▁SNOW
- ▁PAST
- ▁OBJECT
- ▁THEREFORE
- IONS
- ▁TREE
- ▁COMP
- ▁BLUE
- CA
- ▁VI
- ▁SIGN
- ▁EIGHTEEN
- ▁GARDEN
- ▁BUSINESS
- ▁PETER
- ▁FOLLOWED
- ▁SEEM
- ▁HOLD
- ▁HAPPY
- ▁LONGER
- ▁ACROSS
- ▁BU
- BE
- ▁ELSE
- ▁PLAY
- ▁SOUL
- ▁STAND
- ▁ARMS
- ▁SCHOOL
- ▁PRINCESS
- ▁CERTAINLY
- LT
- ▁ENGLISH
- ▁SEVEN
- ▁PER
- ▁IDEA
- ▁LE
- ▁BOOK
- ▁FEELING
- ▁HUSBAND
- ▁LINE
- PT
- THOUGH
- ▁OUGHT
- ▁RICH
- IP
- ▁VIEW
- ▁DREAM
- ▁SENSE
- ▁LO
- ▁READY
- ▁CARRIED
- ▁M
- ▁REGARD
- ▁CHANCE
- ▁WANTED
- ▁LIVED
- ▁LATER
- ▁INTEREST
- ▁EN
- ▁EFFECT
- ▁CLA
- ▁CHANGE
- ▁CA
- ▁REAL
- ▁SUPPOSE
- LES
- ▁ART
- ▁TIMES
- ▁MAR
- IF
- ▁WILD
- ▁ADDED
- ▁LETTER
- IAL
- ▁THANK
- ▁PARTY
- LAND
- ▁PAY
- ▁BREATH
- ▁TAKING
- ▁COURT
- ▁COUNT
- ILY
- ▁COMMON
- ▁PUBLIC
- ▁PURPOSE
- ▁PRETTY
- ▁TRUTH
- ▁STAY
- ▁EM
- NT
- ▁SH
- ▁REMEMBER
- ▁ENTERED
- ▁RECEIVED
- RED
- ▁SPOKE
- ▁USUAL
- ▁THY
- ▁FIGURE
- ▁LED
- ▁TREES
- ▁TRIED
- ▁FORWARD
- NED
- ▁HAT
- ▁BLOOD
- ▁BEYOND
- ▁BANK
- ▁LIVING
- ▁JOY
- ▁HOURS
- ▁ENGLAND
- ▁STONE
- VI
- GE
- ▁SWEET
- ▁POSITION
- ▁FRONT
- ▁GIRLS
- ▁VISIT
- ▁CHARACTER
- ▁SPIRIT
- ▁TA
- BO
- QUE
- QUI
- ▁OPENED
- ▁OCCASION
- ▁MEET
- ▁EIGHT
- ▁REMAIN
- ▁PASS
- TO
- ▁NORTH
- ▁SERVICE
- ▁SISTER
- ▁SE
- ▁BEAR
- ▁PLEASURE
- ▁CHIEF
- ▁FOREST
- ▁BELL
- ▁EXPERIENCE
- ▁STRUCK
- ▁CARRY
- ORY
- ▁WARM
- 'NO'
- ▁WORTH
- ▁SAYING
- ▁SILENCE
- ▁CROSS
- ▁JE
- ▁H
- ▁BEAUTY
- PH
- ▁DEAL
- KE
- ▁SECRET
- DY
- ▁MILES
- ▁LU
- ▁DOING
- ▁BOYS
- ▁CROWD
- ▁ACCOUNT
- REW
- ISM
- TI
- ▁FE
- ▁NONE
- ▁RO
- ▁NEARLY
- ▁CHA
- ▁YOUTH
- ▁CAP
- HA
- ▁BIT
- ▁LIE
- ▁ATTENTION
- ▁STANDING
- ▁STAR
- ▁RESPECT
- ▁FURTHER
- ATIONS
- ▁ROCK
- ▁BOW
- EM
- ▁EARLY
- ▁MOUTH
- ▁BOAT
- UB
- ▁IMMEDIATELY
- ▁EXCEPT
- SHIP
- ▁PICTURE
- ▁BRIGHT
- ▁WA
- ▁GREW
- ▁LEAD
- ▁CUR
- ▁TONE
- RRY
- RS
- ▁WIDE
- CHE
- ▁FORTH
- IG
- OS
- ▁NEITHER
- ▁YOURSELF
- ▁SMILE
- ▁DRESS
- ▁OPINION
- ▁HAPPENED
- ▁WAIT
- ▁SIT
- ▁SHIP
- ▁AH
- ▁DESIRE
- ▁THICK
- ▁THIRD
- ▁GRAND
- ▁FOLLOW
- ▁GATHER
- ▁HILL
- ALLY
- ▁COMPANY
- ▁CHAIR
- DER
- ▁TOP
- ▁PAR
- ▁LENGTH
- ▁THIRTY
- ▁MINE
- ▁MI
- ▁EAT
- ▁EQUAL
- ▁AFRAID
- ▁FRESH
- ▁TAIL
- ▁FILLED
- ▁SU
- ▁MINUTES
- ▁FAST
- BU
- ▁ENTER
- ▁QUEEN
- ▁UTTER
- AG
- ▁FLOOR
- ▁SHA
- DI
- ▁HEAVEN
- ▁STOPPED
- ▁GUARD
- ▁HALL
- ▁BAR
- ▁COMPLETE
- ▁NINE
- ▁WEEK
- ▁GOLD
- VA
- ▁FIFTY
- ▁BEAT
- ▁PRESS
- ▁ATTEMPT
- ▁EXCLAIMED
- DO
- ▁CONF
- ▁SEEMS
- ▁STARTED
- ▁EL
- ▁HAR
- ▁EXPRESSION
- ▁TRA
- ▁WONDERFUL
- ▁SAINT
- ▁APPEARANCE
- ▁GRAVE
- ▁OFFICE
- ▁INSTEAD
- ▁SILENT
- ▁SOUTH
- ▁AGO
- ▁CAMP
- ▁LOVED
- ▁PATH
- ▁LEARN
- ▁PLAN
- ▁GOVERNMENT
- OUR
- PPED
- ▁SITTING
- ▁SEAT
- TEN
- RESS
- SIDE
- ▁MOVED
- ▁DIE
- ▁RESULT
- ▁SPRING
- ▁PLEASE
- ▁RI
- ▁NATURAL
- ▁ANNE
- ▁STA
- ▁CORNER
- ▁WALL
- ▁IMPOSSIBLE
- ▁BROWN
- ▁SUIT
- ▁MUSIC
- PI
- ▁TRY
- ▁DIED
- ▁TEARS
- ▁JU
- ▁COMFORT
- ▁DANGER
- ▁MEASURE
- ▁PROPERTY
- ▁BORN
- CON
- ▁CR
- ▁BROKEN
- ▁MASS
- EVER
- IER
- ▁EXPRESS
- ▁POCKET
- ▁SCARCE
- ▁SELF
- NY
- ▁MADAME
- ▁LAUGHED
- ▁TOUCH
- ▁APPEAR
- ▁LONDON
- ▁SAFE
- ▁SHARP
- ▁ATTACK
- ▁JANE
- ▁COVERED
- ▁OUTSIDE
- ▁WHATEVER
- ▁PLACED
- ▁RACE
- ▁SHORE
- ▁LAID
- ▁ROMAN
- ▁PERSONAL
- UP
- AU
- ▁REMAINED
- ▁HAPPINESS
- ▁AFTERNOON
- ▁DISTANCE
- ▁STORM
- ▁MARRIED
- ▁FRANK
- ▁VALLEY
- ▁BOUND
- ▁TALKING
- ▁JO
- ▁QUICK
- ▁STEP
- AND
- ▁ARMY
- ▁EFFORT
- ▁FRENCH
- ▁V
- LEY
- ▁PARTICULAR
- ▁START
- ATING
- OO
- LU
- ▁TRANS
- ▁HAPPEN
- ▁HABIT
- ▁VILLAGE
- ▁BELOW
- ▁GENTLEMAN
- BLE
- ▁BILL
- ▁SAVE
- ACT
- ▁SOCIETY
- ▁MAJOR
- ▁QUARTER
- ▁SKY
- ▁GUESS
- CY
- ▁SAD
- ILE
- ▁SL
- ▁PLEASANT
- ▁STRAIGHT
- ▁STRENGTH
- ▁FORTUNE
- ▁WRONG
- ▁COMMAND
- ▁BOX
- ▁QUIET
- ISE
- ▁JA
- IBLE
- ▁TREAT
- ▁GLANCE
- ▁NECESSARY
- ▁FORGET
- ▁MOUNTAIN
- ▁WINTER
- ▁DREW
- ▁WAV
- ▁PLAIN
- ▁ENTIRELY
- ▁TEA
- ▁SOFT
- ▁QUICKLY
- ▁INFLUENCE
- ▁DINNER
- ▁FOOD
- ▁CHAPTER
- ▁YE
- ▁REACH
- ▁GETT
- ▁PAPER
- ▁GIVING
- ▁BEGINNING
- ▁SEND
- ▁FIGHT
- ▁SCENE
- ▁RUSH
- ▁PI
- ▁MARK
- ▁NA
- ▁BROKE
- ▁CLASS
- ▁BATTLE
- ▁EASY
- ▁GROUP
- BY
- ▁STOP
- ▁DIRECTION
- ▁BESIDE
- ▁MOR
- HAM
- UFF
- ▁WEST
- ▁OBLIG
- ▁COLOR
- ▁SINGLE
- ▁EASILY
- ▁PALE
- ▁ACTION
- ▁INTER
- ▁STRANGER
- ▁WI
- ▁CONVERSATION
- ▁BLOW
- ▁MARY
- ▁MU
- ▁TERRIBLE
- ▁THINKING
- ▁PULL
- ▁MOON
- AB
- ▁REP
- ▁ESPECIALLY
- ▁HEAVY
- ▁SICK
- ▁LUCK
- ▁TRAIN
- ▁GUN
- ▁GU
- ▁WAITING
- ▁TURNING
- ITIES
- ▁BREAD
- ▁BELONG
- ▁LOUD
- ▁REPORT
- ▁AMERICAN
- ▁JOURNEY
- ▁ANXIOUS
- ▁LIPS
- ▁KILLED
- IGHT
- GO
- ▁CONSIDER
- ▁PROBABLY
- ▁PALACE
- ▁HISTORY
- ▁LAKE
- ▁SHUT
- ▁SIMPLY
- WA
- ▁PAIN
- ▁HORSES
- ▁SEEING
- FULLY
- ▁EXPECTED
- ▁EVIL
- ▁BURN
- ▁SIMPLE
- ▁DIRECT
- IFIED
- HER
- ▁SLOWLY
- ▁LEG
- UGH
- ▁SAIL
- RIC
- ▁WISHED
- ▁RULE
- ▁LAD
- ▁MORAL
- ▁MOVE
- ▁FOLLOWING
- ▁SILVER
- ▁SEARCH
- ▁CHANGED
- ▁HANDSOME
- ▁COULDN
- ▁PASSION
- ▁HU
- ▁SMILED
- ▁STREAM
- ▁CONCERN
- ▁PRESENCE
- STER
- ▁CONTENT
- ▁BOARD
- ▁SHAPE
- ▁DECIDED
- ▁MARRY
- ▁PERFECT
- ▁STEPS
- ▁CLOSED
- ABLY
- DEN
- ▁WEAK
- ▁SUFFICIENT
- ▁SHADOW
- ▁EXPECT
- ▁SPOT
- ▁DUTY
- ▁SPEAKING
- ▁BESIDES
- ▁FIELD
- ▁ROLL
- ▁TRYING
- ▁EAR
- ▁VER
- ▁MARRIAGE
- ▁SHOT
- ▁SLAVE
- ▁MILL
- ▁NATION
- ▁NECK
- ▁ARRIVED
- ▁TALL
- ▁GRACE
- LIN
- ▁FORTY
- ▁BROAD
- ▁SUMMER
- ▁COUSIN
- ▁BEGIN
- ▁CATCH
- ▁FO
- ▁PE
- ▁MEANT
- ▁THIN
- IO
- ▁GROW
- ▁TRO
- ▁NOTICE
- ▁CRY
- ▁FISH
- ▁COM
- ▁DEGREE
- ▁HONOUR
- ▁UNDERSTOOD
- ▁SHOP
- ▁TRUST
- ▁CONDITION
- ▁FARM
- IZ
- ▁SUDDEN
- ▁SUCCESS
- ▁SURPRISE
- ORS
- ▁THOUGHTS
- UND
- ▁ALLOWED
- ITE
- ▁NARROW
- ▁GLASS
- ▁SERIOUS
- ▁STICK
- ▁GAME
- ▁SPENT
- ▁SELL
- ▁GRA
- ▁LOWER
- ▁RAISED
- ▁PIN
- ▁ALLOW
- ▁CALM
- FT
- ▁L
- ▁PU
- ▁FIT
- ACH
- ▁SUFFER
- ▁LEGS
- ▁SUPPORT
- ▁FRANCE
- ▁LATTER
- OV
- ▁TASTE
- ▁GATE
- ▁INSTANT
- ▁MINUTE
- ▁OFFER
- ▁GREATER
- ▁PORT
- ILL
- ▁INDIVIDUAL
- ▁AUNT
- ▁EAST
- ▁ADVANTAGE
- ▁FASHION
- ▁SWORD
- ▁TWELVE
- ▁HONOR
- ▁MOVEMENT
- ▁ISLAND
- ACK
- ▁WOODS
- NCH
- ▁PLEASED
- ▁ENEMY
- ▁RAIN
- ▁VARIOUS
- ▁OBSERVED
- ▁LADIES
- ▁BELIEVED
- ▁CAST
- ▁RISE
- ▁BALL
- ▁MONTHS
- ICE
- ▁MURDER
- ▁CONDUCT
- ▁SOCIAL
- ▁TENDER
- ▁LEARNED
- ▁FRA
- ▁FIRM
- CLOCK
- ▁PREVENT
- ▁RING
- LIE
- ▁GOLDEN
- ▁DECLARED
- ▁BUILDING
- ▁WRITE
- ▁ATTEND
- ▁CARRIAGE
- ▁SITUATION
- IDE
- ▁NOBLE
- ▁HUNG
- ▁RUNN
- ▁YELLOW
- ▁KNOWLEDGE
- ▁YORK
- ▁PUSH
- ▁LEAVING
- ▁POST
- ▁CIRCUMSTANCES
- ▁SEEK
- ▁FINALLY
- ▁MAIN
- ▁LETTERS
- ▁POL
- ▁ADD
- FE
- ▁ANCIENT
- ▁MARCH
- ▁WINE
- ▁STATES
- ▁WALLS
- ▁PRISONER
- ▁ISABEL
- ▁TEMPER
- ▁JUDGE
- ▁FAINT
- ▁POND
- ▁GRASS
- ▁FAM
- OUT
- ▁LAUGH
- ▁GRAY
- IGN
- ▁ESCAPE
- ▁KILL
- ▁PRAY
- ▁COMES
- ▁ABSOLUTE
- ▁BLIND
- ▁WIN
- ▁HOST
- ▁MERELY
- ▁RID
- ▁EVERYBODY
- ▁MATERIAL
- ▁STRETCH
- ▁DUE
- ▁ROW
- ▁TIN
- ▁PROMISE
- ▁LISTEN
- ▁WALKING
- ▁COMPANION
- ▁INDIAN
- ▁BREAK
- ▁BENEATH
- ▁RUIN
- ▁EDGE
- ▁WOR
- ▁FORMER
- ▁WORSE
- ▁EVIDENTLY
- ▁HARM
- ▁CENT
- ▁PIECE
- ▁LOT
- ▁PRESIDENT
- ▁SPECIAL
- ▁LABOR
- ▁HEALTH
- GA
- ▁PLACES
- ▁BEN
- ▁SOMEWHAT
- ▁DROPPED
- ▁AFFECTION
- ▁EXACTLY
- ▁DARKNESS
- ▁FALLEN
- ▁DRESSED
- ▁BILLY
- ▁ACCEPT
- ▁FL
- ▁HOT
- ▁REPEATED
- ▁MEETING
- PA
- ▁PERIOD
- ▁HONEST
- ▁INSTANCE
- ▁FLA
- ▁PASSAGE
- ▁NE
- ▁POSSESSION
- ▁WEAR
- ▁PEACE
- ▁COAT
- ▁HOUSES
- ▁MOUNTAINS
- ▁FIFTEEN
- ▁WELCOME
- ▁YARD
- ▁PROPER
- ▁MUS
- ADE
- ▁RECEIVE
- ▁SKIN
- ▁GROWN
- ▁AFTERWARDS
- ANG
- ▁DA
- ▁DIFFICULT
- ▁PERSONS
- ▁ACCORDING
- ▁FARMER
- ▁SPEECH
- ▁IMPORTANT
- PAR
- ▁PERFECTLY
- ▁MIN
- ▁CONSIDERED
- ▁NU
- ▁DEPEND
- ▁MORROW
- ▁MOUNT
- ▁KISS
- ▁LYING
- ▁SUFFERING
- ▁EXIST
- ERY
- OOK
- BA
- ▁PAINT
- AH
- ▁CAT
- ▁PURE
- ▁WISE
- ▁PRIVATE
- ▁REBECCA
- ▁VESSEL
- ▁CLEAN
- ▁GENTLEMEN
- ▁IRON
- ▁STORE
- ▁FUR
- ▁INDIANS
- ▁LOSE
- ▁BATH
- ▁NEWS
- ▁CHI
- ▁FA
- ▁CHARGE
- ▁PRIEST
- ▁WRITTEN
- ▁FORGOTTEN
- ▁TRAIL
- ▁CLOTHES
- ▁ALIVE
- ▁SUB
- ▁REPLY
- ▁THROW
- ▁AB
- ▁SOLDIERS
- ▁ISN
- ▁COTTAGE
- ▁COURAGE
- ▁CONTAIN
- ▁BUILT
- ▁PAID
- ▁HUNT
- ▁CASTLE
- HOOK
- ▁MERE
- GGED
- ▁NI
- ▁UNC
- ▁PREPARED
- ▁BARE
- ▁SMILING
- ▁SPREAD
- ▁WEATHER
- ▁EDWARD
- ▁GERMAN
- ▁CURIOUS
- ▁SERVANT
- ▁DISCOVERED
- ▁TRAVEL
- EY
- ▁DANCE
- ▁PEN
- BR
- GEN
- ▁BREAKFAST
- ▁CHAMBER
- ▁WILLIAM
- ▁TERROR
- ▁SPITE
- ▁TIRED
- ▁LOCK
- ▁CONSIDERABLE
- TLE
- ▁MANAG
- ▁DRY
- ▁FINISHED
- ▁MILLION
- ▁FRE
- ▁MIS
- ▁PASSING
- ▁DRAW
- ▁BON
- ▁VA
- ▁VEN
- ▁MAKES
- ▁VAIN
- ▁BOTTOM
- ▁DRINK
- ▁FUTURE
- ▁RACHEL
- ▁SORROW
- ▁SIXTEEN
- ▁KNIT
- ▁PROUD
- WI
- ▁TOBY
- ▁NOISE
- ▁SLIGHT
- ▁PROCEED
- ▁FER
- ▁COVER
- ▁DRAWING
- ▁FAVOR
- ▁CATHERINE
- ▁NEWSPAPER
- ▁NOBODY
- ▁ROOF
- ▁WEALTH
- ▁PROVE
- ▁DRAWN
- TTED
- OKE
- ▁DETERMINED
- ▁DOG
- ▁REMEMBERED
- ▁OPENING
- ▁FLOWERS
- ▁GENTLE
- ▁KNIGHT
- ▁RECOVER
- ▁DESERT
- ▁MOTION
- ▁NICE
- ▁INTENTION
- ▁GROWING
- ▁CLOUD
- ▁MONTH
- HOOD
- ▁POT
- UDE
- ▁PLANT
- ▁MAD
- ▁ENJOY
- ▁FAT
- ▁COR
- ▁KNOWING
- ▁IDEAS
- IZED
- ▁CHEEK
- ▁EUROPE
- ▁KNOCK
- ▁ALARM
- ▁TONGUE
- ▁SPACE
- ▁PATSY
- ▁MISTRESS
- ▁HENRY
- ▁JERRY
- ▁LIKED
- ▁PLAYED
- ▁BOOKS
- ▁MODER
- ▁CORN
- ▁ELIZABETH
- ▁CLUB
- ▁BRAIN
- ▁TROOP
- ▁COOK
- ▁DU
- ▁FUN
- DAY
- ▁QUA
- ▁FLOW
- ▁DARE
- ▁DELIGHT
- ▁WOUND
- ▁DESCEND
- ▁EVERYWHERE
- ▁FRIGHTENED
- ▁GEORGE
- ▁PECULIAR
- ▁MACHINE
- ▁PATIENT
- ▁MEADOW
- ▁PEASANT
- ▁BURST
- ▁ORDINAR
- ▁SONG
- ▁BRAVE
- ▁EXISTENCE
- ▁LUCY
- ▁J
- ▁CAREFULLY
- ▁PRESENTLY
- ▁GEN
- ▁COW
- LLY
- ▁PROMISED
- UOUS
- ▁LIFTED
- ▁MEANING
- ALL
- ▁FAIL
- NER
- ▁REGULAR
- ▁VIRTUE
- ▁STUDY
- ▁PROTECT
- ▁FOND
- ▁FANCY
- ▁STOCK
- ▁KEY
- ▁JUSTICE
- ▁PACK
- LET
- ▁AFFAIRS
- ▁DIFFICULTY
- ▁WORE
- ▁COST
- ▁HEAT
- ▁SHOULDER
- ▁OFFERED
- ▁MISTAKE
- ▁DOLLARS
- ▁LOOKS
- QUA
- ▁BREAST
- ▁PRINCIPLE
- ▁CHARLES
- ▁TEETH
- ▁OCCUPIED
- ▁DROP
- ▁PAPA
- ▁SHEEP
- ▁KNOWS
- ▁DECK
- ▁BORE
- ▁EXC
- ▁SURPRISED
- ▁STATION
- ▁PL
- ▁PR
- ▁OURSELVES
- ▁SYMPATHY
- ▁RUTH
- ▁EXCITED
- ▁CONTROL
- ▁ANGRY
- ▁IMAGINATION
- ▁WITNESS
- ▁HOLDING
- THER
- DA
- ▁TRADE
- ▁CREATURE
- ▁SISTERS
- ▁JOIN
- LAS
- ▁ALTOGETHER
- ▁CIVIL
- ▁EMPTY
- ▁LEAP
- ▁HURT
- ▁BOLD
- ▁TASK
- ▁POLICE
- ▁DRAGON
- ▁MAID
- ▁CLAIM
- ▁SHAME
- ▁PHYSICAL
- ▁CONC
- ▁SEIZED
- ▁OB
- ▁LIVES
- ▁HEIGHT
- ▁GI
- ▁PAL
- ▁CHARMING
- ▁FEELINGS
- ▁SERVANTS
- ▁DELIVER
- ▁FRUIT
- ▁SATISFIED
- ▁STRUGGLE
- ▁WROTE
- ▁CONCEAL
- ▁MOVING
- ▁FLASH
- ▁OPPOSITE
- ▁HURRY
- ▁ROUGH
- ▁PRICE
- ▁AWFUL
- ▁SAND
- ▁SLIPP
- ▁SHOWN
- ▁SPRA
- ▁AGREED
- ▁FIXED
- ▁PERCEIVED
- ▁UPPER
- ▁FINGER
- ▁FINGERS
- ▁EAGER
- LF
- ▁EARS
- LIGHT
- ▁IMAGINE
- ▁LIKELY
- ▁COAST
- ▁UNITED
- ▁VAN
- ▁EXPLAINED
- ▁TELLING
- ▁DANGEROUS
- ▁DICK
- ▁COOL
- ▁CAL
- ▁INSIST
- BI
- ▁SECURE
- ▁HILLS
- ▁SAN
- ▁CHEER
- ▁FILL
- ▁BUY
- ZA
- HI
- ▁CLOTH
- ▁POSSESSED
- ▁ADVANCE
- ▁METHOD
- ATIVE
- ▁GREATLY
- ▁SMOKE
- ▁HIGHER
- ▁COMPANIONS
- ▁ANIMALS
- ▁GALL
- ▁QUIETLY
- ▁TRAVELL
- ▁RESOLVED
- ▁FLEW
- ▁CARLYLE
- ▁MEMORY
- ▁RESIST
- ▁GRAHAM
- ▁LAUGHING
- ▁FAITH
- ▁BIRD
- CRI
- ▁LEAVES
- ▁AMERICA
- ▁DEMAND
- BOARD
- ▁AWAKE
- ▁CURIOSITY
- ▁LANGUAGE
- ▁VIOLENT
- ▁AWARE
- ▁DOUBLE
- ▁LOOSE
- LIKE
- ▁ADAM
- ▁RISING
- ▁HOTEL
- ▁BAND
- ▁ENGAGED
- ▁HEADS
- ▁LOG
- ▁FORMED
- ▁WINDOWS
- ▁PREFER
- RUS
- ▁THROWN
- ▁ARCH
- ▁PAUSE
- ▁SERVE
- KIN
- ▁FALLING
- ▁VO
- ▁WHISPERED
- ▁POWERFUL
- ▁ER
- ▁DEPART
- ▁CRUEL
- ▁EXAMPLE
- ▁SMOOTH
- ▁INTRODUC
- ▁RELIGION
- ▁SEVENTEEN
- ▁ABSENCE
- ▁PRINT
- ▁SHINING
- ▁ICE
- ▁POET
- ▁DREADFUL
- ▁REQUIRED
- ▁ORIGINAL
- ▁POINTED
- ▁INSIDE
- ▁BROTHERS
- ▁PRODUCED
- ▁SPOKEN
- ▁CREATURES
- ▁FLY
- ▁TOM
- ▁PURSU
- ▁SYSTEM
- ▁EXCELLENT
- ▁EXCITEMENT
- ▁MIDDLE
- ▁FALSE
- ▁REGRET
- ▁RAY
- ▁PHYSICIAN
- ▁COP
- ▁VALUE
- ▁TOUCHED
- ▁FLAT
- ▁OAK
- ▁SUM
- ▁LOSS
- ▁PAPERS
- ▁STEPP
- ▁REVER
- ▁SHADE
- SOME
- ▁LISTENED
- ▁N
- ▁DISCOVER
- ▁BITTER
- TERN
- ▁HOLE
- ▁ADVANCED
- ▁PICK
- ARTAGNAN
- ▁CORPORAL
- ▁ASLEEP
- ▁TEMPLE
- ▁INDICAT
- IUM
- ▁FARTHER
- ▁EXCUSE
- ▁FLU
- ▁NOSE
- ▁SIXTY
- ▁SUPPOSED
- ▁PROVED
- ▁RATE
- ▁SHOULDERS
- ▁AFFAIR
- ▁FIELDS
- ▁REMARKED
- AVE
- ▁WEEKS
- ▁ESTABLISH
- ▁PARIS
- ▁ADMIT
- ▁NEIGHBOR
- ▁ATTRACT
- ▁CUSTOM
- ▁DISTINGUISH
- ▁SURFACE
- ▁COUPLE
- ▁DEVIL
- ▁LIMIT
- ▁ROYAL
- ▁FOOL
- ▁RARE
- ▁PRIDE
- ▁PROFESSOR
- ▁SAKE
- ▁DALE
- ▁VAST
- ▁REFUSED
- ▁FAILED
- ▁BAG
- ▁ROB
- ▁WASH
- ▁FAIRY
- ▁FREQUENT
- ▁MARILLA
- ▁PROGRESS
- ▁RELIEF
- ▁DROVE
- ▁DOZEN
- ▁AHEAD
- ▁ADVENTURE
- ▁GRANT
- ▁PRIM
- ▁MENTAL
- ▁PAIR
- ▁IMPRESSION
- ▁WOUNDED
- ▁FULLY
- ▁DISAPPEARED
- ▁MILE
- ▁DRIVE
- ▁MUD
- ▁SIZE
- ▁ANIMAL
- ZE
- ▁GRE
- ▁REPRESENT
- ▁ACQUAINTANCE
- ▁INSTRUMENT
- ▁SPLENDID
- ▁UNKNOWN
- ▁CORONEL
- ▁EMPEROR
- ▁EARNEST
- ▁EXTEND
- ▁BRIEF
- ▁RENDER
- ▁PARENTS
- ▁GENTLY
- ▁CALLING
- ▁TRIBE
- ▁CHRISTIAN
- ▁INTERESTING
- ▁LAMP
- ▁JIMM
- ▁DIV
- ▁LOVER
- UCH
- ▁HID
- ▁NEEDED
- ▁ORDERED
- ▁MEAL
- ▁SLOW
- ▁DAM
- ▁CLOUDS
- ▁DAN
- ▁GAR
- ▁EXPLAIN
- ▁QUI
- ▁CLIMB
- ▁HURRIED
- ▁MURMUR
- ▁SWIFT
- ▁ARTHUR
- ▁JEFF
- ▁KINGDOM
- ▁MESSAGE
- ▁PROTEST
- ▁ORGAN
- ▁RISK
- ▁FORGIVE
- ▁OCCURRED
- ▁PEARL
- ▁ODD
- ▁INFORMATION
- ▁BUSY
- ▁TRI
- ▁LACK
- ▁BAY
- ▁FLEET
- ▁CROWN
- ▁WAITED
- ▁BIRDS
- ▁PITY
- ▁SUCCEEDED
- ▁INFORMED
- ▁WISHES
- ▁DIRECTLY
- ▁CABIN
- ▁AUGUST
- ▁COUNTENANCE
- ▁HORROR
- ▁PHILIP
- ▁POPULAR
- ▁PREVIOUS
- ▁CONTRARY
- ▁ARTICLE
- ▁DIFFERENCE
- ▁HIDDEN
- ▁HUGE
- ▁AUTHORITY
- ▁POUND
- ▁JUMP
- ▁SPI
- ▁SHAKE
- ▁EVENTS
- ▁FRO
- ▁LEAN
- ▁CRO
- ▁TRIM
- ▁SHARE
- ▁FISHER
- ▁SETTLED
- ▁QUESTIONS
- ▁SI
- ▁VAL
- ▁APPROACHED
- ▁SUGGESTED
- ▁CONTINU
- ▁PERFORM
- ▁ACKNOWLEDG
- ▁CLIFF
- ▁COLONEL
- ▁GHOST
- ▁MAJESTY
- ▁EMOTION
- ▁SUPPER
- ▁DISTANT
- ▁INTERESTED
- ▁JACK
- ▁HUM
- ▁TRAMP
- ▁BRI
- ▁POUR
- ▁SHIPS
- ▁CHAIN
- ▁DY
- ▁RANK
- ▁MATTERS
- ▁LOVELY
- AW
- ▁PAT
- ▁WORKING
- ▁CONSEIL
- ▁EVIDENCE
- ▁MERCHANT
- ▁SOLEMN
- ▁CONSTANT
- ▁MINISTER
- ▁OFFICIAL
- ▁SENTIMENT
- ▁CENTURY
- ▁DELAY
- ▁JAMES
- ▁MATCH
- ▁FOREIGN
- ▁AROSE
- ▁BEAST
- ▁BAB
- ▁WIT
- ▁REMARKABLE
- ▁THOR
- ▁COMPAR
- ▁MAL
- ▁NEARER
- ▁FOURTH
- ▁GREY
- ▁MENTION
- ▁RUBB
- ▁CHARM
- ▁BARON
- ▁DESIRED
- SCAR
- ▁HOPED
- ▁TEACHER
- ▁MON
- ITCH
- BEL
- ▁PARTS
- ▁EIGHTY
- LAC
- GGING
- ▁REFLECT
- ▁COLLECT
- ▁BULL
- ▁CONSCIOUS
- ▁MOMENTS
- ▁DISTURB
- ▁COLLEGE
- ▁EGGS
- ▁STUPID
- ▁YESTERDAY
- ▁EXAMINE
- ▁FAULT
- ▁DEPTH
- ▁ROOT
- ▁MOUSE
- ▁SOUGHT
- ▁TURTLE
- ▁NATIVE
- ▁CRACK
- ▁SOLD
- ▁INVIT
- ▁PICKED
- ▁CEASED
- ▁HEARING
- ▁MIDS
- ▁PLAYING
- ▁STAGE
- ▁UNTO
- ▁GAIN
- ▁MIST
- ▁ORDERS
- ▁KNEES
- ▁TALE
- ▁DISTINCT
- ▁BENT
- ▁DESPAIR
- ▁TRIUMPH
- ▁SQUARE
- ▁THROAT
- ▁BOUGHT
- ▁PERMIT
- ▁SPEND
- ▁TRIP
- ▁THREATEN
- ▁ROME
- INESS
- ▁EXPOS
- GON
- ▁WRITING
- ▁INCREASED
- ▁PORTION
- ▁TENT
- IUS
- ▁YO
- ▁INTENDED
- ▁NAMED
- RATION
- ▁NOTIC
- ▁PIPE
- ▁WILLING
- ▁INSTANTLY
- ▁SERVED
- ▁BAL
- ▁POSSESS
- ▁CRE
- ▁ADMIRATION
- ▁LIBERTY
- ▁OPPORTUNITY
- ▁SELDOM
- ▁BIRTH
- ▁GLOW
- ▁INCLUD
- ▁REQUEST
- ▁TYPE
- ▁SLEPT
- ▁CRIME
- ▁MOTIVE
- ▁ELSIE
- ▁BEGUN
- ▁CONSENT
- ▁ADMITTED
- ▁AVOID
- ▁ADDRESS
- ▁HATE
- ▁DEMANDED
- ▁APPARENTLY
- ▁SUGGESTION
- ▁CONSIDERATION
- ▁BLESS
- ▁PROCEEDED
- NCY
- ▁PRISON
- ▁CONT
- ▁SHOUTED
- ▁FACES
- ▁SPIRITS
- ▁DEVELOP
- ▁ACCIDENT
- ▁ADVICE
- ▁INNOCENT
- ▁INSTINCT
- ▁UNCONSCIOUS
- ▁MYSTERIOUS
- ▁PRETEND
- ▁PEEP
- ▁ANYONE
- ▁DUKE
- ▁PLUM
- VILLE
- ▁SEVERE
- ▁ALAS
- ▁DELIGHTED
- ▁ISSUE
- ▁ASKING
- ▁CROW
- ▁ACCEPTED
- ▁RIDE
- ▁DOORS
- ▁TAR
- ▁PREPAR
- ▁SUGGEST
- WOOD
- ▁CITIZEN
- ▁ENTRANCE
- ▁LINCOLN
- ▁POLITICAL
- ▁PRACTICAL
- ▁STIFF
- ▁WIDOW
- ▁CAPITAL
- ▁CLEVER
- ▁MAMMA
- ▁CREDIT
- ▁OBEY
- ▁STRING
- ▁DAILY
- ▁ARGUMENT
- ▁HEAP
- ▁APARTMENT
- ▁FLIGHT
- ▁ELDER
- ▁PUR
- ▁PAGE
- ▁DUST
- ▁GAZE
- ▁NATIONAL
- ▁BABY
- DDING
- ISTS
- ▁TEACH
- ▁STREETS
- CAL
- ▁GE
- AFF
- ▁GOES
- ▁POSSIBL
- UNG
- ▁LINES
- GUE
- ▁VOTE
- ▁HUNTING
- ▁QUO
- ▁RESEMBL
- ▁BASKET
- ▁CIRCLE
- ▁CONSEQUENCE
- ▁KITCHEN
- ▁TREASURE
- ▁NEVERTHELESS
- ▁FANCI
- ▁ASSEMBL
- ▁GRIEF
- ▁VEIL
- ▁SEASON
- ▁INVENT
- ▁VIRGINIA
- ▁HUT
- ▁GUEST
- ▁ROAR
- ▁BEHOLD
- ▁VICTORY
- ▁CAPABLE
- ▁DULL
- ▁SHOE
- ▁FLOAT
- ▁MERRY
- ▁IMMEDIATE
- ETH
- ▁ELEANOR
- ▁EXPLANATION
- ▁PARLIAMENT
- ▁PRINCIPAL
- ▁PROPORTION
- ▁RESOLUTION
- ▁UNUSUAL
- ▁BLUFF
- ▁NINETEEN
- ▁SENSATION
- ▁VISIBLE
- ▁INCOME
- ▁FATE
- ▁SUPER
- ▁LAUGHTER
- ▁EASE
- ▁LOAD
- ▁JEW
- ▁ZE
- ▁FEVER
- ▁WEDDING
- ▁JOINED
- ▁TRACE
- ▁LEADER
- ▁CLEARLY
- ▁FLOWER
- ▁TERMS
- ▁EMPLOYED
- OCK
- ▁PARTICULARLY
- ▁MEMBERS
- ▁CONFESS
- ▁GRO
- ▁ADDRESSED
- ▁CHRIST
- ▁ACCOMPANI
- ▁AFFORD
- ▁AMOUNT
- ▁BRILLIANT
- ▁COMMUNICAT
- ▁FIERCE
- ▁RECORD
- ▁SACRIFICE
- ▁TEMPT
- ▁CORDIAL
- ▁COLOUR
- ▁PROOF
- ▁ESTATE
- ▁PARDON
- ▁ADVIS
- ▁ATTITUDE
- ▁IMPORTANCE
- ▁BOOT
- ▁SHOCK
- ▁FIR
- ▁PLENT
- ▁HIT
- ▁MEMBER
- ▁SUR
- ▁SEATED
- ▁MAG
- AVING
- ▁FAVOUR
- ▁REMARK
- ▁DIM
- ▁FAITHFUL
- ▁SAVED
- CHI
- ▁SIN
- THE
- ▁CONFIDENCE
- ▁EXTRAORDINARY
- ▁FORTUNATE
- ▁MISFORTUNE
- ▁PATIENCE
- ▁RELIGIOUS
- ▁SATISFACTION
- ▁POSITIVE
- ▁SIMILAR
- ▁EXCHANG
- ▁RETREAT
- ▁FLESH
- ▁ADMIRE
- ▁SPIRITUAL
- ▁DAWN
- ▁BURIED
- ▁URGE
- ▁SUNDAY
- ▁FOX
- ▁EMMA
- ▁NURSE
- ▁SNAPP
- ▁PARK
- ▁OBTAIN
- ▁RECOGNIZED
- ▁SPEED
- ▁MAGIC
- ▁LAWS
- ▁REMOVED
- ▁HAM
- ▁PRESERV
- ▁AID
- HOUSE
- ▁MENTIONED
- ▁CONSCIENCE
- ▁CONTEMPT
- ▁DETAIL
- ▁IMMENSE
- ▁NERVOUS
- ▁PRISCILLA
- ▁UNFORTUNATE
- ▁UNHAPPY
- ▁COMPLAIN
- ▁TWICE
- ▁WHISTL
- ▁SNAKE
- ▁WASHINGTON
- ▁PIRATE
- ▁WICKED
- ▁BODIES
- ▁DESIGN
- ▁JASON
- ▁VAGUE
- ▁CONSIST
- ▁GIFT
- ▁ANGEL
- ▁RODE
- ▁FOLD
- ▁BRIDE
- ▁ANGER
- ▁BASE
- ITUDE
- ▁CONCLUDED
- ▁ALTER
- ▁FRI
- ▁PANT
- ▁BID
- ▁HIGHEST
- ▁SAILOR
- MPLE
- ▁OBSERV
- ▁CHEERFUL
- IFICATION
- RID
- ▁DESCRIBED
- ▁BIN
- ▁JEWEL
- ▁ARTIST
- ▁PEER
- ▁NORA
- ▁SKI
- ▁DIAMOND
- ▁ENCOURAGE
- ▁PRIVILEGE
- ▁PROJECT
- ▁ANYBODY
- ▁ENCOUNTER
- ▁HOLLOW
- ▁YIELD
- ▁BOBBY
- ▁SAVAGE
- ▁SOMEBODY
- ▁OTHERWISE
- ▁PRAISE
- ▁PROBLEM
- ▁DISTRESS
- ▁UGLY
- ▁WARRIOR
- ▁MOURN
- ▁RELIEV
- ▁DESK
- ▁FOOLISH
- ▁STARTLED
- ▁SKILL
- SHONE
- ▁LONE
- ▁OBSERVATION
- ▁DENI
- ▁NEST
- ▁SOLDIER
- ▁RELATION
- ▁TRULY
- ▁VISITOR
- ▁OFFICERS
- ERSON
- ▁YA
- ▁EVIDENT
- ▁DREAMS
- ▁KEEPING
- ▁PLAINLY
- ▁DRUNK
- ▁EMBRAC
- ▁INTELLIGENCE
- ▁LIEUTENANT
- ▁PERSUADE
- ▁SURROUNDING
- ▁UNIVERSAL
- ▁GLEAM
- ▁SUPERIOR
- ▁WHEEL
- ▁JEALOUS
- ▁QUEER
- ▁PIERRE
- ▁MILK
- ▁RAIL
- ▁FLUSH
- ▁STAIRS
- ▁JESUS
- ▁HORN
- ▁REGION
- ▁SAFETY
- ▁KA
- ▁GUIDE
- ▁CAKE
- ▁CUP
- ▁INQUIRED
- ▁DEFI
- ▁LESSON
- ▁WRETCHED
- ▁PACE
- ▁TEST
- ▁READING
- ▁ENTIRE
- ▁NET
- ▁DOGS
- ▁COMMANDER
- ▁PRODUCE
- ▁GAINED
- ▁ARRIVAL
- ▁FAMILIAR
- ▁MEANWHILE
- ▁SUSPICION
- ▁CHOICE
- ▁IMPULSE
- ▁THRUST
- ▁PROCESS
- ▁SUMMON
- ▁SHEPHERD
- ▁HASTILY
- ▁GRASP
- ▁COUNTESS
- ▁STYLE
- ▁DWELL
- ▁MERIT
- ▁PITCH
- ▁HUNGRY
- ▁SPORT
- ▁LOUISE
- ▁STERN
- ▁PROVIDED
- ▁ASSUME
- ▁EARLIE
- ▁RAGE
- ▁U
- ▁RAPIDLY
- PORT
- ▁SUCCESSFUL
- ▁FLED
- ▁AGREE
- ▁CONDITIONS
- ▁RELATIONS
- ▁DREAD
- ▁NATURALLY
- ▁EARL
- ▁GAY
- ▁HYPNOTI
- ▁PUTT
- ▁GAZ
- ▁JIM
- ▁PAUS
- ▁PROPOS
- ▁ADMINISTRATION
- ▁ELEVEN
- ▁HOSPITAL
- ▁MAGISTRATE
- ▁STRIKE
- ▁DIGNITY
- ▁GLORY
- ▁BOTTLE
- ▁THRONE
- ▁RECKON
- ▁COSETTE
- ▁MOREOVER
- ▁APPLI
- ▁HIND
- ▁PRODUCT
- ▁POOL
- ▁TRIAL
- HAN
- ▁ERIC
- ▁CUB
- ▁PIECES
- ▁EXCEPTION
- ▁ENJOYED
- ▁DARED
- ▁TRU
- ▁CLOSELY
- ▁RAPID
- ▁AFFECTED
- ▁REQUIRE
- ▁SOFTLY
- ▁BROW
- UCK
- ▁MARKED
- ▁SEVENT
- ▁ELECT
- ▁FORGOT
- ▁CORRECT
- ▁FRANCS
- ▁MARGUERITE
- ▁SCIENCE
- ▁UNEXPECTED
- ▁FOUGHT
- ▁MILITA
- ▁THUNDER
- ▁VOYAGE
- ▁GANEM
- ▁FREEDOM
- ▁NODDED
- ▁CAPTURE
- ▁MORTAL
- ▁OWNER
- ▁POLITE
- ▁VISION
- ▁EDUCATION
- ▁GOVERNOR
- ▁RAV
- ▁REWARD
- ▁HASTE
- ▁REPEAT
- ▁DETERMIN
- ▁PITI
- ▁KNEE
- LINE
- ▁DEVOTED
- ▁INTERRUPTED
- ▁FOLKS
- ▁EXTREME
- ▁APPROACH
- ▁CONTINUE
- ▁BEARING
- ▁CHAP
- ▁ACQUAINTED
- ▁GLIMPSE
- ▁GRADUALLY
- ▁SUNSHINE
- ▁PRACTICE
- ▁SUPPLI
- ▁DAVID
- ▁DRIFT
- ▁SHOWING
- ▁LEVEL
- ▁PROMPT
- ▁QUARREL
- ▁REPRESENTATIVE
- ▁PLUNG
- ▁GIANT
- FALL
- ▁STOUT
- CHA
- WEPT
- ▁GLANC
- ▁SALT
- ▁CHOSEN
- ▁BUCK
- ▁REALIZED
- ▁REALITY
- ▁TUR
- ▁DRIVEN
- ▁CARD
- ▁PRAYER
- ▁TERM
- AID
- ▁HOLY
- ▁ENDURE
- ▁RANGE
- ▁HANG
- ▁SAM
- LAN
- ▁CAVE
- INA
- ▁GRI
- ▁SIGH
- ▁NEIGHBOUR
- ▁COUNCIL
- ▁EXERCISE
- ▁NAUTILUS
- ▁SOMEWHERE
- ▁SYLVIA
- ▁THOROUGH
- ▁VICTIM
- ▁BRIDGE
- ▁COMPELLED
- ▁INCLINED
- ▁OVERCOME
- ▁RESERVE
- ▁ARREST
- ▁PRECIOUS
- ▁DUTCH
- ▁OCEAN
- ▁ACQUIR
- ▁RECALL
- ▁DESTIN
- ▁ATTACH
- ▁SLIM
- ▁WEEP
- ▁CONSCIOUSNESS
- ▁TIGHT
- ▁WAKE
- ▁COMFORTABLE
- ▁ACTIVE
- ▁WINGS
- ▁GRIN
- ▁AFFECT
- ▁WHIT
- ▁IDEAL
- ▁EASTER
- ▁APPROACHING
- ▁CREATED
- ▁PLANS
- ▁INCREASE
- ▁FLYING
- ▁SHOUT
- OES
- MISSION
- ▁ARMED
- ABILITY
- ▁BLUSH
- ▁CONNECTION
- ▁MATTHEW
- ▁MEDICINE
- ▁REMIND
- ▁EXHIBIT
- ▁BLOCK
- ▁DESERVE
- ▁LISTENING
- ▁TITLE
- ▁FLOUR
- ▁FLAME
- ▁AGENT
- ▁USEFUL
- ▁BRIG
- ▁BOIL
- ▁ASSURED
- ▁REFLECTION
- ▁PINE
- ▁WAG
- ▁YOUNGER
- ▁BEARD
- ▁KINDNESS
- CTUALLY
- ▁ACTUAL
- ▁WEIGHT
- ▁LILY
- ▁IMPRESS
- ▁DESCRIBE
- ▁BEHELD
- ▁COMMUNITY
- ▁DESPERATE
- ▁DISPLAY
- ▁ENEMIES
- ▁MELANCHOLY
- ▁MIRROR
- ▁RECOMMEND
- ▁SPANISH
- ▁BLAME
- ▁VOLUME
- ▁SHOOT
- ▁COMBIN
- ▁SHAKING
- ▁SOUTHERN
- ▁MYSTERY
- ▁EVERYONE
- ▁COMMISSION
- ▁COMPOSED
- ▁UDO
- ▁IMAGE
- ▁DECEIV
- ▁FAILURE
- ▁PATTY
- ▁ALICE
- ▁FRAME
- ▁MODEST
- ▁MAGNIFICENT
- ▁BRANCHES
- ▁REIGN
- ▁RAG
- ▁PARISH
- ▁KATE
- ▁AMID
- ▁SLEEPING
- ▁ANNOUNCED
- ▁EAGERLY
- ▁WIRE
- ▁LAP
- ▁ARAB
- ▁EATING
- ▁RUM
- ▁CAREFUL
- ▁DISCUSS
- WORTH
- ▁DISTRICT
- ▁FOREHEAD
- ▁FRANCIS
- ▁INCIDENT
- ▁APPEAL
- ▁EMBARRASS
- ▁MAINTAIN
- ▁PRONOUNC
- ▁FURNISH
- ▁STRAIN
- ▁ELEMENT
- ▁SILK
- ▁FEAST
- ▁RECENT
- ▁DANCING
- ▁LODGE
- ▁ASHAMED
- ▁TRICK
- ▁BOBO
- ▁STUFF
- ▁ET
- ▁ASSERT
- ▁SANK
- ▁TREATMENT
- ECI
- ▁SWIM
- ▁BECOMING
- ▁SINGING
- ▁PLATE
- ▁SCATTERED
- ▁EXTREMELY
- ▁GRIM
- ▁SANG
- ▁FIGHTING
- ▁FACTOR
- ▁PAINFUL
- ▁HIDE
- ▁FUNN
- ▁AFTERWARD
- ▁FROG
- ▁VENTURE
- ▁DISAPPOINT
- ▁COMRADE
- ▁MONSIEUR
- ▁OBVIOUS
- ▁PASSENGER
- ▁PROFOUND
- ▁PUBLISH
- ▁ACCUSTOM
- ▁BLOOM
- ▁SMITH
- ▁RELATIVE
- ▁ACCUSE
- ▁MANIFEST
- ▁SOLID
- ▁MONSTER
- ▁MARIUS
- ▁CANDLE
- ▁PROCUR
- ▁INTERFERE
- ▁HOUSEHOLD
- ▁DEVELOPMENT
- ▁AGREEABLE
- ▁HALT
- ▁NECESSITY
- FOLD
- ▁CITIES
- ▁REGI
- ▁GLOOMY
- BBL
- ▁SEPARATED
- ▁CHEST
- ▁STRIP
- ▁SPAR
- ▁DUN
- ▁SETTLE
- ▁STARED
- ▁HANGING
- ▁FEATURES
- ▁PILE
- ▁ORIGIN
- ARIES
- ▁LION
- ▁ALI
- ▁ASTONISHMENT
- ▁COMPLIMENT
- ▁DELICATE
- ▁COUNSEL
- ▁FIFTH
- ▁SUPPRESS
- ▁BURDEN
- ▁COMPLEX
- ▁ADDITION
- ▁CRUSH
- ▁TWIST
- ▁PIANO
- ▁BRUSH
- ▁CHECK
- ▁ANNIE
- ▁SHELTER
- ▁IMPROV
- ▁WESTERN
- ▁LOCAL
- ▁APPLE
- ▁GREET
- ▁MASK
- ▁RUSSIAN
- ▁TOWER
- ▁CREW
- ▁TIP
- ▁WANDERING
- ▁READER
- ▁WANDERED
- ▁DESTROY
- ▁OBSERVE
- MORE
- ▁ESCAPED
- ▁PET
- ▁BUILD
- ▁REAR
- ▁DESTROYED
- HIN
- ▁OWE
- ▁RANG
- ▁TEAR
- ▁NED
- ▁OFFICER
- ▁TRAP
- ▁OCCUR
- ▁APPOINTED
- ▁ATMOSPHERE
- ▁CHOOSE
- ▁CONCLUSION
- ▁CULTIVAT
- ▁DESCRIPTION
- ▁ENORMOUS
- ▁EXHAUSTED
- ▁LANDSCAPE
- ▁NATASHA
- ▁PROSPECT
- ▁REFRESH
- ▁SPECIES
- ▁SURROUNDED
- ▁WEAPON
- ▁BLANK
- ▁DEFEND
- ▁EDITH
- ▁HORRIBL
- ▁BETRAY
- ▁FERKO
- ▁LABOUR
- ▁NEGRO
- ▁RESUMED
- ▁LEAF
- ▁MUSKET
- ▁INTENSE
- ▁MERCY
- ▁ADOPT
- ▁SCORE
- ▁DASH
- ▁LAWYER
- ▁SLOPE
- ▁CHUCK
- ▁ASSISTANCE
- ▁BROOK
- ▁BREAKING
- ▁ASSIST
- ▁GROAN
- ▁HELEN
- ▁BEHAV
- ▁MAIDEN
- ▁CRIS
- ▁SHOUTING
- ▁NAY
- ▁PIG
- ▁ACCORDINGLY
- ETTE
- ▁DESIR
- ▁RUB
- ▁GRU
- ▁PIT
- ▁HEAVI
- ▁OBTAINED
- ▁SPARE
- ▁BRANCH
- ▁COUNTER
- ▁APART
- ▁AMBITION
- ▁ASTONISHED
- ▁CORRESPOND
- ▁DRIVING
- ▁ENERGY
- ▁HISTORIAN
- ▁REVOLUTION
- ▁SWEEP
- ▁TREMBLING
- ▁CRAFT
- ▁FAMILIES
- ▁LITERATURE
- SBURG
- ▁FEMALE
- ▁TILNEY
- ▁GENEROUS
- ▁SUBMIT
- ▁INTELLECTUAL
- ▁ORCHARD
- ▁STORIES
- ▁DIANA
- ▁VEIN
- ▁TRIFL
- ▁TWIN
- ▁WORSHIP
- ▁MARBLE
- ▁GALLANT
- ▁SENSIBLE
- ▁NEAT
- ▁BROWNIE
- ▁JUNE
- ▁SHAW
- ▁WORST
- ▁USELESS
- ▁FISHING
- ▁CRYING
- ▁MAYBE
- ▁VARI
- ▁PRESERVE
- ▁VOL
- ▁EMPLOY
- ▁INTERRUPT
- ▁SLIGHTLY
- ▁ACCOMPLISHED
- NEY
- ▁STEAM
- ▁BALANC
- ▁LEANING
- ▁SIGHED
- ▁REFUSE
- ▁IMAGINED
- ▁DATE
- GROUND
- ▁ENTERTAIN
- ▁PERCEIVE
- ▁ABROAD
- ▁CHEESE
- ▁DESTRUCTION
- ▁ESSENTIAL
- ▁EXPEDITION
- ▁GRANDFATHER
- ▁INFINITE
- ▁LIBRARY
- ▁MULTITUDE
- ▁NEGLECT
- ▁SWALLOW
- ▁VILLEFORT
- ▁BELOVED
- ▁COMMITTEE
- ▁CONFIDENT
- ▁PURPLE
- ▁PURCHAS
- ▁SCRAP
- ▁SPOIL
- ▁LIKEWISE
- ▁EXTRA
- ▁STRAW
- ▁SALUT
- ▁SOURCE
- ▁HASTENED
- ▁RESENT
- ▁FLOCK
- ▁LOFT
- ▁FLO
- ▁CLO
- ▁CONVINCED
- ▁GOODNESS
- ▁HYPNOTIZ
- ▁SETTING
- ▁HAIL
- ▁PHI
- ▁GROVE
- ▁DISCOVERY
- ▁DAMP
- ▁WHISPER
- ▁LIFT
- ▁HOP
- ▁SUSPECTED
- ▁SCR
- OLI
- ▁FAC
- ▁BUSH
- ▁FOREVER
- ▁BARRICADE
- ▁CONSTITUTION
- ▁ENDEAVOR
- ▁ENTHUSIASM
- ▁EXECUTION
- ▁HYACINTH
- ▁PERCEVAL
- ▁PSYCHE
- ▁REPROACH
- ▁THIRTEEN
- ▁ABSORB
- ▁GRATITUDE
- ▁MERCER
- ▁REPUTATION
- ▁SCREAM
- ▁PUPIL
- ▁RETIRED
- ▁STEEP
- ▁SUMMIT
- ▁MISERABLE
- ▁STRICT
- ▁MINGLED
- ▁DEFEAT
- ▁REVEAL
- ▁LOVING
- ▁GOOSE
- ▁ECHO
- ▁AWAIT
- ▁MOOD
- ▁CRAWLEY
- ▁CELL
- ▁ENGAGEMENT
- ▁PRECED
- ▁SOMEONE
- ▁ARRANGEMENT
- ▁PICKET
- ▁GASP
- ▁HUMOR
- ▁INVITATION
- ▁JOB
- WITHSTAND
- ▁LAMENT
- ▁CLASSES
- ▁HUNGER
- ▁DISPOSED
- ▁STEAMER
- ▁FEARFUL
- ▁GER
- ▁FINAL
- ▁FLAG
- ▁JULY
- ▁DIG
- WORK
- ▁OPPOS
- ▁ANXIETY
- ▁AUDIENCE
- ▁BACHELOR
- ▁COLUMN
- ▁HANDKERCHIEF
- ▁IMPATIENT
- ▁JUDGMENT
- ▁KNIFE
- ▁SOVEREIGN
- ▁STRIKING
- ▁THOMPSON
- ▁EMPIRE
- ▁FULFIL
- ▁CONSULT
- ▁JENNY
- ▁THENARDIER
- ▁POYSER
- ▁FOURTEEN
- ▁JAPANESE
- ▁INDULG
- ▁MARTIAN
- ▁COUNTRIES
- ▁FETCH
- ▁CRITIC
- ▁ROBBER
- ▁CROOK
- ▁DEPARTURE
- ▁MABEL
- ▁PREACH
- ESCENT
- ▁WHIP
- ▁NAIL
- ▁DELIGHTFUL
- ▁DISCUSSION
- ▁SENTENCE
- ▁LANE
- ▁ENGINEER
- ▁ARRANGED
- MMY
- ▁LEST
- ▁RENT
- MMED
- ▁LIST
- ▁ROBE
- ▁MISSION
- ▁GRACEFUL
- ▁LIGHTN
- STONE
- COURT
- ▁CONCEPTION
- ▁CONTRACT
- ▁DROWN
- ▁EXPERIMENT
- ▁HITHERTO
- ▁PLAGUE
- ▁PORTHOS
- ▁SHRIEK
- ▁DETECT
- ▁ACCENT
- ▁ERECT
- ▁SAZEN
- ▁PROFIT
- ▁VIVID
- ▁SQUIRE
- ▁OPERATION
- ▁SMELL
- ▁SIMON
- ▁EXTENT
- ▁KEEN
- ▁EMERG
- ▁REVIV
- ▁REGIMENT
- ▁DISAPPOINTMENT
- ▁STOLE
- ▁DIVINE
- ▁GUILTY
- ▁COWARD
- ▁EXPECTATION
- ▁SIGNOR
- ▁MODE
- ▁CENTRE
- ▁FIL
- HOW
- ▁WEARI
- ▁TOTAL
- ▁VICTOR
- ▁GOVERN
- ▁RAISE
- ▁ABANDON
- ▁ABSURD
- ▁ASPECT
- ▁CRIMINAL
- ▁DEFINITE
- ▁DELIBERAT
- ▁FEATHER
- ▁FLORINA
- ▁MIDNIGHT
- ▁RICHMOND
- ▁SATISFY
- ▁SINGULAR
- ▁STEADILY
- ▁SUPREME
- ▁TIMBER
- ▁PSYCHOLOG
- ▁GESTURE
- ▁VALUABLE
- ▁INTERVAL
- ▁CONFUSION
- ▁FLUTTER
- ▁SACRED
- ▁DISEASE
- ▁UNDERTAKE
- ▁PENETRAT
- ▁MARVEL
- ▁NORTHERN
- ▁GRIEV
- ▁GENIUS
- ▁SADDLE
- ▁NOVEL
- ▁MISERY
- ▁CONVICTION
- ▁SINK
- ▁WAGON
- ▁ARISE
- ▁COMMENT
- ▁BARN
- UPON
- ▁FENCE
- ▁ASSOCIATION
- ▁BONES
- ▁IDLE
- ▁DOUBTFUL
- ▁PREPARATION
- IZZ
- ▁RAIS
- ▁BITTERLY
- ▁JOE
- ▁RELI
- ADI
- ▁METAL
- ▁EXACT
- ▁GLOOM
- FIELD
- ▁DANGLARS
- ▁DISGRACE
- ▁EXAMINATION
- ▁FASCINAT
- ▁GLITTER
- ▁INCREASING
- ▁MESSENGER
- ▁PATRIOT
- ▁PLATFORM
- ▁PROVISION
- ▁QUALITIES
- ▁SELECT
- ▁STEADY
- ▁POVERTY
- ▁POWDER
- ▁PROPHET
- ▁HOLLAND
- ▁TRUNK
- ▁VARIETY
- ▁PLANCHET
- ▁CONQUER
- ▁CONCEIVE
- ▁COMBAT
- ▁STOOP
- ▁SHIRT
- ▁GENERATION
- ▁COMMITTED
- ▁INSULT
- ▁CONFUSED
- ▁RADIAN
- ▁DEBT
- ▁IMITAT
- ▁DART
- ▁CAROLINE
- ▁SWAM
- ▁WREN
- ▁CHILDHOOD
- ▁BRAND
- ▁JOKE
- ▁FRIENDSHIP
- ▁DIRT
- ▁JOLL
- ▁BUSHES
- ▁MINK
- ▁ROUT
- ▁EQUALITY
- ▁HESITATED
- ▁BARK
- ▁ANTI
- ▁STATEMENT
- PHER
- ▁SUNK
- ▁DAT
- ▁BACKWARD
- ▁SUSPECT
- ▁OBJECTION
- ▁RAP
- ▁CHIN
- ▁MATE
- ▁REDUC
- ▁GREGG
- ▁ACCOMPANY
- ▁ANYWHERE
- ▁BENEFIT
- ▁CLERK
- ▁EXPENSE
- ▁FETNAH
- ▁INTERPRET
- ▁LUKASHKA
- ▁NUMEROUS
- ▁SURGEON
- ▁PUZZL
- ▁RESCUE
- ▁GRATEFUL
- ▁APPROV
- ▁RIVAL
- ▁NIECE
- ▁FLOOD
- ▁VANISHED
- ▁ERROR
- ▁BLAZ
- ▁TUMBL
- ▁WENDY
- ▁PERSIST
- ▁CONSOL
- ▁SOAP
- ▁HUMOUR
- ▁FITTED
- ▁HOUSEKEEPER
- ▁ENABL
- ▁OCCASIONALLY
- ▁HATRED
- ▁SWELL
- ▁WORRY
- ▁RUST
- ▁PURSUIT
- ▁INTIMATE
- ▁SEAL
- ▁COLLECTION
- ▁TREMBLED
- ▁DENY
- ▁HUMANITY
- ▁FATAL
- ▁COCK
- ▁DRIVER
- ▁HOPELESS
- ▁MISTAKEN
- ▁LUC
- ▁ACCOMPLISH
- ▁COAL
- ▁ACCORD
- ▁PURSE
- ▁SEPARATE
- ▁ARRIVE
- ▁SMOK
- ▁MADAM
- ▁ASSOCIAT
- ▁INSTRUCT
- ▁CELEBR
- ▁CHANNEL
- ▁CIVILIZATION
- ▁DOCTRINE
- ▁ENDEAVOUR
- ▁GLACIER
- ▁INTELLIGENT
- ▁INVOLVE
- ▁LEATHER
- ▁MUTTERED
- ▁OLENIN
- ▁PENCROFT
- ▁PERPLEX
- ▁SPECTATOR
- ▁UNIVERSITY
- ▁ATTAIN
- ▁INEVITABL
- ▁YONDER
- ▁ENCHANT
- ▁REPAIR
- ▁CURRENT
- ▁ASCEND
- ▁CREEK
- ▁SPARKL
- ▁RUE
- ▁BEAVER
- ▁INFANT
- ▁CONTINUALLY
- ▁CLASP
- ▁IRISH
- ▁ROLLIN
- ▁PUNISHMENT
- ▁LUNCH
- ▁AGONY
- ▁RUDE
- ▁DRAGG
- ▁INQUIRI
- ▁SEX
- ▁TERRIFI
- ▁ROBIN
- ▁PROFESSIONAL
- ▁SPUR
- ▁GRAIN
- ▁VINE
- ▁PENN
- ▁ROC
- ▁CHASE
- ▁INFORM
- ▁WRITER
- ▁AVO
- ▁TAP
- ▁CREAT
- ▁WHIL
- ▁BARR
- ▁ASSURE
- ▁CIRCUMSTANCE
- ▁OIL
- ▁ROUSE
- ▁COLUMB
- ▁CUNNING
- ▁DOMESTIC
- ▁GLORIOUS
- ▁INDIGNATION
- ▁PRECISELY
- ▁PRUDENCE
- ▁RAILROAD
- ▁SATURDAY
- ▁UTMOST
- ▁VIOLENCE
- ▁WHIRL
- ▁CALCULAT
- ▁OVERWHELM
- ▁PERPETUAL
- ▁QUARLES
- ▁SLENDER
- ▁TELEGRAPH
- ▁ALOUD
- ▁OPPRESS
- ▁CROPPER
- ▁CANADIAN
- ▁HERBERT
- ▁TIMID
- ▁SUPPLY
- ▁STROLL
- ▁CREEP
- ▁OATH
- ▁DUSK
- ▁EXCESS
- ▁HUMBLE
- ▁FURIOUS
- ▁RIDGE
- ▁BULLET
- ▁PONY
- ▁STATU
- ▁ENJOYMENT
- ▁CONWAY
- ▁DIFFICULTIES
- ▁PATCH
- ▁JOYCE
- ▁CLOCK
- ▁RESTORED
- ▁ARGU
- ▁WIG
- ▁CHATT
- ▁PLAC
- ▁REMOVE
- ▁TORN
- ▁DISAPPEAR
- TIME
- WELL
- ▁RECOGNIZE
- ▁FISHE
- ▁DECLARE
- ISTIC
- ▁AUTHOR
- ▁WHISK
- ▁COFFEE
- ▁COMPREHEND
- ▁DISGUISE
- ▁ELZEVIR
- ▁ENTERPRISE
- ▁HOLIDAY
- ▁HORIZON
- ▁IGNORANT
- ▁INTERVIEW
- ▁OLIVER
- ▁RONICKY
- ▁CAPACITY
- ▁DISPOSITION
- ▁EXTERNAL
- ▁OPPOSITION
- ▁REPUBLIC
- ▁WHEAT
- ▁CORPSE
- ▁DARLING
- ▁THRILL
- ▁INHABITANTS
- ▁ORNAMENT
- ▁SHIFT
- ▁RECOGNISE
- ▁SHIVER
- ▁BOAST
- ▁HINT
- ▁BOSTON
- ▁MULTI
- IFYING
- ▁STEAL
- ▁INSTRUCTIONS
- ▁ELECTRIC
- ▁SWING
- ▁SOOTH
- ▁SCALE
- ▁MORLAND
- ▁DISLIKE
- ▁FLATTER
- ▁COACH
- ▁LEIF
- ▁STAMP
- ▁ANYHOW
- ▁MOTIONLESS
- ▁ANDREA
- ▁LOSING
- ▁PAUL
- ▁CAROL
- ▁ADVANC
- ▁IMAGIN
- ▁CENTER
- ▁JAR
- ▁SUCCEED
- ▁DISMISS
- CTOR
- ▁RECEIV
- ▁DRAG
- ▁INTENT
- ▁BARBAR
- ▁PUNISH
- ▁ABRUPTLY
- ▁BERNARD
- ▁DECISION
- ▁INDEPENDENT
- ▁PROVINCE
- ▁SLEEVE
- ▁TREMENDOUS
- ▁UNPLEASANT
- ▁LEISURE
- ▁THRONG
- ▁THUMB
- ▁BANNER
- ▁CONTRADICT
- ▁RESTRAIN
- ▁DIVIDED
- ▁WRAPPED
- ▁HAUNT
- ▁SNEER
- CHESTER
- ▁JULIA
- ▁MILD
- ▁CONTACT
- ▁MEANTIME
- ▁NEEDLE
- ▁BLOT
- ▁BARREL
- ▁ISABELLA
- ▁THEATRE
- ▁ESTABLISHMENT
- ▁MARKET
- ▁CHINA
- ▁FORBID
- ▁PERISH
- ▁DOORWAY
- ▁CARLING
- ▁PERIL
- ▁PRIZE
- ▁HATCH
- ▁CURL
- ▁REFER
- ▁DEVOT
- EMBER
- MONT
- ▁CANOE
- ▁PROFESSION
- ▁CONVICT
- ▁CRAWL
- ▁ACTIVITY
- ▁BEWILDER
- ▁BREEZE
- ▁CONTEMPLAT
- ▁DISGUST
- ▁FATIGUE
- ▁MERRICK
- ▁PRAIRIE
- ▁REFORM
- ▁SPECTACLE
- ▁STUDENT
- ▁TUMULT
- ▁UNIFORM
- ▁VIGOROUS
- ▁CONDEMN
- ▁GENUINE
- ▁THOMAS
- ▁ARROW
- ▁PILLOW
- ▁FEEBLE
- ▁RALPH
- ▁SCHEME
- ▁COLLAR
- ▁JUSTINIAN
- ▁NERVE
- ▁OYSTER
- ▁BENNET
- ▁DUTIES
- ▁BINGLEY
- ▁CHRISTMAS
- ▁CONVEY
- ▁DESPIS
- ▁RATTL
- ▁GARMENTS
- ▁GOWN
- ▁BERYL
- ▁BARRIER
- ▁CHARACTERISTIC
- ▁MEDITAT
- ▁DISCOURSE
- ▁STAFF
- ▁KARA
- ▁MONTE
- ▁READILY
- ▁VENTUR
- ▁HENCE
- ▁ROPE
- ▁CRIES
- ▁ANGLE
- ▁RESPECTABLE
- ▁MOAN
- ▁OUTLINE
- BORN
- ▁FIX
- ▁INTEND
- LIA
- ▁CHILL
- ▁CREP
- ▁CHOSE
- ▁SPECULAT
- ▁ATTRIBUT
- ▁BUFFALO
- ▁ENTREAT
- ▁ENVELOP
- ▁FREDERICK
- ▁IMPATIENCE
- ▁INDIFFERENCE
- ▁INDUSTRY
- ▁INSTITUTION
- ▁LYNDE
- ▁RETAIN
- ▁TROUTINA
- ▁UNCOMFORTABL
- ▁VENGEANCE
- ▁JENKS
- ▁CONGRESS
- ▁SMART
- ▁THITHER
- ▁DISAGREE
- ▁IMPROVEMENT
- ▁PISTOL
- ▁GOSSIP
- ▁ETERNAL
- ▁BELIEF
- ▁SLEDGE
- ▁AROUSED
- ▁ORANGE
- ▁FASTENED
- ▁MONKEY
- ▁WITHDREW
- ▁OFFEND
- ▁PIERC
- ▁MOONLIGHT
- ▁OARS
- ▁GROOM
- ▁FIDDLER
- ▁BARBARA
- SHIRE
- ▁ATTENDANT
- ▁DIVERS
- ▁DUCK
- ▁PROPOSAL
- ▁GROWTH
- ▁CURATE
- ▁STEWAR
- ▁MOCK
- ▁SUCCESSION
- ▁CREATION
- ▁PARTIAL
- ▁SWU
- ▁FROST
- ▁EIGHTH
- ▁AWE
- ▁PERCH
- ▁LACE
- SPOON
- ▁ARRANGE
- SERIES
- ▁FOG
- ▁SCU
- ▁ABRAHAM
- ▁ADMIRAL
- ▁BARBICANE
- ▁CAMPAIGN
- ▁CONSEQUENTLY
- ▁CULTURE
- ▁GRAMMONT
- ▁GWYNPLAINE
- ▁HAPPILY
- ▁HOOPDRIVER
- ▁INDEPENDENCE
- ▁LEOPOLD
- ▁MISCHIEF
- ▁MONTGOMERY
- ▁NECESSARILY
- ▁PSYCHIC
- ▁RABBIT
- ▁REFUGE
- ▁RESPONSIBILIT
- ▁SENATOR
- ▁UNCERTAIN
- ▁MENSTRUA
- ▁FANNY
- ▁SUBSTANCE
- ▁APRIL
- ▁ELBOW
- ▁QUALITY
- ▁BORDER
- ▁BRUTAL
- ▁CARPET
- ▁SOLITAR
- ▁FROWN
- ▁SCENT
- ▁ANNOY
- ▁NAKED
- ▁BOSOM
- ▁CONSUM
- ▁TIGER
- ▁ITALIAN
- ▁PARSON
- ▁DECLIN
- ▁NEIGHBORHOOD
- ▁GREGGORY
- ▁EXCEED
- ▁SILLY
- ▁ICELAND
- ▁HIDEOUS
- ▁STRU
- ▁ALTERNAT
- ▁CABINET
- ▁ABILITY
- ▁BEECH
- ▁SECRETARY
- ▁CONTEST
- ▁MONK
- ▁PADD
- ▁EVA
- ▁CREST
- ▁FINISH
- ▁APPARENT
- ▁MIX
- ▁SLIP
- ▁LUXURI
- ▁AUTUMN
- ▁CIRCULAR
- ▁COMPOSITION
- ▁DISPLEAS
- ▁EXCELLENC
- ▁FURNITURE
- ▁GRADUATE
- ▁INDIFFERENT
- ▁JOSEPH
- ▁OCCUPATION
- ▁POSSIBILITY
- ▁RENEWED
- ▁RESPONDED
- ▁PREVAIL
- ▁HOARSE
- ▁PRACTIS
- ▁FAREWELL
- ▁JULIET
- ▁OVERHEAD
- ▁THREAD
- ▁APPLICATION
- ▁SOLITUDE
- ▁ADAPT
- ▁FALK
- ▁LARK
- ▁COARSE
- ▁MANKIND
- ▁KICK
- ▁BATTER
- ▁SOLICIT
- ▁RESIGN
- ▁MOTOR
- ▁STEEL
- ▁CONTRIV
- ▁AUTHORITIES
- ▁HARSH
- ▁FAVORITE
- ▁TALENT
- ▁FLEECE
- ▁AGITATION
- ▁ABBE
- ▁STUCK
- ▁HEDGE
- ▁BIBLE
- ▁RECOLLECTION
- ▁PARTNER
- ▁DAMON
- ▁SHINE
- ▁HOOK
- ▁CONFESSION
- ▁ASSENT
- ▁ELDE
- ▁BIGGE
- ▁PEACEFUL
- SCRIBED
- ▁WEIGH
- CARLET
- ▁DECIDE
- ▁RECOLLECT
- ▁BOHEMIA
- ▁CALIFORNIA
- ▁CONSTRUCT
- ▁DEMONSTRAT
- ▁DISTRIBUT
- ▁FRIGHTFUL
- ▁GNOME
- ▁IGNORANCE
- ▁JANUARY
- ▁JULIUS
- ▁MEMORIES
- ▁OCCUPY
- ▁PHRASE
- ▁WHIRLWIND
- ▁WILMINGTON
- ▁CARLINI
- ▁CHAUVELIN
- ▁ESTEEM
- ▁GENZABURO
- ▁GLOBE
- ▁LECOQ
- ▁MARGARET
- ▁MONARCH
- ▁NAPOLEON
- ▁SCORN
- ▁STAGGER
- ▁SUSTAIN
- ▁TRADITION
- ▁ADJUST
- ▁FROZEN
- ▁IMPRISON
- ▁LANTERN
- ▁MICHEL
- ▁STOMACH
- ▁TORRENT
- ▁WITHDRAW
- ▁FRANZ
- ▁POISON
- ▁SURVEY
- ▁BRITISH
- ▁ELEVAT
- ▁AWOKE
- ▁ESTHER
- ▁INHERIT
- ▁TRAVERS
- ▁STOPPING
- ▁IRELAND
- ▁COMPARATIVE
- ▁SOBB
- ▁FAVOURITE
- ▁CANVAS
- ▁CLOAK
- ▁GLAR
- ▁ASSISTANT
- ▁DAMAGE
- ▁PEAK
- ▁DISTINCTION
- FARE
- ▁DOLLAR
- ▁BEGGAR
- LUSIVE
- ▁MODEL
- ▁SECUR
- ▁DISPOS
- ▁SLID
- ▁PEA
- ▁SPEEDI
- HOLD
- ▁SNAP
- ▁CIGAR
- ▁AFFLICT
- ▁AMAZEMENT
- ▁LAUNCELOT
- ▁LEAGUE
- ▁MARIPOSA
- ▁POPULATION
- ▁UNEASY
- ▁BLOSSOM
- ▁CATERPILLAR
- ▁INCLINATION
- ▁SUSPEND
- ▁SYNDIC
- ▁TAYLOR
- ▁WILSON
- ▁CONTRAST
- ▁PORTRAIT
- ▁CORONER
- ▁GREEK
- ▁BUNDLE
- ▁BLEW
- ▁THORPE
- ▁ORPHAN
- ▁MUSCLE
- ▁DEAF
- ▁SURVIV
- ▁EXCEEDINGLY
- ▁TENDENC
- ▁ISRAEL
- ▁QUANTIT
- ▁PENSION
- ▁DRIED
- TEXT
- ▁REFERENCE
- ▁REPOSE
- ▁FOLLY
- ▁REPLACE
- ▁TERR
- ▁ANKLE
- ▁SUNLIGHT
- ▁SECURITY
- ▁SHOV
- ▁RAW
- CULAR
- ▁JACKET
- ▁TUNE
- ▁HOBB
- ▁MARTIN
- DUCED
- ▁FIST
- ▁BEGG
- ▁CHOK
- ▁INQUIRE
- ▁INTELLECT
- ▁AMUSEMENT
- ▁APPROPRIATE
- ▁CONGRATULAT
- ▁CONVENTION
- ▁DISCOURAG
- ▁EXQUISITE
- ▁FOUNTAIN
- ▁JUNIOR
- ▁NONSENSE
- ▁OBSTACLE
- ▁SPECIMEN
- ▁SWEAR
- ▁TRANQUIL
- ▁VEHICLE
- ▁WISDOM
- ▁ASCERTAIN
- ▁CAUTIOUS
- ▁CENTURIES
- ▁CORRUPT
- ▁EXPLOR
- ▁TURKEY
- ▁BARGAIN
- ▁CONFOUND
- ▁FUNCTION
- ▁GRACIOUS
- ▁MONICA
- ▁ILLUSTRAT
- ▁CRUMB
- ▁REMEDY
- ▁REMOTE
- ▁REVENGE
- ▁BABYLON
- ▁CAUTION
- ▁INTERIOR
- ▁CRISTEL
- ▁BRAZ
- ▁THIRST
- ▁PROBABLE
- ▁HARMONY
- ▁CHARITY
- ▁DECAY
- ▁COLONI
- ▁AVAIL
- ▁REPULS
- ▁ABSENT
- ▁PULSE
- ▁PRESUM
- ▁CRANE
- ▁NEIGHBOURHOOD
- ▁SUNSET
- ▁CANNON
- ▁GRAPE
- ▁SOFA
- ▁DRANK
- MINOUS
- ▁DECLARATION
- ▁CLOSING
- ▁MEEK
- ▁STARV
- ▁BUNCH
- ▁PERFORMANCE
- ▁ENTERTAINMENT
- ▁STRIV
- ▁EMILY
- ▁VALET
- MPOSED
- ▁INTIMA
- ▁POLISH
- ▁HIRE
- POST
- ▁TREMBLE
- ▁CEASE
- ▁VIRGIN
- ▁RUSSIA
- COURSE
- ▁EDUCAT
- BOUND
- ▁INHABIT
- ▁SUPERINTEND
- ▁BISCUIT
- ▁CHICAGO
- ▁CHOKICHI
- ▁CONFLICT
- ▁ENCLOS
- ▁EXCLUSION
- ▁EXECUTIVE
- ▁GRANDMOTHER
- ▁HEADQUARTERS
- ▁INFERIOR
- ▁INVISIBLE
- ▁MUTUAL
- ▁OPPONENT
- ▁SENSITIVE
- ▁STUDIED
- ▁TEMPORARY
- ▁UNWILLING
- ▁PERMANENT
- ▁BEDROOM
- ▁NOVEMBER
- ▁COMPLICAT
- ▁DEVOUR
- ▁SCRAMBL
- ▁SECTION
- ▁PROPOSITION
- ▁DEPRIV
- ▁RYNCH
- ▁PLEAD
- ▁TORTURE
- ▁SCOUT
- ▁PILOT
- ▁CHERISH
- ▁SPEAR
- ▁SUGAR
- ▁JASPER
- ▁STRAY
- ▁RIFLE
- ▁NORMAL
- ▁JERK
- ▁HONEY
- ▁AWAKENED
- ▁QUIVER
- ▁PYE
- ▁APPLY
- LICK
- JA
- ▁ANNOUNC
- FORE
- ▁ENGINE
- ▁HESITATE
- ▁PROVIDE
- ▁REALIZE
- ▁SEIZE
- ▁RESTORE
- MOUTH
- FOOT
- ▁DIFFER
- ▁ULTIMATE
- ▁ABUNDANCE
- ▁APPRECIATE
- ▁APPREHENSION
- ▁AVENUE
- ▁AWKWARD
- ▁CETERA
- ▁CHIMNEY
- ▁CLUTCH
- ▁CONVENIENT
- ▁CORRIDOR
- ▁DISTRACT
- ▁ELEGANT
- ▁ELSEWHERE
- ▁ENTHUSIASTIC
- ▁EXECUTE
- ▁EXTREMIT
- ▁JERUSALEM
- ▁MIRACLE
- ▁MONSTROUS
- ▁OBEDIENCE
- ▁OBSCURE
- ▁PHENOMENA
- ▁RESIDENCE
- ▁RESOURCE
- ▁REVOLT
- ▁SCIENTIFIC
- ▁SHIELD
- ▁SIMPSON
- ▁UNIVERSE
- VOLUNTARY
- ▁ATTENTIVE
- ▁BRENDA
- ▁DEPOSIT
- ▁MAXIM
- ▁REJECT
- ▁STIRRED
- ▁DISORDER
- ▁SERENE
- ▁TOBACCO
- ▁MILTON
- ▁BALLOON
- ▁STEPHEN
- ▁STRAIT
- ▁CHINESE
- ▁COURTEOUS
- ▁RELEASE
- ▁RECESS
- ▁COTTON
- ▁STUMP
- ▁TANK
- ▁PROMOTE
- ▁DERIVE
- ▁LOYAL
- ▁GRANIT
- ▁DISMAL
- ▁CATTLE
- ▁DOONE
- ▁CUPID
- DIGNIFIED
- ▁RIPE
- ▁EXILE
- ▁ANTIQU
- UMINAT
- ▁SUPPOS
- ▁WRETCH
- ▁IDENTI
- ▁EASI
- ▁SERV
- ▁QUEST
- TOWN
- ▁ACHIEVEMENT
- ▁APPETITE
- ▁BUCCANEER
- ▁COMMENCED
- ▁DELAWARE
- ▁DISCERN
- ▁IMMORTAL
- ▁INDIGNANT
- ▁JOSIANA
- ▁MECHANICAL
- ▁MUSKRAT
- ▁REVIEW
- ▁ROBARTS
- ▁SIGNIFICANT
- ▁SUBSEQUENT
- ▁YOURSELVES
- ▁ANGRILY
- ▁BORROW
- ▁SUBLIME
- ▁AFRICA
- ▁CHICKEN
- ▁DEGRAD
- ▁GEORGI
- ▁HUMILIAT
- ▁LODGING
- ▁REDCOAT
- ▁VIOLET
- ▁HOPKINS
- ▁RAWDON
- ▁PRICK
- ▁WHALE
- ▁FUNERAL
- ▁GUINEA
- ▁DISMAY
- ▁PORCH
- ▁HARVEST
- ▁PARCEL
- ▁SUBDU
- ▁SYRIA
- ▁PANIC
- ▁BOUGHS
- ▁CIGARETTE
- ▁CHRON
- ▁INQUIRY
- ▁CRYSTAL
- ▁SPELL
- ▁PLUCK
- ▁PATTERN
- ▁DARING
- ▁CRITICISM
- ▁DAINT
- ▁DISTURBANCE
- ▁BUTCHER
- ▁LITERA
- ▁ABUSE
- IXTURE
- ▁ANIMAT
- ▁WRIT
- ▁BELIEV
- ▁INDUCE
- COMING
- ▁DRAMA
- ▁AGITAT
- SHAW
- ▁IMPERFECT
- ▁MANUFACTURE
- ▁AFFIRM
- ▁ANGUISH
- ▁ARTIFICIAL
- ▁BIBBS
- ▁CHARLOTTE
- ▁CIRCUS
- ▁CONNISTON
- ▁CONSTITUTE
- ▁DAZZL
- ▁DEFECT
- ▁DISCHARG
- ▁ESCORT
- ▁EXAGGERAT
- ▁GWENDOLEN
- ▁IRRESISTIBL
- ▁PHILOSOPHY
- ▁PHOTOGRAPH
- ▁PILGRIM
- ▁PLEASING
- ▁QUIXOTE
- ▁RESPONSE
- ▁SCRATCH
- ▁SERGEANT
- ▁SHERIFF
- ▁SHUDDER
- ▁STRUCTURE
- ▁SUFFRAGE
- ▁SURRENDER
- ▁SWORE
- ▁VILLAIN
- ▁HESITATING
- ▁FLORENCE
- ▁IRRITAT
- ▁RIGID
- ▁SINISTER
- ▁STUDIO
- ▁RAFT
- ▁CHAMPION
- ▁PAVEMENT
- ▁WOLF
- ▁DEVICE
- ▁WRECK
- ▁HESITATION
- ▁LAZY
- ▁ADJO
- ▁DECENT
- ▁INTERVEN
- ▁WOOL
- ▁ILLUSION
- ▁HAWK
- ▁IMPART
- ▁LUNGS
- ▁WINNING
- ▁VITAL
- ▁CONSPI
- ▁SUBTLE
- ▁CONSTANC
- ▁HURL
- ▁AMIABL
- ▁FOLK
- GGY
- ▁NECESSIT
- ▁PROFESS
- WASH
- ▁ADMIRING
- ▁AMBITIOUS
- ▁ANTHONY
- ▁CEREMONY
- ▁CONTRIBUTE
- ▁CRAGGS
- ▁DETAIN
- ▁DISCLOS
- ▁DWELT
- ▁EGYPT
- ▁FELIX
- ▁JOURNAL
- ▁KWAIRYO
- ▁LIBERAL
- ▁LUMBER
- ▁OCTOBER
- ▁ORGANIZATION
- ▁POPULACE
- ▁PRECAUTION
- ▁PREJUDICE
- ▁PROCLAIM
- ▁PROPRIETOR
- ▁RESPONSIBLE
- ▁RHYTHM
- ▁RIDICULOUS
- ▁SCHOLAR
- ▁SQUEEZ
- ▁SUBSTITUTE
- ▁SURPASS
- ▁THRESHOLD
- ▁WHARTON
- ▁FLICKER
- ▁AMAZED
- ▁BRONZE
- ▁COSSACK
- ▁SPILETT
- ▁CASUAL
- ▁DARCY
- ▁PARLOUR
- ▁SEXUAL
- ▁INSECT
- ▁NATHAN
- ▁EMINENT
- ▁PENCIL
- ▁PETITION
- ▁ROTTEN
- ▁VIGIL
- ▁CAESAR
- ▁EAGLE
- ▁TREAD
- ▁REACTION
- ▁TACIT
- ▁PARLOR
- ▁SPAIN
- ▁WILDERNESS
- ▁DICTAT
- ▁GRATIFY
- ▁STOVE
- ▁SKIRT
- ▁UTILI
- ▁CONCERT
- ▁GORGE
- ▁DECORAT
- ▁LATIN
- ▁ANCHOR
- ▁KNOT
- ▁MONDAY
- ▁GABLES
- ▁TOLERABL
- ▁ROGER
- BERRIES
- ▁INVAD
- IMMER
- OMETER
- ▁PRODUC
- OBIL
- ▁PERMISSI
- FICIENCY
- ▁WANDER
- RREL
- PIECE
- HORN
- ▁COMMIT
- ▁ACCUMULAT
- ▁JAPAN
- ▁ABUNDANT
- ▁ACADEMY
- ▁ALBERT
- ▁BANQUET
- ▁DELICIOUS
- ▁DOCUMENT
- ▁EXCLAMATION
- ▁FEBRUARY
- ▁GROTESQUE
- ▁HEATHERSTONE
- ▁HUMPHREY
- ▁HURSTWOOD
- ▁MOHAMMED
- ▁MOSCOW
- ▁NICHOLAS
- ▁OBSTINATE
- ▁PHANTOM
- ▁PHILOSOPHER
- ▁RECEPTION
- ▁SPANIARD
- ▁SWOLLEN
- ▁TELEPHONE
- ▁TRIBUTE
- ▁TUNNEL
- ▁UNREASONABL
- ▁WIGWAM
- ▁BUTTERFLY
- ▁COLLINS
- ▁DISPATCH
- ▁EDITOR
- ▁CONTINENT
- ▁DIMINISH
- ▁HORRID
- ▁KEATS
- ▁PROVIDENCE
- ▁BEHALF
- ▁CHARLEY
- ▁DRAKE
- ▁LAUNCH
- ▁SALOON
- ▁GIGANT
- ▁DISPUTE
- ▁HYSTERI
- ▁DEFENCE
- ▁SCREEN
- ▁VAULT
- ▁NINTH
- ▁HARBOR
- ▁FLANK
- ▁SPECK
- ▁UPRIGHT
- ▁KEMP
- ▁CANADA
- ▁STALK
- ▁OWL
- ▁BRUTE
- ▁FERRIS
- ▁DECREE
- ▁HABITUAL
- ▁BRISK
- ▁INSPIRE
- ▁HUSH
- ▁CROUCH
- ▁FRIDAY
- ▁MOUNTAINEER
- ▁HISTORIC
- ▁BATES
- ▁RUSK
- ▁SEMI
- DICTION
- ▁BUSI
- ▁REMOV
- MMI
- ▁SUFFIC
- ▁FLEE
- ▁LOUIS
- NLEA
- ▁IMPORT
- OLOGY
- ▁CLERGY
- ▁ADVERTISEMENT
- ▁BENEVOLEN
- ▁BORODINO
- ▁CATHOLIC
- ▁COMMERCIAL
- ▁CONJECTURE
- ▁CURTAIN
- ▁CUTHBERT
- ▁DEMOCRACY
- ▁GUARANTEE
- ▁HYPNOSIS
- ▁INDEFINITE
- ▁INVESTIGATION
- ▁IRREGULAR
- ▁KOYO
- ▁MERRIWIG
- ▁MIRANDA
- ▁NICHOLL
- ▁ONLOOKER
- ▁PERSECUT
- ▁RECOGNITION
- ▁REJOICE
- ▁REMEMBRANCE
- ▁REVELATION
- ▁SCOLD
- ▁SENIOR
- ▁SQUIRREL
- ▁SYMPATHETIC
- ▁TEMPEST
- ▁TREACHER
- ▁UNDERNEATH
- ▁UNEASINESS
- ▁UNNECESSARY
- ▁UPSTAIRS
- ▁VEXATION
- ▁ACCESS
- ▁CHEAP
- ▁ESTIMATE
- ▁HAZARD
- ▁HORSEBACK
- ▁PLUNDER
- ▁RASCAL
- ▁ROSTOV
- ▁ACCUR
- ▁GRAVITY
- ▁SITUATED
- ▁INVARIABL
- ▁PLENTIFUL
- ▁SPENCER
- ▁WALLACE
- ▁POLICY
- ▁WARRANT
- ▁ENVY
- ▁LAMB
- ▁EXTRACT
- ▁CORRAL
- ▁PANEL
- ▁LINK
- ▁LILIES
- ▁BECKON
- ▁SENOR
- ▁BORG
- ▁DEBATE
- ▁STEER
- COGNI
- COMB
- ▁SETTL
- ▁VENERA
- ▁FEATURE
- ▁TERRIBL
- CAPABLE
- OLOGICAL
- ▁INCESSANT
- ▁RESOLUTE
- SHAUGHNESSY
- ▁ABOLITION
- ▁ASSASSIN
- ▁BEHAVIOUR
- ▁BLUNT
- ▁COMMERCE
- ▁CONSTANTINOPLE
- ▁CRICKET
- ▁DISCIPLINE
- ▁DROUET
- ▁DWARF
- ▁INJUSTICE
- ▁LUXURY
- ▁MANUSCRIPT
- ▁MISUNDERSTAND
- ▁POLITICIAN
- ▁REDOUBT
- ▁SALVATION
- ▁SERMON
- ▁STRUGGLING
- ▁SURPRISING
- ▁TRIGGER
- ▁TUESDAY
- ▁TWILIGHT
- ▁UNDOUBTEDLY
- ▁VEGETABLE
- ▁VULGAR
- ▁WAISTCOAT
- ▁WRINKLE
- ▁ALEXANDER
- ▁CEILING
- ▁ECONOMIC
- ▁EVERLASTING
- ▁INFLICT
- ▁LEVISON
- ▁LOBSTER
- ▁OVERFLOW
- ▁SNATCH
- ▁TRAGEDY
- ▁DEASEY
- ▁ENLIGHTEN
- ▁FRIGATE
- ▁INSPECT
- ▁MARVELLOUS
- ▁ATLANTIC
- ▁LUFTON
- ▁BLADE
- ▁CRASH
- ▁SLAUGHTER
- ▁ANNUAL
- ▁CONFERENCE
- ▁TWIG
- ▁REASSUR
- ▁UNIQUE
- ▁WRATH
- ▁CRADLE
- ▁HULLO
- ▁LIQUID
- ▁MIRTH
- ▁EXPERT
- ▁HARVEY
- ▁RESTORATION
- ▁PRETTI
- ▁APOLOGY
- ▁SLAIN
- ▁BARBER
- ▁UPROAR
- ▁SCANT
- ▁BADGER
- ▁GROCER
- ▁ACRES
- ▁BRIDLE
- ▁SPECIFI
- ▁TANGLE
- ▁FERTIL
- ▁PATRON
- WIXT
- LAMOUR
- ▁DARN
- ▁POPE
- ▁PERCEIV
- ▁CONCLUDE
- ▁SIMPL
- ▁GUILT
- ▁CARRIE
- EFFICIENT
- SGIVING
- ▁APPOINTMENT
- ▁APPRECIATION
- ▁CARTRIDGE
- ▁CHALLENGE
- ▁CRAYFISH
- ▁CRIMSON
- ▁CUCUMETTO
- ▁ENERGETIC
- ▁EPOCH
- ▁EXAMINING
- ▁EXTENSIVE
- ▁EXTINGUISH
- ▁GLOODY
- ▁INSIGNIFICANT
- ▁LANDLORD
- ▁LANGUID
- ▁LEGISLATURE
- ▁MAJESTIC
- ▁PACIFIC
- ▁PASTRINI
- ▁PHRONSIE
- ▁RECONCIL
- ▁SIMULTANEOUS
- ▁SKELETON
- ▁SKETCH
- ▁TRANSFORM
- ▁UNJUST
- ▁VEXED
- ▁ASYLUM
- ▁CLUSTER
- ▁ERRAND
- ▁EXPEND
- ▁NEGATIVE
- ▁NORHALA
- ▁SCANDAL
- ▁STIMULAT
- ▁SWEAT
- ▁COMPOUND
- ▁DECEMBER
- ▁EXPAND
- ▁PROLONG
- ▁PURITAN
- ▁CONQUEST
- ▁MAGUA
- ▁SANCHO
- ▁TRENCH
- ▁ENTITLE
- ▁PEPPER
- ▁DISASTER
- ▁REGAIN
- ▁SHREWD
- ▁SULLEN
- ▁CLAVIER
- ▁COLOSS
- ▁SHILLING
- ▁ETHEL
- ▁MYSTERIES
- ▁BULK
- ▁GRANDEUR
- ▁AGNES
- ▁CONVERT
- ▁WRIST
- ▁GLID
- ▁TERRACE
- ▁SONYA
- ▁DANTES
- ▁MOULD
- ▁MAGNET
- ▁PLOT
- RANK
- ▁CAVIT
- ▁SUBSID
- ▁SLAP
- TURNED
- ▁THREAT
- BREAK
- ▁ANCESTORS
- ▁ANTICIPATED
- ▁APPLAUSE
- ▁ASSAULT
- ▁ATTORNEY
- ▁AUTOMATIC
- ▁CARAVAN
- ▁CATASTROPHE
- ▁CAVALCANTI
- ▁CROMWELL
- ▁ENVOY
- ▁EXHAUSTION
- ▁FIEND
- ▁GENEROSITY
- ▁GIMBLET
- ▁HARDQUANONNE
- ▁HOUARN
- ▁INJURY
- ▁MACKINSON
- ▁OGLETHORPE
- ▁PETTICOAT
- ▁RASPBERR
- ▁REHNHJELM
- ▁REJOICING
- ▁REMNANT
- ▁SCOTLAND
- ▁SHRINK
- ▁STANDPOINT
- ▁TESTIMONY
- ▁THEREAFTER
- ▁THIRTIETH
- ▁TWENTIETH
- ▁TYRANT
- ▁VENTNOR
- ▁VETERAN
- ▁WHITTAKER
- ▁ZVERKOV
- ▁ARCHITECTUR
- ▁BLUNDER
- ▁DENSHER
- ▁FORTNIGHT
- ▁JUDITH
- ▁MARIANNE
- ▁MEMORABLE
- ▁REFINED
- ▁REVOLV
- ▁UNDERTAKING
- ▁CLUMP
- ▁GRUMBLE
- ▁SYMPATHI
- ▁TICKET
- ▁TWITCH
- ▁EDITION
- ▁FALANDER
- ▁CARTHAGE
- ▁ORLEANS
- ▁POSSUM
- ▁SWITCH
- ▁CLUNG
- ▁CARDINAL
- ▁GNAW
- ▁LOCATED
- ▁HARROW
- ▁RASH
- ▁SIEGE
- ▁LOAF
- ▁BRUISE
- ▁REGULAT
- ▁RESORT
- ▁SARAH
- ▁LEVIN
- ▁NAVY
- ▁MOOSE
- ▁STOOL
- ▁CHANCELLOR
- ▁INGENIOUS
- ▁CHALK
- ▁PRETENCE
- ▁REPAY
- ▁ROAST
- ▁PLUTO
- ▁BAFFL
- ▁STUMBL
- ▁SPHERE
- ▁PLEDGE
- ▁SPRAWL
- ▁WRAP
- ▁FRINGE
- ▁DREAR
- ARRINGTON
- ▁FEDERA
- KEEPER
- ▁PHYSIC
- ▁ADVENT
- HUMAN
- OLOGIST
- ▁ALEXANDR
- ▁APPARITION
- ▁BARTHOLEMY
- ▁CITOYEN
- ▁CLIMATE
- ▁CONTEMPORAR
- ▁DESOLATE
- ▁DISCONTENT
- ▁ELEPHANT
- ▁FERNANDO
- ▁FERRALTI
- ▁FOLIAGE
- ▁FUGITIVE
- ▁GAMBLING
- ▁INVOLUNTARILY
- ▁LABYRINTH
- ▁LEGITIMATE
- ▁MILLIONAIRE
- ▁PERCEPTION
- ▁PROPRIETY
- ▁REBELLION
- ▁REFRAIN
- ▁RUGGLES
- ▁SCRIPTURE
- ▁SPLENDOR
- ▁SQUADRON
- ▁STRICKEN
- ▁SWARM
- ▁THEODORA
- ▁TOMORROW
- ▁VELVET
- ▁WOLVES
- ▁DISREGARD
- ▁GLIMMER
- ▁SHROUD
- ▁TWINKLING
- ▁UNEQUAL
- ▁CHANNING
- ▁CLUMS
- ▁ENIGMA
- ▁NAVIGAT
- ▁TARKAS
- ▁TEMPERATURE
- ▁DIVISION
- ▁GRATIFICATION
- ▁MONUMENT
- ▁SQUEAK
- ▁KAVIN
- ▁INTERPOSE
- ▁THORNTON
- ▁SOLUTION
- ▁STREAK
- ▁SHRILL
- ▁APRON
- ▁PITEOUS
- ▁HAUGHTY
- ▁RECKLESS
- ▁EMPTI
- ▁WADMAN
- ▁BONNET
- ▁MARTHA
- ▁DUMB
- ▁SHATTER
- ▁ACUTE
- ▁BRINK
- ▁CAPRICE
- ▁HURON
- ▁INFERN
- ▁FOWL
- ▁ENRAGE
- ▁ADORN
- ▁CRUIS
- ▁PROBABILIT
- ▁EXPIR
- ▁IMPETU
- ▁OVERHEAR
- BURTON
- ▁TRANSLAT
- ▁ENGAGE
- ▁CONVINCE
- ▁ABNORMAL
- ▁GESTICULAT
- ▁ABOMINABL
- ▁ADVERSARY
- ▁ADVERTISER
- ▁ADVERTISING
- ▁ANNIHILAT
- ▁ARTILLERY
- ▁CATHEDRAL
- ▁COMPETITOR
- ▁COULSON
- ▁CREVICE
- ▁CUSHION
- ▁DEBRAY
- ▁DEJECT
- ▁DIETRICH
- ▁DISADVANTAGE
- ▁ELLISON
- ▁EMPHASIS
- ▁EXCURSION
- ▁FANTASTIC
- ▁HYPOTHES
- ▁INCONVENIENCE
- ▁INDESCRIBABLE
- ▁INDUSTRI
- ▁INVALID
- ▁MERCILESS
- ▁MESOPOTAMIA
- ▁MOSQUITO
- ▁NARRATIVE
- ▁NOWADAYS
- ▁OPPORTUNITIES
- ▁PROMISING
- ▁RECTANGLE
- ▁REMONSTRANCE
- ▁RESTAURANT
- ▁RIBBON
- ▁SCIENTIST
- ▁SHALMANESER
- ▁SKULL
- ▁SPRUCE
- ▁SUBSTANTIAL
- ▁SYMBOL
- ▁TEAPOT
- ▁TERRITORY
- ▁TRAFFIC
- ▁TREASON
- ▁TRUMPET
- ▁TYRANN
- ▁UNANIMOUS
- ▁UNAWARE
- ▁VICINITY
- ▁WREATH
- ▁ZADIG
- ▁CHATEAU
- ▁CONFRONT
- ▁DUCHESS
- ▁EMBODI
- ▁FEMININ
- ▁FURNACE
- ▁MONTONI
- ▁RENOWN
- ▁SMASH
- ▁HARVARD
- ▁NEWBERRY
- ▁PERFUME
- ▁SIGNATURE
- ▁SPLASH
- ▁SUPPOSITION
- ▁HARBOUR
- ▁ASSURANCE
- ▁BRISTOL
- ▁BUCKINGHAM
- ▁DUDLEY
- ▁INTENSITY
- ▁CHOPIN
- ▁ENLIST
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 18
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
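For quick decoding outside the recipe scripts, a minimal Python sketch is given below. It is a hedged example, not part of the original recipe: it assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and that `speech.wav` is a placeholder path to a 16 kHz mono recording (matching the `fs: 16k` frontend setting in the config above).

```python
# Hedged sketch: "speech.wav" is a placeholder path for a 16 kHz mono recording.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert"
)

speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis: decoded word sequence
print(text)
```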
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
|
LenaSchmidt
| 2022-02-18T16:02:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.25 | 1.0 | 1273 | 0.8052 |
| 1.1199 | 2.0 | 2546 | 0.7950 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
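For a quick smoke test, a hedged usage sketch with the `transformers` question-answering pipeline is shown below; the question and context strings are made-up illustrations, not taken from the (unknown) training data.

```python
from transformers import pipeline

# Hedged sketch: the question/context pair is an invented illustration.
qa = pipeline(
    "question-answering",
    model="LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv",
)
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and most populous city of France.",
)
print(result["answer"], result["score"])
```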
|
huggingtweets/charlievivante-darkerfirestar-retrokatg
|
huggingtweets
| 2022-02-18T15:25:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/charlievivante-darkerfirestar-retrokatg/1645197908512/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493256091421061122/VidYLdbx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477273095140433924/PIoCpFux_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1481536189312253953/0Q62EiMi_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Damo & retro kat - retro blogger and Discord chatterbox🐾 & Sexual Lazy Susan</div>
<div style="text-align: center; font-size: 14px;">@charlievivante-darkerfirestar-retrokatg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Damo & retro kat - retro blogger and Discord chatterbox🐾 & Sexual Lazy Susan.
| Data | Damo | retro kat - retro blogger and Discord chatterbox🐾 | Sexual Lazy Susan |
| --- | --- | --- | --- |
| Tweets downloaded | 1860 | 3250 | 3232 |
| Retweets | 518 | 4 | 1141 |
| Short tweets | 105 | 192 | 458 |
| Tweets kept | 1237 | 3054 | 1633 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/flprc1ro/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @charlievivante-darkerfirestar-retrokatg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g1pklik) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g1pklik/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/charlievivante-darkerfirestar-retrokatg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Milos/slovak-gpt-j-405M
|
Milos
| 2022-02-18T13:46:50Z | 1,188 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"Slovak GPT-J",
"causal-lm",
"sk",
"arxiv:2104.09864",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language:
- sk
tags:
- Slovak GPT-J
- pytorch
- causal-lm
license: gpl-3.0
---
# Slovak GPT-J-405M
Slovak GPT-J-405M is the second model released in the Slovak GPT-J series, after its smaller variant [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M). Since then, a larger [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B) has been released.
## Model Description
Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 405M trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 405,677,136 |
| \\(n_{layers}\\) | 24 |
| \\(d_{model}\\) | 1024 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3†) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<p><strong>†</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure>
## Training data
Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model.
The dataset was preprocessed and cleaned in a specific way that involves a few minor caveats, so in order to achieve the expected performance, refer to the [How to use] section. Please keep in mind that, despite the effort to remove inappropriate text from the corpus, the model might still generate sensitive content or leak sensitive information.
## Training procedure
This model was trained for a bit more than 36.5 billion tokens over 69,001 steps on a TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.821`.
## Intended Use
Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt.
### How to use
This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M")
```
When generating a prompt keep in mind these three things, and you should be good to go:
1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`.
2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`.
3. In case of a new line always enter `\n\n` instead of a single `\n`
To illustrate an example of a basic text generation:
```
>>> prompt = "Tradičné jedlo na Orave sú"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input)
>>> tokenizer.decode(output[0])
'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov'
```
### Capabilities, Limitations, and Biases
The capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content.
For example, you can try a few of the following prompts. (For the sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines.)
Try generating "How to" articles:
```
PROMPT
Ako napísať pôsobivú esej krok za krokom
OUTPUT
Ako napísať pôsobivú esej krok za krokom
V tomto článku sa dozviete, ako napísať esej, ktorá bude mať v sebe niečo, čo zaujme a bude sa vám páčiť.
V prvom rade si musíte uvedomiť, že esej je vlastne písaný text, ktorý má byť napísaný na počítači.'
```
However, relying on the model to produce factually correct information isn't recommended.
Or try to generate chat conversations:
```
PROMPT
Peter: Ako sa mas?
Lucia: Ale celkom dobre. Co si robil?
Peter:
OUTPUT
Peter: Ako sa mas?
Lucia: Ale celkom dobre. Co si robil?
Peter: No, bol som na chate.
Lucia: A co si tam robil?
Peter: No, bol som tam s kamošmi.
Lucia: A co si tam robil?
Peter: No, bol som tam s kamošmi.
```
Apparently, either Peters are more likely to act suspiciously in this particular context, or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating the same content. This is a common problem beyond the scope of this README; however, see [generate's documentation](https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate) on how to introduce a frequency/repetition penalty, sketched briefly below.
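A minimal hedged sketch of such a penalty, reusing `model`, `tokenizer`, and `encoded_input` from the snippets above (the particular values are illustrative assumptions, not tuned recommendations):

```python
# Hedged sketch: penalty values are illustrative, not recommendations.
output = model.generate(
    **encoded_input,
    max_length=32,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(output[0]))
```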
Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bit of text in Czech, the model can generate such content to some extent too. Here's an example of the model output when the prompt is in Czech:
```
>>> prompt = "Věta nesmí být sprostá a musí být zcela"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input, max_length=16)
>>> tokenizer.decode(output[0])
'Věta nesmí být sprostá a musí být zcela pravdivá.'
```
## Citation and Related Information
This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :)
If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile.
### BibTeX entry
To cite this model:
```bibtex
@misc{slovak-gpt-j-405m,
author = {Kondela, Milos},
title = {{Slovak GPT-J-405M}},
howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-405M}},
year = 2022,
month = February
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
|
AkshatSurolia/BEiT-FaceMask-Finetuned
|
AkshatSurolia
| 2022-02-18T13:40:53Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
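For reference, a minimal inference sketch is given below. It is a hedged example based on the generic `transformers` API rather than anything documented in this card; the image path is a placeholder, and the label names come from the model's own `id2label` mapping.

```python
from PIL import Image
from transformers import AutoFeatureExtractor, BeitForImageClassification

extractor = AutoFeatureExtractor.from_pretrained("AkshatSurolia/BEiT-FaceMask-Finetuned")
model = BeitForImageClassification.from_pretrained("AkshatSurolia/BEiT-FaceMask-Finetuned")

image = Image.open("face.jpg")  # placeholder path to a face image
inputs = extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```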
## Training Metrics
- epoch = 0.55
- total_flos = 576468516GF
- train_loss = 0.151
- train_runtime = 0:58:16.56
- train_samples_per_second = 16.505
- train_steps_per_second = 1.032
---
## Evaluation Metrics
- epoch = 0.55
- eval_accuracy = 0.975
- eval_loss = 0.0803
- eval_runtime = 0:03:13.02
- eval_samples_per_second = 18.629
- eval_steps_per_second = 2.331
|
AkshatSurolia/DeiT-FaceMask-Finetuned
|
AkshatSurolia
| 2022-02-18T13:10:05Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on the self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
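A hedged usage sketch with the generic image-classification pipeline is shown below; the image path is a placeholder and nothing here is documented by the card itself.

```python
from transformers import pipeline

# Hedged sketch: "face.jpg" is a placeholder path.
classifier = pipeline("image-classification", model="AkshatSurolia/DeiT-FaceMask-Finetuned")
print(classifier("face.jpg"))
```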
## Training Metrics
- epoch = 2.0
- total_flos = 2078245655GF
- train_loss = 0.0438
- train_runtime = 1:37:16.87
- train_samples_per_second = 9.887
- train_steps_per_second = 0.309
---
## Evaluation Metrics
- epoch = 2.0
- eval_accuracy = 0.9922
- eval_loss = 0.0271
- eval_runtime = 0:03:17.36
- eval_samples_per_second = 18.22
- eval_steps_per_second = 2.28
|
tosin/pcl_22
|
tosin
| 2022-02-18T12:33:52Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text classification",
"en",
"dataset:PCL",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- text classification
- transformers
datasets:
- PCL
metrics:
- F1
inference: false
---
## T5Base-PCL
This is a fine-tuned model of T5 (base) on the patronizing and condescending language (PCL) dataset by Pérez-Almendros et al. (2020), used for the Task 4 competition of SemEval-2022.
It is intended to be used as a classification model for identifying PCL (0 - neg; 1 - pos). The task prefix we used for the T5 model is 'classification: '.
The dataset it's trained on is limited in scope, as it covers only some news texts covering about 20 English-speaking countries.
The macro F1 score achieved on the test set, based on the official evaluation, is 0.5452.
More information about the original pre-trained model can be found [here](https://huggingface.co/t5-base)
* Classification examples:
|Prediction | Input |
|---------|------------|
|0 | selective kindness : in europe , some refugees are more equal than others |
|1 | he said their efforts should not stop only at creating many graduates but also extended to students from poor families so that they could break away from the cycle of poverty |
### How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
model = T5ForConditionalGeneration.from_pretrained("tosin/pcl_22")
tokenizer = T5Tokenizer.from_pretrained("t5-base") # use the source tokenizer because T5 finetuned tokenizer breaks
tokenizer.pad_token = tokenizer.eos_token
input_ids = tokenizer("he said their efforts should not stop only at creating many graduates but also extended to students from poor families so that they could break away from the cycle of poverty", padding=True, truncation=True, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
pred = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(pred)
```
|
Souranil/VAE
|
Souranil
| 2022-02-18T11:32:27Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
### VAE with Pytorch-Lightning
This is inspired by vae-playground. It is an example where we test out `vae` and `conv_vae` models on multiple datasets such as MNIST, CelebA, and Fashion-MNIST.
It also comes with an example Streamlit app, deployed on Hugging Face.
## Model Training
You can train the VAE models by using `train.py` and editing the `config.yaml` file. \
Hyperparameters to change are:
- model_type [vae|conv_vae]
- alpha
- hidden_dim
- dataset [celeba|mnist|fashion-mnist]
There are other configurations that can be changed if required, such as height, width, channels, etc. The file also contains the PyTorch Lightning configs.
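A hypothetical sketch of such a `config.yaml` is shown below; only the field names listed above come from this card, and every value is an illustrative assumption.

```yaml
# Hypothetical sketch: field names from the list above, values chosen for illustration.
model_type: conv_vae     # vae | conv_vae
alpha: 1024              # illustrative value
hidden_dim: 128          # illustrative value
dataset: mnist           # celeba | mnist | fashion-mnist
```

Training would then presumably be launched with `python train.py` after editing these fields.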
|
keras-io/ner-with-transformers
|
keras-io
| 2022-02-18T07:47:32Z | 20 | 1 |
tf-keras
|
[
"tf-keras",
"multimodal-entailment",
"generic",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- multimodal-entailment
- generic
---
## Tensorflow Keras Implementation of Named Entity Recognition using Transformers.
This repo contains code for using the model from the Keras example [Named Entity Recognition using Transformers](https://keras.io/examples/nlp/ner_transformers/).
Credits: [Varun Singh](https://www.linkedin.com/in/varunsingh2/) - Original Author
HF Contribution: [Rishav Chandra Varma](https://huggingface.co/reichenbach)
## Background Information
### Introduction
Named Entity Recognition (NER) is the process of identifying named entities in text. Examples of named entities are: "Person", "Location", "Organization", "Dates", etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories.
We will train a simple Transformer-based model to perform NER. We will be using the data from the CoNLL 2003 shared task. For more information about the dataset, please visit the [dataset website](https://www.clips.uantwerpen.be/conll2003/ner/). However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library, which contains a processed version of this [dataset](https://huggingface.co/datasets/conll2003).
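As a hedged illustration of that last step, the processed CoNLL-2003 data can be pulled directly from the Hub with the `datasets` library:

```python
from datasets import load_dataset

# Loads the processed CoNLL-2003 dataset mentioned above.
conll = load_dataset("conll2003")
example = conll["train"][0]
print(example["tokens"])    # word tokens
print(example["ner_tags"])  # integer NER labels
```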
|
swcrazyfan/KingJamesify-T5-Base
|
swcrazyfan
| 2022-02-18T03:46:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Bible",
"KJV",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
tags:
- Bible
- KJV
---
# King Jamesify
This seq2seq model is my first experiment for "translating" modern English into the famous KJV Bible style.
The model is based on Google's "T5 Efficient Base" model. It was fine-tuned for 3 epochs on a NET-to-KJV dataset.
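A hedged usage sketch is given below. The card does not document a task prefix, so plain text is passed directly; the input sentence and generation settings are illustrative assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("swcrazyfan/KingJamesify-T5-Base")
model = AutoModelForSeq2SeqLM.from_pretrained("swcrazyfan/KingJamesify-T5-Base")

# Hedged sketch: the input sentence is a made-up illustration.
text = "Do not be afraid of those who can only hurt your body."
inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```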
|
huggingtweets/wellshit0
|
huggingtweets
| 2022-02-18T02:12:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/wellshit0/1645150325840/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1490480138752036868/de5inQAA_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">well shit</div>
<div style="text-align: center; font-size: 14px;">@wellshit0</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from well shit.
| Data | well shit |
| --- | --- |
| Tweets downloaded | 3155 |
| Retweets | 319 |
| Short tweets | 768 |
| Tweets kept | 2068 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1a1dkcwv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wellshit0's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3b61bmmt) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3b61bmmt/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wellshit0')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hakurei/lit-125M
|
hakurei
| 2022-02-17T22:52:19Z | 15 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"causal-lm",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- pytorch
- causal-lm
license: mit
---
# Lit-125M - A Small Fine-tuned Model For Fictional Storytelling
Lit-125M is a GPT-Neo 125M model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text.
## Model Description
The model used for fine-tuning is [GPT-Neo 125M](https://huggingface.co/EleutherAI/gpt-neo-125M), which is a 125 million parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).
## Training Data & Annotative Prompting
The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations.
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ]
***
When a traveler in north central Massachusetts takes the wrong fork...
```
The annotations can be mixed and matched to help generate towards a specific style.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers. The small size of the model can also help for easy debugging or further development of other models with a similar purpose.
## Example Code
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('hakurei/lit-125M')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-125M')
prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
An example output from this code produces a result that will look similar to:
```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler takes a trip through the streets of the world, the traveler feels like a youkai with a whole world inside her mind. It can be very scary for a youkai. When someone goes in the opposite direction and knocks on your door, it is actually the first time you have ever come to investigate something like that.
That's right: everyone has heard stories about youkai, right? If you have heard them, you know what I'm talking about.
It's hard not to say you
```
## Team members and Acknowledgements
- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET
|
bryan6aero/wav2vec2-base-timit-demo-colab
|
bryan6aero
| 2022-02-17T22:00:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4779
- Wer: 0.3453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4307 | 4.0 | 500 | 1.4129 | 0.9980 |
| 0.626 | 8.0 | 1000 | 0.4605 | 0.4499 |
| 0.2199 | 12.0 | 1500 | 0.4457 | 0.3898 |
| 0.1303 | 16.0 | 2000 | 0.4418 | 0.3771 |
| 0.0851 | 20.0 | 2500 | 0.4647 | 0.3548 |
| 0.0604 | 24.0 | 3000 | 0.4603 | 0.3499 |
| 0.0461 | 28.0 | 3500 | 0.4779 | 0.3453 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
oemga38/distilbert-base-uncased-finetuned-cola
|
oemga38
| 2022-02-17T21:51:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5570389007427182
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7475
- Matthews Correlation: 0.5570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5251 | 1.0 | 535 | 0.5304 | 0.4272 |
| 0.3474 | 2.0 | 1070 | 0.4874 | 0.5136 |
| 0.2356 | 3.0 | 1605 | 0.6454 | 0.5314 |
| 0.1699 | 4.0 | 2140 | 0.7475 | 0.5570 |
| 0.1244 | 5.0 | 2675 | 0.8525 | 0.5478 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
BigSalmon/GPTNeo350MInformalToFormalLincoln
|
BigSalmon
| 2022-02-17T21:37:07Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
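A minimal generation sketch using the prompt format above (the sampling parameters are illustrative assumptions, not the author's settings):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln")

# Follow the "informal english: ... / Translated into the Style of Abraham Lincoln:" format shown above.
prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, do_sample=True, top_p=0.9,
                        max_length=input_ids.shape[1] + 40,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```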
|
huggingtweets/owljohn
|
huggingtweets
| 2022-02-17T21:30:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/owljohn/1645133439835/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/984175260135231488/eqWrIzlg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Scott Hutchison</div>
<div style="text-align: center; font-size: 14px;">@owljohn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Scott Hutchison.
| Data | Scott Hutchison |
| --- | --- |
| Tweets downloaded | 2807 |
| Retweets | 454 |
| Short tweets | 239 |
| Tweets kept | 2114 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pbwq7lq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @owljohn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6jgoemgd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6jgoemgd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/owljohn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
JBNLRY/distilbert-base-uncased-finetuned-cola
|
JBNLRY
| 2022-02-17T19:56:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5471613867597194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5432 | 0.4243 |
| 0.3447 | 2.0 | 1070 | 0.4968 | 0.5187 |
| 0.2347 | 3.0 | 1605 | 0.6540 | 0.5280 |
| 0.1747 | 4.0 | 2140 | 0.7547 | 0.5367 |
| 0.1255 | 5.0 | 2675 | 0.8366 | 0.5472 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mofawzy/bert-ajgt
|
mofawzy
| 2022-02-17T19:56:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"AJGT",
"ar",
"dataset:AJGT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- ar
datasets:
- AJGT
tags:
- AJGT
widget:
- text: "يهدي الله من يشاء"
- text: "الاسلوب قذر وقمامه"
---
# BERT-AJGT
An Arabic BERT model fine-tuned on the AJGT dataset.
## Data
The model was fine-tuned on ~1800 sentences from Twitter in the Jordanian dialect.
## Results
| class | precision | recall | f1-score | Support |
|----------|-----------|--------|----------|---------|
| 0 | 0.9462 | 0.9778 | 0.9617 | 90 |
| 1 | 0.9399 | 0.9689 | 0.9542 | 90 |
| Accuracy | | | 0.9611 | 180 |
## How to use
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name="mofawzy/bert-ajgt"
model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
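A short inference sketch building on the snippet above; the mapping of class indices to sentiment labels is an assumption and should be checked against the model config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-ajgt"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer("يهدي الله من يشاء", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index (0 or 1, as in the results table)
```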
|
cankeles/ConvTasNet_WHAMR_enhsingle_16k
|
cankeles
| 2022-02-17T19:32:29Z | 27 | 2 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `cankeles/ConvTasNet_WHAMR_enhsingle_16k`
Description:
This model was fine-tuned on a modified version of WHAMR! in which the speakers were taken from audiobook recordings and reverb was added using Spotify's Pedalboard library.
The initial model was taken from here: https://huggingface.co/JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k
This model was trained by M. Can Keles using the WHAM recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the WHAM dataset.
Training config:
```yml
data:
mode: min
nondefault_nsrc: null
sample_rate: 16000
task: enh_single
train_dir: wav16k/min/tr/
valid_dir: wav16k/min/cv/
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/tmp
help: null
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 2
early_stop: true
epochs: 10
half_lr: true
num_workers: 4
```
Results:
```
'sar': 13.612368475881558,
'sar_imp': 9.709316571584433,
'sdr': 13.612368475881558,
'sdr_imp': 9.709316571584433,
'si_sdr': 12.978640274976373,
'si_sdr_imp': 9.161273840297232,
'sir': inf,
'sir_imp': nan,
'stoi': 0.9214516928197306,
'stoi_imp': 0.11657488247668318
```
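For completeness, a minimal enhancement sketch assuming Asteroid's Hugging Face Hub integration; the file paths are placeholders and the audio is expected at 16 kHz:
```python
import soundfile as sf
import torch
from asteroid.models import ConvTasNet

# Load the fine-tuned single-source enhancement model from the Hub.
model = ConvTasNet.from_pretrained("cankeles/ConvTasNet_WHAMR_enhsingle_16k")

noisy, sr = sf.read("noisy_speech_16k.wav")  # placeholder path, 16 kHz mono audio
with torch.no_grad():
    enhanced = model(torch.tensor(noisy, dtype=torch.float32).unsqueeze(0))
sf.write("enhanced_speech_16k.wav", enhanced.squeeze().numpy(), sr)
```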
|
keras-io/CycleGAN
|
keras-io
| 2022-02-17T16:47:55Z | 13 | 10 |
tf-keras
|
[
"tf-keras",
"gan",
"computer vision",
"horse to zebra",
"license:cc0-1.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- gan
- computer vision
- horse to zebra
license:
- cc0-1.0
---
## Keras Implementation of CycleGAN model using [Horse to Zebra dataset](https://www.tensorflow.org/datasets/catalog/cycle_gan#cycle_ganhorse2zebra) 🐴 -> 🦓
This repo contains the model and the notebook [to this Keras example on CycleGAN](https://keras.io/examples/generative/cyclegan/).
Full credits to: [Aakash Kumar Nain](https://twitter.com/A_K_Nain)
## Background Information
CycleGAN is a model that aims to solve the image-to-image translation problem. The goal of the image-to-image translation problem is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, obtaining paired examples isn't always feasible. CycleGAN tries to learn this mapping without requiring paired input-output images, using cycle-consistent adversarial networks.

|
huggingtweets/janiedied
|
huggingtweets
| 2022-02-17T15:30:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/janiedied/1645111847557/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1455690959132532738/Z4UvDtLA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Poolgirl Janie Diamond</div>
<div style="text-align: center; font-size: 14px;">@janiedied</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Poolgirl Janie Diamond.
| Data | Poolgirl Janie Diamond |
| --- | --- |
| Tweets downloaded | 1505 |
| Retweets | 552 |
| Short tweets | 283 |
| Tweets kept | 670 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3232onrl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @janiedied's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/smx9pf1l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/smx9pf1l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/janiedied')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Milos/slovak-gpt-j-1.4B
|
Milos
| 2022-02-17T14:29:47Z | 415 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"Slovak GPT-J",
"causal-lm",
"sk",
"arxiv:2104.09864",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language:
- sk
tags:
- Slovak GPT-J
- pytorch
- causal-lm
license: gpl-3.0
---
# Slovak GPT-J-1.4B
Slovak GPT-J-1.4B, with a whopping `1,415,283,792` parameters, is the latest and largest model released in the Slovak GPT-J series. Smaller variants, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M), are still available.
## Model Description
Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 1.4B trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 1,415,283,792 |
| \\(n_{layers}\\) | 24 |
| \\(d_{model}\\) | 2048 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3†) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<p><strong>†</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure>
## Training data
Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model.
The dataset was preprocessed and cleaned in a specific way that involves a few minor caveats, so in order to achieve the expected performance, please refer to the [How to use] section. Please keep in mind that despite the effort to remove inappropriate parts of the corpus, the model might still generate sensitive content or leak sensitive information.
## Training procedure
This model was trained for a bit more than 26.5 billion tokens over 48,001 steps on a TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.657`.
## Intended Use
Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks; however, the intended use is text generation from a prompt.
### How to use
This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B")
```
When generating a prompt keep in mind these three things, and you should be good to go:
1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`.
2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`.
3. In case of a new line, always enter `\n\n` instead of a single `\n`.
To illustrate an example of a basic text generation:
```
>>> prompt = "Tradičné jedlo na Orave sú"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input)
>>> tokenizer.decode(output[0])
'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov'
```
### Capabilities, Limitations, and Biases
Based on the parameter count and our internal testing, this should be by far the most capable model of all the models released in the Slovak GPT-J series.
For the sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines in the examples that follow. These were generated with both `top_p` and `temperature` set to `0.9`.
A spin on "Mor ho!" by Samo Chalupka
```
PROMPT
Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu.
Duní Dunaj a luna za lunou sa valí
OUTPUT
Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu.
Duní Dunaj a luna za lunou sa valí: nad ním svieti pevný hrad na vysokom bralí. Pod tým hradom je zem slovenská, na nej je Slovensko naše.
Na tom vrchu leží slovenský národ, pod tým bralom leží krajina slovenská. A pod tým hradom leží i jeho sláva: na tom vrchu, na tom brale leží sláva slovenského národa.
```
Jokes are a bit hit or miss. Very often they contain vulgarisms and/or are highly politically incorrect. This one is rather the exception than the rule.
```
PROMPT
Nasledujú najlepšie vtipy.
Prečo
OUTPUT
Nasledujú najlepšie vtipy.
Prečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka.
```
The infamous unicorn prompt from GPT-2 paper.
```
PROMPT
V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.
OUTPUT
V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.
V Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. "Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov," povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč.
Podľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. "Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách."
Na Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|>
```
Since the dataset contains profanity, politically incorrect language, and (unintentionally) even bits of text in Czech, the model can generate them to some extent too. Here's an example of the model output when the prompt is in Czech:
```
>>> prompt = "Věta nesmí být sprostá a musí být zcela"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input, max_length=16)
>>> tokenizer.decode(output[0])
'Věta nesmí být sprostá a musí být zcela pravdivá.'
```
## Citation and Related Information
This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :)
If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile.
### BibTeX entry
To cite this model:
```bibtex
@misc{slovak-gpt-j-1.4B,
author = {Kondela, Milos},
title = {{Slovak GPT-J-1.4B}},
howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-1.4B}},
year = 2022,
month = February
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
|
cuongngm/layoutlm-bill
|
cuongngm
| 2022-02-17T09:45:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
Fine-tuned LayoutLMv2 model on a Vietnamese bill dataset.
```python
from transformers import LayoutLMv2ForTokenClassification

# Token-level labels used during fine-tuning on the Vietnamese bill dataset.
labels = ['price',
          'storename',
          'total_cost',
          'phone',
          'address',
          'unitprice',
          'item',
          'subitem',
          'other',
          'time',
          'unit',
          'total refunds',
          'total_qty',
          'seller',
          'total_received']

model = LayoutLMv2ForTokenClassification.from_pretrained('cuongngm/layoutlm-bill', num_labels=len(labels))
```
|
lrsowmya/SentimentAnalyer
|
lrsowmya
| 2022-02-17T09:42:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
This sentiment analyzer is used for analyzing comments from restaurant reviews.
|
DATEXIS/CORe-clinical-diagnosis-prediction
|
DATEXIS
| 2022-02-17T09:36:23Z | 636 | 29 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"medical",
"clinical",
"diagnosis",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- bert
- medical
- clinical
- diagnosis
- text-classification
thumbnail: "https://core.app.datexis.com/static/paper.png"
widget:
- text: "Patient with hypertension presents to ICU."
---
# CORe Model - Clinical Diagnosis Prediction
## Model description
The CORe (_Clinical Outcome Representations_) model is introduced in the paper [Clinical Outcome Predictions from Admission Notes using Self-Supervised Knowledge Integration](https://www.aclweb.org/anthology/2021.eacl-main.75.pdf).
It is based on BioBERT and further pre-trained on clinical notes, disease descriptions and medical articles with a specialised _Clinical Outcome Pre-Training_ objective.
This model checkpoint is **fine-tuned on the task of diagnosis prediction**.
The model expects patient admission notes as input and outputs multi-label ICD9-code predictions.
#### Model Predictions
The model makes predictions on a total of 9237 labels. These contain 3- and 4-digit ICD9 codes and textual descriptions of these codes. The 4-digit codes and textual descriptions help to incorporate further topical and hierarchical information into the model during training (see Section 4.2 _ICD+: Incorporation of ICD Hierarchy_ in our paper). We recommend only using the **3-digit code predictions at inference time**, because only those have been evaluated in our work.
#### How to use CORe Diagnosis Prediction
You can load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction")
model = AutoModelForSequenceClassification.from_pretrained("bvanaken/CORe-clinical-diagnosis-prediction")
```
The following code shows an inference example:
```
input = "CHIEF COMPLAINT: Headaches\n\nPRESENT ILLNESS: 58yo man w/ hx of hypertension, AFib on coumadin presented to ED with the worst headache of his life."
tokenized_input = tokenizer(input, return_tensors="pt")
output = model(**tokenized_input)
import torch
predictions = torch.sigmoid(output.logits)
predicted_labels = [model.config.id2label[_id] for _id in (predictions > 0.3).nonzero()[:, 1].tolist()]
```
Note: For the best performance, we recommend determining the thresholds (0.3 in this example) individually per label.
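Continuing from the example above, here is a sketch of applying per-label thresholds instead of a single global cutoff; the specific codes and threshold values below are placeholders you would tune on validation data:
```
# Hypothetical per-label thresholds; anything not listed falls back to the global default.
label_thresholds = {"401": 0.25, "428": 0.35}  # placeholder ICD9 codes and values
default_threshold = 0.3

probs = torch.sigmoid(output.logits)[0]
predicted_labels = [
    model.config.id2label[i]
    for i, p in enumerate(probs.tolist())
    if p > label_thresholds.get(model.config.id2label[i], default_threshold)
]
```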
### More Information
For all the details about CORe and contact info, please visit [CORe.app.datexis.com](http://core.app.datexis.com/).
### Cite
```bibtex
@inproceedings{vanaken21,
author = {Betty van Aken and
Jens-Michalis Papaioannou and
Manuel Mayrdorfer and
Klemens Budde and
Felix A. Gers and
Alexander Löser},
title = {Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume, {EACL} 2021,
Online, April 19 - 23, 2021},
publisher = {Association for Computational Linguistics},
year = {2021},
}
```
|
phongdtd/wavlm-vindata-demo-dist
|
phongdtd
| 2022-02-17T05:00:57Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"phongdtd/VinDataVLSP",
"generated_from_trainer",
"dataset:vin_data_vlsp",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- phongdtd/VinDataVLSP
- generated_from_trainer
datasets:
- vin_data_vlsp
model-index:
- name: wavlm-vindata-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-vindata-demo-dist
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the PHONGDTD/VINDATAVLSP - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4439
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:---:|
| 4.0704 | 0.01 | 100 | 3.8768 | 1.0 |
| 3.6236 | 0.01 | 200 | 3.4611 | 1.0 |
| 6.597 | 0.02 | 300 | 3.4557 | 1.0 |
| 3.4744 | 0.03 | 400 | 3.4567 | 1.0 |
| 5.3992 | 0.04 | 500 | 3.4631 | 1.0 |
| 4.5348 | 0.04 | 600 | 3.4651 | 1.0 |
| 3.2457 | 0.05 | 700 | 3.4917 | 1.0 |
| 3.9245 | 0.06 | 800 | 3.4680 | 1.0 |
| 3.2904 | 0.07 | 900 | 3.4518 | 1.0 |
| 3.4768 | 0.07 | 1000 | 3.4506 | 1.0 |
| 3.2418 | 0.08 | 1100 | 3.4474 | 1.0 |
| 3.3111 | 0.09 | 1200 | 3.4684 | 1.0 |
| 3.986 | 0.09 | 1300 | 3.4465 | 1.0 |
| 4.3206 | 0.1 | 1400 | 3.4723 | 1.0 |
| 4.682 | 0.11 | 1500 | 3.4732 | 1.0 |
| 4.858 | 0.12 | 1600 | 3.4416 | 1.0 |
| 3.2949 | 0.12 | 1700 | 3.4481 | 1.0 |
| 3.4435 | 0.13 | 1800 | 3.4570 | 1.0 |
| 5.0695 | 0.14 | 1900 | 3.4448 | 1.0 |
| 3.4962 | 0.14 | 2000 | 3.4416 | 1.0 |
| 3.4891 | 0.15 | 2100 | 3.4455 | 1.0 |
| 4.1281 | 0.16 | 2200 | 3.4447 | 1.0 |
| 3.5956 | 0.17 | 2300 | 3.4512 | 1.0 |
| 3.6312 | 0.17 | 2400 | 3.4484 | 1.0 |
| 4.5383 | 0.18 | 2500 | 3.4435 | 1.0 |
| 6.1329 | 0.19 | 2600 | 3.4530 | 1.0 |
| 3.709 | 0.2 | 2700 | 3.4466 | 1.0 |
| 3.289 | 0.2 | 2800 | 3.4463 | 1.0 |
| 4.3301 | 0.21 | 2900 | 3.4418 | 1.0 |
| 4.6656 | 0.22 | 3000 | 3.4447 | 1.0 |
| 3.4288 | 0.22 | 3100 | 3.4715 | 1.0 |
| 3.5506 | 0.23 | 3200 | 3.4437 | 1.0 |
| 3.7497 | 0.24 | 3300 | 3.4910 | 1.0 |
| 3.5198 | 0.25 | 3400 | 3.4574 | 1.0 |
| 3.4183 | 0.25 | 3500 | 3.4607 | 1.0 |
| 4.5573 | 0.26 | 3600 | 3.4421 | 1.0 |
| 3.5737 | 0.27 | 3700 | 3.4481 | 1.0 |
| 4.9008 | 0.28 | 3800 | 3.4411 | 1.0 |
| 4.8725 | 0.28 | 3900 | 3.4422 | 1.0 |
| 3.5799 | 0.29 | 4000 | 3.4659 | 1.0 |
| 3.3257 | 0.3 | 4100 | 3.4519 | 1.0 |
| 3.6887 | 0.3 | 4200 | 3.4827 | 1.0 |
| 3.3037 | 0.31 | 4300 | 3.4632 | 1.0 |
| 5.5543 | 0.32 | 4400 | 3.4480 | 1.0 |
| 3.2898 | 0.33 | 4500 | 3.4404 | 1.0 |
| 3.2794 | 0.33 | 4600 | 3.4633 | 1.0 |
| 3.7896 | 0.34 | 4700 | 3.4439 | 1.0 |
| 3.6662 | 0.35 | 4800 | 3.4587 | 1.0 |
| 3.588 | 0.35 | 4900 | 3.4520 | 1.0 |
| 4.0535 | 0.36 | 5000 | 3.4450 | 1.0 |
| 3.4335 | 0.37 | 5100 | 3.4577 | 1.0 |
| 3.6317 | 0.38 | 5200 | 3.4443 | 1.0 |
| 5.2564 | 0.38 | 5300 | 3.4505 | 1.0 |
| 3.8781 | 0.39 | 5400 | 3.4418 | 1.0 |
| 4.6269 | 0.4 | 5500 | 3.4425 | 1.0 |
| 3.6095 | 0.41 | 5600 | 3.4581 | 1.0 |
| 4.6164 | 0.41 | 5700 | 3.4404 | 1.0 |
| 3.117 | 0.42 | 5800 | 3.4596 | 1.0 |
| 4.3939 | 0.43 | 5900 | 3.4401 | 1.0 |
| 3.5856 | 0.43 | 6000 | 3.4413 | 1.0 |
| 3.5187 | 0.44 | 6100 | 3.4452 | 1.0 |
| 4.7991 | 0.45 | 6200 | 3.4481 | 1.0 |
| 3.3905 | 0.46 | 6300 | 3.4420 | 1.0 |
| 3.5086 | 0.46 | 6400 | 3.4494 | 1.0 |
| 4.8217 | 0.47 | 6500 | 3.4477 | 1.0 |
| 3.3193 | 0.48 | 6600 | 3.4382 | 1.0 |
| 5.3482 | 0.49 | 6700 | 3.4580 | 1.0 |
| 3.3947 | 0.49 | 6800 | 3.4767 | 1.0 |
| 6.3352 | 0.5 | 6900 | 3.4476 | 1.0 |
| 3.4448 | 0.51 | 7000 | 3.4557 | 1.0 |
| 3.5358 | 0.51 | 7100 | 3.4438 | 1.0 |
| 3.3499 | 0.52 | 7200 | 3.4445 | 1.0 |
| 3.6932 | 0.53 | 7300 | 3.4463 | 1.0 |
| 6.9058 | 0.54 | 7400 | 3.4482 | 1.0 |
| 4.5514 | 0.54 | 7500 | 3.4422 | 1.0 |
| 3.517 | 0.55 | 7600 | 3.4505 | 1.0 |
| 7.4479 | 0.56 | 7700 | 3.4461 | 1.0 |
| 3.3761 | 0.56 | 7800 | 3.4511 | 1.0 |
| 4.5925 | 0.57 | 7900 | 3.4389 | 1.0 |
| 5.2682 | 0.58 | 8000 | 3.4563 | 1.0 |
| 5.6748 | 0.59 | 8100 | 3.4601 | 1.0 |
| 4.4335 | 0.59 | 8200 | 3.4439 | 1.0 |
| 5.1686 | 0.6 | 8300 | 3.4444 | 1.0 |
| 3.5245 | 0.61 | 8400 | 3.4629 | 1.0 |
| 4.9426 | 0.62 | 8500 | 3.4389 | 1.0 |
| 4.4654 | 0.62 | 8600 | 3.4427 | 1.0 |
| 3.5626 | 0.63 | 8700 | 3.4521 | 1.0 |
| 4.7086 | 0.64 | 8800 | 3.4489 | 1.0 |
| 3.238 | 0.64 | 8900 | 3.4478 | 1.0 |
| 4.2738 | 0.65 | 9000 | 3.4510 | 1.0 |
| 3.4468 | 0.66 | 9100 | 3.4411 | 1.0 |
| 3.2292 | 0.67 | 9200 | 3.4416 | 1.0 |
| 3.4972 | 0.67 | 9300 | 3.4643 | 1.0 |
| 7.3434 | 0.68 | 9400 | 3.4587 | 1.0 |
| 3.708 | 0.69 | 9500 | 3.4799 | 1.0 |
| 4.6466 | 0.69 | 9600 | 3.4490 | 1.0 |
| 3.3347 | 0.7 | 9700 | 3.4532 | 1.0 |
| 5.1486 | 0.71 | 9800 | 3.4427 | 1.0 |
| 3.6456 | 0.72 | 9900 | 3.4492 | 1.0 |
| 5.3904 | 0.72 | 10000 | 3.4497 | 1.0 |
| 4.8832 | 0.73 | 10100 | 3.4476 | 1.0 |
| 3.4482 | 0.74 | 10200 | 3.4539 | 1.0 |
| 3.617 | 0.75 | 10300 | 3.4547 | 1.0 |
| 5.4691 | 0.75 | 10400 | 3.4663 | 1.0 |
| 4.2759 | 0.76 | 10500 | 3.4401 | 1.0 |
| 8.2106 | 0.77 | 10600 | 3.4404 | 1.0 |
| 3.4894 | 0.77 | 10700 | 3.4426 | 1.0 |
| 3.6875 | 0.78 | 10800 | 3.4439 | 1.0 |
| 3.3277 | 0.79 | 10900 | 3.4446 | 1.0 |
| 4.5175 | 0.8 | 11000 | 3.4456 | 1.0 |
| 5.2161 | 0.8 | 11100 | 3.4388 | 1.0 |
| 3.5234 | 0.81 | 11200 | 3.4418 | 1.0 |
| 4.2212 | 0.82 | 11300 | 3.4392 | 1.0 |
| 3.6923 | 0.83 | 11400 | 3.4494 | 1.0 |
| 3.4863 | 0.83 | 11500 | 3.4572 | 1.0 |
| 6.3201 | 0.84 | 11600 | 3.4377 | 1.0 |
| 3.7543 | 0.85 | 11700 | 3.4533 | 1.0 |
| 3.3959 | 0.85 | 11800 | 3.4600 | 1.0 |
| 3.5691 | 0.86 | 11900 | 3.4673 | 1.0 |
| 3.49 | 0.87 | 12000 | 3.4407 | 1.0 |
| 7.1165 | 0.88 | 12100 | 3.4427 | 1.0 |
| 6.731 | 0.88 | 12200 | 3.4394 | 1.0 |
| 4.4682 | 0.89 | 12300 | 3.4407 | 1.0 |
| 3.3696 | 0.9 | 12400 | 3.4415 | 1.0 |
| 4.0241 | 0.9 | 12500 | 3.4454 | 1.0 |
| 3.521 | 0.91 | 12600 | 3.4379 | 1.0 |
| 5.5273 | 0.92 | 12700 | 3.4423 | 1.0 |
| 3.4781 | 0.93 | 12800 | 3.4635 | 1.0 |
| 3.4542 | 0.93 | 12900 | 3.4411 | 1.0 |
| 3.2363 | 0.94 | 13000 | 3.4396 | 1.0 |
| 5.3009 | 0.95 | 13100 | 3.4458 | 1.0 |
| 3.498 | 0.96 | 13200 | 3.4398 | 1.0 |
| 6.3325 | 0.96 | 13300 | 3.4514 | 1.0 |
| 3.5368 | 0.97 | 13400 | 3.4437 | 1.0 |
| 5.1164 | 0.98 | 13500 | 3.4623 | 1.0 |
| 3.6144 | 0.98 | 13600 | 3.4512 | 1.0 |
| 6.6018 | 0.99 | 13700 | 3.4493 | 1.0 |
| 3.7539 | 1.0 | 13800 | 3.4597 | 1.0 |
| 3.2903 | 1.01 | 13900 | 3.4813 | 1.0 |
| 3.3243 | 1.01 | 14000 | 3.4510 | 1.0 |
| 3.3485 | 1.02 | 14100 | 3.4389 | 1.0 |
| 3.6197 | 1.03 | 14200 | 3.4519 | 1.0 |
| 3.322 | 1.04 | 14300 | 3.4399 | 1.0 |
| 3.2897 | 1.04 | 14400 | 3.4378 | 1.0 |
| 3.3969 | 1.05 | 14500 | 3.4476 | 1.0 |
| 3.3289 | 1.06 | 14600 | 3.4646 | 1.0 |
| 3.3556 | 1.06 | 14700 | 3.4520 | 1.0 |
| 3.2527 | 1.07 | 14800 | 3.4575 | 1.0 |
| 3.4003 | 1.08 | 14900 | 3.4443 | 1.0 |
| 3.3171 | 1.09 | 15000 | 3.4434 | 1.0 |
| 3.4034 | 1.09 | 15100 | 3.4448 | 1.0 |
| 3.4363 | 1.1 | 15200 | 3.4560 | 1.0 |
| 3.3969 | 1.11 | 15300 | 3.4405 | 1.0 |
| 3.4134 | 1.11 | 15400 | 3.4408 | 1.0 |
| 3.5059 | 1.12 | 15500 | 3.4395 | 1.0 |
| 3.3963 | 1.13 | 15600 | 3.4488 | 1.0 |
| 3.2937 | 1.14 | 15700 | 3.4482 | 1.0 |
| 3.5635 | 1.14 | 15800 | 3.4621 | 1.0 |
| 3.4463 | 1.15 | 15900 | 3.4433 | 1.0 |
| 3.2588 | 1.16 | 16000 | 3.4434 | 1.0 |
| 3.3617 | 1.17 | 16100 | 3.4542 | 1.0 |
| 3.3721 | 1.17 | 16200 | 3.4388 | 1.0 |
| 3.3867 | 1.18 | 16300 | 3.4577 | 1.0 |
| 3.34 | 1.19 | 16400 | 3.4510 | 1.0 |
| 3.3676 | 1.19 | 16500 | 3.4434 | 1.0 |
| 3.5519 | 1.2 | 16600 | 3.4410 | 1.0 |
| 3.3129 | 1.21 | 16700 | 3.4507 | 1.0 |
| 3.3368 | 1.22 | 16800 | 3.4718 | 1.0 |
| 3.3107 | 1.22 | 16900 | 3.4439 | 1.0 |
| 3.2987 | 1.23 | 17000 | 3.4471 | 1.0 |
| 3.3102 | 1.24 | 17100 | 3.4435 | 1.0 |
| 3.2089 | 1.25 | 17200 | 3.4432 | 1.0 |
| 3.415 | 1.25 | 17300 | 3.4472 | 1.0 |
| 3.2884 | 1.26 | 17400 | 3.4388 | 1.0 |
| 3.3837 | 1.27 | 17500 | 3.4444 | 1.0 |
| 3.3181 | 1.27 | 17600 | 3.4438 | 1.0 |
| 3.3071 | 1.28 | 17700 | 3.4406 | 1.0 |
| 3.389 | 1.29 | 17800 | 3.4573 | 1.0 |
| 3.3246 | 1.3 | 17900 | 3.4580 | 1.0 |
| 3.3122 | 1.3 | 18000 | 3.4455 | 1.0 |
| 3.282 | 1.31 | 18100 | 3.4606 | 1.0 |
| 3.2671 | 1.32 | 18200 | 3.4378 | 1.0 |
| 3.3441 | 1.32 | 18300 | 3.4432 | 1.0 |
| 3.3115 | 1.33 | 18400 | 3.4458 | 1.0 |
| 3.3542 | 1.34 | 18500 | 3.4617 | 1.0 |
| 3.3924 | 1.35 | 18600 | 3.4549 | 1.0 |
| 3.4895 | 1.35 | 18700 | 3.4557 | 1.0 |
| 3.4071 | 1.36 | 18800 | 3.4462 | 1.0 |
| 3.3373 | 1.37 | 18900 | 3.4606 | 1.0 |
| 3.3497 | 1.38 | 19000 | 3.4458 | 1.0 |
| 3.3088 | 1.38 | 19100 | 3.4712 | 1.0 |
| 3.333 | 1.39 | 19200 | 3.4483 | 1.0 |
| 3.3773 | 1.4 | 19300 | 3.4455 | 1.0 |
| 3.357 | 1.4 | 19400 | 3.4379 | 1.0 |
| 3.3506 | 1.41 | 19500 | 3.4477 | 1.0 |
| 3.2944 | 1.42 | 19600 | 3.4478 | 1.0 |
| 3.241 | 1.43 | 19700 | 3.4492 | 1.0 |
| 3.4317 | 1.43 | 19800 | 3.4441 | 1.0 |
| 3.3478 | 1.44 | 19900 | 3.4385 | 1.0 |
| 3.3952 | 1.45 | 20000 | 3.4437 | 1.0 |
| 3.4808 | 1.46 | 20100 | 3.4644 | 1.0 |
| 3.3625 | 1.46 | 20200 | 3.4529 | 1.0 |
| 3.4842 | 1.47 | 20300 | 3.4524 | 1.0 |
| 3.3887 | 1.48 | 20400 | 3.4551 | 1.0 |
| 3.3198 | 1.48 | 20500 | 3.4433 | 1.0 |
| 3.3397 | 1.49 | 20600 | 3.4448 | 1.0 |
| 3.3173 | 1.5 | 20700 | 3.4590 | 1.0 |
| 3.3687 | 1.51 | 20800 | 3.4720 | 1.0 |
| 3.257 | 1.51 | 20900 | 3.4461 | 1.0 |
| 3.4451 | 1.52 | 21000 | 3.4541 | 1.0 |
| 3.2979 | 1.53 | 21100 | 3.4556 | 1.0 |
| 3.3566 | 1.53 | 21200 | 3.4438 | 1.0 |
| 3.3466 | 1.54 | 21300 | 3.4422 | 1.0 |
| 3.308 | 1.55 | 21400 | 3.4637 | 1.0 |
| 3.3952 | 1.56 | 21500 | 3.4435 | 1.0 |
| 3.4009 | 1.56 | 21600 | 3.4434 | 1.0 |
| 3.7952 | 1.57 | 21700 | 3.4675 | 1.0 |
| 3.3891 | 1.58 | 21800 | 3.4565 | 1.0 |
| 3.31 | 1.59 | 21900 | 3.4538 | 1.0 |
| 3.3186 | 1.59 | 22000 | 3.4492 | 1.0 |
| 3.3512 | 1.6 | 22100 | 3.4381 | 1.0 |
| 3.309 | 1.61 | 22200 | 3.4558 | 1.0 |
| 3.597 | 1.61 | 22300 | 3.4484 | 1.0 |
| 3.4474 | 1.62 | 22400 | 3.4574 | 1.0 |
| 3.3316 | 1.63 | 22500 | 3.4498 | 1.0 |
| 3.3909 | 1.64 | 22600 | 3.4384 | 1.0 |
| 3.6999 | 1.64 | 22700 | 3.4503 | 1.0 |
| 3.6071 | 1.65 | 22800 | 3.4578 | 1.0 |
| 3.2812 | 1.66 | 22900 | 3.4563 | 1.0 |
| 3.2921 | 1.67 | 23000 | 3.4564 | 1.0 |
| 3.3291 | 1.67 | 23100 | 3.4490 | 1.0 |
| 3.3454 | 1.68 | 23200 | 3.4403 | 1.0 |
| 3.4212 | 1.69 | 23300 | 3.4409 | 1.0 |
| 3.5481 | 1.69 | 23400 | 3.4534 | 1.0 |
| 3.2784 | 1.7 | 23500 | 3.4486 | 1.0 |
| 3.4625 | 1.71 | 23600 | 3.4413 | 1.0 |
| 3.2427 | 1.72 | 23700 | 3.4694 | 1.0 |
| 3.8438 | 1.72 | 23800 | 3.4444 | 1.0 |
| 3.4009 | 1.73 | 23900 | 3.4505 | 1.0 |
| 3.8029 | 1.74 | 24000 | 3.4712 | 1.0 |
| 3.36 | 1.74 | 24100 | 3.4552 | 1.0 |
| 3.2751 | 1.75 | 24200 | 3.4511 | 1.0 |
| 3.309 | 1.76 | 24300 | 3.4368 | 1.0 |
| 3.4597 | 1.77 | 24400 | 3.4517 | 1.0 |
| 3.2812 | 1.77 | 24500 | 3.4475 | 1.0 |
| 3.4425 | 1.78 | 24600 | 3.4413 | 1.0 |
| 3.3968 | 1.79 | 24700 | 3.4482 | 1.0 |
| 3.35 | 1.8 | 24800 | 3.4473 | 1.0 |
| 3.3156 | 1.8 | 24900 | 3.4435 | 1.0 |
| 3.3008 | 1.81 | 25000 | 3.4439 | 1.0 |
| 3.3365 | 1.82 | 25100 | 3.4382 | 1.0 |
| 3.5473 | 1.82 | 25200 | 3.4396 | 1.0 |
| 3.3568 | 1.83 | 25300 | 3.4577 | 1.0 |
| 3.28 | 1.84 | 25400 | 3.4458 | 1.0 |
| 3.4389 | 1.85 | 25500 | 3.4436 | 1.0 |
| 3.345 | 1.85 | 25600 | 3.4435 | 1.0 |
| 3.3295 | 1.86 | 25700 | 3.4428 | 1.0 |
| 4.4622 | 1.87 | 25800 | 3.4638 | 1.0 |
| 3.3717 | 1.88 | 25900 | 3.4450 | 1.0 |
| 3.3 | 1.88 | 26000 | 3.4616 | 1.0 |
| 3.3399 | 1.89 | 26100 | 3.4391 | 1.0 |
| 3.4243 | 1.9 | 26200 | 3.4375 | 1.0 |
| 3.326 | 1.9 | 26300 | 3.4533 | 1.0 |
| 3.3337 | 1.91 | 26400 | 3.4538 | 1.0 |
| 3.2655 | 1.92 | 26500 | 3.4460 | 1.0 |
| 3.2963 | 1.93 | 26600 | 3.4443 | 1.0 |
| 3.3967 | 1.93 | 26700 | 3.4392 | 1.0 |
| 3.3203 | 1.94 | 26800 | 3.4609 | 1.0 |
| 3.4581 | 1.95 | 26900 | 3.4388 | 1.0 |
| 3.2519 | 1.95 | 27000 | 3.4434 | 1.0 |
| 3.488 | 1.96 | 27100 | 3.4653 | 1.0 |
| 3.3446 | 1.97 | 27200 | 3.4465 | 1.0 |
| 3.4035 | 1.98 | 27300 | 3.4535 | 1.0 |
| 3.2898 | 1.98 | 27400 | 3.4442 | 1.0 |
| 3.3309 | 1.99 | 27500 | 3.4491 | 1.0 |
| 3.2765 | 2.0 | 27600 | 3.4477 | 1.0 |
| 3.3352 | 2.01 | 27700 | 3.4540 | 1.0 |
| 3.4456 | 2.01 | 27800 | 3.4602 | 1.0 |
| 3.6378 | 2.02 | 27900 | 3.4578 | 1.0 |
| 6.4491 | 2.03 | 28000 | 3.4494 | 1.0 |
| 6.1705 | 2.03 | 28100 | 3.4570 | 1.0 |
| 3.4253 | 2.04 | 28200 | 3.4504 | 1.0 |
| 3.4053 | 2.05 | 28300 | 3.4399 | 1.0 |
| 3.6719 | 2.06 | 28400 | 3.4464 | 1.0 |
| 3.2769 | 2.06 | 28500 | 3.4473 | 1.0 |
| 3.3132 | 2.07 | 28600 | 3.4484 | 1.0 |
| 3.3756 | 2.08 | 28700 | 3.4413 | 1.0 |
| 5.5583 | 2.08 | 28800 | 3.4411 | 1.0 |
| 3.6191 | 2.09 | 28900 | 3.4406 | 1.0 |
| 3.4681 | 2.1 | 29000 | 3.4461 | 1.0 |
| 4.463 | 2.11 | 29100 | 3.4409 | 1.0 |
| 3.4645 | 2.11 | 29200 | 3.4556 | 1.0 |
| 3.6549 | 2.12 | 29300 | 3.4545 | 1.0 |
| 3.437 | 2.13 | 29400 | 3.4410 | 1.0 |
| 3.5002 | 2.14 | 29500 | 3.4370 | 1.0 |
| 3.4375 | 2.14 | 29600 | 3.4407 | 1.0 |
| 3.3798 | 2.15 | 29700 | 3.4390 | 1.0 |
| 3.6778 | 2.16 | 29800 | 3.4386 | 1.0 |
| 3.4647 | 2.16 | 29900 | 3.4600 | 1.0 |
| 3.4328 | 2.17 | 30000 | 3.4492 | 1.0 |
| 3.4381 | 2.18 | 30100 | 3.4406 | 1.0 |
| 3.3253 | 2.19 | 30200 | 3.4461 | 1.0 |
| 3.4112 | 2.19 | 30300 | 3.4478 | 1.0 |
| 3.6158 | 2.2 | 30400 | 3.4482 | 1.0 |
| 3.5541 | 2.21 | 30500 | 3.4424 | 1.0 |
| 4.3339 | 2.22 | 30600 | 3.4432 | 1.0 |
| 3.818 | 2.22 | 30700 | 3.4453 | 1.0 |
| 3.8914 | 2.23 | 30800 | 3.4457 | 1.0 |
| 5.5706 | 2.24 | 30900 | 3.4605 | 1.0 |
| 4.3359 | 2.24 | 31000 | 3.4700 | 1.0 |
| 3.6418 | 2.25 | 31100 | 3.4558 | 1.0 |
| 3.4288 | 2.26 | 31200 | 3.4396 | 1.0 |
| 3.4512 | 2.27 | 31300 | 3.4411 | 1.0 |
| 3.3326 | 2.27 | 31400 | 3.4473 | 1.0 |
| 3.5872 | 2.28 | 31500 | 3.4400 | 1.0 |
| 3.5426 | 2.29 | 31600 | 3.4469 | 1.0 |
| 4.2227 | 2.29 | 31700 | 3.4499 | 1.0 |
| 3.5461 | 2.3 | 31800 | 3.4388 | 1.0 |
| 3.5507 | 2.31 | 31900 | 3.4503 | 1.0 |
| 3.5177 | 2.32 | 32000 | 3.4429 | 1.0 |
| 3.7237 | 2.32 | 32100 | 3.4617 | 1.0 |
| 3.3513 | 2.33 | 32200 | 3.4487 | 1.0 |
| 3.3827 | 2.34 | 32300 | 3.4678 | 1.0 |
| 3.3311 | 2.35 | 32400 | 3.4441 | 1.0 |
| 3.2852 | 2.35 | 32500 | 3.4433 | 1.0 |
| 3.5712 | 2.36 | 32600 | 3.4514 | 1.0 |
| 4.6259 | 2.37 | 32700 | 3.4520 | 1.0 |
| 3.8864 | 2.37 | 32800 | 3.4544 | 1.0 |
| 3.3284 | 2.38 | 32900 | 3.4444 | 1.0 |
| 3.6078 | 2.39 | 33000 | 3.4450 | 1.0 |
| 3.4026 | 2.4 | 33100 | 3.4454 | 1.0 |
| 3.7527 | 2.4 | 33200 | 3.4541 | 1.0 |
| 3.3741 | 2.41 | 33300 | 3.4386 | 1.0 |
| 3.4498 | 2.42 | 33400 | 3.4518 | 1.0 |
| 3.3424 | 2.43 | 33500 | 3.4554 | 1.0 |
| 4.8226 | 2.43 | 33600 | 3.4412 | 1.0 |
| 3.3503 | 2.44 | 33700 | 3.4434 | 1.0 |
| 3.509 | 2.45 | 33800 | 3.4393 | 1.0 |
| 3.586 | 2.45 | 33900 | 3.4375 | 1.0 |
| 3.5242 | 2.46 | 34000 | 3.4402 | 1.0 |
| 3.4351 | 2.47 | 34100 | 3.4389 | 1.0 |
| 3.4445 | 2.48 | 34200 | 3.4416 | 1.0 |
| 6.6676 | 2.48 | 34300 | 3.4571 | 1.0 |
| 4.3937 | 2.49 | 34400 | 3.4560 | 1.0 |
| 3.4177 | 2.5 | 34500 | 3.4482 | 1.0 |
| 3.3966 | 2.5 | 34600 | 3.4640 | 1.0 |
| 3.2845 | 2.51 | 34700 | 3.4538 | 1.0 |
| 3.438 | 2.52 | 34800 | 3.4555 | 1.0 |
| 3.3874 | 2.53 | 34900 | 3.4524 | 1.0 |
| 3.5068 | 2.53 | 35000 | 3.4448 | 1.0 |
| 4.2406 | 2.54 | 35100 | 3.4503 | 1.0 |
| 3.2986 | 2.55 | 35200 | 3.4538 | 1.0 |
| 3.4044 | 2.56 | 35300 | 3.4443 | 1.0 |
| 3.3105 | 2.56 | 35400 | 3.4391 | 1.0 |
| 3.4048 | 2.57 | 35500 | 3.4411 | 1.0 |
| 3.5645 | 2.58 | 35600 | 3.4488 | 1.0 |
| 3.4912 | 2.58 | 35700 | 3.4400 | 1.0 |
| 3.4028 | 2.59 | 35800 | 3.4390 | 1.0 |
| 3.4601 | 2.6 | 35900 | 3.4455 | 1.0 |
| 3.6066 | 2.61 | 36000 | 3.4441 | 1.0 |
| 4.5312 | 2.61 | 36100 | 3.4414 | 1.0 |
| 3.6372 | 2.62 | 36200 | 3.4421 | 1.0 |
| 4.1912 | 2.63 | 36300 | 3.4572 | 1.0 |
| 3.4793 | 2.64 | 36400 | 3.4419 | 1.0 |
| 4.5538 | 2.64 | 36500 | 3.4407 | 1.0 |
| 3.3823 | 2.65 | 36600 | 3.4446 | 1.0 |
| 3.3592 | 2.66 | 36700 | 3.4396 | 1.0 |
| 3.4974 | 2.66 | 36800 | 3.4529 | 1.0 |
| 3.4599 | 2.67 | 36900 | 3.4380 | 1.0 |
| 4.7097 | 2.68 | 37000 | 3.4654 | 1.0 |
| 6.7037 | 2.69 | 37100 | 3.4386 | 1.0 |
| 3.3465 | 2.69 | 37200 | 3.4652 | 1.0 |
| 4.9762 | 2.7 | 37300 | 3.4506 | 1.0 |
| 3.9189 | 2.71 | 37400 | 3.4427 | 1.0 |
| 3.4746 | 2.71 | 37500 | 3.4465 | 1.0 |
| 3.3842 | 2.72 | 37600 | 3.4470 | 1.0 |
| 3.2445 | 2.73 | 37700 | 3.4480 | 1.0 |
| 3.382 | 2.74 | 37800 | 3.4456 | 1.0 |
| 3.7279 | 2.74 | 37900 | 3.4431 | 1.0 |
| 3.4329 | 2.75 | 38000 | 3.4374 | 1.0 |
| 3.4607 | 2.76 | 38100 | 3.4447 | 1.0 |
| 3.2394 | 2.77 | 38200 | 3.4476 | 1.0 |
| 3.7795 | 2.77 | 38300 | 3.4380 | 1.0 |
| 3.4419 | 2.78 | 38400 | 3.4526 | 1.0 |
| 3.6452 | 2.79 | 38500 | 3.4428 | 1.0 |
| 3.3474 | 2.79 | 38600 | 3.4424 | 1.0 |
| 3.4645 | 2.8 | 38700 | 3.4479 | 1.0 |
| 4.1143 | 2.81 | 38800 | 3.4580 | 1.0 |
| 4.6453 | 2.82 | 38900 | 3.4585 | 1.0 |
| 4.022 | 2.82 | 39000 | 3.4567 | 1.0 |
| 4.3049 | 2.83 | 39100 | 3.4377 | 1.0 |
| 3.3382 | 2.84 | 39200 | 3.4413 | 1.0 |
| 3.6022 | 2.85 | 39300 | 3.4548 | 1.0 |
| 4.4217 | 2.85 | 39400 | 3.4411 | 1.0 |
| 3.5139 | 2.86 | 39500 | 3.4552 | 1.0 |
| 3.1215 | 2.87 | 39600 | 3.4471 | 1.0 |
| 3.4514 | 2.87 | 39700 | 3.4378 | 1.0 |
| 4.822 | 2.88 | 39800 | 3.4605 | 1.0 |
| 5.6699 | 2.89 | 39900 | 3.4489 | 1.0 |
| 3.4183 | 2.9 | 40000 | 3.4644 | 1.0 |
| 5.7492 | 2.9 | 40100 | 3.4514 | 1.0 |
| 3.2879 | 2.91 | 40200 | 3.4543 | 1.0 |
| 3.3076 | 2.92 | 40300 | 3.4450 | 1.0 |
| 5.2845 | 2.92 | 40400 | 3.4459 | 1.0 |
| 3.7927 | 2.93 | 40500 | 3.4481 | 1.0 |
| 7.1549 | 2.94 | 40600 | 3.4554 | 1.0 |
| 3.4544 | 2.95 | 40700 | 3.4486 | 1.0 |
| 3.2332 | 2.95 | 40800 | 3.4415 | 1.0 |
| 3.3714 | 2.96 | 40900 | 3.4521 | 1.0 |
| 3.5205 | 2.97 | 41000 | 3.4395 | 1.0 |
| 4.6267 | 2.98 | 41100 | 3.4622 | 1.0 |
| 6.7747 | 2.98 | 41200 | 3.4407 | 1.0 |
| 3.3091 | 2.99 | 41300 | 3.4422 | 1.0 |
| 3.7135 | 3.0 | 41400 | 3.4383 | 1.0 |
| 3.6261 | 3.0 | 41500 | 3.4482 | 1.0 |
| 3.3323 | 3.01 | 41600 | 3.4366 | 1.0 |
| 3.4544 | 3.02 | 41700 | 3.4376 | 1.0 |
| 3.6486 | 3.03 | 41800 | 3.4511 | 1.0 |
| 3.3333 | 3.03 | 41900 | 3.4397 | 1.0 |
| 3.35 | 3.04 | 42000 | 3.4486 | 1.0 |
| 3.3522 | 3.05 | 42100 | 3.4626 | 1.0 |
| 3.4359 | 3.06 | 42200 | 3.4462 | 1.0 |
| 3.4548 | 3.06 | 42300 | 3.4435 | 1.0 |
| 3.2711 | 3.07 | 42400 | 3.4450 | 1.0 |
| 3.2679 | 3.08 | 42500 | 3.4394 | 1.0 |
| 3.3703 | 3.08 | 42600 | 3.4539 | 1.0 |
| 3.3846 | 3.09 | 42700 | 3.4443 | 1.0 |
| 3.334 | 3.1 | 42800 | 3.4384 | 1.0 |
| 3.3429 | 3.11 | 42900 | 3.4625 | 1.0 |
| 3.282 | 3.11 | 43000 | 3.4419 | 1.0 |
| 3.3503 | 3.12 | 43100 | 3.4653 | 1.0 |
| 3.4923 | 3.13 | 43200 | 3.4380 | 1.0 |
| 3.4309 | 3.13 | 43300 | 3.4534 | 1.0 |
| 3.3292 | 3.14 | 43400 | 3.4448 | 1.0 |
| 3.4219 | 3.15 | 43500 | 3.4665 | 1.0 |
| 3.3848 | 3.16 | 43600 | 3.4473 | 1.0 |
| 3.3004 | 3.16 | 43700 | 3.4509 | 1.0 |
| 3.2002 | 3.17 | 43800 | 3.4493 | 1.0 |
| 3.2654 | 3.18 | 43900 | 3.4384 | 1.0 |
| 3.3394 | 3.19 | 44000 | 3.4388 | 1.0 |
| 3.2365 | 3.19 | 44100 | 3.4491 | 1.0 |
| 3.2846 | 3.2 | 44200 | 3.4404 | 1.0 |
| 3.3973 | 3.21 | 44300 | 3.4426 | 1.0 |
| 3.3367 | 3.21 | 44400 | 3.4690 | 1.0 |
| 3.2747 | 3.22 | 44500 | 3.4378 | 1.0 |
| 3.4307 | 3.23 | 44600 | 3.4395 | 1.0 |
| 3.3685 | 3.24 | 44700 | 3.4431 | 1.0 |
| 3.321 | 3.24 | 44800 | 3.4557 | 1.0 |
| 3.3541 | 3.25 | 44900 | 3.4489 | 1.0 |
| 3.2282 | 3.26 | 45000 | 3.4393 | 1.0 |
| 3.3811 | 3.27 | 45100 | 3.4463 | 1.0 |
| 3.3014 | 3.27 | 45200 | 3.4505 | 1.0 |
| 3.3617 | 3.28 | 45300 | 3.4475 | 1.0 |
| 3.3953 | 3.29 | 45400 | 3.4430 | 1.0 |
| 3.3999 | 3.29 | 45500 | 3.4417 | 1.0 |
| 3.4098 | 3.3 | 45600 | 3.4503 | 1.0 |
| 3.1994 | 3.31 | 45700 | 3.4414 | 1.0 |
| 3.2185 | 3.32 | 45800 | 3.4485 | 1.0 |
| 3.2554 | 3.32 | 45900 | 3.4477 | 1.0 |
| 3.4302 | 3.33 | 46000 | 3.4508 | 1.0 |
| 3.366 | 3.34 | 46100 | 3.4440 | 1.0 |
| 3.4143 | 3.34 | 46200 | 3.4382 | 1.0 |
| 4.318 | 3.35 | 46300 | 3.4524 | 1.0 |
| 3.4233 | 3.36 | 46400 | 3.4451 | 1.0 |
| 3.3492 | 3.37 | 46500 | 3.4526 | 1.0 |
| 3.2399 | 3.37 | 46600 | 3.4462 | 1.0 |
| 3.421 | 3.38 | 46700 | 3.4432 | 1.0 |
| 3.2847 | 3.39 | 46800 | 3.4419 | 1.0 |
| 3.4062 | 3.4 | 46900 | 3.4405 | 1.0 |
| 3.3822 | 3.4 | 47000 | 3.4434 | 1.0 |
| 3.2789 | 3.41 | 47100 | 3.4444 | 1.0 |
| 3.2508 | 3.42 | 47200 | 3.4501 | 1.0 |
| 3.3867 | 3.42 | 47300 | 3.4498 | 1.0 |
| 3.3275 | 3.43 | 47400 | 3.4505 | 1.0 |
| 3.424 | 3.44 | 47500 | 3.4448 | 1.0 |
| 3.2418 | 3.45 | 47600 | 3.4450 | 1.0 |
| 3.3037 | 3.45 | 47700 | 3.4493 | 1.0 |
| 3.2562 | 3.46 | 47800 | 3.4466 | 1.0 |
| 3.3241 | 3.47 | 47900 | 3.4385 | 1.0 |
| 3.5569 | 3.47 | 48000 | 3.4427 | 1.0 |
| 3.298 | 3.48 | 48100 | 3.4667 | 1.0 |
| 3.3401 | 3.49 | 48200 | 3.4440 | 1.0 |
| 3.2824 | 3.5 | 48300 | 3.4427 | 1.0 |
| 3.3829 | 3.5 | 48400 | 3.4398 | 1.0 |
| 3.3595 | 3.51 | 48500 | 3.4421 | 1.0 |
| 3.286 | 3.52 | 48600 | 3.4517 | 1.0 |
| 3.3494 | 3.53 | 48700 | 3.4429 | 1.0 |
| 3.3507 | 3.53 | 48800 | 3.4422 | 1.0 |
| 3.3598 | 3.54 | 48900 | 3.4439 | 1.0 |
| 3.3141 | 3.55 | 49000 | 3.4544 | 1.0 |
| 3.4548 | 3.55 | 49100 | 3.4415 | 1.0 |
| 3.3278 | 3.56 | 49200 | 3.4474 | 1.0 |
| 3.4088 | 3.57 | 49300 | 3.4498 | 1.0 |
| 3.4046 | 3.58 | 49400 | 3.4554 | 1.0 |
| 3.2847 | 3.58 | 49500 | 3.4393 | 1.0 |
| 3.3162 | 3.59 | 49600 | 3.4594 | 1.0 |
| 3.2493 | 3.6 | 49700 | 3.4514 | 1.0 |
| 3.3466 | 3.61 | 49800 | 3.4514 | 1.0 |
| 3.3279 | 3.61 | 49900 | 3.4462 | 1.0 |
| 3.29 | 3.62 | 50000 | 3.4466 | 1.0 |
| 3.2374 | 3.63 | 50100 | 3.4575 | 1.0 |
| 3.3499 | 3.63 | 50200 | 3.4392 | 1.0 |
| 3.251 | 3.64 | 50300 | 3.4556 | 1.0 |
| 3.3692 | 3.65 | 50400 | 3.4498 | 1.0 |
| 3.3743 | 3.66 | 50500 | 3.4569 | 1.0 |
| 3.3662 | 3.66 | 50600 | 3.4463 | 1.0 |
| 3.302 | 3.67 | 50700 | 3.4445 | 1.0 |
| 3.2863 | 3.68 | 50800 | 3.4475 | 1.0 |
| 3.4266 | 3.68 | 50900 | 3.4370 | 1.0 |
| 3.2988 | 3.69 | 51000 | 3.4476 | 1.0 |
| 3.9581 | 3.7 | 51100 | 3.4382 | 1.0 |
| 3.4516 | 3.71 | 51200 | 3.4526 | 1.0 |
| 3.4259 | 3.71 | 51300 | 3.4414 | 1.0 |
| 3.3913 | 3.72 | 51400 | 3.4386 | 1.0 |
| 3.3606 | 3.73 | 51500 | 3.4458 | 1.0 |
| 3.4698 | 3.74 | 51600 | 3.4450 | 1.0 |
| 3.4285 | 3.74 | 51700 | 3.4493 | 1.0 |
| 3.265 | 3.75 | 51800 | 3.4369 | 1.0 |
| 3.4819 | 3.76 | 51900 | 3.4472 | 1.0 |
| 3.2869 | 3.76 | 52000 | 3.4580 | 1.0 |
| 3.2663 | 3.77 | 52100 | 3.4469 | 1.0 |
| 3.4325 | 3.78 | 52200 | 3.4423 | 1.0 |
| 3.3355 | 3.79 | 52300 | 3.4411 | 1.0 |
| 3.4324 | 3.79 | 52400 | 3.4456 | 1.0 |
| 3.3105 | 3.8 | 52500 | 3.4389 | 1.0 |
| 3.3588 | 3.81 | 52600 | 3.4403 | 1.0 |
| 3.3524 | 3.82 | 52700 | 3.4458 | 1.0 |
| 3.2466 | 3.82 | 52800 | 3.4447 | 1.0 |
| 3.2375 | 3.83 | 52900 | 3.4448 | 1.0 |
| 3.4006 | 3.84 | 53000 | 3.4456 | 1.0 |
| 3.3572 | 3.84 | 53100 | 3.4427 | 1.0 |
| 3.6162 | 3.85 | 53200 | 3.4379 | 1.0 |
| 3.3351 | 3.86 | 53300 | 3.4482 | 1.0 |
| 3.7101 | 3.87 | 53400 | 3.4393 | 1.0 |
| 3.3836 | 3.87 | 53500 | 3.4474 | 1.0 |
| 3.3357 | 3.88 | 53600 | 3.4573 | 1.0 |
| 3.3434 | 3.89 | 53700 | 3.4475 | 1.0 |
| 3.3349 | 3.89 | 53800 | 3.4659 | 1.0 |
| 3.3474 | 3.9 | 53900 | 3.4411 | 1.0 |
| 3.4007 | 3.91 | 54000 | 3.4446 | 1.0 |
| 3.4218 | 3.92 | 54100 | 3.4406 | 1.0 |
| 3.2115 | 3.92 | 54200 | 3.4422 | 1.0 |
| 3.2726 | 3.93 | 54300 | 3.4383 | 1.0 |
| 3.2999 | 3.94 | 54400 | 3.4423 | 1.0 |
| 3.3657 | 3.95 | 54500 | 3.4377 | 1.0 |
| 3.4015 | 3.95 | 54600 | 3.4433 | 1.0 |
| 3.3373 | 3.96 | 54700 | 3.4457 | 1.0 |
| 4.9872 | 3.97 | 54800 | 3.4420 | 1.0 |
| 3.3221 | 3.97 | 54900 | 3.4501 | 1.0 |
| 3.8059 | 3.98 | 55000 | 3.4501 | 1.0 |
| 3.2628 | 3.99 | 55100 | 3.4511 | 1.0 |
| 3.3822 | 4.0 | 55200 | 3.4409 | 1.0 |
| 3.5464 | 4.0 | 55300 | 3.4527 | 1.0 |
| 3.3661 | 4.01 | 55400 | 3.4436 | 1.0 |
| 3.4146 | 4.02 | 55500 | 3.4458 | 1.0 |
| 3.5756 | 4.03 | 55600 | 3.4409 | 1.0 |
| 3.3945 | 4.03 | 55700 | 3.4378 | 1.0 |
| 4.5275 | 4.04 | 55800 | 3.4558 | 1.0 |
| 3.7913 | 4.05 | 55900 | 3.4523 | 1.0 |
| 3.4445 | 4.05 | 56000 | 3.4446 | 1.0 |
| 3.51 | 4.06 | 56100 | 3.4488 | 1.0 |
| 6.5935 | 4.07 | 56200 | 3.4497 | 1.0 |
| 3.3548 | 4.08 | 56300 | 3.4443 | 1.0 |
| 3.4544 | 4.08 | 56400 | 3.4547 | 1.0 |
| 3.4206 | 4.09 | 56500 | 3.4476 | 1.0 |
| 3.3979 | 4.1 | 56600 | 3.4459 | 1.0 |
| 3.296 | 4.1 | 56700 | 3.4461 | 1.0 |
| 3.7186 | 4.11 | 56800 | 3.4407 | 1.0 |
| 3.8726 | 4.12 | 56900 | 3.4498 | 1.0 |
| 3.6704 | 4.13 | 57000 | 3.4535 | 1.0 |
| 3.4735 | 4.13 | 57100 | 3.4470 | 1.0 |
| 3.399 | 4.14 | 57200 | 3.4461 | 1.0 |
| 3.3507 | 4.15 | 57300 | 3.4405 | 1.0 |
| 3.3948 | 4.16 | 57400 | 3.4582 | 1.0 |
| 3.613 | 4.16 | 57500 | 3.4462 | 1.0 |
| 3.3553 | 4.17 | 57600 | 3.4507 | 1.0 |
| 3.5798 | 4.18 | 57700 | 3.4476 | 1.0 |
| 7.6315 | 4.18 | 57800 | 3.4412 | 1.0 |
| 3.4873 | 4.19 | 57900 | 3.4605 | 1.0 |
| 3.3193 | 4.2 | 58000 | 3.4458 | 1.0 |
| 3.4065 | 4.21 | 58100 | 3.4368 | 1.0 |
| 3.4813 | 4.21 | 58200 | 3.4464 | 1.0 |
| 3.2523 | 4.22 | 58300 | 3.4601 | 1.0 |
| 3.3384 | 4.23 | 58400 | 3.4449 | 1.0 |
| 3.2839 | 4.24 | 58500 | 3.4544 | 1.0 |
| 3.4564 | 4.24 | 58600 | 3.4412 | 1.0 |
| 3.3995 | 4.25 | 58700 | 3.4408 | 1.0 |
| 3.2107 | 4.26 | 58800 | 3.4463 | 1.0 |
| 4.0565 | 4.26 | 58900 | 3.4402 | 1.0 |
| 3.6744 | 4.27 | 59000 | 3.4537 | 1.0 |
| 3.3658 | 4.28 | 59100 | 3.4435 | 1.0 |
| 3.8134 | 4.29 | 59200 | 3.4491 | 1.0 |
| 3.3783 | 4.29 | 59300 | 3.4480 | 1.0 |
| 3.6206 | 4.3 | 59400 | 3.4403 | 1.0 |
| 3.4018 | 4.31 | 59500 | 3.4433 | 1.0 |
| 3.2325 | 4.31 | 59600 | 3.4419 | 1.0 |
| 3.3935 | 4.32 | 59700 | 3.4420 | 1.0 |
| 3.9773 | 4.33 | 59800 | 3.4477 | 1.0 |
| 3.3477 | 4.34 | 59900 | 3.4557 | 1.0 |
| 3.4817 | 4.34 | 60000 | 3.4421 | 1.0 |
| 3.8685 | 4.35 | 60100 | 3.4470 | 1.0 |
| 3.679 | 4.36 | 60200 | 3.4457 | 1.0 |
| 5.3659 | 4.37 | 60300 | 3.4416 | 1.0 |
| 3.2615 | 4.37 | 60400 | 3.4415 | 1.0 |
| 3.6087 | 4.38 | 60500 | 3.4398 | 1.0 |
| 4.1801 | 4.39 | 60600 | 3.4532 | 1.0 |
| 5.013 | 4.39 | 60700 | 3.4465 | 1.0 |
| 3.333 | 4.4 | 60800 | 3.4498 | 1.0 |
| 3.4247 | 4.41 | 60900 | 3.4542 | 1.0 |
| 3.424 | 4.42 | 61000 | 3.4436 | 1.0 |
| 3.317 | 4.42 | 61100 | 3.4405 | 1.0 |
| 3.4018 | 4.43 | 61200 | 3.4467 | 1.0 |
| 7.2156 | 4.44 | 61300 | 3.4436 | 1.0 |
| 3.3726 | 4.45 | 61400 | 3.4473 | 1.0 |
| 3.2895 | 4.45 | 61500 | 3.4400 | 1.0 |
| 3.2293 | 4.46 | 61600 | 3.4536 | 1.0 |
| 3.8397 | 4.47 | 61700 | 3.4489 | 1.0 |
| 3.3358 | 4.47 | 61800 | 3.4443 | 1.0 |
| 3.4085 | 4.48 | 61900 | 3.4472 | 1.0 |
| 3.4413 | 4.49 | 62000 | 3.4421 | 1.0 |
| 3.4222 | 4.5 | 62100 | 3.4480 | 1.0 |
| 3.4665 | 4.5 | 62200 | 3.4435 | 1.0 |
| 3.4058 | 4.51 | 62300 | 3.4399 | 1.0 |
| 3.4228 | 4.52 | 62400 | 3.4457 | 1.0 |
| 3.3362 | 4.52 | 62500 | 3.4453 | 1.0 |
| 4.3383 | 4.53 | 62600 | 3.4564 | 1.0 |
| 3.2802 | 4.54 | 62700 | 3.4392 | 1.0 |
| 5.0224 | 4.55 | 62800 | 3.4491 | 1.0 |
| 4.1092 | 4.55 | 62900 | 3.4400 | 1.0 |
| 3.6467 | 4.56 | 63000 | 3.4454 | 1.0 |
| 3.4197 | 4.57 | 63100 | 3.4411 | 1.0 |
| 3.4549 | 4.58 | 63200 | 3.4464 | 1.0 |
| 3.2333 | 4.58 | 63300 | 3.4454 | 1.0 |
| 3.3108 | 4.59 | 63400 | 3.4437 | 1.0 |
| 3.3897 | 4.6 | 63500 | 3.4382 | 1.0 |
| 3.2956 | 4.6 | 63600 | 3.4478 | 1.0 |
| 3.4244 | 4.61 | 63700 | 3.4439 | 1.0 |
| 4.3236 | 4.62 | 63800 | 3.4400 | 1.0 |
| 3.263 | 4.63 | 63900 | 3.4542 | 1.0 |
| 3.5322 | 4.63 | 64000 | 3.4548 | 1.0 |
| 3.613 | 4.64 | 64100 | 3.4442 | 1.0 |
| 3.7147 | 4.65 | 64200 | 3.4396 | 1.0 |
| 3.6781 | 4.66 | 64300 | 3.4444 | 1.0 |
| 3.1597 | 4.66 | 64400 | 3.4642 | 1.0 |
| 4.8173 | 4.67 | 64500 | 3.4397 | 1.0 |
| 3.7878 | 4.68 | 64600 | 3.4529 | 1.0 |
| 3.3288 | 4.68 | 64700 | 3.4423 | 1.0 |
| 3.3931 | 4.69 | 64800 | 3.4376 | 1.0 |
| 5.6842 | 4.7 | 64900 | 3.4396 | 1.0 |
| 3.62 | 4.71 | 65000 | 3.4419 | 1.0 |
| 3.3742 | 4.71 | 65100 | 3.4419 | 1.0 |
| 3.3207 | 4.72 | 65200 | 3.4392 | 1.0 |
| 3.6216 | 4.73 | 65300 | 3.4369 | 1.0 |
| 3.2954 | 4.73 | 65400 | 3.4461 | 1.0 |
| 3.3943 | 4.74 | 65500 | 3.4442 | 1.0 |
| 3.5041 | 4.75 | 65600 | 3.4433 | 1.0 |
| 3.5168 | 4.76 | 65700 | 3.4529 | 1.0 |
| 3.3715 | 4.76 | 65800 | 3.4446 | 1.0 |
| 3.3734 | 4.77 | 65900 | 3.4507 | 1.0 |
| 10.6923 | 4.78 | 66000 | 3.4468 | 1.0 |
| 3.4432 | 4.79 | 66100 | 3.4400 | 1.0 |
| 3.5521 | 4.79 | 66200 | 3.4573 | 1.0 |
| 4.9372 | 4.8 | 66300 | 3.4400 | 1.0 |
| 3.48 | 4.81 | 66400 | 3.4374 | 1.0 |
| 3.1794 | 4.81 | 66500 | 3.4379 | 1.0 |
| 3.4121 | 4.82 | 66600 | 3.4364 | 1.0 |
| 3.581 | 4.83 | 66700 | 3.4444 | 1.0 |
| 3.1135 | 4.84 | 66800 | 3.4380 | 1.0 |
| 3.4506 | 4.84 | 66900 | 3.4595 | 1.0 |
| 3.3243 | 4.85 | 67000 | 3.4433 | 1.0 |
| 3.3814 | 4.86 | 67100 | 3.4550 | 1.0 |
| 3.3557 | 4.86 | 67200 | 3.4374 | 1.0 |
| 3.2991 | 4.87 | 67300 | 3.4423 | 1.0 |
| 3.8854 | 4.88 | 67400 | 3.4398 | 1.0 |
| 3.7073 | 4.89 | 67500 | 3.4425 | 1.0 |
| 3.3739 | 4.89 | 67600 | 3.4492 | 1.0 |
| 3.435 | 4.9 | 67700 | 3.4512 | 1.0 |
| 10.5515 | 4.91 | 67800 | 3.4512 | 1.0 |
| 3.5227 | 4.92 | 67900 | 3.4493 | 1.0 |
| 3.2475 | 4.92 | 68000 | 3.4413 | 1.0 |
| 3.3387 | 4.93 | 68100 | 3.4474 | 1.0 |
| 3.365 | 4.94 | 68200 | 3.4426 | 1.0 |
| 4.1377 | 4.94 | 68300 | 3.4457 | 1.0 |
| 3.9188 | 4.95 | 68400 | 3.4437 | 1.0 |
| 3.5646 | 4.96 | 68500 | 3.4438 | 1.0 |
| 3.3686 | 4.97 | 68600 | 3.4477 | 1.0 |
| 3.1943 | 4.97 | 68700 | 3.4508 | 1.0 |
| 3.3747 | 4.98 | 68800 | 3.4453 | 1.0 |
| 3.8971 | 4.99 | 68900 | 3.4560 | 1.0 |
| 3.9434 | 5.0 | 69000 | 3.4457 | 1.0 |
| 3.3862 | 5.0 | 69100 | 3.4575 | 1.0 |
| 3.2693 | 5.01 | 69200 | 3.4436 | 1.0 |
| 3.2971 | 5.02 | 69300 | 3.4494 | 1.0 |
| 3.3175 | 5.02 | 69400 | 3.4432 | 1.0 |
| 3.3889 | 5.03 | 69500 | 3.4371 | 1.0 |
| 3.382 | 5.04 | 69600 | 3.4426 | 1.0 |
| 3.3396 | 5.05 | 69700 | 3.4383 | 1.0 |
| 3.5613 | 5.05 | 69800 | 3.4472 | 1.0 |
| 3.4392 | 5.06 | 69900 | 3.4437 | 1.0 |
| 3.2599 | 5.07 | 70000 | 3.4544 | 1.0 |
| 3.2819 | 5.07 | 70100 | 3.4459 | 1.0 |
| 3.3131 | 5.08 | 70200 | 3.4552 | 1.0 |
| 3.3471 | 5.09 | 70300 | 3.4513 | 1.0 |
| 3.4194 | 5.1 | 70400 | 3.4446 | 1.0 |
| 3.3565 | 5.1 | 70500 | 3.4424 | 1.0 |
| 3.3411 | 5.11 | 70600 | 3.4482 | 1.0 |
| 3.3473 | 5.12 | 70700 | 3.4514 | 1.0 |
| 3.3197 | 5.13 | 70800 | 3.4491 | 1.0 |
| 3.3466 | 5.13 | 70900 | 3.4573 | 1.0 |
| 3.3856 | 5.14 | 71000 | 3.4420 | 1.0 |
| 3.1905 | 5.15 | 71100 | 3.4469 | 1.0 |
| 3.3756 | 5.15 | 71200 | 3.4467 | 1.0 |
| 3.3498 | 5.16 | 71300 | 3.4479 | 1.0 |
| 3.3914 | 5.17 | 71400 | 3.4426 | 1.0 |
| 3.3885 | 5.18 | 71500 | 3.4419 | 1.0 |
| 3.4713 | 5.18 | 71600 | 3.4434 | 1.0 |
| 3.4077 | 5.19 | 71700 | 3.4472 | 1.0 |
| 3.3633 | 5.2 | 71800 | 3.4443 | 1.0 |
| 3.3677 | 5.21 | 71900 | 3.4413 | 1.0 |
| 3.3545 | 5.21 | 72000 | 3.4491 | 1.0 |
| 3.3415 | 5.22 | 72100 | 3.4423 | 1.0 |
| 3.3796 | 5.23 | 72200 | 3.4420 | 1.0 |
| 3.4989 | 5.23 | 72300 | 3.4415 | 1.0 |
| 3.3875 | 5.24 | 72400 | 3.4453 | 1.0 |
| 3.3728 | 5.25 | 72500 | 3.4534 | 1.0 |
| 3.3134 | 5.26 | 72600 | 3.4396 | 1.0 |
| 3.3634 | 5.26 | 72700 | 3.4472 | 1.0 |
| 3.2482 | 5.27 | 72800 | 3.4448 | 1.0 |
| 3.299 | 5.28 | 72900 | 3.4571 | 1.0 |
| 3.3579 | 5.28 | 73000 | 3.4440 | 1.0 |
| 3.6011 | 5.29 | 73100 | 3.4507 | 1.0 |
| 3.2451 | 5.3 | 73200 | 3.4430 | 1.0 |
| 3.399 | 5.31 | 73300 | 3.4443 | 1.0 |
| 3.3605 | 5.31 | 73400 | 3.4525 | 1.0 |
| 3.3511 | 5.32 | 73500 | 3.4520 | 1.0 |
| 3.3946 | 5.33 | 73600 | 3.4402 | 1.0 |
| 3.3602 | 5.34 | 73700 | 3.4383 | 1.0 |
| 3.3105 | 5.34 | 73800 | 3.4492 | 1.0 |
| 3.3346 | 5.35 | 73900 | 3.4428 | 1.0 |
| 3.4219 | 5.36 | 74000 | 3.4534 | 1.0 |
| 3.3491 | 5.36 | 74100 | 3.4603 | 1.0 |
| 3.4207 | 5.37 | 74200 | 3.4512 | 1.0 |
| 3.2418 | 5.38 | 74300 | 3.4474 | 1.0 |
| 3.2637 | 5.39 | 74400 | 3.4402 | 1.0 |
| 3.4331 | 5.39 | 74500 | 3.4576 | 1.0 |
| 3.3483 | 5.4 | 74600 | 3.4518 | 1.0 |
| 3.2825 | 5.41 | 74700 | 3.4526 | 1.0 |
| 3.5443 | 5.42 | 74800 | 3.4380 | 1.0 |
| 3.3637 | 5.42 | 74900 | 3.4525 | 1.0 |
| 3.2016 | 5.43 | 75000 | 3.4483 | 1.0 |
| 3.3641 | 5.44 | 75100 | 3.4389 | 1.0 |
| 3.3869 | 5.44 | 75200 | 3.4511 | 1.0 |
| 3.2595 | 5.45 | 75300 | 3.4498 | 1.0 |
| 3.401 | 5.46 | 75400 | 3.4496 | 1.0 |
| 3.4416 | 5.47 | 75500 | 3.4502 | 1.0 |
| 3.3949 | 5.47 | 75600 | 3.4400 | 1.0 |
| 3.279 | 5.48 | 75700 | 3.4461 | 1.0 |
| 3.343 | 5.49 | 75800 | 3.4419 | 1.0 |
| 3.3848 | 5.49 | 75900 | 3.4470 | 1.0 |
| 3.3605 | 5.5 | 76000 | 3.4430 | 1.0 |
| 3.2786 | 5.51 | 76100 | 3.4479 | 1.0 |
| 3.4013 | 5.52 | 76200 | 3.4469 | 1.0 |
| 3.2064 | 5.52 | 76300 | 3.4420 | 1.0 |
| 3.5022 | 5.53 | 76400 | 3.4475 | 1.0 |
| 3.3093 | 5.54 | 76500 | 3.4431 | 1.0 |
| 3.3647 | 5.55 | 76600 | 3.4392 | 1.0 |
| 3.3971 | 5.55 | 76700 | 3.4434 | 1.0 |
| 3.3352 | 5.56 | 76800 | 3.4485 | 1.0 |
| 3.3756 | 5.57 | 76900 | 3.4453 | 1.0 |
| 3.2675 | 5.57 | 77000 | 3.4456 | 1.0 |
| 3.3187 | 5.58 | 77100 | 3.4471 | 1.0 |
| 3.3915 | 5.59 | 77200 | 3.4434 | 1.0 |
| 3.522 | 5.6 | 77300 | 3.4579 | 1.0 |
| 3.3715 | 5.6 | 77400 | 3.4459 | 1.0 |
| 3.2879 | 5.61 | 77500 | 3.4450 | 1.0 |
| 3.4566 | 5.62 | 77600 | 3.4446 | 1.0 |
| 3.3802 | 5.63 | 77700 | 3.4458 | 1.0 |
| 3.3286 | 5.63 | 77800 | 3.4417 | 1.0 |
| 3.3506 | 5.64 | 77900 | 3.4582 | 1.0 |
| 3.3646 | 5.65 | 78000 | 3.4382 | 1.0 |
| 3.3679 | 5.65 | 78100 | 3.4399 | 1.0 |
| 3.2344 | 5.66 | 78200 | 3.4389 | 1.0 |
| 3.362 | 5.67 | 78300 | 3.4528 | 1.0 |
| 3.3598 | 5.68 | 78400 | 3.4411 | 1.0 |
| 3.4368 | 5.68 | 78500 | 3.4416 | 1.0 |
| 3.3668 | 5.69 | 78600 | 3.4501 | 1.0 |
| 3.4889 | 5.7 | 78700 | 3.4469 | 1.0 |
| 3.5421 | 5.7 | 78800 | 3.4499 | 1.0 |
| 3.4562 | 5.71 | 78900 | 3.4489 | 1.0 |
| 3.4175 | 5.72 | 79000 | 3.4456 | 1.0 |
| 3.3624 | 5.73 | 79100 | 3.4457 | 1.0 |
| 3.338 | 5.73 | 79200 | 3.4480 | 1.0 |
| 3.2783 | 5.74 | 79300 | 3.4398 | 1.0 |
| 3.3664 | 5.75 | 79400 | 3.4454 | 1.0 |
| 3.3883 | 5.76 | 79500 | 3.4511 | 1.0 |
| 3.3578 | 5.76 | 79600 | 3.4480 | 1.0 |
| 3.2831 | 5.77 | 79700 | 3.4425 | 1.0 |
| 3.5258 | 5.78 | 79800 | 3.4522 | 1.0 |
| 3.2697 | 5.78 | 79900 | 3.4398 | 1.0 |
| 3.291 | 5.79 | 80000 | 3.4395 | 1.0 |
| 3.3994 | 5.8 | 80100 | 3.4401 | 1.0 |
| 3.3379 | 5.81 | 80200 | 3.4414 | 1.0 |
| 3.334 | 5.81 | 80300 | 3.4576 | 1.0 |
| 3.4343 | 5.82 | 80400 | 3.4524 | 1.0 |
| 3.3857 | 5.83 | 80500 | 3.4445 | 1.0 |
| 3.3657 | 5.84 | 80600 | 3.4437 | 1.0 |
| 3.3229 | 5.84 | 80700 | 3.4539 | 1.0 |
| 3.2913 | 5.85 | 80800 | 3.4466 | 1.0 |
| 3.2929 | 5.86 | 80900 | 3.4471 | 1.0 |
| 3.4581 | 5.86 | 81000 | 3.4367 | 1.0 |
| 3.3521 | 5.87 | 81100 | 3.4395 | 1.0 |
| 3.6423 | 5.88 | 81200 | 3.4395 | 1.0 |
| 3.3993 | 5.89 | 81300 | 3.4488 | 1.0 |
| 3.3382 | 5.89 | 81400 | 3.4626 | 1.0 |
| 3.2858 | 5.9 | 81500 | 3.4393 | 1.0 |
| 3.3802 | 5.91 | 81600 | 3.4430 | 1.0 |
| 3.4808 | 5.91 | 81700 | 3.4421 | 1.0 |
| 3.2911 | 5.92 | 81800 | 3.4458 | 1.0 |
| 3.199 | 5.93 | 81900 | 3.4411 | 1.0 |
| 3.7089 | 5.94 | 82000 | 3.4402 | 1.0 |
| 3.32 | 5.94 | 82100 | 3.4524 | 1.0 |
| 3.2283 | 5.95 | 82200 | 3.4465 | 1.0 |
| 3.3001 | 5.96 | 82300 | 3.4429 | 1.0 |
| 3.33 | 5.97 | 82400 | 3.4535 | 1.0 |
| 3.3269 | 5.97 | 82500 | 3.4445 | 1.0 |
| 3.3572 | 5.98 | 82600 | 3.4459 | 1.0 |
| 3.2905 | 5.99 | 82700 | 3.4475 | 1.0 |
| 3.4236 | 5.99 | 82800 | 3.4455 | 1.0 |
| 4.1378 | 6.0 | 82900 | 3.4454 | 1.0 |
| 3.4648 | 6.01 | 83000 | 3.4569 | 1.0 |
| 3.2289 | 6.02 | 83100 | 3.4562 | 1.0 |
| 3.511 | 6.02 | 83200 | 3.4452 | 1.0 |
| 5.6152 | 6.03 | 83300 | 3.4684 | 1.0 |
| 3.2102 | 6.04 | 83400 | 3.4555 | 1.0 |
| 3.389 | 6.05 | 83500 | 3.4429 | 1.0 |
| 3.773 | 6.05 | 83600 | 3.4436 | 1.0 |
| 3.3612 | 6.06 | 83700 | 3.4383 | 1.0 |
| 3.316 | 6.07 | 83800 | 3.4421 | 1.0 |
| 3.4754 | 6.07 | 83900 | 3.4444 | 1.0 |
| 3.4536 | 6.08 | 84000 | 3.4461 | 1.0 |
| 3.4987 | 6.09 | 84100 | 3.4441 | 1.0 |
| 3.5025 | 6.1 | 84200 | 3.4423 | 1.0 |
| 3.167 | 6.1 | 84300 | 3.4381 | 1.0 |
| 3.3875 | 6.11 | 84400 | 3.4458 | 1.0 |
| 3.3446 | 6.12 | 84500 | 3.4491 | 1.0 |
| 3.4824 | 6.12 | 84600 | 3.4476 | 1.0 |
| 3.4264 | 6.13 | 84700 | 3.4443 | 1.0 |
| 3.3786 | 6.14 | 84800 | 3.4391 | 1.0 |
| 3.3554 | 6.15 | 84900 | 3.4447 | 1.0 |
| 3.2566 | 6.15 | 85000 | 3.4410 | 1.0 |
| 3.7839 | 6.16 | 85100 | 3.4471 | 1.0 |
| 10.7563 | 6.17 | 85200 | 3.4516 | 1.0 |
| 3.501 | 6.18 | 85300 | 3.4458 | 1.0 |
| 3.3805 | 6.18 | 85400 | 3.4441 | 1.0 |
| 3.3758 | 6.19 | 85500 | 3.4384 | 1.0 |
| 3.4565 | 6.2 | 85600 | 3.4457 | 1.0 |
| 3.3889 | 6.2 | 85700 | 3.4542 | 1.0 |
| 3.6664 | 6.21 | 85800 | 3.4572 | 1.0 |
| 3.4372 | 6.22 | 85900 | 3.4442 | 1.0 |
| 3.3461 | 6.23 | 86000 | 3.4430 | 1.0 |
| 3.3446 | 6.23 | 86100 | 3.4410 | 1.0 |
| 4.1477 | 6.24 | 86200 | 3.4521 | 1.0 |
| 3.2528 | 6.25 | 86300 | 3.4441 | 1.0 |
| 5.4615 | 6.25 | 86400 | 3.4386 | 1.0 |
| 3.3977 | 6.26 | 86500 | 3.4507 | 1.0 |
| 3.3648 | 6.27 | 86600 | 3.4488 | 1.0 |
| 3.875 | 6.28 | 86700 | 3.4477 | 1.0 |
| 3.8437 | 6.28 | 86800 | 3.4421 | 1.0 |
| 3.2904 | 6.29 | 86900 | 3.4458 | 1.0 |
| 3.6029 | 6.3 | 87000 | 3.4536 | 1.0 |
| 3.2774 | 6.31 | 87100 | 3.4452 | 1.0 |
| 3.3557 | 6.31 | 87200 | 3.4491 | 1.0 |
| 3.344 | 6.32 | 87300 | 3.4550 | 1.0 |
| 3.1771 | 6.33 | 87400 | 3.4414 | 1.0 |
| 3.2468 | 6.33 | 87500 | 3.4407 | 1.0 |
| 3.3878 | 6.34 | 87600 | 3.4409 | 1.0 |
| 3.3175 | 6.35 | 87700 | 3.4402 | 1.0 |
| 3.3398 | 6.36 | 87800 | 3.4422 | 1.0 |
| 3.3925 | 6.36 | 87900 | 3.4480 | 1.0 |
| 3.2327 | 6.37 | 88000 | 3.4380 | 1.0 |
| 3.5039 | 6.38 | 88100 | 3.4449 | 1.0 |
| 4.6598 | 6.39 | 88200 | 3.4443 | 1.0 |
| 3.2816 | 6.39 | 88300 | 3.4471 | 1.0 |
| 3.2072 | 6.4 | 88400 | 3.4370 | 1.0 |
| 3.2164 | 6.41 | 88500 | 3.4455 | 1.0 |
| 3.1742 | 6.41 | 88600 | 3.4416 | 1.0 |
| 3.298 | 6.42 | 88700 | 3.4424 | 1.0 |
| 4.2488 | 6.43 | 88800 | 3.4485 | 1.0 |
| 3.3554 | 6.44 | 88900 | 3.4421 | 1.0 |
| 3.469 | 6.44 | 89000 | 3.4442 | 1.0 |
| 3.7796 | 6.45 | 89100 | 3.4478 | 1.0 |
| 3.357 | 6.46 | 89200 | 3.4493 | 1.0 |
| 3.3099 | 6.46 | 89300 | 3.4422 | 1.0 |
| 3.343 | 6.47 | 89400 | 3.4484 | 1.0 |
| 3.1808 | 6.48 | 89500 | 3.4493 | 1.0 |
| 3.3544 | 6.49 | 89600 | 3.4404 | 1.0 |
| 3.2563 | 6.49 | 89700 | 3.4427 | 1.0 |
| 4.8257 | 6.5 | 89800 | 3.4409 | 1.0 |
| 3.3544 | 6.51 | 89900 | 3.4435 | 1.0 |
| 3.3013 | 6.52 | 90000 | 3.4442 | 1.0 |
| 3.4374 | 6.52 | 90100 | 3.4389 | 1.0 |
| 3.3702 | 6.53 | 90200 | 3.4461 | 1.0 |
| 3.8491 | 6.54 | 90300 | 3.4469 | 1.0 |
| 3.3713 | 6.54 | 90400 | 3.4456 | 1.0 |
| 3.36 | 6.55 | 90500 | 3.4600 | 1.0 |
| 3.4559 | 6.56 | 90600 | 3.4541 | 1.0 |
| 3.9838 | 6.57 | 90700 | 3.4411 | 1.0 |
| 3.3675 | 6.57 | 90800 | 3.4448 | 1.0 |
| 3.3384 | 6.58 | 90900 | 3.4437 | 1.0 |
| 3.3098 | 6.59 | 91000 | 3.4401 | 1.0 |
| 3.344 | 6.6 | 91100 | 3.4412 | 1.0 |
| 3.3974 | 6.6 | 91200 | 3.4383 | 1.0 |
| 3.3255 | 6.61 | 91300 | 3.4468 | 1.0 |
| 3.3193 | 6.62 | 91400 | 3.4410 | 1.0 |
| 3.3432 | 6.62 | 91500 | 3.4429 | 1.0 |
| 3.5861 | 6.63 | 91600 | 3.4501 | 1.0 |
| 3.4078 | 6.64 | 91700 | 3.4466 | 1.0 |
| 3.4045 | 6.65 | 91800 | 3.4507 | 1.0 |
| 3.2148 | 6.65 | 91900 | 3.4440 | 1.0 |
| 3.446 | 6.66 | 92000 | 3.4431 | 1.0 |
| 3.2581 | 6.67 | 92100 | 3.4421 | 1.0 |
| 3.4569 | 6.67 | 92200 | 3.4477 | 1.0 |
| 3.3271 | 6.68 | 92300 | 3.4384 | 1.0 |
| 3.3428 | 6.69 | 92400 | 3.4379 | 1.0 |
| 5.7004 | 6.7 | 92500 | 3.4444 | 1.0 |
| 3.3441 | 6.7 | 92600 | 3.4525 | 1.0 |
| 3.4577 | 6.71 | 92700 | 3.4529 | 1.0 |
| 3.2188 | 6.72 | 92800 | 3.4386 | 1.0 |
| 3.3738 | 6.73 | 92900 | 3.4421 | 1.0 |
| 3.309 | 6.73 | 93000 | 3.4421 | 1.0 |
| 3.6994 | 6.74 | 93100 | 3.4476 | 1.0 |
| 3.4694 | 6.75 | 93200 | 3.4479 | 1.0 |
| 3.6629 | 6.75 | 93300 | 3.4433 | 1.0 |
| 3.2603 | 6.76 | 93400 | 3.4455 | 1.0 |
| 3.5258 | 6.77 | 93500 | 3.4466 | 1.0 |
| 3.3443 | 6.78 | 93600 | 3.4444 | 1.0 |
| 3.3363 | 6.78 | 93700 | 3.4389 | 1.0 |
| 3.8168 | 6.79 | 93800 | 3.4411 | 1.0 |
| 3.4222 | 6.8 | 93900 | 3.4447 | 1.0 |
| 3.6458 | 6.81 | 94000 | 3.4432 | 1.0 |
| 3.246 | 6.81 | 94100 | 3.4473 | 1.0 |
| 3.5288 | 6.82 | 94200 | 3.4468 | 1.0 |
| 3.4141 | 6.83 | 94300 | 3.4379 | 1.0 |
| 3.3348 | 6.83 | 94400 | 3.4394 | 1.0 |
| 3.3027 | 6.84 | 94500 | 3.4433 | 1.0 |
| 3.7383 | 6.85 | 94600 | 3.4431 | 1.0 |
| 3.2835 | 6.86 | 94700 | 3.4385 | 1.0 |
| 3.3132 | 6.86 | 94800 | 3.4435 | 1.0 |
| 3.5486 | 6.87 | 94900 | 3.4457 | 1.0 |
| 3.2407 | 6.88 | 95000 | 3.4401 | 1.0 |
| 5.9865 | 6.88 | 95100 | 3.4526 | 1.0 |
| 3.7244 | 6.89 | 95200 | 3.4456 | 1.0 |
| 3.4583 | 6.9 | 95300 | 3.4419 | 1.0 |
| 3.3585 | 6.91 | 95400 | 3.4406 | 1.0 |
| 3.3433 | 6.91 | 95500 | 3.4582 | 1.0 |
| 3.3487 | 6.92 | 95600 | 3.4446 | 1.0 |
| 3.2941 | 6.93 | 95700 | 3.4538 | 1.0 |
| 3.4637 | 6.94 | 95800 | 3.4380 | 1.0 |
| 3.6811 | 6.94 | 95900 | 3.4385 | 1.0 |
| 3.3364 | 6.95 | 96000 | 3.4476 | 1.0 |
| 3.3127 | 6.96 | 96100 | 3.4376 | 1.0 |
| 3.301 | 6.96 | 96200 | 3.4442 | 1.0 |
| 3.407 | 6.97 | 96300 | 3.4419 | 1.0 |
| 3.3103 | 6.98 | 96400 | 3.4444 | 1.0 |
| 3.514 | 6.99 | 96500 | 3.4496 | 1.0 |
| 3.257 | 6.99 | 96600 | 3.4499 | 1.0 |
| 3.4131 | 7.0 | 96700 | 3.4408 | 1.0 |
| 3.3395 | 7.01 | 96800 | 3.4395 | 1.0 |
| 3.3651 | 7.02 | 96900 | 3.4373 | 1.0 |
| 3.4559 | 7.02 | 97000 | 3.4431 | 1.0 |
| 3.8799 | 7.03 | 97100 | 3.4419 | 1.0 |
| 3.4603 | 7.04 | 97200 | 3.4411 | 1.0 |
| 3.3208 | 7.04 | 97300 | 3.4413 | 1.0 |
| 3.3491 | 7.05 | 97400 | 3.4389 | 1.0 |
| 3.3667 | 7.06 | 97500 | 3.4447 | 1.0 |
| 3.3628 | 7.07 | 97600 | 3.4418 | 1.0 |
| 3.322 | 7.07 | 97700 | 3.4448 | 1.0 |
| 3.4562 | 7.08 | 97800 | 3.4479 | 1.0 |
| 3.2331 | 7.09 | 97900 | 3.4522 | 1.0 |
| 3.4535 | 7.09 | 98000 | 3.4465 | 1.0 |
| 3.3035 | 7.1 | 98100 | 3.4444 | 1.0 |
| 3.3541 | 7.11 | 98200 | 3.4380 | 1.0 |
| 3.2874 | 7.12 | 98300 | 3.4413 | 1.0 |
| 3.4224 | 7.12 | 98400 | 3.4519 | 1.0 |
| 3.4403 | 7.13 | 98500 | 3.4447 | 1.0 |
| 3.2964 | 7.14 | 98600 | 3.4424 | 1.0 |
| 3.297 | 7.15 | 98700 | 3.4403 | 1.0 |
| 3.3279 | 7.15 | 98800 | 3.4469 | 1.0 |
| 3.3393 | 7.16 | 98900 | 3.4477 | 1.0 |
| 3.3377 | 7.17 | 99000 | 3.4437 | 1.0 |
| 3.3256 | 7.17 | 99100 | 3.4376 | 1.0 |
| 3.383 | 7.18 | 99200 | 3.4397 | 1.0 |
| 3.3298 | 7.19 | 99300 | 3.4414 | 1.0 |
| 5.1176 | 7.2 | 99400 | 3.4438 | 1.0 |
| 3.2854 | 7.2 | 99500 | 3.4463 | 1.0 |
| 3.3177 | 7.21 | 99600 | 3.4558 | 1.0 |
| 3.3946 | 7.22 | 99700 | 3.4420 | 1.0 |
| 3.3175 | 7.23 | 99800 | 3.4485 | 1.0 |
| 3.3535 | 7.23 | 99900 | 3.4416 | 1.0 |
| 3.332 | 7.24 | 100000 | 3.4375 | 1.0 |
| 3.2779 | 7.25 | 100100 | 3.4437 | 1.0 |
| 3.2977 | 7.25 | 100200 | 3.4438 | 1.0 |
| 3.3777 | 7.26 | 100300 | 3.4448 | 1.0 |
| 3.3096 | 7.27 | 100400 | 3.4414 | 1.0 |
| 3.3538 | 7.28 | 100500 | 3.4464 | 1.0 |
| 3.3164 | 7.28 | 100600 | 3.4456 | 1.0 |
| 3.4028 | 7.29 | 100700 | 3.4494 | 1.0 |
| 3.4322 | 7.3 | 100800 | 3.4554 | 1.0 |
| 3.2851 | 7.3 | 100900 | 3.4499 | 1.0 |
| 3.3666 | 7.31 | 101000 | 3.4394 | 1.0 |
| 3.2821 | 7.32 | 101100 | 3.4396 | 1.0 |
| 3.3335 | 7.33 | 101200 | 3.4454 | 1.0 |
| 3.3327 | 7.33 | 101300 | 3.4484 | 1.0 |
| 3.2771 | 7.34 | 101400 | 3.4416 | 1.0 |
| 3.2928 | 7.35 | 101500 | 3.4433 | 1.0 |
| 3.3341 | 7.36 | 101600 | 3.4482 | 1.0 |
| 3.2928 | 7.36 | 101700 | 3.4420 | 1.0 |
| 3.2428 | 7.37 | 101800 | 3.4428 | 1.0 |
| 3.3266 | 7.38 | 101900 | 3.4455 | 1.0 |
| 3.3004 | 7.38 | 102000 | 3.4481 | 1.0 |
| 3.3588 | 7.39 | 102100 | 3.4414 | 1.0 |
| 3.3312 | 7.4 | 102200 | 3.4510 | 1.0 |
| 3.4165 | 7.41 | 102300 | 3.4375 | 1.0 |
| 3.3087 | 7.41 | 102400 | 3.4522 | 1.0 |
| 3.353 | 7.42 | 102500 | 3.4400 | 1.0 |
| 3.1741 | 7.43 | 102600 | 3.4413 | 1.0 |
| 3.2123 | 7.44 | 102700 | 3.4472 | 1.0 |
| 3.1993 | 7.44 | 102800 | 3.4452 | 1.0 |
| 3.239 | 7.45 | 102900 | 3.4418 | 1.0 |
| 3.3241 | 7.46 | 103000 | 3.4496 | 1.0 |
| 3.2586 | 7.46 | 103100 | 3.4498 | 1.0 |
| 3.5903 | 7.47 | 103200 | 3.4465 | 1.0 |
| 3.3286 | 7.48 | 103300 | 3.4488 | 1.0 |
| 3.4615 | 7.49 | 103400 | 3.4486 | 1.0 |
| 3.3855 | 7.49 | 103500 | 3.4440 | 1.0 |
| 3.3819 | 7.5 | 103600 | 3.4534 | 1.0 |
| 3.3003 | 7.51 | 103700 | 3.4502 | 1.0 |
| 3.4232 | 7.51 | 103800 | 3.4429 | 1.0 |
| 3.2926 | 7.52 | 103900 | 3.4442 | 1.0 |
| 3.7337 | 7.53 | 104000 | 3.4516 | 1.0 |
| 3.3338 | 7.54 | 104100 | 3.4469 | 1.0 |
| 3.32 | 7.54 | 104200 | 3.4545 | 1.0 |
| 3.6807 | 7.55 | 104300 | 3.4449 | 1.0 |
| 3.3397 | 7.56 | 104400 | 3.4479 | 1.0 |
| 3.2993 | 7.57 | 104500 | 3.4424 | 1.0 |
| 3.3652 | 7.57 | 104600 | 3.4507 | 1.0 |
| 3.2885 | 7.58 | 104700 | 3.4437 | 1.0 |
| 3.4006 | 7.59 | 104800 | 3.4403 | 1.0 |
| 3.3361 | 7.59 | 104900 | 3.4432 | 1.0 |
| 3.4084 | 7.6 | 105000 | 3.4423 | 1.0 |
| 3.3251 | 7.61 | 105100 | 3.4418 | 1.0 |
| 3.3079 | 7.62 | 105200 | 3.4398 | 1.0 |
| 3.4738 | 7.62 | 105300 | 3.4497 | 1.0 |
| 3.5048 | 7.63 | 105400 | 3.4429 | 1.0 |
| 3.4189 | 7.64 | 105500 | 3.4410 | 1.0 |
| 3.3132 | 7.64 | 105600 | 3.4437 | 1.0 |
| 3.2738 | 7.65 | 105700 | 3.4457 | 1.0 |
| 3.2876 | 7.66 | 105800 | 3.4404 | 1.0 |
| 3.3413 | 7.67 | 105900 | 3.4458 | 1.0 |
| 3.3014 | 7.67 | 106000 | 3.4535 | 1.0 |
| 3.2244 | 7.68 | 106100 | 3.4436 | 1.0 |
| 3.2715 | 7.69 | 106200 | 3.4470 | 1.0 |
| 3.3593 | 7.7 | 106300 | 3.4410 | 1.0 |
| 3.334 | 7.7 | 106400 | 3.4525 | 1.0 |
| 3.3547 | 7.71 | 106500 | 3.4513 | 1.0 |
| 3.9896 | 7.72 | 106600 | 3.4381 | 1.0 |
| 3.4202 | 7.72 | 106700 | 3.4395 | 1.0 |
| 3.34 | 7.73 | 106800 | 3.4426 | 1.0 |
| 3.3778 | 7.74 | 106900 | 3.4508 | 1.0 |
| 3.3374 | 7.75 | 107000 | 3.4464 | 1.0 |
| 3.4008 | 7.75 | 107100 | 3.4365 | 1.0 |
| 3.2595 | 7.76 | 107200 | 3.4496 | 1.0 |
| 3.3261 | 7.77 | 107300 | 3.4543 | 1.0 |
| 3.2551 | 7.78 | 107400 | 3.4490 | 1.0 |
| 3.2967 | 7.78 | 107500 | 3.4404 | 1.0 |
| 3.4232 | 7.79 | 107600 | 3.4492 | 1.0 |
| 3.3992 | 7.8 | 107700 | 3.4448 | 1.0 |
| 3.3268 | 7.8 | 107800 | 3.4465 | 1.0 |
| 3.283 | 7.81 | 107900 | 3.4424 | 1.0 |
| 3.3488 | 7.82 | 108000 | 3.4446 | 1.0 |
| 3.3232 | 7.83 | 108100 | 3.4432 | 1.0 |
| 3.5081 | 7.83 | 108200 | 3.4460 | 1.0 |
| 3.2686 | 7.84 | 108300 | 3.4499 | 1.0 |
| 3.2465 | 7.85 | 108400 | 3.4429 | 1.0 |
| 3.5602 | 7.85 | 108500 | 3.4398 | 1.0 |
| 3.299 | 7.86 | 108600 | 3.4376 | 1.0 |
| 3.3437 | 7.87 | 108700 | 3.4428 | 1.0 |
| 3.3221 | 7.88 | 108800 | 3.4492 | 1.0 |
| 3.5462 | 7.88 | 108900 | 3.4414 | 1.0 |
| 3.3901 | 7.89 | 109000 | 3.4506 | 1.0 |
| 3.3598 | 7.9 | 109100 | 3.4421 | 1.0 |
| 3.3946 | 7.91 | 109200 | 3.4389 | 1.0 |
| 3.3013 | 7.91 | 109300 | 3.4444 | 1.0 |
| 3.3094 | 7.92 | 109400 | 3.4464 | 1.0 |
| 3.4829 | 7.93 | 109500 | 3.4379 | 1.0 |
| 3.2769 | 7.93 | 109600 | 3.4401 | 1.0 |
| 3.3359 | 7.94 | 109700 | 3.4437 | 1.0 |
| 3.3079 | 7.95 | 109800 | 3.4455 | 1.0 |
| 3.3623 | 7.96 | 109900 | 3.4447 | 1.0 |
| 3.3439 | 7.96 | 110000 | 3.4404 | 1.0 |
| 3.3045 | 7.97 | 110100 | 3.4520 | 1.0 |
| 3.2657 | 7.98 | 110200 | 3.4409 | 1.0 |
| 3.3187 | 7.99 | 110300 | 3.4430 | 1.0 |
| 3.349 | 7.99 | 110400 | 3.4430 | 1.0 |
| 3.3262 | 8.0 | 110500 | 3.4412 | 1.0 |
| 3.2603 | 8.01 | 110600 | 3.4440 | 1.0 |
| 3.4284 | 8.01 | 110700 | 3.4456 | 1.0 |
| 3.5993 | 8.02 | 110800 | 3.4518 | 1.0 |
| 5.6854 | 8.03 | 110900 | 3.4411 | 1.0 |
| 3.3856 | 8.04 | 111000 | 3.4430 | 1.0 |
| 3.5339 | 8.04 | 111100 | 3.4394 | 1.0 |
| 3.2691 | 8.05 | 111200 | 3.4425 | 1.0 |
| 3.3462 | 8.06 | 111300 | 3.4422 | 1.0 |
| 3.3469 | 8.06 | 111400 | 3.4458 | 1.0 |
| 3.3598 | 8.07 | 111500 | 3.4429 | 1.0 |
| 3.554 | 8.08 | 111600 | 3.4438 | 1.0 |
| 3.3207 | 8.09 | 111700 | 3.4480 | 1.0 |
| 3.2963 | 8.09 | 111800 | 3.4434 | 1.0 |
| 3.4644 | 8.1 | 111900 | 3.4417 | 1.0 |
| 3.4265 | 8.11 | 112000 | 3.4404 | 1.0 |
| 3.3026 | 8.12 | 112100 | 3.4442 | 1.0 |
| 3.2747 | 8.12 | 112200 | 3.4433 | 1.0 |
| 7.3735 | 8.13 | 112300 | 3.4403 | 1.0 |
| 3.4803 | 8.14 | 112400 | 3.4464 | 1.0 |
| 4.9879 | 8.14 | 112500 | 3.4454 | 1.0 |
| 3.4249 | 8.15 | 112600 | 3.4421 | 1.0 |
| 3.3493 | 8.16 | 112700 | 3.4403 | 1.0 |
| 3.3514 | 8.17 | 112800 | 3.4445 | 1.0 |
| 3.262 | 8.17 | 112900 | 3.4457 | 1.0 |
| 3.3517 | 8.18 | 113000 | 3.4479 | 1.0 |
| 3.2408 | 8.19 | 113100 | 3.4413 | 1.0 |
| 3.2346 | 8.2 | 113200 | 3.4415 | 1.0 |
| 3.2397 | 8.2 | 113300 | 3.4414 | 1.0 |
| 3.3794 | 8.21 | 113400 | 3.4502 | 1.0 |
| 3.516 | 8.22 | 113500 | 3.4507 | 1.0 |
| 3.4129 | 8.22 | 113600 | 3.4455 | 1.0 |
| 3.3381 | 8.23 | 113700 | 3.4540 | 1.0 |
| 3.3172 | 8.24 | 113800 | 3.4473 | 1.0 |
| 3.5307 | 8.25 | 113900 | 3.4431 | 1.0 |
| 3.3424 | 8.25 | 114000 | 3.4511 | 1.0 |
| 3.4004 | 8.26 | 114100 | 3.4434 | 1.0 |
| 3.4061 | 8.27 | 114200 | 3.4435 | 1.0 |
| 3.5333 | 8.27 | 114300 | 3.4415 | 1.0 |
| 3.2974 | 8.28 | 114400 | 3.4472 | 1.0 |
| 3.3827 | 8.29 | 114500 | 3.4469 | 1.0 |
| 3.5697 | 8.3 | 114600 | 3.4427 | 1.0 |
| 3.4561 | 8.3 | 114700 | 3.4433 | 1.0 |
| 3.5205 | 8.31 | 114800 | 3.4474 | 1.0 |
| 3.2541 | 8.32 | 114900 | 3.4475 | 1.0 |
| 3.4251 | 8.33 | 115000 | 3.4394 | 1.0 |
| 3.2477 | 8.33 | 115100 | 3.4524 | 1.0 |
| 3.4003 | 8.34 | 115200 | 3.4438 | 1.0 |
| 3.3378 | 8.35 | 115300 | 3.4447 | 1.0 |
| 3.2828 | 8.35 | 115400 | 3.4493 | 1.0 |
| 3.6974 | 8.36 | 115500 | 3.4507 | 1.0 |
| 3.3466 | 8.37 | 115600 | 3.4384 | 1.0 |
| 3.2601 | 8.38 | 115700 | 3.4538 | 1.0 |
| 3.8384 | 8.38 | 115800 | 3.4408 | 1.0 |
| 3.5255 | 8.39 | 115900 | 3.4446 | 1.0 |
| 3.3517 | 8.4 | 116000 | 3.4445 | 1.0 |
| 3.37 | 8.41 | 116100 | 3.4530 | 1.0 |
| 3.4486 | 8.41 | 116200 | 3.4446 | 1.0 |
| 3.4104 | 8.42 | 116300 | 3.4447 | 1.0 |
| 3.5267 | 8.43 | 116400 | 3.4410 | 1.0 |
| 3.4422 | 8.43 | 116500 | 3.4546 | 1.0 |
| 3.1616 | 8.44 | 116600 | 3.4400 | 1.0 |
| 3.3557 | 8.45 | 116700 | 3.4458 | 1.0 |
| 3.4674 | 8.46 | 116800 | 3.4443 | 1.0 |
| 3.3114 | 8.46 | 116900 | 3.4390 | 1.0 |
| 3.4986 | 8.47 | 117000 | 3.4405 | 1.0 |
| 3.4579 | 8.48 | 117100 | 3.4459 | 1.0 |
| 3.3369 | 8.48 | 117200 | 3.4403 | 1.0 |
| 3.4802 | 8.49 | 117300 | 3.4480 | 1.0 |
| 3.3244 | 8.5 | 117400 | 3.4447 | 1.0 |
| 3.3096 | 8.51 | 117500 | 3.4525 | 1.0 |
| 3.3415 | 8.51 | 117600 | 3.4516 | 1.0 |
| 3.416 | 8.52 | 117700 | 3.4396 | 1.0 |
| 3.3363 | 8.53 | 117800 | 3.4510 | 1.0 |
| 3.2588 | 8.54 | 117900 | 3.4439 | 1.0 |
| 3.4127 | 8.54 | 118000 | 3.4370 | 1.0 |
| 3.4268 | 8.55 | 118100 | 3.4472 | 1.0 |
| 3.3877 | 8.56 | 118200 | 3.4437 | 1.0 |
| 3.386 | 8.56 | 118300 | 3.4448 | 1.0 |
| 3.9643 | 8.57 | 118400 | 3.4500 | 1.0 |
| 3.2205 | 8.58 | 118500 | 3.4410 | 1.0 |
| 3.3372 | 8.59 | 118600 | 3.4486 | 1.0 |
| 3.3919 | 8.59 | 118700 | 3.4485 | 1.0 |
| 3.3279 | 8.6 | 118800 | 3.4408 | 1.0 |
| 3.3251 | 8.61 | 118900 | 3.4379 | 1.0 |
| 3.2832 | 8.62 | 119000 | 3.4388 | 1.0 |
| 3.2708 | 8.62 | 119100 | 3.4522 | 1.0 |
| 4.0701 | 8.63 | 119200 | 3.4436 | 1.0 |
| 3.5261 | 8.64 | 119300 | 3.4475 | 1.0 |
| 3.2695 | 8.64 | 119400 | 3.4411 | 1.0 |
| 3.4095 | 8.65 | 119500 | 3.4451 | 1.0 |
| 3.2641 | 8.66 | 119600 | 3.4527 | 1.0 |
| 3.6962 | 8.67 | 119700 | 3.4495 | 1.0 |
| 3.407 | 8.67 | 119800 | 3.4523 | 1.0 |
| 3.5073 | 8.68 | 119900 | 3.4612 | 1.0 |
| 3.4697 | 8.69 | 120000 | 3.4491 | 1.0 |
| 3.4643 | 8.69 | 120100 | 3.4427 | 1.0 |
| 3.5253 | 8.7 | 120200 | 3.4457 | 1.0 |
| 3.2562 | 8.71 | 120300 | 3.4545 | 1.0 |
| 3.2946 | 8.72 | 120400 | 3.4570 | 1.0 |
| 3.393 | 8.72 | 120500 | 3.4432 | 1.0 |
| 3.2528 | 8.73 | 120600 | 3.4391 | 1.0 |
| 3.4529 | 8.74 | 120700 | 3.4530 | 1.0 |
| 3.506 | 8.75 | 120800 | 3.4425 | 1.0 |
| 3.3464 | 8.75 | 120900 | 3.4420 | 1.0 |
| 3.3287 | 8.76 | 121000 | 3.4463 | 1.0 |
| 3.3165 | 8.77 | 121100 | 3.4509 | 1.0 |
| 3.3102 | 8.77 | 121200 | 3.4418 | 1.0 |
| 3.4206 | 8.78 | 121300 | 3.4495 | 1.0 |
| 3.5963 | 8.79 | 121400 | 3.4432 | 1.0 |
| 3.2621 | 8.8 | 121500 | 3.4455 | 1.0 |
| 3.3275 | 8.8 | 121600 | 3.4483 | 1.0 |
| 3.3654 | 8.81 | 121700 | 3.4476 | 1.0 |
| 3.4913 | 8.82 | 121800 | 3.4525 | 1.0 |
| 3.4162 | 8.83 | 121900 | 3.4409 | 1.0 |
| 3.221 | 8.83 | 122000 | 3.4415 | 1.0 |
| 3.3024 | 8.84 | 122100 | 3.4385 | 1.0 |
| 3.3451 | 8.85 | 122200 | 3.4428 | 1.0 |
| 3.3909 | 8.85 | 122300 | 3.4417 | 1.0 |
| 3.3237 | 8.86 | 122400 | 3.4472 | 1.0 |
| 3.2992 | 8.87 | 122500 | 3.4406 | 1.0 |
| 3.2422 | 8.88 | 122600 | 3.4492 | 1.0 |
| 3.3713 | 8.88 | 122700 | 3.4411 | 1.0 |
| 3.4062 | 8.89 | 122800 | 3.4412 | 1.0 |
| 3.3616 | 8.9 | 122900 | 3.4464 | 1.0 |
| 3.3811 | 8.9 | 123000 | 3.4382 | 1.0 |
| 3.3592 | 8.91 | 123100 | 3.4442 | 1.0 |
| 3.8331 | 8.92 | 123200 | 3.4423 | 1.0 |
| 3.3764 | 8.93 | 123300 | 3.4492 | 1.0 |
| 3.3964 | 8.93 | 123400 | 3.4390 | 1.0 |
| 3.5063 | 8.94 | 123500 | 3.4411 | 1.0 |
| 3.3627 | 8.95 | 123600 | 3.4467 | 1.0 |
| 4.1315 | 8.96 | 123700 | 3.4409 | 1.0 |
| 3.7114 | 8.96 | 123800 | 3.4456 | 1.0 |
| 3.3446 | 8.97 | 123900 | 3.4413 | 1.0 |
| 3.3777 | 8.98 | 124000 | 3.4464 | 1.0 |
| 3.6232 | 8.98 | 124100 | 3.4478 | 1.0 |
| 3.3275 | 8.99 | 124200 | 3.4474 | 1.0 |
| 3.5736 | 9.0 | 124300 | 3.4427 | 1.0 |
| 3.2052 | 9.01 | 124400 | 3.4455 | 1.0 |
| 3.3101 | 9.01 | 124500 | 3.4485 | 1.0 |
| 3.3523 | 9.02 | 124600 | 3.4389 | 1.0 |
| 3.3095 | 9.03 | 124700 | 3.4433 | 1.0 |
| 3.3152 | 9.03 | 124800 | 3.4402 | 1.0 |
| 3.2351 | 9.04 | 124900 | 3.4452 | 1.0 |
| 3.5137 | 9.05 | 125000 | 3.4458 | 1.0 |
| 3.3489 | 9.06 | 125100 | 3.4431 | 1.0 |
| 3.3822 | 9.06 | 125200 | 3.4370 | 1.0 |
| 3.3842 | 9.07 | 125300 | 3.4359 | 1.0 |
| 3.306 | 9.08 | 125400 | 3.4439 | 1.0 |
| 3.3784 | 9.09 | 125500 | 3.4538 | 1.0 |
| 3.3313 | 9.09 | 125600 | 3.4410 | 1.0 |
| 3.2891 | 9.1 | 125700 | 3.4397 | 1.0 |
| 3.321 | 9.11 | 125800 | 3.4457 | 1.0 |
| 3.2479 | 9.11 | 125900 | 3.4448 | 1.0 |
| 3.3723 | 9.12 | 126000 | 3.4409 | 1.0 |
| 3.3203 | 9.13 | 126100 | 3.4439 | 1.0 |
| 3.2906 | 9.14 | 126200 | 3.4388 | 1.0 |
| 3.2164 | 9.14 | 126300 | 3.4427 | 1.0 |
| 3.2608 | 9.15 | 126400 | 3.4396 | 1.0 |
| 3.3739 | 9.16 | 126500 | 3.4536 | 1.0 |
| 3.3479 | 9.17 | 126600 | 3.4533 | 1.0 |
| 3.4664 | 9.17 | 126700 | 3.4491 | 1.0 |
| 3.326 | 9.18 | 126800 | 3.4402 | 1.0 |
| 3.3056 | 9.19 | 126900 | 3.4398 | 1.0 |
| 3.3528 | 9.19 | 127000 | 3.4424 | 1.0 |
| 3.2717 | 9.2 | 127100 | 3.4409 | 1.0 |
| 3.3564 | 9.21 | 127200 | 3.4497 | 1.0 |
| 3.4015 | 9.22 | 127300 | 3.4435 | 1.0 |
| 3.3325 | 9.22 | 127400 | 3.4478 | 1.0 |
| 3.4459 | 9.23 | 127500 | 3.4479 | 1.0 |
| 3.2151 | 9.24 | 127600 | 3.4519 | 1.0 |
| 3.2456 | 9.24 | 127700 | 3.4408 | 1.0 |
| 3.3108 | 9.25 | 127800 | 3.4430 | 1.0 |
| 3.3965 | 9.26 | 127900 | 3.4427 | 1.0 |
| 3.4911 | 9.27 | 128000 | 3.4430 | 1.0 |
| 3.3996 | 9.27 | 128100 | 3.4458 | 1.0 |
| 3.3408 | 9.28 | 128200 | 3.4435 | 1.0 |
| 3.353 | 9.29 | 128300 | 3.4468 | 1.0 |
| 3.5449 | 9.3 | 128400 | 3.4401 | 1.0 |
| 3.3564 | 9.3 | 128500 | 3.4481 | 1.0 |
| 3.4768 | 9.31 | 128600 | 3.4450 | 1.0 |
| 3.3972 | 9.32 | 128700 | 3.4467 | 1.0 |
| 3.3295 | 9.32 | 128800 | 3.4385 | 1.0 |
| 3.3181 | 9.33 | 128900 | 3.4435 | 1.0 |
| 3.3224 | 9.34 | 129000 | 3.4467 | 1.0 |
| 3.3471 | 9.35 | 129100 | 3.4415 | 1.0 |
| 3.3379 | 9.35 | 129200 | 3.4458 | 1.0 |
| 3.3991 | 9.36 | 129300 | 3.4420 | 1.0 |
| 3.4037 | 9.37 | 129400 | 3.4433 | 1.0 |
| 3.3157 | 9.38 | 129500 | 3.4450 | 1.0 |
| 3.3739 | 9.38 | 129600 | 3.4426 | 1.0 |
| 3.2556 | 9.39 | 129700 | 3.4473 | 1.0 |
| 3.3451 | 9.4 | 129800 | 3.4413 | 1.0 |
| 3.3694 | 9.4 | 129900 | 3.4462 | 1.0 |
| 3.343 | 9.41 | 130000 | 3.4408 | 1.0 |
| 3.4286 | 9.42 | 130100 | 3.4495 | 1.0 |
| 3.4468 | 9.43 | 130200 | 3.4450 | 1.0 |
| 3.3417 | 9.43 | 130300 | 3.4457 | 1.0 |
| 3.4661 | 9.44 | 130400 | 3.4409 | 1.0 |
| 3.2859 | 9.45 | 130500 | 3.4412 | 1.0 |
| 3.3164 | 9.45 | 130600 | 3.4495 | 1.0 |
| 3.3542 | 9.46 | 130700 | 3.4428 | 1.0 |
| 3.2783 | 9.47 | 130800 | 3.4398 | 1.0 |
| 3.421 | 9.48 | 130900 | 3.4408 | 1.0 |
| 3.3765 | 9.48 | 131000 | 3.4443 | 1.0 |
| 3.3822 | 9.49 | 131100 | 3.4458 | 1.0 |
| 3.2261 | 9.5 | 131200 | 3.4437 | 1.0 |
| 3.362 | 9.51 | 131300 | 3.4388 | 1.0 |
| 3.3203 | 9.51 | 131400 | 3.4498 | 1.0 |
| 3.2326 | 9.52 | 131500 | 3.4415 | 1.0 |
| 3.3897 | 9.53 | 131600 | 3.4556 | 1.0 |
| 3.3434 | 9.53 | 131700 | 3.4421 | 1.0 |
| 3.3297 | 9.54 | 131800 | 3.4394 | 1.0 |
| 3.4889 | 9.55 | 131900 | 3.4420 | 1.0 |
| 3.3502 | 9.56 | 132000 | 3.4425 | 1.0 |
| 3.4079 | 9.56 | 132100 | 3.4370 | 1.0 |
| 3.213 | 9.57 | 132200 | 3.4479 | 1.0 |
| 3.3935 | 9.58 | 132300 | 3.4433 | 1.0 |
| 3.2598 | 9.59 | 132400 | 3.4431 | 1.0 |
| 3.3968 | 9.59 | 132500 | 3.4442 | 1.0 |
| 3.338 | 9.6 | 132600 | 3.4433 | 1.0 |
| 3.3268 | 9.61 | 132700 | 3.4447 | 1.0 |
| 3.3656 | 9.61 | 132800 | 3.4394 | 1.0 |
| 3.3782 | 9.62 | 132900 | 3.4397 | 1.0 |
| 3.3787 | 9.63 | 133000 | 3.4440 | 1.0 |
| 5.5557 | 9.64 | 133100 | 3.4396 | 1.0 |
| 3.4011 | 9.64 | 133200 | 3.4448 | 1.0 |
| 3.7319 | 9.65 | 133300 | 3.4447 | 1.0 |
| 3.5717 | 9.66 | 133400 | 3.4387 | 1.0 |
| 3.3051 | 9.66 | 133500 | 3.4460 | 1.0 |
| 3.3485 | 9.67 | 133600 | 3.4513 | 1.0 |
| 3.4845 | 9.68 | 133700 | 3.4506 | 1.0 |
| 3.335 | 9.69 | 133800 | 3.4415 | 1.0 |
| 3.2942 | 9.69 | 133900 | 3.4439 | 1.0 |
| 3.2748 | 9.7 | 134000 | 3.4390 | 1.0 |
| 3.392 | 9.71 | 134100 | 3.4490 | 1.0 |
| 3.3396 | 9.72 | 134200 | 3.4463 | 1.0 |
| 3.3097 | 9.72 | 134300 | 3.4440 | 1.0 |
| 3.3421 | 9.73 | 134400 | 3.4498 | 1.0 |
| 3.5204 | 9.74 | 134500 | 3.4514 | 1.0 |
| 3.8217 | 9.74 | 134600 | 3.4463 | 1.0 |
| 3.3094 | 9.75 | 134700 | 3.4402 | 1.0 |
| 3.3267 | 9.76 | 134800 | 3.4425 | 1.0 |
| 3.3396 | 9.77 | 134900 | 3.4429 | 1.0 |
| 3.3117 | 9.77 | 135000 | 3.4415 | 1.0 |
| 3.4302 | 9.78 | 135100 | 3.4406 | 1.0 |
| 3.2691 | 9.79 | 135200 | 3.4405 | 1.0 |
| 3.337 | 9.8 | 135300 | 3.4416 | 1.0 |
| 3.3437 | 9.8 | 135400 | 3.4427 | 1.0 |
| 3.3744 | 9.81 | 135500 | 3.4477 | 1.0 |
| 3.3151 | 9.82 | 135600 | 3.4388 | 1.0 |
| 3.3742 | 9.82 | 135700 | 3.4448 | 1.0 |
| 3.3093 | 9.83 | 135800 | 3.4462 | 1.0 |
| 3.4145 | 9.84 | 135900 | 3.4413 | 1.0 |
| 3.3858 | 9.85 | 136000 | 3.4459 | 1.0 |
| 3.3464 | 9.85 | 136100 | 3.4432 | 1.0 |
| 3.3831 | 9.86 | 136200 | 3.4467 | 1.0 |
| 3.2715 | 9.87 | 136300 | 3.4442 | 1.0 |
| 3.3594 | 9.87 | 136400 | 3.4444 | 1.0 |
| 3.3679 | 9.88 | 136500 | 3.4498 | 1.0 |
| 3.346 | 9.89 | 136600 | 3.4380 | 1.0 |
| 3.3156 | 9.9 | 136700 | 3.4501 | 1.0 |
| 3.3689 | 9.9 | 136800 | 3.4403 | 1.0 |
| 3.3157 | 9.91 | 136900 | 3.4461 | 1.0 |
| 3.2955 | 9.92 | 137000 | 3.4460 | 1.0 |
| 3.2288 | 9.93 | 137100 | 3.4429 | 1.0 |
| 3.3068 | 9.93 | 137200 | 3.4442 | 1.0 |
| 3.3965 | 9.94 | 137300 | 3.4400 | 1.0 |
| 3.3238 | 9.95 | 137400 | 3.4464 | 1.0 |
| 3.3469 | 9.95 | 137500 | 3.4496 | 1.0 |
| 3.3818 | 9.96 | 137600 | 3.4446 | 1.0 |
| 3.3677 | 9.97 | 137700 | 3.4487 | 1.0 |
| 3.4811 | 9.98 | 137800 | 3.4441 | 1.0 |
| 3.3636 | 9.98 | 137900 | 3.4456 | 1.0 |
| 3.3305 | 9.99 | 138000 | 3.4417 | 1.0 |
| 3.4025 | 10.0 | 138100 | 3.4401 | 1.0 |
| 3.4951 | 10.01 | 138200 | 3.4392 | 1.0 |
| 3.2803 | 10.01 | 138300 | 3.4411 | 1.0 |
| 4.6095 | 10.02 | 138400 | 3.4446 | 1.0 |
| 3.3677 | 10.03 | 138500 | 3.4465 | 1.0 |
| 3.4183 | 10.03 | 138600 | 3.4434 | 1.0 |
| 3.3482 | 10.04 | 138700 | 3.4430 | 1.0 |
| 3.2795 | 10.05 | 138800 | 3.4449 | 1.0 |
| 3.282 | 10.06 | 138900 | 3.4455 | 1.0 |
| 3.2617 | 10.06 | 139000 | 3.4442 | 1.0 |
| 3.5404 | 10.07 | 139100 | 3.4375 | 1.0 |
| 3.3432 | 10.08 | 139200 | 3.4447 | 1.0 |
| 3.3643 | 10.08 | 139300 | 3.4429 | 1.0 |
| 3.3022 | 10.09 | 139400 | 3.4415 | 1.0 |
| 3.4062 | 10.1 | 139500 | 3.4415 | 1.0 |
| 3.374 | 10.11 | 139600 | 3.4405 | 1.0 |
| 3.2843 | 10.11 | 139700 | 3.4435 | 1.0 |
| 3.6033 | 10.12 | 139800 | 3.4473 | 1.0 |
| 3.3374 | 10.13 | 139900 | 3.4428 | 1.0 |
| 3.3877 | 10.14 | 140000 | 3.4513 | 1.0 |
| 3.3533 | 10.14 | 140100 | 3.4484 | 1.0 |
| 3.3678 | 10.15 | 140200 | 3.4481 | 1.0 |
| 3.276 | 10.16 | 140300 | 3.4416 | 1.0 |
| 3.3052 | 10.16 | 140400 | 3.4483 | 1.0 |
| 3.4821 | 10.17 | 140500 | 3.4390 | 1.0 |
| 3.2748 | 10.18 | 140600 | 3.4389 | 1.0 |
| 3.2742 | 10.19 | 140700 | 3.4482 | 1.0 |
| 3.2824 | 10.19 | 140800 | 3.4416 | 1.0 |
| 3.37 | 10.2 | 140900 | 3.4435 | 1.0 |
| 3.3768 | 10.21 | 141000 | 3.4458 | 1.0 |
| 3.2652 | 10.22 | 141100 | 3.4454 | 1.0 |
| 3.4041 | 10.22 | 141200 | 3.4425 | 1.0 |
| 3.4062 | 10.23 | 141300 | 3.4465 | 1.0 |
| 3.2338 | 10.24 | 141400 | 3.4438 | 1.0 |
| 3.4214 | 10.24 | 141500 | 3.4425 | 1.0 |
| 3.3741 | 10.25 | 141600 | 3.4389 | 1.0 |
| 3.3156 | 10.26 | 141700 | 3.4468 | 1.0 |
| 3.43 | 10.27 | 141800 | 3.4430 | 1.0 |
| 3.3447 | 10.27 | 141900 | 3.4456 | 1.0 |
| 3.2682 | 10.28 | 142000 | 3.4517 | 1.0 |
| 3.3296 | 10.29 | 142100 | 3.4484 | 1.0 |
| 3.2508 | 10.29 | 142200 | 3.4420 | 1.0 |
| 3.3328 | 10.3 | 142300 | 3.4472 | 1.0 |
| 3.2838 | 10.31 | 142400 | 3.4439 | 1.0 |
| 3.3274 | 10.32 | 142500 | 3.4408 | 1.0 |
| 3.4848 | 10.32 | 142600 | 3.4448 | 1.0 |
| 3.5383 | 10.33 | 142700 | 3.4423 | 1.0 |
| 3.231 | 10.34 | 142800 | 3.4463 | 1.0 |
| 3.1536 | 10.35 | 142900 | 3.4437 | 1.0 |
| 3.281 | 10.35 | 143000 | 3.4436 | 1.0 |
| 3.2452 | 10.36 | 143100 | 3.4393 | 1.0 |
| 3.5728 | 10.37 | 143200 | 3.4406 | 1.0 |
| 3.3216 | 10.37 | 143300 | 3.4403 | 1.0 |
| 3.3496 | 10.38 | 143400 | 3.4397 | 1.0 |
| 3.3177 | 10.39 | 143500 | 3.4559 | 1.0 |
| 3.3153 | 10.4 | 143600 | 3.4460 | 1.0 |
| 3.4076 | 10.4 | 143700 | 3.4441 | 1.0 |
| 3.4137 | 10.41 | 143800 | 3.4397 | 1.0 |
| 3.3806 | 10.42 | 143900 | 3.4488 | 1.0 |
| 3.366 | 10.42 | 144000 | 3.4462 | 1.0 |
| 3.4151 | 10.43 | 144100 | 3.4446 | 1.0 |
| 3.3399 | 10.44 | 144200 | 3.4447 | 1.0 |
| 3.3705 | 10.45 | 144300 | 3.4392 | 1.0 |
| 3.5029 | 10.45 | 144400 | 3.4513 | 1.0 |
| 3.3149 | 10.46 | 144500 | 3.4458 | 1.0 |
| 3.3677 | 10.47 | 144600 | 3.4442 | 1.0 |
| 3.408 | 10.48 | 144700 | 3.4403 | 1.0 |
| 3.3738 | 10.48 | 144800 | 3.4405 | 1.0 |
| 3.2886 | 10.49 | 144900 | 3.4447 | 1.0 |
| 3.3321 | 10.5 | 145000 | 3.4455 | 1.0 |
| 3.4341 | 10.5 | 145100 | 3.4476 | 1.0 |
| 3.4789 | 10.51 | 145200 | 3.4436 | 1.0 |
| 3.4361 | 10.52 | 145300 | 3.4488 | 1.0 |
| 3.3073 | 10.53 | 145400 | 3.4495 | 1.0 |
| 3.3372 | 10.53 | 145500 | 3.4461 | 1.0 |
| 3.31 | 10.54 | 145600 | 3.4512 | 1.0 |
| 3.4571 | 10.55 | 145700 | 3.4473 | 1.0 |
| 3.3517 | 10.56 | 145800 | 3.4435 | 1.0 |
| 3.4304 | 10.56 | 145900 | 3.4428 | 1.0 |
| 3.4364 | 10.57 | 146000 | 3.4369 | 1.0 |
| 3.5522 | 10.58 | 146100 | 3.4431 | 1.0 |
| 3.421 | 10.58 | 146200 | 3.4426 | 1.0 |
| 3.3087 | 10.59 | 146300 | 3.4436 | 1.0 |
| 3.2905 | 10.6 | 146400 | 3.4417 | 1.0 |
| 3.4746 | 10.61 | 146500 | 3.4419 | 1.0 |
| 3.3347 | 10.61 | 146600 | 3.4396 | 1.0 |
| 3.2969 | 10.62 | 146700 | 3.4471 | 1.0 |
| 3.3403 | 10.63 | 146800 | 3.4453 | 1.0 |
| 3.8747 | 10.63 | 146900 | 3.4447 | 1.0 |
| 3.3049 | 10.64 | 147000 | 3.4458 | 1.0 |
| 3.3451 | 10.65 | 147100 | 3.4441 | 1.0 |
| 3.4467 | 10.66 | 147200 | 3.4439 | 1.0 |
| 3.3037 | 10.66 | 147300 | 3.4425 | 1.0 |
| 3.3891 | 10.67 | 147400 | 3.4427 | 1.0 |
| 3.2158 | 10.68 | 147500 | 3.4436 | 1.0 |
| 3.3726 | 10.69 | 147600 | 3.4438 | 1.0 |
| 3.3391 | 10.69 | 147700 | 3.4548 | 1.0 |
| 3.2352 | 10.7 | 147800 | 3.4414 | 1.0 |
| 3.3604 | 10.71 | 147900 | 3.4408 | 1.0 |
| 3.3056 | 10.71 | 148000 | 3.4407 | 1.0 |
| 3.3201 | 10.72 | 148100 | 3.4404 | 1.0 |
| 3.4137 | 10.73 | 148200 | 3.4423 | 1.0 |
| 3.3336 | 10.74 | 148300 | 3.4455 | 1.0 |
| 3.317 | 10.74 | 148400 | 3.4426 | 1.0 |
| 3.2644 | 10.75 | 148500 | 3.4427 | 1.0 |
| 3.4462 | 10.76 | 148600 | 3.4429 | 1.0 |
| 3.448 | 10.77 | 148700 | 3.4479 | 1.0 |
| 3.8269 | 10.77 | 148800 | 3.4428 | 1.0 |
| 3.2383 | 10.78 | 148900 | 3.4400 | 1.0 |
| 3.4066 | 10.79 | 149000 | 3.4412 | 1.0 |
| 3.2348 | 10.79 | 149100 | 3.4491 | 1.0 |
| 3.2971 | 10.8 | 149200 | 3.4464 | 1.0 |
| 3.2493 | 10.81 | 149300 | 3.4509 | 1.0 |
| 3.4274 | 10.82 | 149400 | 3.4420 | 1.0 |
| 3.4327 | 10.82 | 149500 | 3.4441 | 1.0 |
| 3.7189 | 10.83 | 149600 | 3.4377 | 1.0 |
| 3.3102 | 10.84 | 149700 | 3.4484 | 1.0 |
| 3.4991 | 10.84 | 149800 | 3.4460 | 1.0 |
| 3.2776 | 10.85 | 149900 | 3.4428 | 1.0 |
| 3.4605 | 10.86 | 150000 | 3.4469 | 1.0 |
| 3.8307 | 10.87 | 150100 | 3.4500 | 1.0 |
| 3.3874 | 10.87 | 150200 | 3.4454 | 1.0 |
| 3.3007 | 10.88 | 150300 | 3.4433 | 1.0 |
| 3.4145 | 10.89 | 150400 | 3.4434 | 1.0 |
| 3.1793 | 10.9 | 150500 | 3.4401 | 1.0 |
| 3.27 | 10.9 | 150600 | 3.4459 | 1.0 |
| 3.3434 | 10.91 | 150700 | 3.4400 | 1.0 |
| 3.3301 | 10.92 | 150800 | 3.4389 | 1.0 |
| 3.622 | 10.92 | 150900 | 3.4451 | 1.0 |
| 3.2369 | 10.93 | 151000 | 3.4417 | 1.0 |
| 3.4093 | 10.94 | 151100 | 3.4520 | 1.0 |
| 3.3885 | 10.95 | 151200 | 3.4448 | 1.0 |
| 3.4032 | 10.95 | 151300 | 3.4453 | 1.0 |
| 3.4659 | 10.96 | 151400 | 3.4445 | 1.0 |
| 5.0434 | 10.97 | 151500 | 3.4457 | 1.0 |
| 3.5397 | 10.98 | 151600 | 3.4409 | 1.0 |
| 3.4057 | 10.98 | 151700 | 3.4426 | 1.0 |
| 3.2813 | 10.99 | 151800 | 3.4471 | 1.0 |
| 3.2432 | 11.0 | 151900 | 3.4465 | 1.0 |
| 3.3493 | 11.0 | 152000 | 3.4466 | 1.0 |
| 3.4295 | 11.01 | 152100 | 3.4379 | 1.0 |
| 3.2836 | 11.02 | 152200 | 3.4421 | 1.0 |
| 3.3436 | 11.03 | 152300 | 3.4429 | 1.0 |
| 3.2982 | 11.03 | 152400 | 3.4473 | 1.0 |
| 3.3687 | 11.04 | 152500 | 3.4428 | 1.0 |
| 3.362 | 11.05 | 152600 | 3.4387 | 1.0 |
| 3.3621 | 11.05 | 152700 | 3.4410 | 1.0 |
| 3.4442 | 11.06 | 152800 | 3.4392 | 1.0 |
| 3.247 | 11.07 | 152900 | 3.4536 | 1.0 |
| 3.3843 | 11.08 | 153000 | 3.4479 | 1.0 |
| 3.3548 | 11.08 | 153100 | 3.4425 | 1.0 |
| 3.376 | 11.09 | 153200 | 3.4394 | 1.0 |
| 3.3866 | 11.1 | 153300 | 3.4389 | 1.0 |
| 3.3348 | 11.11 | 153400 | 3.4484 | 1.0 |
| 3.3206 | 11.11 | 153500 | 3.4468 | 1.0 |
| 3.4335 | 11.12 | 153600 | 3.4445 | 1.0 |
| 3.3921 | 11.13 | 153700 | 3.4456 | 1.0 |
| 3.434 | 11.13 | 153800 | 3.4422 | 1.0 |
| 3.3742 | 11.14 | 153900 | 3.4434 | 1.0 |
| 3.3157 | 11.15 | 154000 | 3.4444 | 1.0 |
| 3.4209 | 11.16 | 154100 | 3.4411 | 1.0 |
| 3.3413 | 11.16 | 154200 | 3.4457 | 1.0 |
| 3.3626 | 11.17 | 154300 | 3.4451 | 1.0 |
| 3.3541 | 11.18 | 154400 | 3.4391 | 1.0 |
| 3.2927 | 11.19 | 154500 | 3.4515 | 1.0 |
| 3.3222 | 11.19 | 154600 | 3.4498 | 1.0 |
| 3.2971 | 11.2 | 154700 | 3.4521 | 1.0 |
| 3.3817 | 11.21 | 154800 | 3.4482 | 1.0 |
| 3.3806 | 11.21 | 154900 | 3.4467 | 1.0 |
| 3.2959 | 11.22 | 155000 | 3.4417 | 1.0 |
| 3.4212 | 11.23 | 155100 | 3.4438 | 1.0 |
| 3.3606 | 11.24 | 155200 | 3.4382 | 1.0 |
| 3.3119 | 11.24 | 155300 | 3.4381 | 1.0 |
| 3.4004 | 11.25 | 155400 | 3.4403 | 1.0 |
| 3.2865 | 11.26 | 155500 | 3.4469 | 1.0 |
| 3.3606 | 11.26 | 155600 | 3.4492 | 1.0 |
| 3.2771 | 11.27 | 155700 | 3.4407 | 1.0 |
| 3.3281 | 11.28 | 155800 | 3.4461 | 1.0 |
| 3.3006 | 11.29 | 155900 | 3.4505 | 1.0 |
| 3.3116 | 11.29 | 156000 | 3.4440 | 1.0 |
| 3.4326 | 11.3 | 156100 | 3.4475 | 1.0 |
| 3.2976 | 11.31 | 156200 | 3.4517 | 1.0 |
| 3.3424 | 11.32 | 156300 | 3.4429 | 1.0 |
| 3.5005 | 11.32 | 156400 | 3.4398 | 1.0 |
| 3.2623 | 11.33 | 156500 | 3.4382 | 1.0 |
| 3.331 | 11.34 | 156600 | 3.4472 | 1.0 |
| 3.3657 | 11.34 | 156700 | 3.4413 | 1.0 |
| 3.3101 | 11.35 | 156800 | 3.4496 | 1.0 |
| 3.3516 | 11.36 | 156900 | 3.4465 | 1.0 |
| 3.752 | 11.37 | 157000 | 3.4419 | 1.0 |
| 3.2446 | 11.37 | 157100 | 3.4416 | 1.0 |
| 3.2753 | 11.38 | 157200 | 3.4406 | 1.0 |
| 3.2386 | 11.39 | 157300 | 3.4420 | 1.0 |
| 3.3541 | 11.4 | 157400 | 3.4409 | 1.0 |
| 3.4276 | 11.4 | 157500 | 3.4430 | 1.0 |
| 3.2635 | 11.41 | 157600 | 3.4442 | 1.0 |
| 3.4478 | 11.42 | 157700 | 3.4413 | 1.0 |
| 3.3043 | 11.42 | 157800 | 3.4491 | 1.0 |
| 3.3014 | 11.43 | 157900 | 3.4413 | 1.0 |
| 3.3542 | 11.44 | 158000 | 3.4436 | 1.0 |
| 3.3745 | 11.45 | 158100 | 3.4465 | 1.0 |
| 3.3318 | 11.45 | 158200 | 3.4463 | 1.0 |
| 3.3373 | 11.46 | 158300 | 3.4444 | 1.0 |
| 3.4279 | 11.47 | 158400 | 3.4386 | 1.0 |
| 3.3588 | 11.47 | 158500 | 3.4449 | 1.0 |
| 3.338 | 11.48 | 158600 | 3.4399 | 1.0 |
| 3.4119 | 11.49 | 158700 | 3.4376 | 1.0 |
| 3.2989 | 11.5 | 158800 | 3.4462 | 1.0 |
| 3.1883 | 11.5 | 158900 | 3.4398 | 1.0 |
| 3.277 | 11.51 | 159000 | 3.4457 | 1.0 |
| 3.2838 | 11.52 | 159100 | 3.4481 | 1.0 |
| 3.3205 | 11.53 | 159200 | 3.4496 | 1.0 |
| 3.2713 | 11.53 | 159300 | 3.4435 | 1.0 |
| 3.3927 | 11.54 | 159400 | 3.4441 | 1.0 |
| 3.5806 | 11.55 | 159500 | 3.4466 | 1.0 |
| 3.3704 | 11.55 | 159600 | 3.4462 | 1.0 |
| 3.3217 | 11.56 | 159700 | 3.4444 | 1.0 |
| 3.2637 | 11.57 | 159800 | 3.4481 | 1.0 |
| 3.2525 | 11.58 | 159900 | 3.4456 | 1.0 |
| 3.3364 | 11.58 | 160000 | 3.4445 | 1.0 |
| 3.3219 | 11.59 | 160100 | 3.4431 | 1.0 |
| 3.3982 | 11.6 | 160200 | 3.4489 | 1.0 |
| 3.2253 | 11.61 | 160300 | 3.4409 | 1.0 |
| 3.2497 | 11.61 | 160400 | 3.4427 | 1.0 |
| 3.3137 | 11.62 | 160500 | 3.4454 | 1.0 |
| 3.566 | 11.63 | 160600 | 3.4419 | 1.0 |
| 3.3203 | 11.63 | 160700 | 3.4460 | 1.0 |
| 3.3048 | 11.64 | 160800 | 3.4439 | 1.0 |
| 3.371 | 11.65 | 160900 | 3.4432 | 1.0 |
| 3.249 | 11.66 | 161000 | 3.4412 | 1.0 |
| 3.2731 | 11.66 | 161100 | 3.4430 | 1.0 |
| 3.3787 | 11.67 | 161200 | 3.4426 | 1.0 |
| 3.2696 | 11.68 | 161300 | 3.4479 | 1.0 |
| 3.7056 | 11.68 | 161400 | 3.4417 | 1.0 |
| 3.3999 | 11.69 | 161500 | 3.4455 | 1.0 |
| 3.292 | 11.7 | 161600 | 3.4458 | 1.0 |
| 3.2673 | 11.71 | 161700 | 3.4398 | 1.0 |
| 3.4488 | 11.71 | 161800 | 3.4445 | 1.0 |
| 3.2858 | 11.72 | 161900 | 3.4422 | 1.0 |
| 3.4464 | 11.73 | 162000 | 3.4466 | 1.0 |
| 3.2651 | 11.74 | 162100 | 3.4460 | 1.0 |
| 3.2518 | 11.74 | 162200 | 3.4520 | 1.0 |
| 3.4483 | 11.75 | 162300 | 3.4447 | 1.0 |
| 3.2609 | 11.76 | 162400 | 3.4373 | 1.0 |
| 3.398 | 11.76 | 162500 | 3.4432 | 1.0 |
| 3.5529 | 11.77 | 162600 | 3.4435 | 1.0 |
| 3.3348 | 11.78 | 162700 | 3.4452 | 1.0 |
| 3.398 | 11.79 | 162800 | 3.4393 | 1.0 |
| 3.5933 | 11.79 | 162900 | 3.4418 | 1.0 |
| 3.3373 | 11.8 | 163000 | 3.4434 | 1.0 |
| 3.3553 | 11.81 | 163100 | 3.4463 | 1.0 |
| 3.3234 | 11.81 | 163200 | 3.4421 | 1.0 |
| 3.3678 | 11.82 | 163300 | 3.4417 | 1.0 |
| 3.2942 | 11.83 | 163400 | 3.4454 | 1.0 |
| 3.5065 | 11.84 | 163500 | 3.4490 | 1.0 |
| 3.2952 | 11.84 | 163600 | 3.4468 | 1.0 |
| 3.7354 | 11.85 | 163700 | 3.4450 | 1.0 |
| 3.3021 | 11.86 | 163800 | 3.4439 | 1.0 |
| 3.3754 | 11.87 | 163900 | 3.4455 | 1.0 |
| 3.2568 | 11.87 | 164000 | 3.4400 | 1.0 |
| 3.3191 | 11.88 | 164100 | 3.4391 | 1.0 |
| 3.379 | 11.89 | 164200 | 3.4435 | 1.0 |
| 3.3221 | 11.89 | 164300 | 3.4440 | 1.0 |
| 3.3765 | 11.9 | 164400 | 3.4452 | 1.0 |
| 3.2364 | 11.91 | 164500 | 3.4445 | 1.0 |
| 3.6366 | 11.92 | 164600 | 3.4424 | 1.0 |
| 3.3871 | 11.92 | 164700 | 3.4398 | 1.0 |
| 3.3257 | 11.93 | 164800 | 3.4414 | 1.0 |
| 3.298 | 11.94 | 164900 | 3.4388 | 1.0 |
| 3.2322 | 11.95 | 165000 | 3.4410 | 1.0 |
| 3.4019 | 11.95 | 165100 | 3.4453 | 1.0 |
| 3.5989 | 11.96 | 165200 | 3.4435 | 1.0 |
| 3.3113 | 11.97 | 165300 | 3.4439 | 1.0 |
| 3.3364 | 11.97 | 165400 | 3.4416 | 1.0 |
| 3.3256 | 11.98 | 165500 | 3.4465 | 1.0 |
| 3.3355 | 11.99 | 165600 | 3.4434 | 1.0 |
| 3.3243 | 12.0 | 165700 | 3.4420 | 1.0 |
| 3.277 | 12.0 | 165800 | 3.4429 | 1.0 |
| 3.3413 | 12.01 | 165900 | 3.4418 | 1.0 |
| 3.3576 | 12.02 | 166000 | 3.4432 | 1.0 |
| 3.2624 | 12.02 | 166100 | 3.4493 | 1.0 |
| 3.4131 | 12.03 | 166200 | 3.4429 | 1.0 |
| 3.3717 | 12.04 | 166300 | 3.4460 | 1.0 |
| 3.4403 | 12.05 | 166400 | 3.4413 | 1.0 |
| 3.3418 | 12.05 | 166500 | 3.4425 | 1.0 |
| 3.2016 | 12.06 | 166600 | 3.4429 | 1.0 |
| 3.2851 | 12.07 | 166700 | 3.4427 | 1.0 |
| 3.3627 | 12.08 | 166800 | 3.4436 | 1.0 |
| 3.176 | 12.08 | 166900 | 3.4473 | 1.0 |
| 3.3159 | 12.09 | 167000 | 3.4431 | 1.0 |
| 3.335 | 12.1 | 167100 | 3.4425 | 1.0 |
| 3.2585 | 12.1 | 167200 | 3.4438 | 1.0 |
| 3.311 | 12.11 | 167300 | 3.4420 | 1.0 |
| 3.2594 | 12.12 | 167400 | 3.4402 | 1.0 |
| 3.3877 | 12.13 | 167500 | 3.4427 | 1.0 |
| 3.3837 | 12.13 | 167600 | 3.4468 | 1.0 |
| 3.4012 | 12.14 | 167700 | 3.4431 | 1.0 |
| 3.3258 | 12.15 | 167800 | 3.4405 | 1.0 |
| 3.5918 | 12.16 | 167900 | 3.4420 | 1.0 |
| 3.1809 | 12.16 | 168000 | 3.4487 | 1.0 |
| 3.2878 | 12.17 | 168100 | 3.4453 | 1.0 |
| 3.3626 | 12.18 | 168200 | 3.4469 | 1.0 |
| 3.3128 | 12.18 | 168300 | 3.4452 | 1.0 |
| 3.3257 | 12.19 | 168400 | 3.4466 | 1.0 |
| 3.3226 | 12.2 | 168500 | 3.4416 | 1.0 |
| 3.5412 | 12.21 | 168600 | 3.4479 | 1.0 |
| 3.2933 | 12.21 | 168700 | 3.4476 | 1.0 |
| 3.5552 | 12.22 | 168800 | 3.4431 | 1.0 |
| 3.3288 | 12.23 | 168900 | 3.4424 | 1.0 |
| 3.4587 | 12.23 | 169000 | 3.4423 | 1.0 |
| 3.3286 | 12.24 | 169100 | 3.4449 | 1.0 |
| 3.2894 | 12.25 | 169200 | 3.4432 | 1.0 |
| 4.5148 | 12.26 | 169300 | 3.4424 | 1.0 |
| 3.3809 | 12.26 | 169400 | 3.4472 | 1.0 |
| 3.2641 | 12.27 | 169500 | 3.4456 | 1.0 |
| 3.3429 | 12.28 | 169600 | 3.4443 | 1.0 |
| 3.2988 | 12.29 | 169700 | 3.4423 | 1.0 |
| 3.3795 | 12.29 | 169800 | 3.4408 | 1.0 |
| 3.2812 | 12.3 | 169900 | 3.4468 | 1.0 |
| 3.2393 | 12.31 | 170000 | 3.4415 | 1.0 |
| 3.3997 | 12.31 | 170100 | 3.4426 | 1.0 |
| 3.3112 | 12.32 | 170200 | 3.4424 | 1.0 |
| 3.4299 | 12.33 | 170300 | 3.4434 | 1.0 |
| 3.486 | 12.34 | 170400 | 3.4454 | 1.0 |
| 3.2899 | 12.34 | 170500 | 3.4451 | 1.0 |
| 3.4311 | 12.35 | 170600 | 3.4456 | 1.0 |
| 3.2727 | 12.36 | 170700 | 3.4472 | 1.0 |
| 3.3182 | 12.37 | 170800 | 3.4409 | 1.0 |
| 3.5047 | 12.37 | 170900 | 3.4412 | 1.0 |
| 3.3801 | 12.38 | 171000 | 3.4403 | 1.0 |
| 3.3643 | 12.39 | 171100 | 3.4400 | 1.0 |
| 3.3132 | 12.39 | 171200 | 3.4417 | 1.0 |
| 3.3558 | 12.4 | 171300 | 3.4440 | 1.0 |
| 3.4187 | 12.41 | 171400 | 3.4470 | 1.0 |
| 3.3376 | 12.42 | 171500 | 3.4450 | 1.0 |
| 3.3095 | 12.42 | 171600 | 3.4456 | 1.0 |
| 3.3304 | 12.43 | 171700 | 3.4465 | 1.0 |
| 3.4092 | 12.44 | 171800 | 3.4500 | 1.0 |
| 3.4149 | 12.44 | 171900 | 3.4459 | 1.0 |
| 5.8155 | 12.45 | 172000 | 3.4422 | 1.0 |
| 3.3086 | 12.46 | 172100 | 3.4405 | 1.0 |
| 3.2699 | 12.47 | 172200 | 3.4439 | 1.0 |
| 3.2727 | 12.47 | 172300 | 3.4465 | 1.0 |
| 3.4084 | 12.48 | 172400 | 3.4495 | 1.0 |
| 3.3246 | 12.49 | 172500 | 3.4451 | 1.0 |
| 3.4584 | 12.5 | 172600 | 3.4404 | 1.0 |
| 3.3491 | 12.5 | 172700 | 3.4407 | 1.0 |
| 3.3103 | 12.51 | 172800 | 3.4417 | 1.0 |
| 3.3413 | 12.52 | 172900 | 3.4452 | 1.0 |
| 3.3625 | 12.52 | 173000 | 3.4437 | 1.0 |
| 3.3988 | 12.53 | 173100 | 3.4452 | 1.0 |
| 3.3915 | 12.54 | 173200 | 3.4428 | 1.0 |
| 3.2812 | 12.55 | 173300 | 3.4445 | 1.0 |
| 3.2952 | 12.55 | 173400 | 3.4450 | 1.0 |
| 3.4923 | 12.56 | 173500 | 3.4419 | 1.0 |
| 3.4275 | 12.57 | 173600 | 3.4420 | 1.0 |
| 3.8005 | 12.58 | 173700 | 3.4465 | 1.0 |
| 3.5748 | 12.58 | 173800 | 3.4437 | 1.0 |
| 3.283 | 12.59 | 173900 | 3.4441 | 1.0 |
| 3.3727 | 12.6 | 174000 | 3.4444 | 1.0 |
| 3.285 | 12.6 | 174100 | 3.4443 | 1.0 |
| 3.4836 | 12.61 | 174200 | 3.4422 | 1.0 |
| 3.5803 | 12.62 | 174300 | 3.4426 | 1.0 |
| 3.2655 | 12.63 | 174400 | 3.4420 | 1.0 |
| 3.3653 | 12.63 | 174500 | 3.4463 | 1.0 |
| 3.3581 | 12.64 | 174600 | 3.4464 | 1.0 |
| 3.2738 | 12.65 | 174700 | 3.4435 | 1.0 |
| 3.3552 | 12.65 | 174800 | 3.4409 | 1.0 |
| 3.3571 | 12.66 | 174900 | 3.4467 | 1.0 |
| 3.3093 | 12.67 | 175000 | 3.4423 | 1.0 |
| 3.6147 | 12.68 | 175100 | 3.4444 | 1.0 |
| 3.2892 | 12.68 | 175200 | 3.4420 | 1.0 |
| 3.4071 | 12.69 | 175300 | 3.4455 | 1.0 |
| 3.3201 | 12.7 | 175400 | 3.4502 | 1.0 |
| 3.308 | 12.71 | 175500 | 3.4428 | 1.0 |
| 3.3885 | 12.71 | 175600 | 3.4452 | 1.0 |
| 3.3285 | 12.72 | 175700 | 3.4418 | 1.0 |
| 3.3647 | 12.73 | 175800 | 3.4446 | 1.0 |
| 3.2559 | 12.73 | 175900 | 3.4433 | 1.0 |
| 3.4547 | 12.74 | 176000 | 3.4484 | 1.0 |
| 3.395 | 12.75 | 176100 | 3.4464 | 1.0 |
| 3.4244 | 12.76 | 176200 | 3.4468 | 1.0 |
| 3.4961 | 12.76 | 176300 | 3.4441 | 1.0 |
| 3.4281 | 12.77 | 176400 | 3.4419 | 1.0 |
| 3.4241 | 12.78 | 176500 | 3.4407 | 1.0 |
| 3.2563 | 12.79 | 176600 | 3.4430 | 1.0 |
| 3.3779 | 12.79 | 176700 | 3.4437 | 1.0 |
| 3.3268 | 12.8 | 176800 | 3.4457 | 1.0 |
| 3.4255 | 12.81 | 176900 | 3.4437 | 1.0 |
| 3.3086 | 12.81 | 177000 | 3.4422 | 1.0 |
| 3.3619 | 12.82 | 177100 | 3.4447 | 1.0 |
| 3.2334 | 12.83 | 177200 | 3.4457 | 1.0 |
| 3.4318 | 12.84 | 177300 | 3.4413 | 1.0 |
| 3.2553 | 12.84 | 177400 | 3.4425 | 1.0 |
| 3.225 | 12.85 | 177500 | 3.4435 | 1.0 |
| 3.3984 | 12.86 | 177600 | 3.4518 | 1.0 |
| 3.5566 | 12.86 | 177700 | 3.4481 | 1.0 |
| 4.3006 | 12.87 | 177800 | 3.4463 | 1.0 |
| 3.2232 | 12.88 | 177900 | 3.4454 | 1.0 |
| 3.2224 | 12.89 | 178000 | 3.4452 | 1.0 |
| 3.3974 | 12.89 | 178100 | 3.4430 | 1.0 |
| 3.4688 | 12.9 | 178200 | 3.4441 | 1.0 |
| 3.293 | 12.91 | 178300 | 3.4422 | 1.0 |
| 3.7722 | 12.92 | 178400 | 3.4459 | 1.0 |
| 3.3155 | 12.92 | 178500 | 3.4451 | 1.0 |
| 3.3955 | 12.93 | 178600 | 3.4438 | 1.0 |
| 3.2985 | 12.94 | 178700 | 3.4411 | 1.0 |
| 3.3729 | 12.94 | 178800 | 3.4415 | 1.0 |
| 3.3966 | 12.95 | 178900 | 3.4433 | 1.0 |
| 3.2917 | 12.96 | 179000 | 3.4422 | 1.0 |
| 3.3772 | 12.97 | 179100 | 3.4426 | 1.0 |
| 3.2921 | 12.97 | 179200 | 3.4458 | 1.0 |
| 3.2751 | 12.98 | 179300 | 3.4429 | 1.0 |
| 3.4227 | 12.99 | 179400 | 3.4429 | 1.0 |
| 3.3031 | 13.0 | 179500 | 3.4463 | 1.0 |
| 3.3257 | 13.0 | 179600 | 3.4496 | 1.0 |
| 3.3472 | 13.01 | 179700 | 3.4436 | 1.0 |
| 3.4014 | 13.02 | 179800 | 3.4484 | 1.0 |
| 3.4494 | 13.02 | 179900 | 3.4418 | 1.0 |
| 3.559 | 13.03 | 180000 | 3.4425 | 1.0 |
| 3.3253 | 13.04 | 180100 | 3.4412 | 1.0 |
| 3.2797 | 13.05 | 180200 | 3.4464 | 1.0 |
| 3.3854 | 13.05 | 180300 | 3.4484 | 1.0 |
| 3.24 | 13.06 | 180400 | 3.4446 | 1.0 |
| 3.2406 | 13.07 | 180500 | 3.4453 | 1.0 |
| 3.3609 | 13.07 | 180600 | 3.4425 | 1.0 |
| 3.3496 | 13.08 | 180700 | 3.4465 | 1.0 |
| 3.2963 | 13.09 | 180800 | 3.4437 | 1.0 |
| 3.2781 | 13.1 | 180900 | 3.4481 | 1.0 |
| 3.1707 | 13.1 | 181000 | 3.4465 | 1.0 |
| 3.5305 | 13.11 | 181100 | 3.4460 | 1.0 |
| 3.3498 | 13.12 | 181200 | 3.4423 | 1.0 |
| 3.276 | 13.13 | 181300 | 3.4402 | 1.0 |
| 3.2264 | 13.13 | 181400 | 3.4432 | 1.0 |
| 3.2517 | 13.14 | 181500 | 3.4408 | 1.0 |
| 3.3312 | 13.15 | 181600 | 3.4455 | 1.0 |
| 3.4057 | 13.15 | 181700 | 3.4476 | 1.0 |
| 3.34 | 13.16 | 181800 | 3.4415 | 1.0 |
| 3.2458 | 13.17 | 181900 | 3.4409 | 1.0 |
| 3.3949 | 13.18 | 182000 | 3.4405 | 1.0 |
| 3.289 | 13.18 | 182100 | 3.4431 | 1.0 |
| 3.4016 | 13.19 | 182200 | 3.4393 | 1.0 |
| 3.256 | 13.2 | 182300 | 3.4410 | 1.0 |
| 3.2597 | 13.2 | 182400 | 3.4391 | 1.0 |
| 3.2483 | 13.21 | 182500 | 3.4387 | 1.0 |
| 3.3637 | 13.22 | 182600 | 3.4409 | 1.0 |
| 3.2936 | 13.23 | 182700 | 3.4399 | 1.0 |
| 3.2666 | 13.23 | 182800 | 3.4458 | 1.0 |
| 3.3675 | 13.24 | 182900 | 3.4494 | 1.0 |
| 3.3538 | 13.25 | 183000 | 3.4430 | 1.0 |
| 3.3276 | 13.26 | 183100 | 3.4442 | 1.0 |
| 3.3851 | 13.26 | 183200 | 3.4425 | 1.0 |
| 3.3579 | 13.27 | 183300 | 3.4410 | 1.0 |
| 3.2882 | 13.28 | 183400 | 3.4400 | 1.0 |
| 3.3541 | 13.28 | 183500 | 3.4436 | 1.0 |
| 3.392 | 13.29 | 183600 | 3.4445 | 1.0 |
| 3.3857 | 13.3 | 183700 | 3.4477 | 1.0 |
| 3.3084 | 13.31 | 183800 | 3.4463 | 1.0 |
| 3.327 | 13.31 | 183900 | 3.4451 | 1.0 |
| 3.3967 | 13.32 | 184000 | 3.4483 | 1.0 |
| 3.3657 | 13.33 | 184100 | 3.4471 | 1.0 |
| 3.3732 | 13.34 | 184200 | 3.4465 | 1.0 |
| 3.366 | 13.34 | 184300 | 3.4459 | 1.0 |
| 3.2545 | 13.35 | 184400 | 3.4451 | 1.0 |
| 4.2873 | 13.36 | 184500 | 3.4425 | 1.0 |
| 3.6525 | 13.36 | 184600 | 3.4432 | 1.0 |
| 3.2921 | 13.37 | 184700 | 3.4437 | 1.0 |
| 3.273 | 13.38 | 184800 | 3.4420 | 1.0 |
| 3.267 | 13.39 | 184900 | 3.4445 | 1.0 |
| 3.3585 | 13.39 | 185000 | 3.4459 | 1.0 |
| 3.3271 | 13.4 | 185100 | 3.4424 | 1.0 |
| 3.3752 | 13.41 | 185200 | 3.4406 | 1.0 |
| 3.2715 | 13.41 | 185300 | 3.4424 | 1.0 |
| 3.2668 | 13.42 | 185400 | 3.4440 | 1.0 |
| 3.4546 | 13.43 | 185500 | 3.4464 | 1.0 |
| 3.2931 | 13.44 | 185600 | 3.4444 | 1.0 |
| 3.4428 | 13.44 | 185700 | 3.4443 | 1.0 |
| 3.4004 | 13.45 | 185800 | 3.4475 | 1.0 |
| 3.3416 | 13.46 | 185900 | 3.4447 | 1.0 |
| 3.3598 | 13.47 | 186000 | 3.4458 | 1.0 |
| 3.3348 | 13.47 | 186100 | 3.4420 | 1.0 |
| 3.2879 | 13.48 | 186200 | 3.4410 | 1.0 |
| 3.3791 | 13.49 | 186300 | 3.4481 | 1.0 |
| 3.3066 | 13.49 | 186400 | 3.4440 | 1.0 |
| 3.2824 | 13.5 | 186500 | 3.4447 | 1.0 |
| 3.4092 | 13.51 | 186600 | 3.4447 | 1.0 |
| 3.2679 | 13.52 | 186700 | 3.4443 | 1.0 |
| 3.3921 | 13.52 | 186800 | 3.4447 | 1.0 |
| 3.3348 | 13.53 | 186900 | 3.4424 | 1.0 |
| 3.2365 | 13.54 | 187000 | 3.4392 | 1.0 |
| 3.3355 | 13.55 | 187100 | 3.4387 | 1.0 |
| 3.2654 | 13.55 | 187200 | 3.4393 | 1.0 |
| 3.3085 | 13.56 | 187300 | 3.4404 | 1.0 |
| 3.3127 | 13.57 | 187400 | 3.4400 | 1.0 |
| 3.219 | 13.57 | 187500 | 3.4422 | 1.0 |
| 3.3733 | 13.58 | 187600 | 3.4391 | 1.0 |
| 3.2622 | 13.59 | 187700 | 3.4420 | 1.0 |
| 3.2188 | 13.6 | 187800 | 3.4445 | 1.0 |
| 3.2977 | 13.6 | 187900 | 3.4437 | 1.0 |
| 3.2994 | 13.61 | 188000 | 3.4463 | 1.0 |
| 3.2897 | 13.62 | 188100 | 3.4438 | 1.0 |
| 3.3194 | 13.62 | 188200 | 3.4452 | 1.0 |
| 3.3566 | 13.63 | 188300 | 3.4446 | 1.0 |
| 3.3442 | 13.64 | 188400 | 3.4509 | 1.0 |
| 3.58 | 13.65 | 188500 | 3.4509 | 1.0 |
| 3.4537 | 13.65 | 188600 | 3.4479 | 1.0 |
| 3.342 | 13.66 | 188700 | 3.4428 | 1.0 |
| 3.2765 | 13.67 | 188800 | 3.4410 | 1.0 |
| 3.2765 | 13.68 | 188900 | 3.4422 | 1.0 |
| 3.3381 | 13.68 | 189000 | 3.4400 | 1.0 |
| 3.2883 | 13.69 | 189100 | 3.4411 | 1.0 |
| 3.2861 | 13.7 | 189200 | 3.4417 | 1.0 |
| 3.3049 | 13.7 | 189300 | 3.4431 | 1.0 |
| 3.7184 | 13.71 | 189400 | 3.4446 | 1.0 |
| 3.3307 | 13.72 | 189500 | 3.4449 | 1.0 |
| 3.3274 | 13.73 | 189600 | 3.4456 | 1.0 |
| 3.3481 | 13.73 | 189700 | 3.4417 | 1.0 |
| 3.3763 | 13.74 | 189800 | 3.4439 | 1.0 |
| 3.3005 | 13.75 | 189900 | 3.4442 | 1.0 |
| 3.3775 | 13.76 | 190000 | 3.4458 | 1.0 |
| 3.284 | 13.76 | 190100 | 3.4427 | 1.0 |
| 3.2496 | 13.77 | 190200 | 3.4465 | 1.0 |
| 3.4141 | 13.78 | 190300 | 3.4422 | 1.0 |
| 3.3689 | 13.78 | 190400 | 3.4441 | 1.0 |
| 3.2925 | 13.79 | 190500 | 3.4446 | 1.0 |
| 3.334 | 13.8 | 190600 | 3.4447 | 1.0 |
| 3.4054 | 13.81 | 190700 | 3.4442 | 1.0 |
| 3.5985 | 13.81 | 190800 | 3.4418 | 1.0 |
| 3.307 | 13.82 | 190900 | 3.4437 | 1.0 |
| 3.2475 | 13.83 | 191000 | 3.4418 | 1.0 |
| 3.4217 | 13.83 | 191100 | 3.4429 | 1.0 |
| 3.2629 | 13.84 | 191200 | 3.4417 | 1.0 |
| 3.4471 | 13.85 | 191300 | 3.4420 | 1.0 |
| 3.3174 | 13.86 | 191400 | 3.4400 | 1.0 |
| 3.3505 | 13.86 | 191500 | 3.4430 | 1.0 |
| 3.4601 | 13.87 | 191600 | 3.4409 | 1.0 |
| 3.2617 | 13.88 | 191700 | 3.4439 | 1.0 |
| 3.4259 | 13.89 | 191800 | 3.4451 | 1.0 |
| 3.4135 | 13.89 | 191900 | 3.4424 | 1.0 |
| 3.2713 | 13.9 | 192000 | 3.4425 | 1.0 |
| 3.3399 | 13.91 | 192100 | 3.4450 | 1.0 |
| 3.375 | 13.91 | 192200 | 3.4440 | 1.0 |
| 3.2318 | 13.92 | 192300 | 3.4449 | 1.0 |
| 3.2925 | 13.93 | 192400 | 3.4430 | 1.0 |
| 3.416 | 13.94 | 192500 | 3.4440 | 1.0 |
| 3.283 | 13.94 | 192600 | 3.4441 | 1.0 |
| 3.249 | 13.95 | 192700 | 3.4436 | 1.0 |
| 3.3415 | 13.96 | 192800 | 3.4435 | 1.0 |
| 3.3123 | 13.97 | 192900 | 3.4427 | 1.0 |
| 3.3019 | 13.97 | 193000 | 3.4414 | 1.0 |
| 3.3949 | 13.98 | 193100 | 3.4409 | 1.0 |
| 3.3118 | 13.99 | 193200 | 3.4413 | 1.0 |
| 3.4302 | 13.99 | 193300 | 3.4431 | 1.0 |
| 3.382 | 14.0 | 193400 | 3.4439 | 1.0 |
| 3.4496 | 14.01 | 193500 | 3.4429 | 1.0 |
| 3.2643 | 14.02 | 193600 | 3.4454 | 1.0 |
| 3.2298 | 14.02 | 193700 | 3.4439 | 1.0 |
| 3.3804 | 14.03 | 193800 | 3.4429 | 1.0 |
| 3.2049 | 14.04 | 193900 | 3.4429 | 1.0 |
| 3.3818 | 14.04 | 194000 | 3.4420 | 1.0 |
| 3.2901 | 14.05 | 194100 | 3.4433 | 1.0 |
| 3.2989 | 14.06 | 194200 | 3.4419 | 1.0 |
| 3.2548 | 14.07 | 194300 | 3.4434 | 1.0 |
| 3.454 | 14.07 | 194400 | 3.4432 | 1.0 |
| 3.3365 | 14.08 | 194500 | 3.4433 | 1.0 |
| 3.3799 | 14.09 | 194600 | 3.4443 | 1.0 |
| 3.3536 | 14.1 | 194700 | 3.4438 | 1.0 |
| 3.5929 | 14.1 | 194800 | 3.4441 | 1.0 |
| 4.2116 | 14.11 | 194900 | 3.4433 | 1.0 |
| 3.4121 | 14.12 | 195000 | 3.4437 | 1.0 |
| 3.3715 | 14.12 | 195100 | 3.4442 | 1.0 |
| 3.4325 | 14.13 | 195200 | 3.4467 | 1.0 |
| 3.3585 | 14.14 | 195300 | 3.4450 | 1.0 |
| 3.3374 | 14.15 | 195400 | 3.4421 | 1.0 |
| 3.3519 | 14.15 | 195500 | 3.4421 | 1.0 |
| 3.4128 | 14.16 | 195600 | 3.4416 | 1.0 |
| 3.3448 | 14.17 | 195700 | 3.4412 | 1.0 |
| 3.4239 | 14.18 | 195800 | 3.4418 | 1.0 |
| 3.6124 | 14.18 | 195900 | 3.4440 | 1.0 |
| 3.3607 | 14.19 | 196000 | 3.4444 | 1.0 |
| 3.3141 | 14.2 | 196100 | 3.4433 | 1.0 |
| 3.4431 | 14.2 | 196200 | 3.4432 | 1.0 |
| 3.4539 | 14.21 | 196300 | 3.4426 | 1.0 |
| 3.3409 | 14.22 | 196400 | 3.4418 | 1.0 |
| 3.2736 | 14.23 | 196500 | 3.4422 | 1.0 |
| 3.8002 | 14.23 | 196600 | 3.4431 | 1.0 |
| 3.501 | 14.24 | 196700 | 3.4421 | 1.0 |
| 3.3537 | 14.25 | 196800 | 3.4420 | 1.0 |
| 3.4373 | 14.25 | 196900 | 3.4412 | 1.0 |
| 3.359 | 14.26 | 197000 | 3.4412 | 1.0 |
| 3.302 | 14.27 | 197100 | 3.4425 | 1.0 |
| 3.3282 | 14.28 | 197200 | 3.4424 | 1.0 |
| 3.3941 | 14.28 | 197300 | 3.4424 | 1.0 |
| 4.4183 | 14.29 | 197400 | 3.4435 | 1.0 |
| 3.4406 | 14.3 | 197500 | 3.4432 | 1.0 |
| 3.285 | 14.31 | 197600 | 3.4432 | 1.0 |
| 3.3289 | 14.31 | 197700 | 3.4442 | 1.0 |
| 3.3085 | 14.32 | 197800 | 3.4426 | 1.0 |
| 3.2033 | 14.33 | 197900 | 3.4446 | 1.0 |
| 3.3691 | 14.33 | 198000 | 3.4448 | 1.0 |
| 3.3715 | 14.34 | 198100 | 3.4448 | 1.0 |
| 4.5572 | 14.35 | 198200 | 3.4432 | 1.0 |
| 3.3509 | 14.36 | 198300 | 3.4431 | 1.0 |
| 3.3179 | 14.36 | 198400 | 3.4426 | 1.0 |
| 3.2891 | 14.37 | 198500 | 3.4436 | 1.0 |
| 3.3872 | 14.38 | 198600 | 3.4436 | 1.0 |
| 3.3177 | 14.38 | 198700 | 3.4442 | 1.0 |
| 3.4302 | 14.39 | 198800 | 3.4446 | 1.0 |
| 3.3834 | 14.4 | 198900 | 3.4441 | 1.0 |
| 3.4318 | 14.41 | 199000 | 3.4430 | 1.0 |
| 3.4176 | 14.41 | 199100 | 3.4431 | 1.0 |
| 4.6882 | 14.42 | 199200 | 3.4431 | 1.0 |
| 3.2657 | 14.43 | 199300 | 3.4436 | 1.0 |
| 3.3929 | 14.44 | 199400 | 3.4436 | 1.0 |
| 5.337 | 14.44 | 199500 | 3.4432 | 1.0 |
| 3.4289 | 14.45 | 199600 | 3.4432 | 1.0 |
| 3.2498 | 14.46 | 199700 | 3.4435 | 1.0 |
| 3.3635 | 14.46 | 199800 | 3.4432 | 1.0 |
| 5.4355 | 14.47 | 199900 | 3.4418 | 1.0 |
| 3.2158 | 14.48 | 200000 | 3.4427 | 1.0 |
| 3.4885 | 14.49 | 200100 | 3.4435 | 1.0 |
| 3.3739 | 14.49 | 200200 | 3.4430 | 1.0 |
| 3.4712 | 14.5 | 200300 | 3.4434 | 1.0 |
| 3.3742 | 14.51 | 200400 | 3.4444 | 1.0 |
| 3.3465 | 14.52 | 200500 | 3.4429 | 1.0 |
| 3.3277 | 14.52 | 200600 | 3.4430 | 1.0 |
| 3.3073 | 14.53 | 200700 | 3.4431 | 1.0 |
| 3.33 | 14.54 | 200800 | 3.4432 | 1.0 |
| 3.3857 | 14.54 | 200900 | 3.4436 | 1.0 |
| 3.4481 | 14.55 | 201000 | 3.4430 | 1.0 |
| 3.546 | 14.56 | 201100 | 3.4416 | 1.0 |
| 3.4435 | 14.57 | 201200 | 3.4404 | 1.0 |
| 3.3237 | 14.57 | 201300 | 3.4408 | 1.0 |
| 3.3347 | 14.58 | 201400 | 3.4420 | 1.0 |
| 4.5461 | 14.59 | 201500 | 3.4420 | 1.0 |
| 3.3307 | 14.59 | 201600 | 3.4430 | 1.0 |
| 3.3899 | 14.6 | 201700 | 3.4439 | 1.0 |
| 3.2613 | 14.61 | 201800 | 3.4435 | 1.0 |
| 3.2693 | 14.62 | 201900 | 3.4426 | 1.0 |
| 3.3621 | 14.62 | 202000 | 3.4430 | 1.0 |
| 3.4383 | 14.63 | 202100 | 3.4434 | 1.0 |
| 3.5096 | 14.64 | 202200 | 3.4444 | 1.0 |
| 3.3962 | 14.65 | 202300 | 3.4445 | 1.0 |
| 3.3854 | 14.65 | 202400 | 3.4441 | 1.0 |
| 3.3116 | 14.66 | 202500 | 3.4445 | 1.0 |
| 3.3691 | 14.67 | 202600 | 3.4445 | 1.0 |
| 3.3821 | 14.67 | 202700 | 3.4440 | 1.0 |
| 3.2872 | 14.68 | 202800 | 3.4431 | 1.0 |
| 3.3575 | 14.69 | 202900 | 3.4431 | 1.0 |
| 3.2881 | 14.7 | 203000 | 3.4435 | 1.0 |
| 3.4115 | 14.7 | 203100 | 3.4440 | 1.0 |
| 3.3814 | 14.71 | 203200 | 3.4439 | 1.0 |
| 3.3609 | 14.72 | 203300 | 3.4435 | 1.0 |
| 3.3261 | 14.73 | 203400 | 3.4430 | 1.0 |
| 3.2983 | 14.73 | 203500 | 3.4435 | 1.0 |
| 3.3094 | 14.74 | 203600 | 3.4431 | 1.0 |
| 3.2582 | 14.75 | 203700 | 3.4431 | 1.0 |
| 3.2963 | 14.75 | 203800 | 3.4435 | 1.0 |
| 3.361 | 14.76 | 203900 | 3.4435 | 1.0 |
| 3.2636 | 14.77 | 204000 | 3.4440 | 1.0 |
| 3.2908 | 14.78 | 204100 | 3.4439 | 1.0 |
| 3.4743 | 14.78 | 204200 | 3.4445 | 1.0 |
| 3.2633 | 14.79 | 204300 | 3.4444 | 1.0 |
| 3.6696 | 14.8 | 204400 | 3.4440 | 1.0 |
| 3.4295 | 14.8 | 204500 | 3.4439 | 1.0 |
| 3.2838 | 14.81 | 204600 | 3.4439 | 1.0 |
| 3.285 | 14.82 | 204700 | 3.4439 | 1.0 |
| 3.2501 | 14.83 | 204800 | 3.4443 | 1.0 |
| 3.2872 | 14.83 | 204900 | 3.4443 | 1.0 |
| 3.3486 | 14.84 | 205000 | 3.4443 | 1.0 |
| 3.2943 | 14.85 | 205100 | 3.4443 | 1.0 |
| 3.2908 | 14.86 | 205200 | 3.4438 | 1.0 |
| 4.0962 | 14.86 | 205300 | 3.4443 | 1.0 |
| 3.2306 | 14.87 | 205400 | 3.4433 | 1.0 |
| 3.4682 | 14.88 | 205500 | 3.4433 | 1.0 |
| 3.2785 | 14.88 | 205600 | 3.4438 | 1.0 |
| 3.4161 | 14.89 | 205700 | 3.4438 | 1.0 |
| 3.299 | 14.9 | 205800 | 3.4438 | 1.0 |
| 3.3116 | 14.91 | 205900 | 3.4438 | 1.0 |
| 3.3456 | 14.91 | 206000 | 3.4439 | 1.0 |
| 3.263 | 14.92 | 206100 | 3.4439 | 1.0 |
| 3.4408 | 14.93 | 206200 | 3.4444 | 1.0 |
| 3.3478 | 14.94 | 206300 | 3.4443 | 1.0 |
| 3.1718 | 14.94 | 206400 | 3.4438 | 1.0 |
| 3.2811 | 14.95 | 206500 | 3.4439 | 1.0 |
| 3.4132 | 14.96 | 206600 | 3.4439 | 1.0 |
| 3.2337 | 14.96 | 206700 | 3.4439 | 1.0 |
| 3.3859 | 14.97 | 206800 | 3.4439 | 1.0 |
| 3.3501 | 14.98 | 206900 | 3.4439 | 1.0 |
| 3.5111 | 14.99 | 207000 | 3.4439 | 1.0 |
| 3.5375 | 14.99 | 207100 | 3.4439 | 1.0 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
MhF/xlm-roberta-base-finetuned-panx-en
|
MhF
| 2022-02-17T03:40:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6807563959955506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- F1: 0.6808
## Model description
More information needed
## Intended uses & limitations
More information needed
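Since usage is not yet documented, here is a minimal inference sketch, assuming the standard Hugging Face Transformers token-classification pipeline (the example sentence is purely illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the token-classification pipeline.
token_classifier = pipeline(
    "token-classification",
    model="MhF/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge subword predictions into entity spans
)

print(token_classifier("Jeff Dean works for Google in Mountain View, California."))
```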
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
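A hedged sketch of how these settings map onto `TrainingArguments` (the output directory is a placeholder; data preparation and the `Trainer` call are omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-en",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
    # so they need no explicit override.
)
```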
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1038 | 1.0 | 50 | 0.5255 | 0.5331 |
| 0.4922 | 2.0 | 100 | 0.4377 | 0.6379 |
| 0.3664 | 3.0 | 150 | 0.3856 | 0.6808 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MhF/xlm-roberta-base-finetuned-panx-it
|
MhF
| 2022-02-17T03:37:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8213114754098361
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2491
- F1: 0.8213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
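The card metadata names the PAN-X.it subset of XTREME; a minimal sketch of loading it with the `datasets` library (an assumption about how the data was obtained, since the card does not document it):

```python
from datasets import load_dataset

panx_it = load_dataset("xtreme", "PAN-X.it")
print(panx_it)                          # DatasetDict with train/validation/test splits
print(panx_it["train"][0]["tokens"])    # pre-tokenized words
print(panx_it["train"][0]["ner_tags"])  # integer NER labels
```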
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8192 | 1.0 | 70 | 0.3300 | 0.7184 |
| 0.2949 | 2.0 | 140 | 0.2817 | 0.7959 |
| 0.189 | 3.0 | 210 | 0.2491 | 0.8213 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MhF/xlm-roberta-base-finetuned-panx-fr
|
MhF
| 2022-02-17T03:34:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8353494623655915
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2736
- F1: 0.8353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5826 | 1.0 | 191 | 0.3442 | 0.7888 |
| 0.2669 | 2.0 | 382 | 0.2848 | 0.8326 |
| 0.1818 | 3.0 | 573 | 0.2736 | 0.8353 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MhF/xlm-roberta-base-finetuned-panx-de-fr
|
MhF
| 2022-02-17T03:30:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1576
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
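The model name suggests joint training on the German and French PAN-X subsets; a hedged sketch of building such a combined training split with `datasets` (purely an assumption, since the card does not document its data):

```python
from datasets import load_dataset, concatenate_datasets

panx_de = load_dataset("xtreme", "PAN-X.de")
panx_fr = load_dataset("xtreme", "PAN-X.fr")

# One possible way to form a joint de+fr training split.
train_de_fr = concatenate_datasets([panx_de["train"], panx_fr["train"]]).shuffle(seed=42)
print(train_de_fr)
```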
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2924 | 1.0 | 715 | 0.1819 | 0.8286 |
| 0.1503 | 2.0 | 1430 | 0.1580 | 0.8511 |
| 0.0972 | 3.0 | 2145 | 0.1576 | 0.8571 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MhF/xlm-roberta-base-finetuned-panx-de
|
MhF
| 2022-02-17T03:21:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.862053266560437
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1354
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
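Fine-tuning XLM-R for token classification normally requires aligning word-level NER tags with subword tokens; a minimal sketch of that preprocessing step (the helper name and the numeric labels are assumptions, not taken from this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize_and_align(words, word_labels):
    """Tokenize pre-split words and copy each word's label onto its sub-tokens."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    labels = []
    for word_id in enc.word_ids():
        # Special tokens get -100 so the loss ignores them.
        labels.append(-100 if word_id is None else word_labels[word_id])
    enc["labels"] = labels
    return enc

print(tokenize_and_align(["Angela", "Merkel", "besucht", "Paris"], [1, 2, 0, 5])["labels"])
```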
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.254 | 1.0 | 525 | 0.1652 | 0.8254 |
| 0.1293 | 2.0 | 1050 | 0.1431 | 0.8489 |
| 0.0797 | 3.0 | 1575 | 0.1354 | 0.8621 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-21_13_10
|
ali2066
| 2022-02-16T20:15:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_all_16_02_2022-21_13_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_all_16_02_2022-21_13_10
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3057
- Precision: 0.2857
- Recall: 0.4508
- F1: 0.3497
- Accuracy: 0.8741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
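The precision, recall, and F1 reported above are the usual entity-level scores for token classification; a minimal sketch of how such scores are computed with the `seqeval` package (an assumption, since the card does not name its metric implementation, and the BIO labels shown are made up):

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Illustrative gold and predicted BIO sequences.
y_true = [["B-Claim", "I-Claim", "O", "B-Premise", "I-Premise", "O"]]
y_pred = [["B-Claim", "I-Claim", "O", "O", "B-Premise", "O"]]

print(precision_score(y_true, y_pred))
print(recall_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
```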
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3018 | 0.2097 | 0.2546 | 0.2300 | 0.8727 |
| No log | 2.0 | 60 | 0.2337 | 0.3444 | 0.3652 | 0.3545 | 0.9024 |
| No log | 3.0 | 90 | 0.2198 | 0.3463 | 0.3869 | 0.3655 | 0.9070 |
| No log | 4.0 | 120 | 0.2112 | 0.3757 | 0.4405 | 0.4056 | 0.9173 |
| No log | 5.0 | 150 | 0.2131 | 0.4163 | 0.5126 | 0.4595 | 0.9212 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_0.0002_editorials_16_02_2022-21_07_38
|
ali2066
| 2022-02-16T20:08:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_editorials_16_02_2022-21_07_38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_editorials_16_02_2022-21_07_38
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1146
- Precision: 0.4662
- Recall: 0.4718
- F1: 0.4690
- Accuracy: 0.9773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0756 | 0.2960 | 0.4505 | 0.3573 | 0.9775 |
| No log | 2.0 | 30 | 0.0626 | 0.3615 | 0.4231 | 0.3899 | 0.9808 |
| No log | 3.0 | 45 | 0.0602 | 0.4898 | 0.5275 | 0.5079 | 0.9833 |
| No log | 4.0 | 60 | 0.0719 | 0.5517 | 0.5275 | 0.5393 | 0.9849 |
| No log | 5.0 | 75 | 0.0754 | 0.5765 | 0.5385 | 0.5568 | 0.9849 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_3e-05_editorials_16_02_2022-21_06_22
|
ali2066
| 2022-02-16T20:07:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_3e-05_editorials_16_02_2022-21_06_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_editorials_16_02_2022-21_06_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1060
- Precision: 0.2003
- Recall: 0.1154
- F1: 0.1464
- Accuracy: 0.9712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0897 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 2.0 | 30 | 0.0798 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 3.0 | 45 | 0.0743 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 4.0 | 60 | 0.0707 | 0.0741 | 0.0110 | 0.0191 | 0.9802 |
| No log | 5.0 | 75 | 0.0696 | 0.2727 | 0.1648 | 0.2055 | 0.9805 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
|
ali2066
| 2022-02-16T20:06:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_editorials_16_02_2022-21_05_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1114
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0921 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 2.0 | 30 | 0.0816 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 3.0 | 45 | 0.0781 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 4.0 | 60 | 0.0746 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
| No log | 5.0 | 75 | 0.0737 | 0.08 | 0.0110 | 0.0193 | 0.9801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_2e-05_essays_16_02_2022-21_01_51
|
ali2066
| 2022-02-16T20:02:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_essays_16_02_2022-21_01_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_essays_16_02_2022-21_01_51
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2525
- Precision: 0.3997
- Recall: 0.5117
- F1: 0.4488
- Accuracy: 0.9115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4652 | 0.1528 | 0.3588 | 0.2144 | 0.7851 |
| No log | 2.0 | 22 | 0.3646 | 0.2913 | 0.4847 | 0.3639 | 0.8521 |
| No log | 3.0 | 33 | 0.3453 | 0.3789 | 0.5611 | 0.4523 | 0.8708 |
| No log | 4.0 | 44 | 0.3270 | 0.3673 | 0.5496 | 0.4404 | 0.8729 |
| No log | 5.0 | 55 | 0.3268 | 0.4011 | 0.5725 | 0.4717 | 0.8760 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_0.0002_webDiscourse_16_02_2022-21_00_50
|
ali2066
| 2022-02-16T20:01:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_webDiscourse_16_02_2022-21_00_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_webDiscourse_16_02_2022-21_00_50
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5530
- Precision: 0.0044
- Recall: 0.0182
- F1: 0.0071
- Accuracy: 0.7268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.7051 | 0.0645 | 0.0323 | 0.0430 | 0.4465 |
| No log | 2.0 | 20 | 0.6928 | 0.0476 | 0.0161 | 0.0241 | 0.5546 |
| No log | 3.0 | 30 | 0.6875 | 0.0069 | 0.0484 | 0.0120 | 0.5533 |
| No log | 4.0 | 40 | 0.6966 | 0.0064 | 0.0323 | 0.0107 | 0.5832 |
| No log | 5.0 | 50 | 0.7093 | 0.0061 | 0.0323 | 0.0102 | 0.5742 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
|
ali2066
| 2022-02-16T20:00:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5450
- Precision: 0.0049
- Recall: 0.0146
- F1: 0.0074
- Accuracy: 0.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6830 | 0.0109 | 0.0323 | 0.0163 | 0.5685 |
| No log | 2.0 | 20 | 0.7187 | 0.0256 | 0.0323 | 0.0286 | 0.5668 |
| No log | 3.0 | 30 | 0.6839 | 0.0076 | 0.0484 | 0.0131 | 0.5848 |
| No log | 4.0 | 40 | 0.6988 | 0.0092 | 0.0484 | 0.0155 | 0.5918 |
| No log | 5.0 | 50 | 0.7055 | 0.0100 | 0.0484 | 0.0165 | 0.5946 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|