amuse-tests
This package installs the Astrophysical Multipurpose Software Environment (AMUSE).
amuse-tutorial
This package installs the tutorial for the Astrophysical Multipurpose Software Environment (AMUSE).
amuse-twobody
This package installs the twobody community code for AMUSE.
amuse-vader
This package installs the VADER community code for AMUSE.
amusing
aMusing v0.1.1

Contents: Installation, Introduction (Score, Munim), Example Code, Inspiration, Examples

Installation

    pip install amusing

Introduction

Programmatic animation of sheet music: notes appearing consecutively. Uses MuseScore as notation software.

- Score: generates full-resolution frames of the video, not synchronized to audio.
- Munim:
  - spectogram.py: STFT (linear spectrum), Morlet (DWT, logarithmic spectrum)
  - oscilloscope.py: 2D oscilloscope

Example Code

Score. Get the individual frames of the score into the output directory outdir:

    from amusing.score.animate import Amusing, Note

    if __name__ == '__main__':
        WIDTH_IN_PIXELS: int = 1820
        NUMBER_OF_THREADS: int = 8
        MUSESCORE_FILEPATH: str = 'score.mscx'

        amusing = Amusing(width=WIDTH_IN_PIXELS,
                          outdir='frames',
                          threads=NUMBER_OF_THREADS)
        amusing.read_score(MUSESCORE_FILEPATH)
        amusing.add_job(measures=1, subdivision=Note(16))
        amusing.add_job(measures=[2, 3], subdivision=Note(4).triplet())
        amusing.add_job(measures=range(4, 7), subdivision=Note(8).n_tuplet(5, 4))
        amusing.generate_frames()

Delete all jobs:

    amusing.delete_jobs()

Munim (Music Animation). Render a video of the frequency spectrum using a Morlet wavelet:

    from amusing.munim.spectogram import Morlet

    AUDIO_FILEPATH: str = 'example.mp3'
    TO_VIDEO_FILEPATH: str = 'example.mp4'

    morlet = Morlet(fps, width, height)
    morlet.read_audio(AUDIO_FILEPATH)
    morlet.transform()
    morlet.render_video(TO_VIDEO_FILEPATH)

Using the Short-time Fourier Transform (STFT):

    from amusing.munim.spectogram import STFT

    stft = STFT(fps, width, height)
    stft.read_audio(AUDIO_FILEPATH)
    stft.transform()
    stft.render_video(TO_VIDEO_FILEPATH)

2D Oscilloscope:

    from amusing.munim.oscilloscope import Oscilloscope

    oscilloscope = Oscilloscope(fps, width)
    oscilloscope.read_audio(AUDIO_FILEPATH)
    oscilloscope.render_video(TO_VIDEO_FILEPATH)

Inspiration

Examples
amusing-app
🎧 Amusing 🎸

A CLI to help download music independently or from your exported Apple Music library.

Why should you use Amusing?
- To download your entire Apple Music library and store it locally in one go
- To search and download individual songs from YouTube
- To keep track of your ever-growing music collection

🛠️ Install it!

    $ pip install amusing-app

✨ Getting set up

There are three things to know before moving on to the next section:

1. The CLI takes in an appconfig.yaml file similar to what's indicated in appconfig.example.yaml. You can simply rename it. The file looks like this:

       root_download_path: "..."
       db_name: "..."

2. A dedicated SQLite database called db_name will be created at root_download_path/db_name.db to store two tables, Song and Album, as defined in amusing/db/models.py. Every song downloaded locally gets a row in the Song table, and its corresponding album gets a row in the Album table.

3. The songs are downloaded into the root_download_path/songs directory.

That's it. You're done. Let's look at the available commands next.

💬 Available commands

There are currently 6 commands available, excluding amusing --version. The first time you run a command (e.g. --help), an Amusing directory will be created in your pathlib.Path.home()/Downloads folder. For example, on macOS, that's /Users/Username/Downloads.

    $ amusing --help
    Created a new config file: /Users/username/Downloads/Amusing/appconfig.yaml
    Usage: amusing [OPTIONS] COMMAND [ARGS]...

      Amusing CLI to help download music independently or from your exported
      apple music library.

    Options:
      --version  -v
      --help         Show this message and exit.

    Commands:
      download           Parse the entire AM library and download songs and
                         make/update the db as needed.
      showsimilar        Look up the db and show if similar/exact song(s) are found.
      showsimilaralbum   Look up the db and show albums similar to the album searched.
      showsimilarartist  Look up the db and show songs for similar/exact artist searched.
      song               Search and download the song and add it to the db. Use
                         --force to overwrite the existing song in the db. Creates
                         a new album if not already present.

To parse an exported Library.xml file from your Apple Music account, use:

    $ amusing download --help
    Usage: amusing download [OPTIONS] [PATH]

      Parse the entire AM library and download songs and make/update the db as needed.

    Arguments:
      path  [PATH]  The path to the Library.xml exported from Apple Music.
                    [default: ./Library.xml]

    Options:
      --help  Show this message and exit.

    # Example
    $ amusing download "your/path/to/Library.xml"

To download a song individually, use:

    $ amusing song --help
    Usage: amusing song [OPTIONS] NAME ARTIST ALBUM

      Search and download the song and add it to the db. Use --force to overwrite
      the existing song in the db. Creates a new album if not already present.

    Arguments:
      *  name    TEXT  Name of the song.           [default: None] [required]
      *  artist  TEXT  Artist of the song.         [default: None] [required]
      *  album   TEXT  Album the song belongs to.  [default: None] [required]

    Options:
      --force / --no-force  Overwrite the song if present.  [default: no-force]
      --help                Show this message and exit.

    # Example; the search keywords need not be exact, of course:
    $ amusing song "Run" "One Republic" "Human"

Search for a similar song, album, or artist in your db/downloads:

    $ amusing showsimilar "Someday"
    Song to look up: someday
    Song                  | Artist        | Album
    ----------------------+---------------+--------------------------------------
    Someday               | OneRepublic   | Human (Deluxe)
    Someday At Christmas  | Justin Bieber | Under the Mistletoe (Deluxe Edition)

    $ amusing showsimilarartist "OneRepublic"
    Artist to look up: OneRepublic
    Song             | Artist      | Album
    -----------------+-------------+---------------------------------------------------
    Run              | OneRepublic | Human (Deluxe)
    Someday          | OneRepublic | Human (Deluxe)
    No Vacancy       | OneRepublic | No Vacancy - Single
    RUNAWAY          | OneRepublic | RUNAWAY - Single
    Sunshine         | OneRepublic | Sunshine - Single
    I Ain't Worried  | OneRepublic | Top Gun: Maverick (Music from the Motion Picture)
    West Coast       | OneRepublic | West Coast - Single

    $ amusing showsimilaralbum "Human"
    Album to look up: Human
    Album           | Number of songs
    ----------------+----------------
    Human (Deluxe)  | 2

TODO 📝
- Provide an option to choose which searched result is downloaded.
- Provide a command to show all songs in an album.
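The Song/Album layout described above (one row per downloaded song, one per album) can be sketched with the standard-library sqlite3 module. The real schema lives in amusing/db/models.py; the exact columns below are illustrative assumptions, not the package's actual schema:

```python
import sqlite3

# Illustrative sketch of the Song/Album tables described above.
# Column names are assumptions; see amusing/db/models.py for the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE album (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        artist TEXT
    );
    CREATE TABLE song (
        id       INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        artist   TEXT,
        album_id INTEGER REFERENCES album(id)
    );
""")

# Downloading a song adds a row for it, plus a row for its album if new.
cur = conn.execute("INSERT INTO album (name, artist) VALUES (?, ?)",
                   ("Human (Deluxe)", "OneRepublic"))
album_id = cur.lastrowid
conn.execute("INSERT INTO song (name, artist, album_id) VALUES (?, ?, ?)",
             ("Someday", "OneRepublic", album_id))
conn.commit()

rows = conn.execute(
    "SELECT s.name, a.name FROM song s JOIN album a ON s.album_id = a.id"
).fetchall()
print(rows)  # [('Someday', 'Human (Deluxe)')]
```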
amuzapdf
This is the home page of our project. Hello! Our project is for practicing how to publish a package. Thank you!
amv7
Automation v7

A Python package which contains the Automation v7 tool for bypassing PUBGM security.

Usage

To start Automation v7 from any location, just run:

    amv7
amvernoncal
This Python package is great for taking Arthur Murray Vernon's Google Calendar events and arranging them in a calendar structure in an Excel file, which can then be copied and pasted into Microsoft Office Publisher to create a printable PDF calendar.

For those who want to go from the printable PDF calendars to a digital one, you're in luck! I use machine learning to parse printable PDF calendars and create JSONs out of them, where each event has a title, dance_style, and time (if applicable), ripe for creating Google Calendar events from them.

While this project is geared towards use at Arthur Murray Dance Studios, feel free to take a look at the source code and modify it for your own calendar's needs. Have fun!

Setup from source code (GitHub)
1. Clone the repo: git clone https://github.com/vincentchov/amvernon-cal.git
2. Install Python 3.x with pip.
3. Install Java 8.
4. Create and activate a virtual environment.
5. Install the corpora: python -m textblob.download_corpora
6. Install dependencies: pip install -r requirements.txt
7. Profit!

Setup from PyPI (pip)
1. Follow steps 2-5 from above.
2. Install amvernoncal from PyPI: pip install amvernoncal

How to go from Google Calendar to an Excel file
1. Activate the Google Calendar API for your account and obtain your client_secret.json file.
2. Activate your virtual environment.
3. Import the module that will use your client secret: from amvernoncal.gcal_to_xlsx import gcal_events_to_xlsx
4. Give the gcal_events_to_xlsx() function a month and year to search, plus the name of the Google Calendar you're converting from, making sure to surround each of the two arguments with quotes. Example: gcal_events_to_xlsx('September2017', 'Classes')

That will then create 3 folders: JSONs, PDFs, and Output. Your Excel file will be in the Output folder. Alternatively, you can invoke gcal_events_to_xlsx() directly in the terminal using amvernon_gcal_to_xlsx, which comes with a help screen, thanks to Docopt.

How to go from a printable PDF calendar to a JSON
1. Follow steps 1 and 2 from above.
2. Import the function that will parse your calendar: from amvernoncal.pdfproc.pdf_to_json import parse_calendar
3. Give the parse_calendar() function a path to your calendar, named based on the month and year, and tell it whether you want to save to a JSON file or just return the JSON. Example: parse_calendar('september_2017.pdf', to_file=True)
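The parsed events described above each carry a title, dance_style, and optional time. A minimal sketch of consuming JSON of that shape (the field names follow the description; the sample values are invented):

```python
import json

# Sample of the event shape described above: title, dance_style,
# and time (if applicable). The concrete values here are invented.
calendar_json = json.dumps([
    {"title": "Newcomer Group Class", "dance_style": "Waltz", "time": "7:00 PM"},
    {"title": "Practice Party", "dance_style": None, "time": None},
])

events = json.loads(calendar_json)

# Keep only events with a concrete time, e.g. to build
# Google Calendar entries from them.
timed = [e for e in events if e["time"]]
print([e["title"] for e in timed])  # ['Newcomer Group Class']
```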
am-viewer
Audio Metadata (AM) Viewer

Overview

This is a Python-based real-time viewer for Serialized ADM inside a SMPTE ST 2110 container. Two flavours of SMPTE ST 2110 are supported: SMPTE ST 2110-31 and -41. It is designed to view the Serialized ADM metadata produced by PMD Studio, available at https://github.com/DolbyLaboratories/pmd_tool. Once running, the viewer can detect changes in the metadata and display them in real time. The purpose of the viewer is to demonstrate the real-time capabilities of the various formats by allowing the dynamic behaviour to be viewed. The viewer also supports static display of the underlying XML representation.

The user interface has three main sections:
- Audio Beds
- Audio Objects
- Presentations

The audio beds section provides a list of the main audio scenes available. These will normally be channel based and will have a configuration such as 5.1, or 5.1.4 for 5.1 with 4 overhead channels. Normally there will only be a single bed, but certain use-cases make use of two beds, such as home and away crowd sounds for a live sports broadcast.

If the audio bed does not contain the complete audio program and only contains music and effects, then additional objects will be required and will be listed in the middle section of the user interface. This will normally include dialogue objects, which is very common where multiple language support is required. Objects that specify divergence signal to the downstream renderer that the object should be rendered as two discrete sound sources left and right of the intended position. This effect is sometimes used for dialogue objects.

The list of presentations represents a set of configurations that could be made available to the user. Each presentation specifies a bed and a selection of objects or elements to be included. Each presentation has a name, and the language used for the name is specified, as well as the language of the audio itself, which is specified as the presentation language. Each presentation can be monitored by pressing the buttons on the left-hand side. If the buttons are not enabled, then GStreamer has not been detected as being installed. Pressing a button twice switches the audio off. The audio playback will react to changes in gain and object X position. Changes in object Y and Z position cannot be detected, as only stereo playback is supported.

General System Requirements
- macOS, Windows, and Linux supported.
- Python 3 (tested with Python v3.8.1 from python.org)
- Python modules (install using pip): scapy (2.5.0 or later), zeroconf (0.26.3 or earlier)

Windows Requirements
- Npcap (https://nmap.org/npcap/)

Mac OS Requirements
- May require the use of the "-libpcap" switch.
- If using Homebrew, then Tkinter must be installed via: brew install python-tk

Linux Requirements

As well as Python, some distros may require Tkinter to be installed, e.g. on Ubuntu:

    sudo apt-get install python3-tk

Before running, allow Python and tcpdump to open raw sockets. Failure to do this will result in permission problems when packet reception is attempted:

    setcap cap_net_raw=eip <python executable>
    setcap cap_net_raw=eip <tcpdump executable>   (normally /usr/sbin/tcpdump)

Audio Playback Support System Requirements

To enable the audio playback feature, GStreamer must be installed. This will be detected on startup and the presentation selection buttons will be enabled. Make sure that gst-launch-1.0 is in the path.

Windows GStreamer installation: see https://gstreamer.freedesktop.org/documentation/installing/on-windows.html and https://gstreamer.freedesktop.org/data/pkg/windows/. Any package type should work, although msvs was used for testing.

Mac OS GStreamer installation: the easiest way is to install Homebrew and then use: brew install gstreamer gst-plugins-base gst-plugins-good

Linux (Ubuntu) installation: Ubuntu generally comes with GStreamer preinstalled. If not, then: sudo apt-get install gstreamer1.0-tools

Installation

    pip3 install am_viewer

Instructions

If installed, type am_viewer. If not installed, execute the run script.

The viewer can use either an XML file or an IP stream as input. To use a file as input, use the -xml option to specify the filename. If a stream is used as input, then there are two possible ways for the viewer to obtain RTP session information: either an SDP file can be provided using the -sdp option, or the stream can be discovered using the Ravenna discovery protocol (Bonjour & RTSP).

If XML file mode is used, the display will immediately update based on the XML file. The only control at this point is to quit the application. This is essentially a debug mode.

If stream mode is used, then a list of interfaces and a list of available services will be provided on the bottom option bar. Once the appropriate interface and service have been selected, the 'Run' button can be pressed to start monitoring the stream. If more than one service is available, then the service list can be used to switch between services. If the '-sdp' option was used, then one option in the service list should correspond to the SDP file.

When receiving a stream, the 'XML' button can be pressed at any time to yield a static XML snapshot. The XML tree can be explored by expanding the various levels of the tree in the XML viewer windows.

Several indicators on the bottom bar show status. The PMD indicator will be green when receiving PMD and grey when not. If an error is received, the PMD indicator will flash, or stay red for continuous errors. The SADM and AES-X242 indicators work in the same way. The subframe-mode / frame-mode indicators show whether the received stream is using the frame mode or subframe mode of the SMPTE ST 337 container format inside the SMPTE 2110-31 stream. This indicator is not applicable for AES-X242 streams, which do not use SMPTE frames.

Known Limitations
- Only sADM full frame mode is supported.
- Both gzip and plain XML sADM modes are supported, but only gzip has been thoroughly tested.
- When playing back presentations that include VDS (Audio Description) dialogue objects, the main audio is not ducked as it should be; instead everything is statically mixed. Adding the ducking would require a new custom GStreamer plug-in, so this is unlikely to be fixed in the next version.

License

AM Viewer Copyright (c) 2023, Dolby Laboratories Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
amvvmath
No description available on PyPI.
amwater
amwater: Alert CLI for American Water

Table of contents: Installation, Getting started, amwater setup, amwater check

Installation

This assumes you have Python and pip installed on your system; you can test this by going to the terminal (or Windows command prompt) and trying python and then pip list. amwater only supports Python v3.4 or higher.

You can install amwater using one of two methods:

    pip install amwater

or:

    git clone https://github.com/samapriya/amwater.git
    cd amwater
    python setup.py install

For Linux, use sudo or try pip install amwater --user. I recommend installation within a virtual environment; find more information on creating virtual environments here.

Getting started

As usual, to print help:

    amwater -h
    usage: amwater [-h] {setup,check} ...

    Alert CLI for American Water

    positional arguments:
      {setup,check}
        setup      Setup default address and optional (slack webhook)
        check      Check for any American Water issued alerts for given address

    optional arguments:
      -h, --help   show this help message and exit

To obtain help for specific functionality, simply call it with the help switch, e.g. amwater check -h. If you didn't install amwater, you can run it by going to the amwater directory and running python amwater.py [arguments go here].

American Water releases alerts for things like pipe repairs and boil-water orders, among other things, for Illinois and the areas it serves within the state. This tool is focused on letting the user quickly check whether a given address has any alerts issued within a given number of days (default: last 1 day). Since there is no current API to fetch this information, standard XML is parsed, and a geocoding API endpoint from OpenStreetMap is used to confirm a geometry match.

amwater setup

This allows you to save your default address. It also allows you to save a Slack webhook, which can be used to send messages in case there is an actual alert. This does require setting up a Slack bot and enabling an incoming webhook, and is an experimental feature of the tool.

    amwater setup

amwater check

This allows you to check any address for alerts issued by American Water within a given number of days. The number of days is an optional argument; the tool uses 1 day as the default alert window. The function also allows you to optionally pass the Slack webhook URL in case it was not set during setup, but this is completely optional.

    amwater check -h
    usage: amwater check [-h] [--address ADDRESS] [--days DAYS] [--webhook WEBHOOK]

    optional arguments:
      -h, --help         show this help message and exit

    Optional named arguments:
      --address ADDRESS  Your address
      --days DAYS        Number of days to check for alert; default is 1 day
      --webhook WEBHOOK  Slack webhook to send alert link (experimental)

If you have used the setup tool to set a default address, you can run the tool as is: amwater check. Other setups can be as follows:

    amwater check --address "Hessel Boulevard,Champaign,IL"

With a Slack webhook, in case you didn't save it using the setup tool:

    amwater check --address "Hessel Boulevard,Champaign,IL" --webhook "https://hooks.slack.com/services/T6U1JC/BVL5/Lw8uEYNWX4D7"

Known issues
- Geometry search for this application depends on the responsiveness of the alert URL; while this usually works, it sometimes fails with the server not returning an expected result.
- This tool can be used without the Slack webhook functionality to run spot checks (I will include a tutorial about setting up Slack bots and webhooks later at some point).

Changelog

v0.0.2
- Added date-time parser to get date and time in 12-hour format
- Integrated Slack notification and webhook blocks
- General improvements to error handling
- Overall improvements
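The core check, keeping only alerts issued within the last N days (default 1), can be sketched with the standard library. The alert structure below is an invented stand-in for the parsed XML, not amwater's internal format:

```python
from datetime import datetime, timedelta

def recent_alerts(alerts, days=1, now=None):
    """Return alerts issued within the last `days` days (default 1),
    mirroring amwater's default alert window. `alerts` is a list of
    (timestamp, text) pairs; this shape is an assumption for the sketch."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [(ts, text) for ts, text in alerts if ts >= cutoff]

now = datetime(2021, 6, 10, 12, 0)
alerts = [
    (datetime(2021, 6, 10, 8, 0), "Boil order: Hessel Boulevard"),
    (datetime(2021, 6, 7, 9, 0), "Pipe repair: Main St"),
]
# Only the alert from this morning survives the default 1-day window;
# widening the window to 7 days keeps both.
print(recent_alerts(alerts, days=1, now=now))
```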
amway-eap-migration
No description available on PyPI.
amway-eap-packagerit
No description available on PyPI.
amwds
amwds

Add a short description here!

A longer description of your project goes here...

Installation

In order to set up the necessary environment:

1. Review and uncomment what you need in environment.yml and create an environment amwds with the help of conda:

       conda env create -f environment.yml

2. Activate the new environment with:

       conda activate amwds

NOTE: The conda environment will have amwds installed in editable mode. Some changes, e.g. in setup.cfg, might require you to run pip install -e . again.

Optional and needed only once after git clone:

3. Install several pre-commit git hooks with:

       pre-commit install
       # You might also want to run `pre-commit autoupdate`

   and check out the configuration under .pre-commit-config.yaml. The -n, --no-verify flag of git commit can be used to deactivate pre-commit hooks temporarily.

4. Install nbstripout git hooks to remove the output cells of committed notebooks with:

       nbstripout --install --attributes notebooks/.gitattributes

   This is useful to avoid large diffs due to plots in your notebooks. A simple nbstripout --uninstall will revert these changes.

Then take a look into the scripts and notebooks folders.

Dependency Management & Reproducibility

1. Always keep your abstract (unpinned) dependencies updated in environment.yml, and eventually in setup.cfg if you want to ship and install your package via pip later on.

2. Create concrete dependencies as environment.lock.yml for the exact reproduction of your environment with:

       conda env export -n amwds -f environment.lock.yml

   For multi-OS development, consider using --no-builds during the export.

3. Update your current environment with respect to a new environment.lock.yml using:

       conda env update -f environment.lock.yml --prune

Project Organization

├── AUTHORS.md              <- List of developers and maintainers.
├── CHANGELOG.md            <- Changelog to keep track of new features and fixes.
├── CONTRIBUTING.md         <- Guidelines for contributing to this project.
├── Dockerfile              <- Build a docker container with `docker build .`.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yml         <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── pyproject.toml          <- Build configuration. Don't change! Use `pip install -e .`
│                              to install for development or to build `tox -e build`.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the
│                              actual PYTHON_PKG, e.g. train_model.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- [DEPRECATED] Use `python setup.py develop` to install for
│                              development or `python setup.py bdist_wheel` to build.
├── src
│   └── amwds               <- Actual Python package where the main functionality goes.
├── tests                   <- Unit tests which can be run with `pytest`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.

Note: This project has been set up using PyScaffold 4.3.1 and the dsproject extension 0.7.2.
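For reference, a minimal environment.yml of the kind the template expects might look like the sketch below; everything other than the amwds name and the editable install is a placeholder to review and adjust:

```yaml
# Minimal sketch of an environment.yml for this template.
# Channel and dependency entries are illustrative placeholders.
name: amwds
channels:
  - conda-forge
dependencies:
  - python>=3.8
  - pip
  - pip:
      - -e .   # installs amwds in editable mode
```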
amwsistest
No description available on PyPI.
amw-theme
Ansible Middleware Sphinx Theme

A clean and modern Sphinx theme based on piccolo.
amxfirmware
amxfirmwareuse amxtelnet results to check master and device firmware against versions provided. asyncio: n/aMasterFirmware(firmware_dir):firmware_dir: Directory to store firmware lists that will be created.set_systems(systems):List of dicts, where each dict is an AMX system (or at least an AMX master controller).Required keys in each dict:full_name: string. Name that will end up being imported into your Netlinx address book.ip_address: string. Formatted like '192.168.1.1'master_model: string. Formatted like 'NI-700'master_serial: string. AMX serial number.set_versions(ni_700_current,ni_x100_current,nx_current):ni_700_current: string in this format: '1.23.456'. Length of segments does not matter. No letters.ni_x100_current: string in this format: '1.23.456'. Length of segments does not matter. No letters.nx_current: string in this format: '1.23.456'. Length of segments does not matter. No letters.run():Begins parsing information from the files located in input_path, then creates .csv files of each version that has updates. Masters that are already up to date will be put in a list named 'master fw up to date.csv'. Masters that didn't have all of the required keys will be put into 'master fw skipped.csv'. New k:v pairs named 'master_update_list' and 'master_update_info' will also be added to each system which is then bundled with the rest and returned in a list.DeviceFirmware(firmware_dir):firmware_dir: Directory to store firmware lists that will be created.set_systems(systems):List of dicts, where each dict is an AMX system (or at least an AMX master controller).Required keys in each dict:full_name: string. Name that will end up being imported into your Netlinx address book.ip_address: string. Formatted like '192.168.1.1'master_model: string. Formatted like 'NI-700'master_serial: string. AMX serial number.set_versions(ni_700_current,nx_current):ni_700_current: string in this format: '1.23.456'. Length of segments does not matter. No letters.nx_current: string in this format: '1.23.456'. Length of segments does not matter. No letters.run():Begins parsing information from the files located in input_path, then creates .csv files of each version that has updates. Controllers that are already up to date will be put in a list named 'device fw up to date.csv'. Controllers that didn't have all of the required keys will be put into 'device fw skipped.csv'. New k:v pairs named 'device_update_list' and 'device_update_info' will also be added to each system which is then bundled with the rest and returned in a list.
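The set_versions docs above describe version strings like '1.23.456': dot-separated integers, variable segment length, no letters. A minimal sketch of how such strings can be compared numerically; these helper names are illustrative and not part of the amxfirmware API:

```python
# Hypothetical helpers illustrating the '1.23.456' version format that
# amxfirmware's set_versions expects. Not the package's actual code.

def parse_fw_version(version):
    """Split '1.23.456' into a tuple of ints so comparisons are numeric."""
    return tuple(int(part) for part in version.split("."))

def needs_update(installed, current):
    """True when the installed firmware is older than the current release."""
    return parse_fw_version(installed) < parse_fw_version(current)

print(needs_update("4.1.419", "4.1.419"))   # up to date -> False
print(needs_update("3.60.453", "4.1.419"))  # older -> True
```

Numeric comparison matters here: a plain string compare would rank '10.0.0' below '9.0.0', since segment lengths can differ.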
amxlogs
amxlogsfinds, extracts, and clears logs from AMX devices using ftpLogSniffer():returns:logs are written to file and/or logged.set_systems():list of dicts where each dict is an AMX system.minimum key requirements:'full_name' (string)'master_ip' (string)config():user_name: user name used to login to AMXpassword: password used to login to AMXlog_type: default 'error_log', case insensitive.Also try 'camera_log'. Additional types depend on what you name them when you create them in the AMX program. So if you had AMX create logs called late_night_usage.txt, log_type would be 'late_night_usage'.output_dir: path to dir used to store received files.File name is created using 'full_name'amx logfile nameclear_logs: default False.Use True to delete the log files after they are downloaded.debug_ftp: default 0.Set to 1 to view ftplib's builtin debugger on stdout.timeout: default 10. Seconds to timeout socket connection.run():Begin connecting to systems in set_systems(), download logs that match log_type, using settings from config()
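config() above says the received file's name is built from the system's 'full_name' plus the AMX logfile name, written into output_dir. A small sketch of that naming scheme; the helper and the exact join are my assumption of the described behavior, not the package's actual code:

```python
import os

# Illustrative only: amxlogs documents that output file names combine the
# system's 'full_name' with the AMX log file name; the join below is a guess.
def log_output_path(output_dir, full_name, amx_log_name):
    filename = f"{full_name} {amx_log_name}"
    return os.path.join(output_dir, filename)

print(log_output_path("error log", "LectureHall-101", "error_log.txt"))
```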
amxmaintenance
amxmaintenancepulls together telnet info, log checking, firmware checks, etc. run() returns the full object for inspection, but what you'll most likely end up working with is instance.campus, which is a list of AMX system dicts.FullScan():telnet_output_dir: (default: amx telnet responses/)error_log_dir: (default: error log/)camera_log_dir: (default: camera log/)master_fw_dir: (default: firmware lists/)device_fw_dir: (default: firmware lists/)set_systems(systems):list of dicts where each dict is an AMX systemminimum key requirements:'full_name' (string)'master_ip' (string)config_excel():uses the amxtoexcel and exceltoamx packagessource_xlsx_path: default 'campus_rooms.xlsx'. This is the excel file that everything else depends on. Each AMX system row will be turned into a dictionary, and then they'll all be put into a list.bare minimum requirements for each system:'full_name''master_ip''master_model' in the format NI-700, NX-2200, etc.export_xlsx: default True. Do you want to export the results to an excel file?xlsx_output_path: default 'campus_complete.xlsx'. If export_xlsx is true, the results are written to this location.config_telnet():Uses the amxtelnet pkg. To send specific commands, use the amxtelnet pkg directly. Also checkout the amxbroadcast pkg (coming soon) if replies don't matter.telnet_user_name: user name for logging into AMX masters.telnet_password: password to use with telnet_user_name.telnet_alt_username: default: 'administrator',telnet_alt_password: default: 'password',scan_telnet: default: True.export_telnet_txt: default: True.def config_logs():Uses the amxlogs pkg. 
To check for other types of log files, use the amxlogs pkg directly.check_error_log: default: True.clear_error_logs: default: False.error_log_type: default: 'error_log'.check_camera_log: default: True.clear_camera_logs: default: False.camera_log_type: default: 'camera_log'.def config_firmware():ni_700_current: default: '4.1.419'.ni_x100_current: default: '4.1.419'.nx_current: default: '1.6.179'.run():Performs telnet scan of systems defined in set_systems, checks their firmware against the versions provided in config_firmware, checks for logs defined in config_logs, and exports the results to excel.Returns everything to the instance for inspection, but the part you will be working with is instance.campus, which is a list of dicts, with each dict being an AMX master controller.
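The set_systems() sections above require each system dict to carry at least 'full_name' and 'master_ip', and systems missing required keys end up in the skipped lists. A sketch of that pre-filtering step; the function is illustrative, not part of amxmaintenance:

```python
# Illustrative re-creation of the documented "skip systems missing
# required keys" behavior. Not the package's actual code.
REQUIRED_KEYS = ("full_name", "master_ip")

def split_systems(systems, required=REQUIRED_KEYS):
    """Separate well-formed system dicts from ones that would be skipped."""
    ok, skipped = [], []
    for system in systems:
        (ok if all(k in system for k in required) else skipped).append(system)
    return ok, skipped

campus = [
    {"full_name": "Room-101", "master_ip": "192.168.1.10"},
    {"full_name": "Room-102"},  # missing master_ip -> skipped
]
ok, skipped = split_systems(campus)
print(len(ok), len(skipped))  # 1 1
```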
amxtelnet
amxtelnetamx telnet functionsTelnet():Handles the actual telnet connection and communication with the AMX devices. This class is very universal, but if you run into issues using it on a non-AMX device, it may be caused by the modifications I had to make for AMX:Using the default telnetlib.py with AMX was causing an infinite handshake loop._write() lines 269, 270 disabled IAC (telnet negotiation) doubling.process_rawq() lines 373, 374 changed when raw chars go to buf when IAC.AMXConnect():returns:commands are sent to AMX masters. If desired, replies are written to file as .txt and/or logged.set_systems():list of dicts where each dict is an AMX system.minimum key requirements:'full_name' (string)'master_ip' (string)'master_model' (string) (NX-1200, NI-700, etc.)config():user_name: user name to login to AMXpassword: password to login to AMXalt_username: user name to use if user_name failsalt_password: password to use with alt_usernamewrite_results: True or False; write replies to individual .txt files per system.output_path: file path to use if write_results is True. This path should also be used for the path in ParseAMXResponse().set_requests(): list of strings to send to the AMX master. $0D is automatically appended.run(): Begin connecting to systems in set_systems(), sending requests from set_requests(), using settings from config()ParseAMXResponse():Use this class to parse the information gathered from AMXConnect().This class is less universal in that it expects the .txt files to contain responses to the following commands in the following order:'show device','get ip','program info','list'You can append additional commands as needed. The output of ParseAMXResponse().read_telnet_text() is a list of amx system dicts. Current uses of this list:export to excel using amxtoexcel.py to archive campus system statuscode creation using code_creator_django.py or code_creator_usm.pyAMXConnect.path and ParseAMXResponse.path will normally refer to the same location.
If you use the default locations in each class, they'll work together using systems/telnet responses/.If there are already .txt files in 'path', you can skip AMXConnect() and go straight to ParseAMXResponse() if using potentially outdated information is acceptable.
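ParseAMXResponse expects each .txt capture to contain the responses to 'show device', 'get ip', 'program info', and 'list' in that order. A toy illustration of that file layout, sectioning a capture on the echoed command names; the real parser lives in the package, and this splitter is only a sketch:

```python
# Toy splitter illustrating the capture layout ParseAMXResponse expects.
# Not the package's actual parsing code.
COMMANDS = ["show device", "get ip", "program info", "list"]

def split_response(text, commands=COMMANDS):
    """Section the capture on the echoed command names, in order."""
    sections, positions = {}, []
    for cmd in commands:
        idx = text.find(cmd)
        if idx == -1:
            raise ValueError(f"capture is missing the {cmd!r} response")
        positions.append((idx, cmd))
    positions.sort()
    for (start, cmd), nxt in zip(positions, positions[1:] + [(len(text), None)]):
        sections[cmd] = text[start + len(cmd):nxt[0]].strip()
    return sections

capture = "show device\nDevice 0\nget ip\nIP 192.168.1.10\nprogram info\nProg v1\nlist\nDone"
print(split_response(capture)["get ip"])  # IP 192.168.1.10
```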
amxtoexcel
amxtoexcelprovide list of amx system dicts and a .xlsx file path to exportrequires pandas
amy
AMY PluginAmy AssistantJust import amy to write your plugin.fromamyimportPlugin,Instance,MessageclassMessenger(FacebookClient,Instance):defonCreate(self,username):self.username=usernamedefonAuth(self,token):FacebookClient.__init__(self,self.username,token)defonStart(self):FacebookClient.listen()defonStop(self):FacebookClient.stopListening()defmyNewMessageFunc(message):unifiedMessage=Message().setPlatform('messenger')# .setUser(message['user'])# ...Plugin.publishMessange(unifiedMessage.toDict())defmain():Messenger=Plugin('messenger',Messenger)if__name__=="__main__":main()
amygda
Automated Mycobacterial Growth Detection Algorithm (AMyGDA)This is apython3module that takes a photograph of a 96 well plate and assesses each well for the presence of bacterial growth (hereMycobacterium tuberculosis). Since each well contains a different concentration of a different antibiotic, the minimum inhibitory concentration, as used in clinical microbiology, can be determined.Apaperdescribing the software and demonstrating its reproducibility and accuracy is available from Microbiology.The development of this software was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre (BRC) to aid theCRyPTIC project.Philip W [email protected] January 2020CitingPlease citeAutomated detection of bacterial growth on 96-well plates for high-throughput drug susceptibility testing of Mycobacterium tuberculosis Philip W Fowler, Ana Luiza Gibertoni Cruz, Sarah J Hoosdally, Lisa Jarrett, Emanuele Borroni, Matteo Chiacchiaretta, Priti Rathod, Sarah Lehmann, Nikolay Molodtsov, Clara Grazian, Timothy M Walker, Esther Robinson, Harald Hoffmann, Timothy EA Peto, Daniela Maria M. Cirillo, E Grace Smith, Derrick W Crook Microbiology (2018) 164:1522-1530 doi:10.1099/mic.0.000733InstallationThis is python3; python2 will not work. Installation is straightforward using the includedsetup.pyscript. First clone the repository (or download it directly from this GitHub page)$ git clone https://github.com/philipwfowler/amygda.gitThis will download the repository, creating a folder on your computer calledamygda/.
If you only wish to install the package in your$HOMEdirectory (or don't have sudo access) issue the--userflag$ cd amygda/ $ python setup.py install --userAlternatively, to install system-wide$ sudo python setup.py installThe setup.py will automatically look for the following required python packages and, if they are not present, will install them, or if they are an old version, will update them.The information below is only included in case this process does not work. The prerequisites arenumpyandscipy. Your python installation often includes numpy and scipy. To check, issue the following in a terminal$ python -c "import numpy" $ python -c "import scipy"If you see an error, indicatingnumpyand/orscipyis not installed, please install the scipy stack by followingthese instructions. -matplotlib. If your python installation includes numpy and scipy, there is a good chance it also includes matplotlib. Again to check$ python -c "import matplotlib"You can find installation instructionshere.opencv-python. This can be installed using standard python tools, such as pip$ pip install opencv-pythonAMyGDAwas developed and tested using version 3.4.0 ofOpenCV. If you do not havesudoaccess on your machine you can install this (and any other python module) in your$HOMEdirectory using the following command$ pip install opencv-python --userdatreant. This provides a neat way of storing and discovering metadata for each image using the native filesystem. It is not essential for the operation ofAMyGDA, but the code would need re-factoring to remove this dependency.
Again it can be installed using pip$ pip install datreantNote thatdatreantworks best if each image is contained within its own folder.datreantautomatically stores all metadata associated with each image within twoJSONfiles in a hidden.datreantfolder in the same location as the input file.TutorialThe code is structured as a python module; all files for which can be found in theamygda/subfolder.$ ls LICENCE.md amygda/ setup.py README.md examples/(You may see other folders likebuild/if you have run thesetup.pyscript.) To run the tutorial move into theexamples/sub-folder.$ cd examples/ $ ls analyse-plate-with-amygda.py plate-configuration/ sample-images/analyse-plate-with-amygda.pyis a simple python file showing how the module can be used to analyse a single image. The fifteen images shown in Figure S1 in the Supplement of the accompanying paper (see above) are provided so you can reconstruct Figures S2, S3, S4 & S12. The images are organised as follows$ ls sample-images/ 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 $ ls sample-images/01/ image-01-raw.pngTo process and analyse a single image using the default settings is simply$ analyse-plate-with-amygda.py --image sample-images/01/image-01-raw.pngAnd should take no more than 10 seconds. No output is written to the terminal, instead you will find a series of new files have been written in thesample-images/01folder.$ ls -a sample-images/01/ .datreant/ image-01-arrays.npz image-01-filtered.png image-01-mics.txt image-01-processed.png image-01-raw.pngThe hidden.datreant/folder contains twoJSONfiles.categories.jsoncontains all the MICs and other metadata about the plate and both can be automatically discovered and read using thedatreantmodule to make systematic analyses simpler.image-01-mics.txtcontains the same information as theJSONfile but in a simpler format that is easier for humans to read.image-01-arrays.npzcontains a series ofnumpyarrays that specify e.g.
the percentage growth in each wellimage-01-raw.pngis the original image of the plate.image-01-msf.jpgis a JPEG of the plate following mean shift filteringimage-01-clahe.jpgis a JPEG of the plate following mean shift filtering and then a Contrast Limited Adaptive Histogram Equalization filter to improve contrast and equalise the illumination across the plate.image-01-final.jpgis a JPEG of the plate following both the above filtering operations and a histogram stretch to ensure uniform brightness.image-01-growth.pngadds some annotation; specifically the locations of the wells are drawn, each well is labelled with the name and concentration of drug and wells which AMyGDA has classified as containing bacterial growth are highlighted with a coloured circle.To see the other options available for theanalyse-plate-with-amygda.pypython script$ analyse-plate-with-amygda.py --help usage: analyse-plate-with-amygda.py [-h] [--image IMAGE] [--growth_pixel_threshold GROWTH_PIXEL_THRESHOLD] [--growth_percentage GROWTH_PERCENTAGE] [--measured_region MEASURED_REGION] [--sensitivity SENSITIVITY] [--file_ending FILE_ENDING] optional arguments: -h, --help show this help message and exit --image IMAGE the path to the image --growth_pixel_threshold GROWTH_PIXEL_THRESHOLD the pixel threshold, below which a pixel is considered to be growth (0-255, default=130) --growth_percentage GROWTH_PERCENTAGE if the central measured region in a well has more than this percentage of pixels labelled as growing, then the well is classified as growth (default=2). --measured_region MEASURED_REGION the radius of the central measured circle, as a decimal proportion of the whole well (default=0.5). --sensitivity SENSITIVITY if the average growth in the control wells is more than (sensitivity x growth_percentage), then consider growth down to this sensitivity (default=4) --file_ending FILE_ENDING the ending of the input file that is stripped. 
Default is '-raw'To analyse all plates, you can either use a simple bash loop$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15; do analyse-plate-with-amygda.py --image sample-images/$i/image-$i-raw.png done;Alternatively if you haveGNU parallelinstalled you can use all the cores on your machine to speed up the process.$ find sample-images/ -name '*raw.png' | parallel --bar analyse-plate-with-amygda.py --image {}To delete all the output files, thereby returning sample-images/ to its clean state, a bash script is provided. Use with caution!$ cd samples-images/ $ ls 01/ image-01-mics.txt image-01-arrays.npz image-01-clahe.jpg image-01-filtered.jpg image-01-raw.png image-01-growth.jpg image-01-msf.jpg $ bash remove-output-images.sh $ ls 01/ image-01-raw.pngLicenceThe software is available subject to the terms of the attached academic-use licence.Adapting for different plate designsAMyGDA is written to be agnostic to the particular design of plate, or even the number of wells on each plate. The concentration (or dilution) of drug in each well is defined by a series of plaintext files inconfig/For example the drugs on the UKMYC5 plate is defined within theconfig/UKMYC5-drug-matrix.txtfile and looks like.BDQ,KAN,KAN,KAN,KAN,KAN,ETH,ETH,ETH,ETH,ETH,ETH BDQ,AMI,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,RFB,PAS BDQ,AMI,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,RFB,PAS BDQ,AMI,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,RFB,PAS BDQ,AMI,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,RFB,PAS BDQ,AMI,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,RFB,PAS BDQ,AMI,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,RFB,PAS BDQ,EMB,EMB,INH,LEV,MXF,DLM,LZD,CFZ,RIF,POS,POSAdding a new plate design is simply a matter of creating new files specifying the drug, concentration and dilution of each well. Note that changing thenumberof wells at present also involves specifying the well_dimensions when creating a PlateMeasurement object. Currently this defaults to (8,12) i.e. a 96-well plate in landscape orientation. 
As an example, the configuration files for the UKMYC6 plate, which is the successor to the UKMYC5 plate, are included although all the provided examples are of UKMYC5 plates.
amypad
Alias foramypet.
amypad-core
Basic functionality needed forAMYPAD.InstallIntended for inclusion in requirements for other packages. The package name isamypad-core. Options include:niiTo install options and their dependencies, use the package nameamypad-core[option1,option2].
amypet
AmyPETAmyloid imaging to prevent Alzheimer's Disease.InstallRequires Python 3.6 or greater.pipinstallamypetFor certain functionality (image trimming & resampling, DICOM to NIfTI converters) it may be required to alsopip install nimpa.Usageamypet.web# Web UI versionamypet--help# CLI version
amy-py
amy_py - Custom Python Syntax Extensionamy_py is a powerful Python package that introduces a unique and expressive custom syntax extension for Python developers. With amy_py, you can leverage an innovative set of keywords and constructs that enhance your coding experience and productivity.FeaturesCustom Syntax: amy_py extends Python's capabilities with custom keywords likeVAR,IF, and more, allowing you to write code in a more intuitive and concise manner.Improved Readability: Our custom syntax enhances code readability, making it easier to understand and maintain complex logic.Boosted Productivity: amy_py streamlines common programming tasks, reducing boilerplate code and saving you time.Seamless Integration: You can seamlessly integrate amy_py into your existing Python projects without disrupting your workflow.InstallationTo get started with amy_py, simply install it usingpip:pipinstallamy_pyUsageamy_py is straightforward to use. Import the package and begin writing code with the custom syntax:importamy_py# Use custom syntaxamy_py.VAR('a',42)amy_py.IF(a==42,"Hello, amy_py!")# Your code continues hereDocumentationFor detailed information on how to use amy_py and a complete list of available keywords and features, please refer to our official documentation.ExamplesExplore practical examples and use cases in the examples directory of this repository.SupportIf you encounter any issues or have questions, feel free to reach out to us on our GitHub Issues page.ContributingWe welcome contributions! If you'd like to enhance amy_py or report issues, please check our Contributing Guidelines.LicenseThis project is licensed under the MIT License - see the LICENSE file for details.AcknowledgmentsWe would like to thank the Python community for its continuous support and inspiration.Enjoy coding with amy_py and unlock a new world of possibilities in Python development!
amzget
No description available on PyPI.
amzn.fire-cookie
No description available on PyPI.
amzn-micro-coral
amzn-micro-coralA minimalistic implementation of a Coral client, used mainly for people who are working in contexts where they may not be able to import Coral clients directly.UsageService calls are entirely unopinionated, so you better be good at reading Coral client configs. A regular instantiation of the service would be:from amzn_micro_coral import CoralService, CoralAuth my_service = CoralService( url="https://my-service.amazon.com", auth=CoralAuth.midway(sentry=False), ) r = my_service.post("MyService.MyOperation", data={"param1": "value1"}) result = r.json()The client does do a basic level of error checking in case the Coral service returns the standard error message in the form{"__type": "<message>"}.SamplesThis module also provides some very basic classes for interacting with generic services:from amzn_micro_coral import crux r = crux.post(<...>)Some may provide more features than others but have no guarantee of always working into the future.
amzon.line
No description available on PyPI.
amzon.ppi
No description available on PyPI.
amz-parser
No description available on PyPI.
amz-ppc-optimizer
Amazon PPC OptimizerThis project is a Python script that optimizes Amazon PPC campaigns by analyzing search term reports and adjusting campaign keywords and bids to improve advertising performance.RequirementsBefore running the script, make sure you have the following prerequisites installed:Python 3.xRequired Python packages (specified inrequirements.txt)You can install the required packages by running:pipinstallamz-ppc-optimizerUsageData PreparationBulk sheet report containing campaign dataSearch term report for sponsored productsSample usagesheet_handler=AmzSheetHandler()sheet_handler.read_bulk_sheet_report(filename="amz-ppc-bulk-file-name.xlsx")keyword_optimizer=ApexOptimizer(sheet_handler.sponsored_prod_camp,desired_acos=0.3,min_bid=0.734)keyword_optimizer.optimize_spa_keywords(exclude_dynamic_bids=False)datagram=keyword_optimizer.datasheetsearch_termed=""ifoptimize_search_termsisTrue:sheet_handler.read_search_terms_report(filename="Sponsored Products Search term report.xlsx")search_terms_optimizer=SearchTermOptimizer(sheet_handler.sponsored_product_search_terms)# Get profitable and unprofitable search terms based on ACOS valueprofitable_st=search_terms_optimizer.filter_profitable_search_terms(desired_acos=0.3)unprofitable_st=search_terms_optimizer.filter_unprofitable_search_terms(desired_acos=0.3)# Add profitable search terms to exact campaignsdatagram=search_terms_optimizer.add_search_terms(datagram,profitable_st,1)datagram=search_terms_optimizer.add_search_terms(datagram,unprofitable_st,0.6)search_termed="ST_"filename="Sponsored_Products_Campaigns_"+search_termed+str(datetime.datetime.utcnow().date())+".xlsx"sheet_handler.write_data_file(filename,datagram,"Sponsored Products Campaigns")Optimization ProcessThe script reads the campaign data from data.xlsx and optimizes keywords in sponsored product campaigns. It also reads the search term report from Sponsored Products Search term report.xlsx. 
Profitable and unprofitable search terms are determined based on the specified ACOS (Advertising Cost of Sales) threshold. Profitable search terms are added to exact match campaigns, and bids are adjusted. Unprofitable search terms are added with reduced bids to maintain visibility but control costs.ConfigurationYou can configure the optimization settings in the script:Adjust the desired_acos value to set your desired ACOS threshold for filtering profitable and unprofitable search terms.LicenseThis project is licensed under the MIT License - see the LICENSE file for details.
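The optimization process above splits search terms into profitable and unprofitable by comparing their ACOS (Advertising Cost of Sales, ad spend divided by attributed sales) against desired_acos. A hedged pure-Python sketch of that split; the real package works on the report spreadsheets, and the names here are mine:

```python
# Illustrative ACOS split, not amz-ppc-optimizer's internals.
# ACOS = ad spend / attributed sales; lower is better.
def acos(spend, sales):
    return spend / sales if sales else float("inf")

def split_by_acos(search_terms, desired_acos):
    """Return (profitable, unprofitable) search terms for the threshold."""
    profitable, unprofitable = [], []
    for term in search_terms:
        bucket = profitable if acos(term["spend"], term["sales"]) <= desired_acos else unprofitable
        bucket.append(term)
    return profitable, unprofitable

terms = [
    {"term": "hdmi cable", "spend": 2.0, "sales": 10.0},  # ACOS 0.20
    {"term": "usb hub", "spend": 5.0, "sales": 10.0},     # ACOS 0.50
]
good, bad = split_by_acos(terms, desired_acos=0.3)
print([t["term"] for t in good])  # ['hdmi cable']
```

Terms with no attributed sales get an infinite ACOS here, so they always land in the unprofitable bucket.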
amzqr
Amazing-QRGo to the Chinese versionOverviewPython QR Code GeneratorGeneratecommon qr-code,artistic qr-code (black & white or colorized),animated qr-code (black & white or colorized).Contents[toc]ExamplesInstall# via pippipinstallamzqrUsageTerminal Way(TIPS: If you haven't installedamzqr, you shouldpython(3) amzqr.pyinstead ofamzqrbelow.)# summaryamzqrWords[-v{1,2,3,...,40}][-l{L,M,Q,H}][-noutput-filename][-doutput-directory][-ppicture_file][-c][-concontrast][-bribrightness]seeCommon QR-CodeforWords,-v,-l,-n,-dseeArtistic QR-Codefor-p,-c,-con,-briseeAnimated GIF QR-Codeabout GIFCommon QR-Code#1 Wordsamzqr https://github.comJust input a URL or a sentence, then get your QR-Code named 'qrcode.png' in the current directory.#2 -v, -lamzqr https://github.com -v 10 -l QThedefaultsize of QR-Code depends both on the number of words you input and the level, while thedefaultlevel (Error Correction Level) isH(the highest).Customize: If you want to control the size and the error-correction-level, use the-vand-larguments.-vrepresenting the length is from a minimum of1to a maximum of40.-lrepresenting the error correction level is one ofL, M, Q and H, where L is the lowest level and H is the highest.#3 -n, -damzqr https://github.com -n github_qr.jpg -d .../paths/Thedefaultoutput-filename is 'qrcode.png', while thedefaultoutput-directory is the current directory.Customize: You can name the output-file and decide the output-directory.Noticethat if the name is the same as an existing file, the old one will be deleted.-nrepresenting the output-filename could be in the format one of.jpg,.png,.bmp,.gif.-dmeans directory.Artistic QR-Code#1 -pamzqr https://github.com -p github.jpgThe-pis to combine the QR-Code with the following picture which is in the same directory as the program.
The resulting picture isblack and whiteby default.#2 -camzqr https://github.com -p github.jpg -cThe-cis to make the resulting picturecolorized.#3 -con, -briamzqr https://github.com -p github.jpg [-c] -con 1.5 -bri 1.6The-conflag changes thecontrastof the picture - a low number corresponds to low contrast and a high number to high contrast.Default: 1.0.The-briflag changes thebrightnessand the parameter values work the same as those for-con.Default: 1.0.Animated GIF QR-CodeThe only difference from Artistic QR-Code mentioned above is that you should input an image file in the.gifformat. Then you can get your black-and-white or colorful qr-code. Remember that when you use-nto customize the output-filename, the output-filename must end with.gif.Import Wayfromamzqrimportamzqrversion,level,qr_name=amzqr.run(words,version=1,level='H',picture=None,colorized=False,contrast=1.0,brightness=1.0,save_name=None,save_dir=os.getcwd())details about each parameter are as mentionedabove# help(amzqr)Positionalparameterwords:strOptionalparametersversion:int,from1to40level:str,justoneof('L','M','Q','H')picture:str,afilenameofanimagecolorized:boolcontrast:floatbrightness:floatsave_name:str,theoutputfilenamelike'example.png'save_dir:str,theoutputdirectoryTipsUse a nearlysquarepicture instead of a rectangle one.If the size of the picture is large, you should also choose a suitably large-vinstead of using the default one.If part of the picture is transparent, the qr code will look like:You can change the transparent layer to white, and then it will look like:Supported CharactersNumbers:0~9Letters:a~z, A~ZCommon punctuations:· , . : ; + - * / \ ~ ! @ # $ % ^ & ` ' = < > [ ] ( ) ? _ { } | and (space)EnvironmentPython 3LicenseGPLv3
amzquery
No description available on PyPI.
amz-query
No description available on PyPI.
amz-request
No description available on PyPI.
amzscraper
Change Log1.0.0 (10-02-2021)-First Release
amzsear
No description available on PyPI.
amz-tool
No description available on PyPI.
amz-widgets
AMZ_WidgetsA collection of useful tkinter widgets.WidgetsFormEntry: Entry widget with top label. It should be used in forms.
an
anScraping and parsing amazonTo install:pip install anAmazon Scraping LibraryOverviewThis Python library is designed for scraping and parsing data from Amazon product pages. It offers functionalities to extract various information like sales ranks, product reviews, and product titles from Amazon's different regional websites.InstallationThis library is not a standalone package and should be incorporated directly into your existing Python project. Copy the code into your project's directory.DependenciespandasnumpyrequestsBeautifulSouppymongomatplotlibEnsure these dependencies are installed in your environment.UsageExtracting Sales RankThe library can extract sales ranks of products from Amazon. Here's an example of how to get the sales rank of a product:asin='YOUR_PRODUCT_ASIN'country='co.uk'# Change to desired Amazon regionsales_rank=Amazon.get_sales_rank(asin=asin,country=country)print(sales_rank)Parsing Product TitleTo parse and get the product title from an Amazon product page:html_content=Amazon.slurp(what='product_page',asin=asin,country=country)title=Amazon.parse_product_title(html_content)print(title)Getting Number of ReviewsTo retrieve the number of customer reviews for a product:number_of_reviews=Amazon.get_number_of_reviews(asin=asin,country=country)print(number_of_reviews)ContributingContributions to this library are welcome. Please send pull requests with improvements or bug fixes.
ana
No description available on PyPI.
anab-ai-toolkit
No description available on PyPI.
anabel
anabelAn end to end differentiable finite element framework.FoundationsInstallationThebaseAnabel package can be installed from a terminal with the following command:$pipinstallanabelThis installation includes basic tools for composing "neural network" -like models along with some convenient IO utilities. However, both automatic differentiation and JIT capabilities require Google's Jaxlib module which is currently in early development and only packaged for Ubuntu systems. On Windows systems this can be easily overcome by downloading the Ubuntu terminal emulator from Microsoft's app store and enabling the Windows Subsystem for Linux (WSL). The following extended command will install Anabel along with all necessary dependencies for automatic differentiation and JIT compilation:$pipinstallanabel[jax]The in-development version can be installed the following command:$pipinstallhttps://github.com/claudioperez/anabel/archive/master.zipCore API - Modeling PDEsfromanabelimporttemplate,diff,MappedMeshfromanabel.interpolateimportlagrange_t6@template(6)defpoisson_template(u,v,iso,f,):defpoisson(uh,xyz):returndiff.jacx(u)(u,v)Utility Modulesanabel.sectionsfromanabel.sectionsimportTeet_section=Tee(bf=60,tf=6,tw=18,d=24)t_section.plot()anabel.transientBuilding The DocumentationThe following additional dependencies are required to build the project documentation:PandocElstir (pip install elstir)To build the documentation, run the following command from the project root directory:$elstirbuildOrganization of Source CodeDocumentationelstir.ymlstyle/Directory holding style/template/theme files for documentation.docs/api/Automatically generated API documentation files.Source Codesetup.pyInstallation/setup; used for pip installation.src/anabel/Python source code[lib/] C++ source code for extension libraryDatadat/quadrature/Quadrature scheme data.Source Control, Testing, Continuous Integration.gitignoreConfiguration forGitsource control..appveyor.ymlconfiguration file 
forAppveyor.coveragercconfiguration file forCodeCov, used to measure testing coverage.pytest.iniconfiguration file forPyTest, used to set up testing.Changelog0.1.0 (2021-05-21)First documented release0.0.0 (2020-07-15)First release on PyPI.
anac
anacPythonAsyncNetBoxAPIClient, based onhttpxandpydanticDocumentationhttps://timeforplanb123.github.io/anacFeaturesMinimalistic interfaceAsync onlyPython interpreter autocompletionSupportsNetBox2.x, 3.xFlexibility. All the objects are coroutines or coroutine iteratorsSimple integration with parsers (TextFSM,TTP)Quick StartInstallPlease, at first, check the dependencies inpyproject.tomland create new virtual environment if necessary and then:with pip:pip install anacwith git:git clone https://github.com/timeforplanb123/anac.git cd anac pip install . # or poetry installSimple ExamplesApi Instantiatingfromanacimportapia=api("https://demo.netbox.dev",token="cf1dc7b04de5f27cfc93aba9e3f537d2ad6fdf8c",)# get openapi spec and create attributes/endpointsawaita.openapi()getsome device andpatchitIn[1]:some_device=awaita.dcim_devices(get={"name":"dmi01-rochster-sw01"})In[2]:some_device.nameOut[2]:'dmi01-rochster-sw01'In[3]:some_device.device_typeOut[3]:{'id':7,'url':'https://demo.netbox.dev/api/dcim/device-types/7/','display':'C9200-48P','manufacturer':{'id':3,'url':'https://demo.netbox.dev/api/dcim/manufacturers/3/','display':'Cisco','name':'Cisco','slug':'cisco'},'model':'C9200-48P','slug':'c9200-48p'}In[4]:some_device.statusOut[4]:{'value':'active','label':'Active'}In[5]:some_device=awaitsome_device(patch={"status":"failed"})In[6]:some_device.statusOut[6]:{'value':'failed','label':'Failed'}getsome 2 devices andput+patchthemIn[7]:some_devices=awaita.dcim_devices(...:get=[{"name":"dmi01-rochster-sw01"},{"name":"dmi01-rochester-rtr01"}]...:)# EndpointAsIterator is a coroutine iterator with 2 coroutinesIn[8]:some_devicesOut[8]:EndpointAsIterator(api=Api,url='https://demo.netbox.dev/api',endpoint='/dcim/devices/')In[9]:importasyncio# run 2 coroutines in the event loopIn[10]:some_devices=awaitasyncio.gather(*some_devices)# EndpointId is a NetBox '/dcim/devices/' object and 
coroutineIn[11]:some_devicesOut[11]:[EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/21/',endpoint='/dcim/devices/'),EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/8/',endpoint='/dcim/devices/')]In[12]:patch_some_devices=[coro(patch={"status":"failed"})forcoroinsome_devices]# run 2 coroutines in the event loopIn[13]:patch_some_devices=awaitasyncio.gather(*patch_some_devices)# EndpointId is a coroutine, againIn[14]:patch_some_devicesOut[14]:[EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/21/',endpoint='/dcim/devices/{id}/'),EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/8/',endpoint='/dcim/devices/{id}/')]In[15]:patch_some_devices[0].nameOut[15]:'dmi01-rochster-sw01'In[16]:patch_some_devices[0].statusOut[16]:{'value':'failed','label':'Failed'}In[17]:patch_some_devices[1].nameOut[17]:'dmi01-rochester-rtr01'In[18]:patch_some_devices[1].statusOut[18]:{'value':'failed','label':'Failed'}getall devicesIn[19]:all_devices=awaita.dcim_devices(get={})# orIn[20]:all_devices=awaita.dcim_devices()# EndpointIdIterator is an coroutine iterator with EndpointId objectsIn[21]:all_devicesOut[21]:EndpointIdIterator(api=Api,url='https://demo.netbox.dev/api',endpoint='/dcim/devices/')In[22]:len(all_devices)Out[22]:50In[23]:all_devices[49]Out[23]:EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/95/',endpoint='/dcim/devices/')In[24]:all_devices[49].nameOut[24]:'ncsu118-distswitch1'# by default, 'limit' parameter = 50, but you can run 'get' request with custom 'limit'In[25]:all_devices=awaita.dcim_devices(get={"limit":100})In[26]:len(all_devices)Out[26]:75In[27]:all_devices[74]Out[27]:EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/106/',endpoint='/dcim/devices/')In[28]:all_devices[74].idOut[28]:106getall devices andpost2 new 
devicesIn[29]:all_test=awaita.dcim_devices(...:get={},...:post=[...:{...:"name":"test1",...:"device_role":1,...:"site":1,...:"device_type":1,...:"status":"planned",...:},...:{...:"name":"test2",...:"device_role":1,...:"site":1,...:"device_type":1,...:"status":"planned",...:},...:],...:)# run 3 coroutines in the event loopIn[30]:all_test=awaitasyncio.gather(*all_test)# EndpointIdIterator is an coroutine iterator with EndpointId objects# EndpointId is a NetBox '/dcim/devices/' object and coroutineIn[31]:all_testOut[31]:[EndpointIdIterator(api=Api,url='https://demo.netbox.dev/api',endpoint='/dcim/devices/'),EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/110/',endpoint='/dcim/devices/'),EndpointId(api=Api,url='https://demo.netbox.dev/api/dcim/devices/111/',endpoint='/dcim/devices/')]In[32]:all_test[0][49].nameOut[32]:'ncsu118-distswitch1'In[33]:all_test[1].nameOut[34]:'test1'In[35]:all_test[2].nameOut[35]:'test2'# httpx.Response is available with .response attributeIn[36]:all_test[1].response.json()Out[36]:{'id':110,'url':'https://demo.netbox.dev/api/dcim/devices/110/','display':'test1','name':'test1','device_type':{'id':1,'url':'https://demo.netbox.dev/api/dcim/device-types/1/','display':'MX480','manufacturer':{'id':7,...}
anacal
AnaCalAnalytic Calibration for Perturbation Estimation from Galaxy Images.This framework is devised to bridge various analytic shear estimators that have been developed or are anticipated to be created in the future. We intend to develop a suite of analytical shear estimators capable of inferring shear with subpercent accuracy, all while maintaining minimal computational time. The currently supported analytic shear estimators are:FPFSInstallationUsers can clone this repository and install the latest package bygitclonehttps://github.com/mr-superonion/AnaCal.gitcdAnaCal pipinstall.or install the stable version from PyPIpip install anacalExamplesExamples can be foundhere.DevelopmentBefore sending a pull request, please make sure that the modified code passes the pytest and flake8 tests. Run the following commands under the root directory for the tests:flake8 pytest-vv
anachem
No description available on PyPI.
anachronos
AnachronosA testing framework for testing frameworks.Anachronos is an end-to-end testing framework usable with a wide variety of applications. To get started, define anApplicationRunnerwhich can be used to start your application. Then, write your test classes by inheriting fromanachronos.TestCase.How it worksThe framework provides access to a specialAnachronosobject which is accessible both from the tested application and from the testing suite. This object effectively acts as a logger on which assertions can be run afterwards. Anachronos assertions are accessible by using theself.assertThatmethod from within a TestCase. Below is a simple TestCase example taken from the Jivago framework.importanachronosfrome2e_test.runnerimporthttpfrome2e_test.testing_messagesimportSIMPLE_GETclassSimpleResourceTest(anachronos.TestCase):deftest_simple_get(self):http.get("/")self.assertThat(SIMPLE_GET).is_stored()deftest_post_dto(self):response=http.post("/",json={'name':'Paul Atreides','age':17}).json()self.assertEqual('Paul Atreides',response['name'])if__name__=='__main__':anachronos.run_tests()With the matching application logic:importanachronosfrome2e_test.app.components.dtos.request_dtoimportRequestDtofrome2e_test.app.components.dtos.response_dtoimportResponseDtofrome2e_test.testing_messagesimportSIMPLE_GETfromjivago.lang.annotationsimportInjectfromjivago.wsgi.annotationsimportResourcefromjivago.wsgi.methodsimportGET,POST@Resource("/")classSimpleResource(object):def__init__(self):self.anachronos=anachronos.get_instance()@GETdefsimple_get(self)->str:self.anachronos.store(SIMPLE_GET)return"OK"@POSTdefpost_body(self,request:RequestDto)->ResponseDto:returnResponseDto(request.name,True)
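The mechanism described above, a single shared store visible to both the application under test and the test suite, can be sketched in a few lines of plain Python. The names here (MessageStore and friends) are illustrative stand-ins, not the anachronos API:

```python
import threading

class MessageStore:
    """Illustrative stand-in for the anachronos singleton: the application
    stores messages while it runs; the test asserts on them afterwards."""
    _instance = None
    _lock = threading.Lock()

    def __init__(self):
        self.messages = []

    @classmethod
    def get_instance(cls):
        # both the app code and the test code see the same object
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    def store(self, message):
        self.messages.append(message)

    def is_stored(self, message):
        return message in self.messages

# application side:
MessageStore.get_instance().store("SIMPLE_GET")
# test side, after the request has been exercised:
stored = MessageStore.get_instance().is_stored("SIMPLE_GET")
```

The real library layers fluent assertions (`.is_stored()`, ordering checks, etc.) on top of this shared-singleton idea.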
anack
No description available on PyPI.
ana-cli
No description available on PyPI.
anacode
No description available on PyPI.
anacom
No description available on PyPI.
anaconda
Anaconda Python cannot be installed via pip or other PyPI-based installers. Please use the Anaconda Installer to bootstrap Anaconda Python onto your system.
anacondaasdf2
No description available on PyPI.
anaconda-build
UNKNOWN
anaconda-catalogs
Anaconda Catalogs clientA light client library for interfacing with the Anaconda catalogs service.UsageCurrently, the catalogs are referenced by their unique ID of the formcid/09e802da-65b3-4ea0-b60d-642c88c7a541. A user can load a catalog with the following:importanaconda_catalogscat=anaconda_catalogs.open_catalog("cid/09e802da-65b3-4ea0-b60d-642c88c7a541")Alternately, the nativeintakeplugin can be used:importintakecat=intake.open_anaconda_catalog("cid/09e802da-65b3-4ea0-b60d-642c88c7a541")Or in anIntake catalog file,## contents of catalog.yamlsources:anaconda:driver:anaconda_catalogargs:name:cid/09e802da-65b3-4ea0-b60d-642c88c7a541which is opened in Intake asimportintakecat=intake.open_catalog('catalog.yaml')Development guideA contributing guide can be found inCONTRIBUTING.md.
anaconda-cli
No description available on PyPI.
anaconda-cli-base
anaconda-cli-baseA base CLI entrypoint supporting Anaconda CLI pluginsRegistering pluginsSubcommands can be registered as follows:# In pyproject.toml[project.entry-points."anaconda_cli.subcommand"]auth="anaconda_cloud_auth.cli:app"In the example above:"anaconda_cli.subcommand"is the required string to use for registration. The quotes are important.authis the name of the new subcommand, i.e.anaconda authanaconda_cloud_auth.cli:appsignifies that the object namedappin theanaconda_cloud_auth.climodule is the entry point for the subcommand.Setup for developmentEnsure you havecondainstalled. Then run:makesetupRun the unit testsmaketestRun the unit tests across isolated environments with toxmaketox
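Entry points registered in pyproject.toml as above can be discovered at runtime with the standard library alone. The sketch below shows the general mechanism (importlib.metadata), not anaconda-cli-base internals; the fallback branch covers the pre-3.10 dict-style API:

```python
from importlib.metadata import entry_points

def discover_subcommands(group="anaconda_cli.subcommand"):
    """Return (name, loader) pairs for every plugin registered under `group`."""
    eps = entry_points()
    if hasattr(eps, "select"):            # Python 3.10+ API
        selected = eps.select(group=group)
    else:                                 # Python 3.8/3.9 dict-style API
        selected = eps.get(group, [])
    # ep.load() would import e.g. anaconda_cloud_auth.cli and return `app`
    return [(ep.name, ep.load) for ep in selected]

# A host CLI would mount each discovered pair as `anaconda <name>`.
subcommands = discover_subcommands()
```

This is why installing a plugin package is enough to make its subcommand appear: registration lives in package metadata, so no configuration step is needed.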
anaconda-client
UNKNOWN
anaconda-cloud
anaconda-cloudThe Anaconda Cloud metapackage.This package provides a recipe allowing us to bundle a set of plugins and libraries for a consolidated experience for users.Users can install withconda install anaconda-cloud.Setup for developmentEnsure you havecondainstalled. Then run:makesetupRun the unit testsmaketestRun the unit tests across isolated environments with toxmaketox
anaconda-cloud-auth
anaconda-cloud-authA client library for Anaconda.cloud APIs to authenticate and securely store API keys.This package provides arequestsclient class that handles loading the API key for requests made to Anaconda Cloud services.This package provides aPanel OAuth plugincalledanaconda_cloud.Installationconda install anaconda-cloud-authInteractive login/logoutIn order to use the request client class you must first log in interactively. This can be done using the Python API or CLI (see below).Login APIfromanaconda_cloud_authimportloginlogin()Thelogin()function initiates a browser-based login flow. It will automatically open your browser and once you have completed the login flow it will store an API key on your system.Typically, these API keys will have a one-year expiration so you will only need to log in once and requests using the client class will read the token from the keyring storage.If you calllogin()while there is a valid (non-expired) API key no action is taken. You can replace the valid API key withlogin(force=True).Password-based flow (Deprecated)WARNING: Password-based login flow will be disabled in the near future.You can log in to Anaconda Cloud using the username/password flow (non-browser) with thebasic=Truekeyword argument. Thelogin()function will interactively request your username and password before completing login and storing the API key.fromanaconda_cloud_authimportloginlogin(basic=True)LogoutTo remove the API key from your keyring storage use thelogout()function.fromanaconda_cloud_authimportlogoutlogout()API requestsThe BaseClient class is a subclass ofrequests.Session. It will automatically load the API key from the keyring on each request.
If the API key is expired it will raise aTokenExpiredError.The Client class can be used for non-authenticated requests; if the API key cannot be found and the request returns 401 or 403 error codes theLoginRequiredErrorwill be raised.fromanaconda_cloud_auth.clientimportBaseClientclient=BaseClient()response=client.get("/api/<endpoint>")print(response.json())BaseClient accepts the following optional arguments.domain: Domain to use for requests, defaults toanaconda.cloudapi_key: API key to use for requests, if unspecified uses the token set byanaconda loginuser_agent: Defaults toanaconda-cloud-auth/<package-version>api_version: Requested API version, defaults to the latest available from the domainextra_headers: Dictionary or JSON string of extra headers to send in requestsTo create a Client class specific to your package, subclass BaseClient and set an appropriate user-agent and API version for your needs. This is done automatically if you use thecookiecutterin this repository to create a new package.fromanaconda_cloud_auth.clientimportBaseClientclassClient(BaseClient):_user_agent="anaconda-cloud-<package>/<version>"_api_version="<api-version>"CLI usageTo useanaconda-cloud-authas a CLI you will need to install theanaconda-cloudpackage. Once installed you can use theanacondaCLI to log in and out of Anaconda Cloud.❯ anaconda login --help Usage: anaconda login [OPTIONS] Login to your Anaconda account. ╭─ Options ───────────────────────────────────────────────────────────╮ │ --domain TEXT [default: None] │ │ --basic --no-basic Deprecated [default: no-basic] │ │ --force --no-force [default: no-force] │ │ --help Show this message and exit.
│ ╰─────────────────────────────────────────────────────────────────────╯ConfigurationYou can configureanaconda-cloud-authby setting one or moreANACONDA_CLOUD_environment variables or use a.envfile. The.envfile must be in your current working directory. An example template is provided in the repo, which contains the following options, which are the default values.# Logging level LOGGING_LEVEL="INFO" # Base URL for all API endpoints ANACONDA_CLOUD_API_DOMAIN="anaconda.cloud" # Authentication settings ANACONDA_CLOUD_AUTH_DOMAIN="id.anaconda.cloud" ANACONDA_CLOUD_AUTH_CLIENT_ID="b4ad7f1d-c784-46b5-a9fe-106e50441f5a"In addition to the variables above you can set the following# API key to use for all requests, this will ignore the keyring token set by `anaconda login` ANACONDA_CLOUD_API_KEY="<api-key>" # Extra headers to use in all requests; must be parsable JSON format ANACONDA_CLOUD_API_EXTRA_HEADERS='<json-parsable-dictionary>'Panel OAuth ProviderIn order to use the anaconda_cloud auth plugin you will need an OAuth client ID (key) and secret. The client must be configured as followsSet scopes: offline_access, openid, email, profile Set redirect url to http://localhost:5006 Set grant type: Authorization Code Set response types: ID Token, Token, Code Set access token type: JWT Set Authentication Method: HTTP BodyTo run the app with the anaconda_cloud auth provider you will need to set several environment variables or command-line arguments.
See thePanel OAuth documentationfor more detailsPANEL_OAUTH_PROVIDER=anaconda_cloud or --oauth-provider anaconda_cloud PANEL_OAUTH_KEY=<key> or --oauth-key=<key> PANEL_OAUTH_SECRET=<secret> or --oauth-secret=<key> PANEL_COOKIE_SECRET=<cookie-name> or --cookie-secret=<value> PANEL_OAUTH_REFRESH_TOKENS=1 or --oauth-refresh-tokens PANEL_OAUTH_OPTIONAL=1 or --oauth-optionalpanel serve <arguments> ...If you do not specify the.envfile, the production configuration should be the default. Please file an issue if you see any errors.Setup for developmentEnsure you havecondainstalled. Then run:makesetupRun the unit testsmaketestRun the unit tests across isolated environments with toxmaketox
anaconda-cloud-cli
anaconda-cloud-cliThe base CLI for Anaconda Cloud. It currently provides the handling of cloud login/logout, and backwards-compatible passthrough of arguments to the coreanaconda-clientCLI.This CLI is intended to provide identical behavior toanaconda-client, except for minor changes to the login/logout flow, to provide a gentle deprecation path.Setup for developmentEnsure you havecondainstalled. Then run:makesetupRun the unit testsmaketestRun the unit tests across isolated environments with toxmaketox
anaconda-cloud-internal-shared
No description available on PyPI.
anaconda-toolbox
anaconda-toolboxPlease install usingconda install anaconda-toolbox.
ana-cont
No description available on PyPI.
anacore
No description available on PyPI.
anacostia-pipeline
Welcome to the Anacostia Pipeline!Requirements:Python 3.11 or greater; if you need to update Python, here are the steps:Updating Python on Mac Follow these steps to update your Python installation to the latest version on Mac:Check Current Python Version Open a terminal and use the python3 --version command to check which Python version you have installed. For example:python3 --version Python 3.7.4Install Homebrew If you don't already have Homebrew installed, you can install it with:/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"Update Homebrew Before installing Python, update Homebrew's package index with:brew updateSearch for Latest Python Now search Brew for the latest available Python version:brew search pythonMake note of the latest 3.x release.Install Latest Python Use brew to install the latest Python version:brew install [email protected] Old Python Version Remove the outdated Python version, replacing 3.7 with your current outdated version:sudo rm -rf /Library/Frameworks/Python.framework/Versions/3.7Set New Python as Default Set the new Python as the default by adding it to your PATH in your bash profile:source ~/.zshrcConfirm New Version Check that the new version is now set as default:python --versionHow to install the pipeline:Create the python virtual env (Linux/Mac)python3 -m venv venv source venv/bin/activateCreate the python virtual env (Windows)python -m venv venv venv\Scripts\activatepip install package locallypip install -e .Update the requirements.txt if necessaryOnce you have all requirements installed, let's try to run one of the tests: Go into the tests folder and run test_phase1.py:python3 test_phase1.pyCreate a .env file inside the tests folder: Insert these credentials and set them:.envUSERNAME= PASSWORD=Other notes: If you already have a Python virtual environment set up, follow these steps:Check Python version inside virtual environment:(venv) python --versionFor example:(venv) Python
3.8.2Deactivate the virtual environment:(venv) deactivateCheck system Python version:python --versionThis may be older than your virtual environment version. 4. Check Python 3 version:python3 --version5. Create a new virtual environment with Python 3:python3 -m venv venv6. Activate the new virtual environment:source venv/bin/activate7. Confirm Python version is updated:(venv) python --versionUnderstanding the repo:Engine folder - has two important files 1. node.py - which contains all the base classes for the action node and the resource nodes; other nodes inherit the base node logic. This is where all the synchronization primitives are defined. 2. pipeline.py is our way of interacting with the DAG; it has commands for starting, terminating, shutting down, and pausing nodes and pipelines. The ability to export the graph.json file is here as well. constants.py defines the different statuses we have in the pipeline for the nodes.Resource Folder: Defines the basic anacostia resources that users have right off the bat. It has 3 files: 2.1 data_store.py defines where the user would input their data files in order to be consumed by the pipeline. Ex: if the user has a folder of images, it would take that folder and start keeping track of the state for that folder via data_store.json 2.2 feature_store.py - the default anacostia datastore; this file converts features into a numpy array, saves the numpy array as a .npy file, and keeps track of the state. 2.3 model_registry - where models are going to be kept. Keeps state of models, i.e. current, old, and new (this is applicable to anacostia resources)Tests folder - where all the tests for pipeline and nodes reside The most up-to-date is tests_phase1.py which includes all the tests for the phase 1 configuration which is the ETL pipeline.
tests_data_store.py includes all the tests for the data store tests_feature_store includes all the tests for the feature store tests_model_registry includes all tests for the model registry - this is currently not up to date tests_utils includes all functionalities for creating the tests tests_node and test_basic include all tests for nodes which might not be used anymore. We might get rid of these.setup.py - the main file that is used to turn ana into a python package that we can put on PyPI - the Python Package Index. Allows you topip install anacostiaFuture work:Take the tests from phase1 and create the suite of tests for phase 2, which is a full retraining pipeline without model evaluation.We will create a metadata store node using a database (most likely Mongo)We will upgrade phase 2 tests to include model evaluation (now you can have a full retraining pipeline)Example Usagen1=TrueNode(name='n1')n2=FalseNode(name='n2')n3=TrueNode(name='n3',listen_to=n1|n2)n4=TrueNode(name="n4",listen_to=n3)p1=Pipeline(nodes=[n1,n2,n3,n4])
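The listen_to=n1|n2 composition in the example usage above suggests nodes overload the | operator to build trigger groups. A toy sketch of how such an operator can work; the class names and internals are hypothetical, not the actual anacostia-pipeline implementation:

```python
class BaseNode:
    """Hypothetical sketch of `listen_to=n1 | n2`; not the real
    anacostia-pipeline implementation."""
    def __init__(self, name, listen_to=()):
        self.name = name
        if isinstance(listen_to, BaseNode):       # single predecessor
            listen_to = (listen_to,)
        elif isinstance(listen_to, NodeGroup):    # result of n1 | n2
            listen_to = tuple(listen_to.nodes)
        # nodes whose signals trigger this node
        self.predecessors = list(listen_to)

    def __or__(self, other):
        return NodeGroup([self, other])

class NodeGroup:
    """Bag of nodes produced by the | operator."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

n1 = BaseNode("n1")
n2 = BaseNode("n2")
n3 = BaseNode("n3", listen_to=n1 | n2)  # n3 listens to both n1 and n2
```

The operator form keeps DAG wiring declarative: each node only names its upstream triggers, and the pipeline can recover the full graph from the predecessor lists.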
anacreonlib
anacreonlib|PyPI Version| |Documentation Status|Thisunofficiallibrary provides a Python interface to the API ofAnacreon 3 <https://anacreon.kronosaur.com>, which is an online4X <https://en.wikipedia.org/wiki/4X>game produced byKronosaur Productions, LLC. <http://kronosaur.com/>_.The minimum supported Python version is 3.8Make sure to read the "Scripts and Bots" section of theKronosaur Terms of Service <https://multiverse.kronosaur.com/news.hexm?id=97#:~:text=scripts%20and%20bots>_.Installationanacreonlibcan be installed using pip::$ pip install anacreonlibUsageBelow is a minimum working example to get started with using the Anacreon API.. code-block:: pythonfrom anacreonlib import Anacreon, Fleet import asyncio async def main(): ## Step 1: Log in client: Anacreon = await Anacreon.log_in( game_id="8JNJ7FNZ", username="username", password="password" ) ## Step 2: do cool stuff, automatically! # find all of our fleets all_my_fleets = [ fleet for fleet in client.space_objects.values() if isinstance(fleet, Fleet) and fleet.sovereign_id == client.sov_id ] # send all our fleets to world ID 100 for fleet in all_my_fleets: await client.set_fleet_destination(fleet.id, 100) if __name__ == "__main__": asyncio.run(main())Rate LimitsThe API has rate limits which are detailed inthis Ministry record <https://ministry.kronosaur.com/record.hexm?id=79981>_. Beware that they apply to both any scripts you write AND the online client... |PyPI Version| image::https://img.shields.io/pypi/v/anacreonlib.svg:target:https://pypi.python.org/pypi/anacreonlib.. |Documentation Status| image::https://readthedocs.org/projects/anacreonlib/badge/?version=latest:target:http://anacreonlib.readthedocs.io/en/latest/?badge=latest:alt: Documentation Status
anacron
Python background task manager with no dependencies and no configuration.This project is in alpha-statePlease don't use it right now.
anadama2
AnADAMA2 is the next generation of AnADAMA. AnADAMA is a tool to create reproducible workflows and execute them efficiently. Tasks can be run locally or in a grid computing environment to increase efficiency. Essential information from all tasks is recorded, using the default logger and command line reporters, to ensure reproducibility. An auto-doc feature allows workflows to generate documentation automatically, further ensuring reproducibility by capturing the latest essential workflow information. AnADAMA2 was architected to be modular, allowing users to customize the application by subclassing the base grid meta-schedulers, reporters, and tracked objects (i.e. files, executables, etc.).
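The depends/targets bookkeeping that makes such workflows reproducible can be illustrated with a toy scheduler. This is not the AnADAMA2 API, just a sketch of the idea behind it: a task may run only once everything it depends on has been produced.

```python
class ToyWorkflow:
    """Toy illustration of depends/targets ordering; not the AnADAMA2 API."""
    def __init__(self):
        self.tasks = []

    def add_task(self, name, depends=(), targets=()):
        self.tasks.append((name, set(depends), set(targets)))

    def go(self):
        """Return an execution order where every task runs only after
        all of its dependencies have been produced."""
        produced, order = set(), []
        pending = list(self.tasks)
        while pending:
            ready = [t for t in pending if t[1] <= produced]
            if not ready:
                raise RuntimeError("unsatisfiable dependencies")
            for name, _, targets in ready:
                order.append(name)          # a real runner would execute here
                produced |= targets
            done = set(order)
            pending = [t for t in pending if t[0] not in done]
        return order

wf = ToyWorkflow()
wf.add_task("normalize", depends=["raw.txt"], targets=["norm.txt"])
wf.add_task("download", targets=["raw.txt"])
wf.add_task("report", depends=["norm.txt"], targets=["report.html"])
```

Note how tasks can be declared in any order; the dependency graph, not the declaration order, decides execution, which is what allows a grid scheduler to run independent tasks in parallel.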
anadaptor
UNKNOWN
anadb-tools
Analytics Database ToolsA suite of tools and libraries for managing the analytics database servers.InstallationApplicationsThe following command-line applications are installed as part ofanadb-tools:CommandDescriptionanadb-analyzeInspect each analytics server via CloudFormation and each analytics database in PostgreSQL, updating statistics in the analytics broker database.anadb-assignCreate & assign a new analytics database for a customer.anadb-cleanupBackup & Delete analytics databases for closed accounts.anadb-deployDeploy DDL to all of the analytics databases.anadb-migrateMove multiple analytics databases to a different server.anadb-moveMove an analytics database to a different server.anadb-orphansCleanup orphaned analytics databases.anadb-pageview-auditExamine all analytics databases to determine which have pageview data.anadb-s3backupBackup the data in an analytics database to Amazon S3.anadb-sequence-fixReset all of the sequences in an analytics database.anadb-skeletonCreate a skeleton CLI application using anadb-tools.Common Environment VariablesThe following environment variables allow for overriding the database connection information that can be provided by command line switches:VariableDescriptionANABROKER_HOSTThe database server to connect to the analytics broker onANABROKER_PORTThe port to connect to the analytics broker database onANABROKER_DBNAMEThe analytics broker database name. Default: analytics_brokerANABROKER_USERThe user to connect as to the analytics broker db server. Default: the current shell userANADB_USERThe user to connect to an analytics database server as. Default: the current shell userAPPDB_HOSTThe database server to connect to for the app database.APPDB_PORTThe database server port to connect on for the app database.APPDB_DBNAMEThe database name to connect to for the app database. Default:appAPPDB_USERThe user to connect to the app database server as. 
Default: the current shell userLibrary ModulesModuleDescriptionanadb_tools.anabrokerMethods for connecting to and interacting with anabroker.anadb_tools.anadbMethods for connecting to and interacting with account specified analytics databases.anadb_tools.appdbMethods for connecting to and interacting with the app database.anadb_tools.commonMisc common methods including argument parsing and logging configuration.anadb_tools.databaseLow-level database commands, including backup.anadb_tools.exceptionsCommon exceptions for the anadb-tools package.anadb_tools.memcachedClears caching keys for analytics broker assignment info.Development
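The environment-variable overrides in the table above could be resolved with a small helper like the sketch below. The helper name and the localhost/5432 fallbacks are assumptions for illustration; only the ANABROKER_* variable names, the analytics_broker default, and the current-shell-user default come from the table.

```python
import getpass
import os

def anabroker_settings(environ=os.environ):
    """Hypothetical helper resolving the documented ANABROKER_* variables.
    'localhost' and '5432' are illustrative fallbacks, not documented defaults."""
    return {
        "host": environ.get("ANABROKER_HOST", "localhost"),
        "port": int(environ.get("ANABROKER_PORT", "5432")),
        "dbname": environ.get("ANABROKER_DBNAME", "analytics_broker"),
        # documented default: the current shell user
        "user": environ.get("ANABROKER_USER", getpass.getuser()),
    }

settings = anabroker_settings({"ANABROKER_HOST": "db1.internal",
                               "ANABROKER_PORT": "6432"})
```

Passing the environment as a parameter (rather than reading os.environ directly) keeps command-line switches able to override it and makes the resolution easy to test.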
anad_dev
tools for development… in anad laboInstallationpip install anad_devUsageecrypte.g.:>>>fromanad_devimportecrypt>>>pw='test_pass_phrase'>>>ec_pw=ecrypt.ecrypt.compr(pw)>>>ec_pw'/Td6WFoAAATm1rRGAgAhARYAAAB0L+WjAQAPdGVzdF9wYXNzX3BocmFzZQA3khspehQ3iwABKBDl\nC2xgH7bzfQEAAAAABFla\n'>>>>>>ecrypt.ecrypt.decompr(ec_pw)'test_pass_phrase'This is just an easy encryption that everybody CAN decrypt. I just use it when I don't want to show my password at a glance.send_gmaile.g.:fromanad_devimportecrypt,send_gmailaccount='[email protected]'passwd=ecrypt.ecrypt.decompr('/Td6WFoAAATm1rRGAgAhARYAAAB0L+WjAQAPdGVzdF9wYXNzX3BocmFzZQA3khspehQ3iwABKBDl\nC2xgH7bzfQEAAAAABFla\n')body='test\nhehehe'subject='gmail test send'msg_to='[email protected]'send_gmail.send_gmail.doIt(account,passwd,body,subject,msg_to)*** you need to enable low-level access for your Gmail account on your Gmail settings page.any questions to:[email protected]
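The sample output above gives the scheme away: /Td6WFo is the base64 encoding of the xz magic bytes, so compr/decompr amount to LZMA compression plus base64. Roughly the following sketch, a reimplementation of the observed behavior rather than the package's actual code:

```python
import base64
import lzma

def compr(text: str) -> str:
    """Obfuscate: xz-compress, then base64-encode. This is NOT encryption."""
    return base64.encodebytes(lzma.compress(text.encode("utf-8"))).decode("ascii")

def decompr(blob: str) -> str:
    """Reverse: base64-decode, then xz-decompress."""
    return lzma.decompress(base64.decodebytes(blob.encode("ascii"))).decode("utf-8")

obfuscated = compr("test_pass_phrase")   # begins with '/Td6WFo', the xz magic
original = decompr(obfuscated)
```

This is exactly why the author's caveat matters: anyone who recognizes the xz magic bytes can recover the plaintext with two standard-library calls, so treat it as shoulder-surfing protection only, never as real encryption.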
anadroid
PyAnaDroidPyAnaDroid is a tool capable of automating the process of analyzing and benchmarking Android applications' energy consumption, using state-of-the-art energy analysis tools. PyAnaDroid can be configured to use different energy profilers and test frameworks in its execution pipeline, being able to perform automatic instrumentation and building of application source code. It can be used to perform both white-box and black-box testing.Documentationhttps://greensoftwarelab.github.io/PyAnaDroid/anadroid.html#Video Demohttps://www.youtube.com/watch?v=7AV3nrh4Qc8Use casesApplication Benchmarking: Replicating test work/procedures on different applications to carry out comparative studies of energy consumption.Detection of energy hotspots in application code;Detection of energy-greedy coding practices;Calibration of energy consumption prediction models;Many others.Supported Test FrameworksJUnit-based frameworks (Robotium, Espresso, JUnit);Application UI/Exerciser Monkey;Monkeyrunner;DroidBot;App Crawler;RERAN;Monkey++ (soon).Supported energy profilers:Trepn Profiler;Manafa;GreenScaler;Monsoon (soon);Petra (soon).WorkflowBy default, PyAnaDroid is configured to perform white-box testing of applications, instrumenting their code (Java and/or Kotlin) in order to collect traces of the methods invoked during application execution and estimate the energy consumed by them. After the instrumentation phase, a copy of the project, with the code and build scripts already instrumented, is created in the original directory. Then, the source code and apk are built from the sources of the instrumented project (both debug and release builds are supported), and the application is installed on the device. After installation, the energy profiler is enabled and the application tests are executed.
At the end of the process, the monitoring process is stopped and its results collected, and the application is uninstalled.Installation:Using python-pip$ pip install anadroidFrom source$ git clone --recurse-submodules https://github.com/greensoftwarelab/pyanadroid.gitExamplesPlug-and-play execution$ usage: pyanadroid [-h] [-t {Monkey,Monkeyrunner,JUnit,RERAN,Espresso,Robotium,Crawler,Droidbot,Custom,Other}] [-p {Trepn,GreenScaler,Petra,Monsoon,E-manafa,None}] [-b {Release,Debug,Custom}] [-i {JInst,Hunter,None}] [-it {MethodOriented,TestOriented,'ActivityOriented',),AnnotationOriented,None}] [-a {MethodOriented,TestOriented,('ActivityOriented',,AnnotationOriented,None}] [-d DIRETORY] [-bo] [-record] [-run] [-rb] [-ri] [-ja] [-sc {USB,WIFI}] [-ds DEVICE_SERIAL] [-td TESTS_DIR] [-n PACKAGE_NAMES [PACKAGE_NAMES ...]] [-apk APPLICATION_PACKAGES [APPLICATION_PACKAGES ...]] [-rec] [-cmd COMMAND] [-nt N_TIMES] optional arguments: -h, --help show this help message and exit -t {Monkey,Monkeyrunner,JUnit,RERAN,Espresso,Robotium,Crawler,Droidbot,Custom,Other}, --testingframework {Monkey,Monkeyrunner,JUnit,RERAN,Espresso,Robotium,Crawler,Droidbot,Custom,Other} testing framework to exercise app(s) -p {Trepn,GreenScaler,Petra,Monsoon,E-manafa,None}, --profiler {Trepn,GreenScaler,Petra,Monsoon,E-manafa,None} energy profiler -b {Release,Debug,Custom}, --buildtype {Release,Debug,Custom} app build type -i {JInst,Hunter,None}, --instrumenter {JInst,Hunter,None} Source code instrumenter -it {MethodOriented,TestOriented,('ActivityOriented',),AnnotationOriented,None}, --instrumentationtype {MethodOriented,TestOriented,('ActivityOriented',),AnnotationOriented,None} instrumentation type -a {MethodOriented,TestOriented,('ActivityOriented',),AnnotationOriented,None}, --analyzer {MethodOriented,TestOriented,('ActivityOriented',),AnnotationOriented,None} results analyzer -d DIRETORY, --diretory DIRETORY app(s)' folder -bo, --buildonly just build apps -record, --record record test -run,
--run_only run only -rb, --rebuild rebuild apps -ri, --reinstrument reinstrument app -ja, --justanalyze just analyze apps -sc {USB,WIFI}, --setconnection {USB,WIFI} set connection to device and exit -ds DEVICE_SERIAL, --device_serial DEVICE_SERIAL device serial id -td TESTS_DIR, --tests_dir TESTS_DIR tests directory -n PACKAGE_NAMES [PACKAGE_NAMES ...], --package_names PACKAGE_NAMES [PACKAGE_NAMES ...] package(s) of already installed apps -apk APPLICATION_PACKAGES [APPLICATION_PACKAGES ...], --application_packages APPLICATION_PACKAGES [APPLICATION_PACKAGES ...] path of apk(s) to process -rec, --recover recover progress of the previous run -cmd COMMAND, --command COMMAND test command -nt N_TIMES, --n_times N_TIMES times to repeat test (overrides config)From sourceExecute a simple Monkey test over an applicationBy default, PyAnaDroid uses the Manafa profiler to estimate energy consumption. The Monkey test (or any other test with another supported testing framework) and its parameters can be configured by modifying the .cfg present in the resources/testingFrameworks/ directory.
The results are stored in the results/<app_id>/<app_version> directoryfrom anadroid.Anadroid import AnaDroid folder_of_app = "demoProjects/SampleApp" anadroid = AnaDroid(folder_of_app, testing_framework=TESTING_FRAMEWORK.MONKEY) anadroid.defaultWorkflow()Working ExamplesExample 1 - Using DroidBot to automatically test an Android project(s) and monitor its energy consumption (from command-line)$ pyanadroid -d projects_dir> -t DroidbotExample 2 - Perform a custom test (e.g touch app screen)$ pyanadroid -d <projects_dir> -t Custom -cmd 'adb shell input touchscreen tap 500 500'Example 3 - Extend PyAnaDroid workflow to perform custom actions1) Create a new subclass of the AnaDroid class and implement and override the default_workflow methodfrom anadroid.Anadroid import AnaDroid class MyCustomAnaDroidWorkflow(AnaDroid): def default_workflow(): # example: reboot device after each test suite super(AnaDroid, self).default_workflow() self.device.reboot()2) Invoke the new custom workflowcustom_wkflow = MyCustomAnaDroidWorkflow() custom_wkflow.defaultWorkflow()Example 4 - Skip instrumentation and building phase and perform black-box analysis only over the apks.Note: the process will still be monitored using the profiler but the performance metrics will only be given at the test level (e.g. the energy consumption of each test execution).$ pyanadroid -d <projects_dir> -run -t Custom 'adb shell input touchscreen tap 500 500'PyAnaDroid produces a large amount of results from the analysis it does on its execution blocks. These results are stored in the form of files in specific directories. For each execution of a certain version of a certain app, a subdirectory is created in the directory anadroid_results/<app-name>--<app-package>/<app-version> where all the results of the analyzes carried out on the applications will appear. 
For each execution of a test framework on an application, a subdirectory <testing-framework><instrumentation-type><timestamp> is created inside the previous directory and that contains the results related to that execution. The result files are as follows:tests_index.json: contains the list of files associated with each test run, identified by test id.test_<test-id>.logcat: contains device logs captured during test execution;test_<test-id>logresume.json: contains a summary made from the analysis of the logs contained in the file test.logcat. It has metrics such as the number of exceptions thrown, fatal or error log messages, etc.device.json: contains the specs of the device where the tests were conducted (brand, model, ram, cpu cores, serial nr, etc)manafa_resume_<test_id>.json: contains test-level performance metrics reported by E-Manafa (if used);functions_<timestamp>.json: contains performance metrics for each of the executed functions/methods of the app in a certain test.trace-<timestamp>-<timestamp>.systrace: contains the cpu frequency changes logged during a certain test id;
anafero
Provides a Django site with referrals functionality.

Documentation

Documentation can be found online at http://anafero.readthedocs.org/.

Commercial Support

This app, and many others like it, have been built in support of many of Eldarion's own sites, and the sites of our clients. We would love to help you on your next project, so get in touch by dropping us a note at [email protected].
anafit
Anafit is a package providing fitting tools for matplotlib figures. It is largely inspired by the Ezyfit toolbox for Matlab.

Source code repository and issue tracker: https://gitlab.com/xamcosta/Anafit
Python Package Index: https://pypi.python.org/pypi/anafit/0.1.5

Requirements

Python: Anafit has been coded using Python 3.5 from the Anaconda distribution.

pip/setuptools: These are the most convenient way to install Anafit and its dependencies. Most likely they are already installed on your system, but if not, please refer to the pip doc webpage.

Matplotlib and PyQt5: When called, the Anafit menu, based on PyQt5, will appear as a new button in the matplotlib figure toolbar. However, this requires Qt5Agg as matplotlib's backend. WARNING: when imported, anafit will switch your current backend to Qt5Agg, destroying all figures already constructed during your session.

Other packages: To fit, Anafit uses the scipy.optimize.curve_fit function from the scipy module. It also uses numpy, os, sys, functools and finally json (for saving custom fit functions in a text file).

Installation

Once you have installed the above-mentioned dependencies, you can use pip to download and install the latest release with a single command:

python3 -m pip install anafit

To uninstall, use:

python3 -m pip uninstall anafit

Note that you can also just download the anafit repository and add it to your PYTHONPATH.

Usage

First, import anafit:

import anafit

Note that importing anafit will switch matplotlib's backend to 'Qt5Agg', destroying your current figures! To prevent this, it is best to import anafit BEFORE importing matplotlib.pyplot or pylab.

Adding the anafit button to a matplotlib figure

This is done simply by calling the anafit.Figure() class:

fig = plt.figure()
ana = anafit.Figure(fig)

If no argument is passed to anafit.Figure(), the anafit button will be added to the currently active figure.

Fitting a curve

In case several curves are plotted, you can select the one you want to fit in the "Dataset" menu.
Datasets are represented by an icon filled with the color of the curve, followed by their marker.

Then, in the "Show Fit" menu, you can select predefined fitting functions, sorted by type (linear, power, etc.), your own saved fitting functions, or any function you want to define on the fly, using "Other Fit...".

The fitting curve will appear as an orange line on your figure, and its parameters will appear in the Python console. You can access them anytime through the attribute ana.lastFit. More generally, a history of fits is stored in ana.fits. These anafit.Fit objects contain not only the fit information, but also the handles of the fit line, allowing you to easily change the style of the fit curve. For instance, you can change the color of the last fit by simply running:

ana.fits[-1].linfit.set_color('r')

Defining a region of interest (ROI)

You can restrict the range on which you want to fit your data in the "Define Range" menu. This menu displays the current range, and offers the possibility to set the range manually in a dialog ('Define...') or by selecting two points on the figure ('Define ROI'). You can restore the full range by selecting 'Reset'.

Creating custom fit functions

You can create your own fitting functions in the 'Edit User Fit' menu. They will then appear in the 'Show Fit' menu. These fitting functions are stored in a text file in the anafit repository, which you can edit by hand. Clicking 'Reset' deletes all custom fitting functions, but leaves one as an example.

Getting slopes from drawn lines

You can draw a line on the figure by selecting 'Draw Line', and remove it using 'Undo Line'.
Use 'Get Slope' to access the parameters of this line: in log-log scale, this returns the prefactor and the exponent of a power law.

You can draw a line corresponding to a given slope (a given exponent in log-log scale) using 'Show Slope'.

Displaying fit info

You can display the confidence range of the fit curve by selecting 'Show Confidence'. The confidence interval is evaluated using the square root of the diagonal of the covariance matrix.
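The power-law slope that 'Get Slope' reports can also be estimated outside the GUI: a power law is a straight line in log-log space, so a degree-1 least-squares fit on the logged data recovers the exponent and prefactor. The sketch below uses numpy only; the data values are illustrative and this is not anafit's API:

```python
import numpy as np

# Synthetic power-law data: y = 2.5 * x^1.8 (illustrative values).
x = np.linspace(1.0, 10.0, 50)
y = 2.5 * x ** 1.8

# In log-log space, y = a * x^n becomes log(y) = n*log(x) + log(a),
# so a degree-1 polyfit yields the exponent n and log-prefactor.
n, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)

print(f"exponent n = {n:.3f}, prefactor a = {a:.3f}")
```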
anaflow
Welcome to AnaFlow

Purpose

AnaFlow provides several analytical and semi-analytical solutions for the groundwater-flow equation.

Installation

You can install the latest version with the following command:

pip install anaflow

Documentation for AnaFlow

You can find the documentation under https://anaflow.readthedocs.io.

Example

In the following, the well-known Theis function is called and plotted for three different time steps.

```python
import numpy as np
from matplotlib import pyplot as plt
from anaflow import theis

time = [10, 100, 1000]
rad = np.geomspace(0.1, 10)
head = theis(time=time, rad=rad, transmissivity=1e-4, storage=1e-4, rate=-1e-4)

for i, step in enumerate(time):
    plt.plot(rad, head[i], label="Theis(t={})".format(step))

plt.legend()
plt.show()
```

Provided Functions

The following functions are provided directly:

- thiem: Thiem solution for steady state pumping
- theis: Theis solution for transient pumping
- ext_thiem_2d: extended Thiem solution in 2D from Zech 2013
- ext_theis_2d: extended Theis solution in 2D from Mueller 2015
- ext_thiem_3d: extended Thiem solution in 3D from Zech 2013
- ext_theis_3d: extended Theis solution in 3D from Mueller 2015
- neuman2004: transient solution from Neuman 2004
- neuman2004_steady: steady solution from Neuman 2004
- grf: "General Radial Flow" model from Barker 1988
- ext_grf: the transient extended GRF model
- ext_grf_steady: the steady extended GRF model
- ext_thiem_tpl: extended Thiem solution for truncated power laws
- ext_theis_tpl: extended Theis solution for truncated power laws
- ext_thiem_tpl_3d: extended Thiem solution in 3D for truncated power laws
- ext_theis_tpl_3d: extended Theis solution in 3D for truncated power laws

Laplace Transformation

We provide routines to calculate the Laplace transformation as well as the inverse Laplace transformation of a given function:

- get_lap: get the Laplace transformation of a function
- get_lap_inv: get the inverse Laplace transformation of a function

Requirements

- NumPy >= 1.14.5
- SciPy >= 1.1.0
- pentapy >= 1.1.0

Contact

You can contact us at [email protected].

License

MIT

© 2019 - 2023
anaforatools
anaforatools

The anaforatools project provides utilities for working with Anafora annotations, including:

- anafora.validate - checks Anafora XML files for syntactic and semantic errors
- anafora.evaluate - compares two sets of Anafora XML files in terms of precision, recall, etc.
- anafora.regex - trains and applies simple regular expression models from Anafora XML files
- anafora.copy_text - copies text into the Anafora directory structure
- anafora.labelstudio - converts Anafora schemas and data files into Label Studio schemas and data files

For details on the command line interfaces to these modules, use the --help argument. For example:

$ python -m anafora.validate --help

Requirements:

- Python 3.6 or later
- Python regex module
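The comparison anafora.evaluate performs boils down to span-level precision, recall and F1 between a reference and a predicted annotation set. The sketch below shows that metric computation on plain (start, end, type) tuples with exact-match scoring; it is a generic illustration, not anaforatools' actual data model:

```python
# Annotations as (start, end, type) spans; exact-match scoring.
reference = {(0, 5, "DATE"), (10, 14, "EVENT"), (20, 25, "DATE")}
predicted = {(0, 5, "DATE"), (10, 14, "DATE"), (20, 25, "DATE")}

# A prediction counts only if span AND type match the reference exactly.
true_positives = len(reference & predicted)
precision = true_positives / len(predicted)
recall = true_positives / len(reference)
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)
```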
ana-gces
No description available on PyPI.
anage
anage is a package manager for Python.

python -m anage packages
anagenex-mdtraj
MDTraj is a python library that allows users to manipulate molecular dynamics (MD) trajectories and perform a variety of analyses, including fast RMSD, solvent accessible surface area, hydrogen bonding, etc. A highlight of MDTraj is the wide variety of molecular dynamics trajectory file formats which are supported, including RCSB pdb, GROMACS xtc, tng, and trr, CHARMM / NAMD dcd, AMBER binpos, AMBER NetCDF, AMBER mdcrd, TINKER arc and MDTraj HDF5.
anago
# anaGo

**anaGo** is a Python library for sequence labeling (NER, PoS tagging, ...), implemented in Keras.

anaGo can solve sequence labeling tasks such as named entity recognition (NER), part-of-speech tagging (POS tagging), semantic role labeling (SRL) and so on. Unlike traditional sequence labeling solvers, anaGo doesn't need any language-dependent features, so it can easily be used for any language.

As an example, the following demo shows named entity recognition in English: [anaGo Demo](https://anago.herokuapp.com/)

## Get Started

In anaGo, the simplest type of model is the `Sequence` model. The Sequence model includes essential methods like `fit`, `score`, `analyze` and `save`/`load`. For more complex features, you should use the anaGo modules such as `models`, `preprocessing` and so on.

Here is the data loader:

```python
>>> from anago.utils import load_data_and_labels
>>> x_train, y_train = load_data_and_labels('train.txt')
>>> x_test, y_test = load_data_and_labels('test.txt')
>>> x_train[0]
['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.']
>>> y_train[0]
['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O']
```

You can now iterate on your training data in batches:

```python
>>> import anago
>>> model = anago.Sequence()
>>> model.fit(x_train, y_train, epochs=15)
Epoch 1/15
541/541 [==============================] - 166s 307ms/step - loss: 12.9774
...
```

Evaluate your performance in one line:

```python
>>> model.score(x_test, y_test)
80.20  # f1-micro score
# For more performance, you have to use pre-trained word embeddings.
# For now, anaGo's best score is 90.90 f1-micro score.
```

Or tag text on new data:

```python
>>> text = 'President Obama is speaking at the White House.'
>>> model.analyze(text)
{
    "words": ["President", "Obama", "is", "speaking", "at", "the", "White", "House."],
    "entities": [
        {"beginOffset": 1, "endOffset": 2, "score": 1, "text": "Obama", "type": "PER"},
        {"beginOffset": 6, "endOffset": 8, "score": 1, "text": "White House.", "type": "LOC"}
    ]
}
```

To download a pre-trained model, call the `download` function:

```python
>>> from anago.utils import download
>>> url = 'https://storage.googleapis.com/chakki/datasets/public/ner/conll2003_en.zip'
>>> weights, params, preprocessor = download(url)
>>> model = anago.Sequence.load(weights, params, preprocessor)
>>> model.score(x_test, y_test)
0.9090262970859986
```

## Feature Support

anaGo supports the following features:

* Model training
* Model evaluation
* Tagging text
* Custom model support
* Downloading pre-trained models
* GPU support
* Character features
* CRF support
* Custom callback support

anaGo officially supports Python 3.4-3.6.

## Installation

To install anaGo, simply use `pip`:

```bash
$ pip install anago
```

or install from the repository:

```bash
$ git clone https://github.com/Hironsan/anago.git
$ cd anago
$ python setup.py install
```

## Documentation

(coming soon)

## Reference

This library uses a bidirectional LSTM + CRF model based on [Neural Architectures for Named Entity Recognition](https://arxiv.org/abs/1603.01360) by Lample, Guillaume, et al., NAACL 2016.
anago-py367
anago for Python 3.7, by Hironsan, at https://github.com/Hironsan/anago
anago-py3.7
anago for Python 3.7, by Hironsan, at https://github.com/Hironsan/anago
anagram
I'm a small script that helps you get anagrams using wordsmith.org.

Install

sudo pip install anagram

Usage

Use a single word:

$ anagram cat
 1. act
 2. cat

Use a sentence:

$ anagram a dog
 1. goad
 2. a god
 3. a dog
 4. ad go

LICENSE

MIT
anagramgen
No description available on PyPI.
anagrams
No description available on PyPI.
anagram-solver
Solve anagrams using a dictionary of close to one hundred thousand English words
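The standard trick behind a dictionary-based anagram solver is to index every word under its "signature" (its letters, sorted), since all anagrams share the same signature. The sketch below illustrates the idea with a tiny word list; it is a generic illustration, not this package's actual API:

```python
from collections import defaultdict

# Tiny stand-in for a dictionary of ~100,000 English words.
WORDS = ["act", "cat", "tac", "dog", "god", "stop", "pots", "tops"]

# Index every word under its signature: the word's letters, sorted.
index = defaultdict(list)
for word in WORDS:
    index["".join(sorted(word))].append(word)

def solve(letters):
    """Return all dictionary words that are anagrams of `letters`."""
    return index.get("".join(sorted(letters.lower())), [])

print(solve("tca"))  # all anagrams of 'tca' in the word list
```

Lookup is a single dict access, so solving stays O(k log k) in the word length regardless of dictionary size.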
anaharid-looping
No description available on PyPI.
anahita
No description available on PyPI.
anaio
ANA.IO - A Python module for ANA f0 file I/O

This is anaio, a Python module to perform file input and output operations with the ANA f0 file format, originally developed by Robert Shine. This module is mostly a wrapper around the slightly modified code of the IDL DLM library by Michiel van Noort. This library in turn borrows code from the old ANA routines.

This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Installation

The easiest and recommended way is to install via pip:

$ pip install anaio

This module can be installed using the standard NumPy distutils. Therefore, simply running python setup.py install will install this module to the default installation location. Running python setup.py will start an interactive installation process.

Usage

Import it as usual:

import anaio

To read a file:

anadata = anaio.fzread(filename)

which will return a dict with the data in anadata['data'] and some meta info in anadata['header'].

To return only either the data or the header, use anaio.getdata() or anaio.getheader() respectively. The latter will not read the data and therefore speeds up the process if you are interested in the header only.

To write a file:

anaio.fzwrite(filename, data)

or use anaio.writeto(), which is an alias for fzwrite().

Version history

20230301, v1.0.0:
- Renamed to anaio to prepare publishing on PyPI.

20220926, v0.5.0:
- Forked from Tim van Werkhoven's PyANA v0.4.3.
- Added support to read ANA headers without the data.
- Added more test cases.

20090422, v0.4.0:
- Fixed dimension problem; ANA and numpy expect dimensions in different order.

20090422, v0.3.3:
- Made errors a little nicer to read and more understandable.
- Added pyana.writeto() wrapper for fzwrite, similar to pyfits.

20090331, v0.3.2:
- Updated segfault fix in anadecrunch(). Illegal read beyond memory allocation can be as much as 4 bytes in the worst case in 32-bit decrunching (although this is rarely used).

20090327, v0.3.1:
- Fixed a segfault error in anadecrunch(). The problem was pre-caching of a few bytes of compressed data; however, the malloc() used for the compressed data did not have those few bytes extra, causing a 1 or 2 byte illegal read. Normally this shouldn't be a problem, but sometimes (like when I needed it) it is.

20090326, v0.3.0:
- Old code had memory leaks; trying Michiel van Noort's improved code from libf0, the IDL DLM library. Hopefully this works better.
- Renamed functions to correspond with the (original) IDL functions: load() -> fzread() and save() -> fzwrite(). Parameters remain the same.

20090218, v0.2.2:
- Added a file-exists check before calling the _pyana.load C routine.

Contributions

Based on Tim van Werkhoven's original PyANA implementation. A wrapper around Michiel van Noort's ANACompress library. Currently maintained by J. Hölken.

License

MIT

Repository / Code / Issue tracker: https://gitlab.gwdg.de/hoelken/pyana
anai-opensource
ANAI, an Automated Machine Learning Library by Revca

About

ANAI is an Automated Machine Learning Python library that works with tabular data. It is intended to save time when performing data analysis. It will assist you with everything right from the beginning, i.e. ingesting data using the inbuilt connectors, preprocessing, feature engineering, model building, model evaluation, model tuning and much more.

Our Goal

Our goal is to democratize Machine Learning and make it accessible to everyone.

Installation

pip install anai-opensource

Available Modelling Techniques

Classification

Available models for classification:

- "lr": Logistic Regression
- "sgd": Stochastic Gradient Descent
- "perc": Perceptron
- "pass": Passive Aggressive Classifier
- "ridg": Ridge Classifier
- "svm": Support Vector Machine
- "knn": K-Nearest Neighbors
- "dt": Decision Trees
- "nb": Naive Bayes
- "rfc": Random Forest Classifier
- "gbc": Gradient Boosting Classifier
- "ada": AdaBoost Classifier
- "bag": Bagging Classifier
- "ext": Extra Trees Classifier
- "lgbm": LightGBM Classifier
- "cat": CatBoost Classifier
- "xgb": XGBoost Classifier
- "ann": Multi Layer Perceptron Classifier
- "poisson": Poisson Classifier
- "huber": Huber Classifier
- "ridge_cv": RidgeCV Classifier
- "encv": ElasticNetCV Classifier
- "lcv": LassoCV Classifier
- "llic": LassoLarsIC Classifier
- "llcv": LassoLarsCV Classifier
- "ransac": RANSAC Classifier
- "ompcv": OrthogonalMatchingPursuitCV Classifier
- "omp": OrthogonalMatchingPursuit Classifier
- "iso": IsotonicRegression Classifier
- "rad": RadiusNeighbors Classifier
- "quantile": QuantileRegression Classifier
- "theil": TheilSenRegressor Classifier
- "lars": Lars Classifier
- "lcv": LarsCV Classifier
- "tweedie": Tweedie Classifier
- "all": All Classifiers

Regression

Available models for regression:

- "lin": Linear Regression
- "sgd": Stochastic Gradient Descent Regressor
- "krr": Kernel Ridge Regression
- "elas": Elastic Net Regression
- "br": Bayesian Ridge Regression
- "svr": Support Vector Regressor
- "knn": K-Nearest Neighbors
- "dt": Decision Trees Regressor
- "rfr": Random Forest Regressor
- "gbr": Gradient Boosted Regressor
- "ada": AdaBoost Regressor
- "bag": Bagging Regressor
- "ext": Extra Trees Regressor
- "lgbm": LightGBM Regressor
- "xgb": XGBoost Regressor
- "cat": CatBoost Regressor
- "ann": Multi-Layer Perceptron Regressor
- "poisson": Poisson Regressor
- "huber": Huber Regressor
- "gamma": Gamma Regressor
- "ridge": RidgeCV Regressor
- "encv": ElasticNetCV Regressor
- "lcv": LassoCV Regressor
- "llic": LassoLarsIC Regressor
- "llcv": LassoLarsCV Regressor
- "ransac": RANSAC Regressor
- "ompcv": OrthogonalMatchingPursuitCV
- "gpr": GaussianProcessRegressor
- "omp": OrthogonalMatchingPursuit
- "llars": LassoLars
- "iso": IsotonicRegression
- "rnr": Radius Neighbors Regressor
- "qr": Quantile Regression
- "theil": TheilSenRegressor
- "all": All Regressors

Usage Example

```python
import anai

ai = anai.run(
    filepath='examples/Folds5x2_pp.xlsx',
    target='PE',
    predictor=['lin'],
)
```

Hyperparameter Tuning

ANAI is powered by Optuna for hyperparameter tuning. Just pass tune=True in the run arguments and it will start tuning the model(s) with Optuna.

Persistence

ANAI's model can be saved as a pickle file. It will save both the model and the scaler to the pickle file.

- Saving: ai.save([<path-to-model.pkl>, <path-to-scaler.pkl>])

A new ANAI object can be loaded as well by specifying the paths of the model and scaler:

- Loading: ai = anai.run(path=[<path-to-model.pkl>, <path-to-scaler.pkl>])

More Examples

You can find more examples/tutorials here.

Documentation

More information about ANAI can be found here.

Contributing

If you have any suggestions or bug reports, please open an issue here. If you want to join the ANAI Team, send us your resume here.

License

APACHE 2.0 License

Contact

E-mail | LinkedIn | Website

Roadmap

ANAI's roadmap
anakin
Add a long description here if you want.
anakin-language-server
anakin-language-server

Yet another Jedi Python language server.

Requirements

- Python >= 3.6
- pygls >= 1.1, <1.2
- Jedi >= 0.19
- pyflakes ~= 2.2
- pycodestyle ~= 2.5
- yapf ~= 0.30

Optional requirements

- mypy

Implemented features

- textDocument/completion
- textDocument/hover
- textDocument/signatureHelp
- textDocument/definition
- textDocument/references
- textDocument/publishDiagnostics
- textDocument/documentSymbol
- textDocument/codeAction (Inline variable)
- textDocument/formatting
- textDocument/rangeFormatting
- textDocument/rename
- textDocument/documentHighlight

Initialization option

venv - path to virtualenv. This option will be passed to Jedi's create_environment.

Also, one can set VIRTUAL_ENV or CONDA_PREFIX before running anakinls so Jedi will find the proper environment. See get_default_environment.

Diagnostics

Diagnostics are published on document open and save.

Diagnostics providers:

- Jedi - see get_syntax_errors.
- pyflakes
- pycodestyle - server restart is needed after changing one of the configuration files.
- mypy - install mypy in the same environment as anakinls and set the mypy_enabled configuration option.

Configuration options

Configuration options must be passed under the anakinls key in the workspace/didChangeConfiguration notification.

Available options:

| Option | Description | Default |
|--------|-------------|---------|
| help_on_hover | Use `help` instead of `infer` for textDocument/hover. | True |
| completion_snippet_first | Tweaks the sortText property so snippet completions appear before plain completions. | False |
| completion_fuzzy | Value of the `fuzzy` parameter for `complete`. | False |
| diagnostic_on_open | Publish diagnostics on textDocument/didOpen. | True |
| diagnostic_on_change | Publish diagnostics on textDocument/didChange. | False |
| diagnostic_on_save | Publish diagnostics on textDocument/didSave. | True |
| pyflakes_errors | Diagnostic severity will be set to Error if the Pyflakes message class name is in this list. See Pyflakes messages. | ['UndefinedName'] |
| pycodestyle_config | In addition to project and user level config, specify a pycodestyle config file. Same as the `--config` option for pycodestyle. | None |
| mypy_enabled | Use mypy to provide diagnostics. | False |
| yapf_style_config | Either a style name or a path to a file that contains formatting style settings. | 'pep8' |
| jedi_settings | Global Jedi settings. E.g. set it to `{"case_insensitive_completion": False}` to turn off case insensitive completion. | {} |

Configuration example

Here is an eglot configuration:

```elisp
(defvar my/lsp-venv nil
  "Name of virtualenv.
Set it in project's dir-locals file.")

(defclass my/eglot-anakinls (eglot-lsp-server) ()
  :documentation "Own eglot server class.")

(cl-defmethod eglot-initialization-options ((_server my/eglot-anakinls))
  "Pass initialization param to anakinls."
  `(:venv ,(when my/lsp-venv
             (expand-file-name (concat "~/.virtualenvs/" my/lsp-venv)))))

;; Add this server to eglot programs to handle python-mode and run `anakinls'
(add-to-list 'eglot-server-programs
             '(python-mode my/eglot-anakinls "anakinls"))

;; Also treat UnusedVariable as error
(setq-default eglot-workspace-configuration
              '((:anakinls :pyflakes_errors ["UndefinedName" "UnusedVariable"])))
```

Installation

pip install anakin-language-server

Development

pip install pre-commit
pre-commit install
anaksetan
Kynanlibs Library

Core library of Naya-Pyro, a Python-based Telegram userbot.

Installation

pip3 install -U py-Ayra

Documentation

See more working plugins on the official repository!

Made with 💕 by Kynan.

License

Ultroid is licensed under GNU Affero General Public License v3 or later.

Credits
anal
anal

Utility to stdout based on template and file.

Requirement

- Python 3

Installation

$ pip install anal
anal-chem
No description available on PyPI.