CHALLENGE: Obtain:
- destination
- arrival time
- flight duration
- layover duration. *Tip: the last segment will not have this information*
- flight number
- aircraft model
# Destination
segmento.find_element_by_xpath('.//div[@class="arrival"]/span[@class="ground-point-name"]').text
# Arrival time
segmento.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
# Flight duration
segmento.find_element_by_xpath('.//span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime')
# Flight number
segmento.find_element_by_xpath('.//span[@class="equipment-airline-number"]').text
# Aircraft model
segmento.find_element_by_xpath('.//span[@class="equipment-airline-material"]').text
# Layover duration
segmento.find_element_by_xpath('.//div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime')
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C4. Scrapeando escalas y tarifas - Script.ipynb
Alejandro-sin/Learning_Notebooks
CLASS: Once we have obtained all the information, we must close the modal/pop-up.
driver.find_element_by_xpath('//div[@class="modal-dialog"]//button[@class="close"]').click()
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C4. Scrapeando escalas y tarifas - Script.ipynb
Alejandro-sin/Learning_Notebooks
Finally, we must obtain the fare information. To do that, we need to click on the flight (anywhere on it).
vuelo.click()
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C4. Scrapeando escalas y tarifas - Script.ipynb
Alejandro-sin/Learning_Notebooks
The price information for each fare is contained in a table. The prices themselves are in the footer, and we can get the fare names from the class of each element.
tarifas = vuelo.find_elements_by_xpath('.//div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]')
precios = []
for tarifa in tarifas:
    nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for')
    moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text
    valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text
    dict_tarifa = {nombre: {'moneda': moneda, 'valor': valor}}
    precios.append(dict_tarifa)
    print(dict_tarifa)
{'LIGHT': {'moneda': 'US$', 'valor': '1282,40'}}
{'PLUS': {'moneda': 'US$', 'valor': '1335,90'}}
{'TOP': {'moneda': 'US$', 'valor': '1773,50'}}
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C4. Scrapeando escalas y tarifas - Script.ipynb
Alejandro-sin/Learning_Notebooks
It will be very useful to build functions that handle extracting the information from each section of the page. So I suggest you build 3 functions, whose skeletons I leave you below. CHALLENGE: Build functions to obtain the layover and fare data. Here are the prototypes:
def obtener_precios(vuelo):
    tarifas = vuelo.find_elements_by_xpath(
        './/div[@class="fares-table-container"]//tfoot//td[contains(@class, "fare-")]')
    precios = []
    for tarifa in tarifas:
        nombre = tarifa.find_element_by_xpath('.//label').get_attribute('for')
        moneda = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="currency-symbol"]').text
        valor = tarifa.find_element_by_xpath('.//span[@class="price"]/span[@class="value"]').text
        dict_tarifa = {nombre: {'moneda': moneda, 'valor': valor}}
        precios.append(dict_tarifa)
    return precios

def obtener_datos_escalas(vuelo):
    segmentos = vuelo.find_elements_by_xpath('//div[@class="segments-graph"]/div[@class="segments-graph-segment"]')
    info_escalas = []
    for segmento in segmentos:
        # Origin
        origen = segmento.find_element_by_xpath(
            './/div[@class="departure"]/span[@class="ground-point-name"]').text
        # Departure time
        dep_time = segmento.find_element_by_xpath(
            './/div[@class="departure"]/time').get_attribute('datetime')
        # Destination
        destino = segmento.find_element_by_xpath(
            './/div[@class="arrival"]/span[@class="ground-point-name"]').text
        # Arrival time
        arr_time = segmento.find_element_by_xpath(
            './/div[@class="arrival"]/time').get_attribute('datetime')
        # Flight duration
        duracion_vuelo = segmento.find_element_by_xpath(
            './/span[@class="duration flight-schedule-duration"]/time').get_attribute('datetime')
        # Flight number
        numero_vuelo = segmento.find_element_by_xpath(
            './/span[@class="equipment-airline-number"]').text
        # Aircraft model
        modelo_avion = segmento.find_element_by_xpath(
            './/span[@class="equipment-airline-material"]').text
        # Layover duration (the last segment has none)
        if segmento != segmentos[-1]:
            duracion_escala = segmento.find_element_by_xpath(
                './/div[@class="stop connection"]//p[@class="stop-wait-time"]//time').get_attribute('datetime')
        else:
            duracion_escala = ''
        # Build a dictionary to store the data
        data_dict = {'origen': origen, 'dep_time': dep_time, 'destino': destino, 'arr_time': arr_time,
                     'duracion_vuelo': duracion_vuelo, 'numero_vuelo': numero_vuelo,
                     'modelo_avion': modelo_avion, 'duracion_escala': duracion_escala}
        info_escalas.append(data_dict)
    return info_escalas

def obtener_tiempos(vuelo):
    # Departure time
    salida = vuelo.find_element_by_xpath('.//div[@class="departure"]/time').get_attribute('datetime')
    # Arrival time
    llegada = vuelo.find_element_by_xpath('.//div[@class="arrival"]/time').get_attribute('datetime')
    # Duration
    duracion = vuelo.find_element_by_xpath('.//span[@class="duration"]/time').get_attribute('datetime')
    tiempos = {'hora_salida': salida, 'hora_llegada': llegada, 'duracion': duracion}
    return tiempos

driver.close()
_____no_output_____
MIT
NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases/Módulo 3_ Scraping con Selenium/M3C4. Scrapeando escalas y tarifas - Script.ipynb
Alejandro-sin/Learning_Notebooks
In this chapter, we study how to work with PDF and Microsoft Word files using Python. PDF and Word documents are binary files, which makes them much more complex than plaintext files. In addition to text, they store lots of font, color, and layout information. If you want your programs to read or write to PDFs or Word documents, you’ll need to do more than simply pass their filenames to open().
!pip install PyPDF2
Collecting PyPDF2 Downloading PyPDF2-1.26.0.tar.gz (77 kB) Building wheels for collected packages: PyPDF2 Building wheel for PyPDF2 (setup.py): started Building wheel for PyPDF2 (setup.py): finished with status 'done' Created wheel for PyPDF2: filename=PyPDF2-1.26.0-py3-none-any.whl size=61087 sha256=21f0c54caa5aa7c4fe4f0b5eaaa4baf06d5d66e2d6a6cbfc6f741ae91ea83389 Stored in directory: c:\users\pgao\appdata\local\pip\cache\wheels\80\1a\24\648467ade3a77ed20f35cfd2badd32134e96dd25ca811e64b3 Successfully built PyPDF2 Installing collected packages: PyPDF2 Successfully installed PyPDF2-1.26.0
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
PDF stands for 'Portable Document Format' and uses the .pdf file extension. Although PDFs support many features, this chapter will focus on the two things you’ll be doing most often with them: reading text content from PDFs and crafting new PDFs from existing documents.

PDFs are actually very hard to work with in Python. While PDF files are great for laying out text in a way that’s easy for people to print and read, they’re not straightforward for software to parse into plain text. As such, 'PyPDF2' might make mistakes when extracting text from a PDF and may even be unable to open some PDFs at all. There isn’t much you can do about this, unfortunately. PyPDF2 may simply be unable to work with some of your particular PDF files.

PyPDF2 does not have a way to extract images, charts, or other media from PDF documents, but it can extract text and return it as a Python string.
import PyPDF2
import os

path = 'C:\\Users\\pgao\\Documents\\PGZ Documents\\Programming Workshop\\PYTHON\\Python Books\\Automate the Boring Stuff with Python\\Datasets and Files'
os.chdir(path)

pdfFileObj = open('meetingminutes.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
print(type(pdfReader))
print('Number of the pages for the current PDF file: ', pdfReader.numPages)

pageObj = pdfReader.getPage(0)  # getting a 'Page' object by calling the getPage() method (here we get the first page)
pageObj.extractText()
<class 'PyPDF2.pdf.PdfFileReader'> Number of the pages for the current PDF file: 19
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
As you see from the example above, text extractions aren't always perfect: The text Charles E. "Chas" Roemer, President from the PDF is absent from the string returned by extractText(), and the spacing is sometimes off. Still, this approximation of the PDF text content may be good enough for your program in many cases. Some PDF documents have an encryption feature that will keep them from being read until whoever is opening the document provides a password. All 'PdfFileReader' objects have an 'isEncrypted' attribute that is 'True' if the PDF is encrypted and 'False' if it isn’t. Any attempt to call a function that reads the file before it has been decrypted with the correct password will result in an error.To read an encrypted PDF, we can call the decrypt() function and pass the password as a string. After you call decrypt() with the correct password, you’ll see that calling getPage() no longer causes an error. If given the wrong password, the decrypt() function will return 0 and getPage() will continue to fail. Note that the decrypt() method decrypts only the 'PdfFileReader' object, not the actual PDF file. After your program terminates, the file on your hard drive remains encrypted. Your program will have to call decrypt() again the next time it is run. Below is an example:
pdfReader = PyPDF2.PdfFileReader(open('encrypted.pdf', 'rb'))
print(pdfReader.isEncrypted)

try:
    pdfReader.getPage(0)
except:
    print("PdfReadError: file has not been decrypted")

pdfReader.decrypt('rosebud')  # the password is rosebud
pageObj = pdfReader.getPage(0)
print(pageObj)
{'/CropBox': [0, 0, 612, 792], '/Parent': IndirectObject(4, 0), '/Type': '/Page', '/Contents': [IndirectObject(946, 0), IndirectObject(947, 0), IndirectObject(948, 0), IndirectObject(949, 0), IndirectObject(950, 0), IndirectObject(951, 0), IndirectObject(952, 0), IndirectObject(953, 0)], '/Resources': {'/ExtGState': {'/GS0': IndirectObject(954, 0)}, '/XObject': {'/Im0': IndirectObject(955, 0)}, '/ColorSpace': {'/CS1': IndirectObject(956, 0), '/CS2': IndirectObject(956, 0), '/CS0': IndirectObject(6, 0)}, '/Font': {'/TT2': IndirectObject(957, 0), '/TT1': IndirectObject(958, 0), '/TT0': IndirectObject(959, 0), '/TT5': IndirectObject(960, 0), '/TT4': IndirectObject(961, 0), '/TT3': IndirectObject(962, 0)}}, '/MediaBox': [0, 0, 612, 792], '/StructParents': 0, '/Rotate': 0}
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Notice that the original package has a bug. If you use it as-is, you may encounter the following error: after decrypting the 'PdfFileReader' object, calling pdfReader.getPage(0) raises 'IndexError: list index out of range'. The cause is a bug in the package's source code. To fix it, go to the location where the 'PyPDF2' library is installed; the relevant code is in the Python script "pdf.py". Follow the instruction below: the line in red needs to be deleted and the line in green must be added. The complete solution to the issue is explained at: https://github.com/mstamy2/PyPDF2/issues/327.
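If you are unsure where the installed copy of "pdf.py" lives on your machine, the package can tell you. This small sketch simply prints the expected location (the path will differ on your system); the actual patch to apply is the one shown in the screenshot below:

```python
import os
import PyPDF2

# PyPDF2.__file__ points at the package's __init__.py; pdf.py sits alongside it.
print(os.path.join(os.path.dirname(PyPDF2.__file__), 'pdf.py'))
```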
from IPython.display import Image Image("ch13_snapshot_1.jpg", width=900, height=800)
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
The counterpart in the package to 'PdfFileReader' objects is 'PdfFileWriter' objects, which can create new PDF files. But 'PyPDF2' cannot write arbitrary text to a PDF like Python can do with plaintext files. Instead, the PDF-writing capabilities are limited to copying pages from other PDFs, rotating pages, overlaying pages, and encrypting files.

'PyPDF2' doesn’t allow you to directly edit a PDF. Instead, you have to create a new PDF and then copy contents over from an existing document. The examples in this section will follow this general approach:
1) Open one or more existing PDFs (the source PDFs) into 'PdfFileReader' objects.
2) Create a new 'PdfFileWriter' object.
3) Copy pages from the 'PdfFileReader' objects into the 'PdfFileWriter' object.
4) Finally, use the 'PdfFileWriter' object to write the output.

Creating a 'PdfFileWriter' object generates only a value that represents a PDF document in Python. It doesn’t create the actual PDF file. For that, you must call the write() method from 'PdfFileWriter’. The write() method takes a regular 'File' object that has been opened in write-binary mode. You can get such a 'File' object by calling Python’s open() function with two arguments: the string of what you want the PDF’s filename to be and 'wb' to indicate the file should be opened in write-binary mode. Now let's start with copying pages. 'PyPDF2' can help us copy pages from one PDF document to another. This allows us to combine multiple PDF files, cut unwanted pages, or reorder pages. Below is an example:
pdf1File = open('meetingminutes.pdf', 'rb')
pdf2File = open('meetingminutes2.pdf', 'rb')
pdf1Reader = PyPDF2.PdfFileReader(pdf1File)
pdf2Reader = PyPDF2.PdfFileReader(pdf2File)
pdfWriter = PyPDF2.PdfFileWriter()  # creating a blank PDF document here

for pageNum in range(pdf1Reader.numPages):  # copy all the pages from the PDF and add them to the 'PdfFileWriter' object
    pageObj = pdf1Reader.getPage(pageNum)
    pdfWriter.addPage(pageObj)

for pageNum in range(pdf2Reader.numPages):  # copy all the pages from the PDF and add them to the 'PdfFileWriter' object
    pageObj = pdf2Reader.getPage(pageNum)
    pdfWriter.addPage(pageObj)

pdfOutputFile = open('combinedminutes.pdf', 'wb')
pdfWriter.write(pdfOutputFile)
pdfOutputFile.close()
pdf1File.close()
pdf2File.close()
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
One cautionary note: 'PyPDF2' cannot insert pages in the middle of a 'PdfFileWriter' object. The addPage() method will only add pages to the end. Also keep in mind that the 'File' object passed to PyPDF2.PdfFileReader() needs to be opened in read-binary mode by passing 'rb' as the second argument to open(). Likewise, the 'File' object passed to the 'PdfFileWriter' object's write() method needs to be opened in write-binary mode with 'wb'.

We now talk about rotating PDF pages. This is very useful if you have a scanned copy of a PDF from someone else and you want to rotate the pages. The pages can be rotated in 90-degree increments with the rotateClockwise() and rotateCounterClockwise() methods. Below is an example. The resulting PDF will have one page, rotated 90 degrees clockwise.
minutesFile = open('meetingminutes.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(minutesFile)
page = pdfReader.getPage(0)
page.rotateClockwise(90)

pdfWriter = PyPDF2.PdfFileWriter()  # creating a blank PDF output file
pdfWriter.addPage(page)  # adding the rotated page

resultPdfFile = open('rotatedPage.pdf', 'wb')
pdfWriter.write(resultPdfFile)
resultPdfFile.close()
minutesFile.close()
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Now let's study overlaying pages. 'PyPDF2' can overlay the contents of one page over another, which is useful for adding a logo, timestamp, or watermark to a page. With Python, it’s easy to add watermarks to multiple files and only to pages your program specifies.

Here in the example below, we make a 'PdfFileReader' object of 'meetingminutes.pdf'. We first call the getPage(0) method to get a 'Page' object for the first page and store this object in 'minutesFirstPage'. We then make a 'PdfFileReader' object for 'watermark.pdf' and call mergePage() on 'minutesFirstPage'. The argument we pass to mergePage() is a 'Page' object for the first page of 'watermark.pdf'.

Now that we’ve called mergePage() on 'minutesFirstPage', 'minutesFirstPage' represents the watermarked first page. We make a 'PdfFileWriter' object and add the watermarked first page. Then we loop through the rest of the pages in 'meetingminutes.pdf' and add them to the 'PdfFileWriter' object. Finally, we open a new PDF file called 'watermarkedCover.pdf' and write the contents of the 'PdfFileWriter' to the new PDF. Our new PDF, called 'watermarkedCover.pdf', has all the contents of 'meetingminutes.pdf' with its first page watermarked.
minutesFile = open('meetingminutes.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(minutesFile)
minutesFirstPage = pdfReader.getPage(0)

pdfWatermarkReader = PyPDF2.PdfFileReader(open('watermark.pdf', 'rb'))
minutesFirstPage.mergePage(pdfWatermarkReader.getPage(0))

pdfWriter = PyPDF2.PdfFileWriter()
pdfWriter.addPage(minutesFirstPage)

for pageNum in range(1, pdfReader.numPages):
    pageObj = pdfReader.getPage(pageNum)
    pdfWriter.addPage(pageObj)

resultPdfFile = open('watermarkedCover.pdf', 'wb')
pdfWriter.write(resultPdfFile)
minutesFile.close()
resultPdfFile.close()
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Lastly, a 'PdfFileWriter' object can also add encryption to a PDF document. Below is an example. The key is to use the encrypt() method. In general, PDFs can have a user password (allowing you to view the PDF) and an owner password (allowing you to set permissions for printing, commenting, extracting text, and other features). The user password and owner password are the first and second arguments to encrypt(), respectively. If only one string argument is passed to encrypt(), it will be used for both passwords.

In this example, we copied the pages of 'meetingminutes.pdf' to a 'PdfFileWriter' object. We encrypted the 'PdfFileWriter' with the password 'swordfish', opened a new PDF called 'encryptedminutes.pdf', and wrote the contents of the 'PdfFileWriter' to the new PDF. Before anyone can view 'encryptedminutes.pdf', they’ll have to enter this password.
pdfFile = open('meetingminutes.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFile)
pdfWriter = PyPDF2.PdfFileWriter()

for pageNum in range(pdfReader.numPages):
    pdfWriter.addPage(pdfReader.getPage(pageNum))

pdfWriter.encrypt('swordfish')  # encrypting with a password
resultPdf = open('encryptedminutes.pdf', 'wb')  # this file now is encrypted with the password 'swordfish'
pdfWriter.write(resultPdf)
resultPdf.close()
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
We now study how to manipulate Microsoft Word documents. This is achieved through the "Python-Docx" package, which needs to be installed first. The full documentation for this package is available at https://python-docx.readthedocs.org/.
!pip install python-docx
Requirement already satisfied: python-docx in c:\programdata\anaconda3\lib\site-packages Requirement already satisfied: lxml>=2.3.2 in c:\programdata\anaconda3\lib\site-packages (from python-docx)
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Although there is a version of Word for OS X, this chapter will focus on Word for Windows. Compared to plaintext, ".docx" files have a lot of structure. This structure is represented by three different data types in 'Python-Docx'. At the highest level, a 'Document' object represents the entire document. The 'Document' object contains a list of 'Paragraph' objects for the paragraphs in the document. Each of these 'Paragraph' objects contains a list of one or more 'Run' objects. For example, the single-sentence paragraph in the next example has four 'Runs':
from IPython.display import Image Image("ch13_snapshot_2.jpg")
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
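The referenced figure cannot be reproduced here. As a stand-in, the hedged sketch below builds a comparable single-sentence paragraph whose formatting changes force python-docx to split it into four 'Run' objects (the filename is only illustrative):

```python
import docx

doc = docx.Document()
p = doc.add_paragraph('A plain paragraph with some ')
p.add_run('bold').bold = True       # a formatting change starts a new Run
p.add_run(' and some ')
p.add_run('italic').italic = True   # another formatting change, another Run
doc.save('runs_demo.docx')

print(len(p.runs), [run.text for run in p.runs])  # four runs
```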
You can think of each run as a block of text that has its own special properties. This is because the text in a (Microsoft) Word document is more than just a string. It has font, size, color, and other styling information associated with it. A 'style' in Word is a collection of these attributes. A 'Run' object is a contiguous run of text with the same 'style'. A new 'Run' object is needed whenever the text 'style' changes.

Now let's read in a Word document and parse each object:
import docx

doc = docx.Document('demo.docx')
print('Number of paragraph objects: ', len(doc.paragraphs))

ob1 = doc.paragraphs[0].text
print(type(ob1))  # string
print(ob1)

ob2 = doc.paragraphs[1].text
print(type(ob2))  # string
print(ob2)

ob3 = doc.paragraphs[1].runs
print(type(ob3))  # list
print(ob3)

print(doc.paragraphs[1].runs[0].text)
print(doc.paragraphs[1].runs[1].text)
print(doc.paragraphs[1].runs[2].text)
print(doc.paragraphs[1].runs[3].text)
print(doc.paragraphs[1].runs[4].text)
A plain paragraph with some bold and some italic
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
If you care only about the text, not the styling information, in the Word document, you can use the user-defined getText() function. It accepts a filename of a '.docx' file and returns a single string value of its text:
def getText(filename):
    doc = docx.Document(filename)
    fullText = []
    for paragraph in doc.paragraphs:
        fullText.append(paragraph.text)
    return '\n'.join(fullText)
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
The getText() function opens the Word document, loops over all the 'Paragraph' objects in the paragraphs list, and appends their text to the 'fullText' list (originally set to be empty). After the loop, the strings in 'fullText' are joined together with newline characters.
print(getText('demo.docx'))
Document Title A plain paragraph with some bold and some italic Heading, level 1 Intense quote first item in unordered list first item in ordered list
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Microsoft Word and other word processors use styles to keep the visual presentation of similar types of text consistent and easy to change. For example, perhaps you want to set body paragraphs in 11-point, Times New Roman, left-justified, ragged-right text. You can create a style with these settings and assign it to all body paragraphs. Then, if you later want to change the presentation of all body paragraphs in the document, you can just change the style, and all those paragraphs will be automatically updated.

For Word documents, there are three types of styles:
1. Paragraph styles can be applied to 'Paragraph' objects.
2. Character styles can be applied to 'Run' objects.
3. Linked styles can be applied to both kinds of objects.

You can give both 'Paragraph' and 'Run' objects styles by setting their 'style' attribute to a string. This string should be the name of a style. If 'style' is set to 'None', then there will be no style associated with the 'Paragraph' or 'Run' object.

The string values for the default Word styles are as follows: 'Normal', 'Heading5', 'ListBullet', 'ListParagraph', 'BodyText', 'Heading6', 'ListBullet2', 'MacroText', 'BodyText2', 'Heading7', 'ListBullet3', 'NoSpacing', 'BodyText3', 'Heading8', 'ListContinue', 'Quote', 'Caption', 'Heading9', 'ListContinue2', 'Subtitle', 'Heading1', 'IntenseQuote', 'ListContinue3', 'TOCHeading', 'Heading2', 'List', 'ListNumber', 'Title', 'Heading3', 'List2', 'ListNumber2', 'Heading4', 'List3', and 'ListNumber3'.

In some early versions of the package, when setting the 'style' attribute, you cannot use spaces in the style name. For example, while the style name may be 'Subtle Emphasis', you should set the 'style' attribute to the string value 'SubtleEmphasis' instead of using the 'Subtle Emphasis' string with a space in between. Including spaces will cause Word to misread the style name and not apply it. Whether this applies depends on which version of the package you are using, so refer to the specific documentation. When using a linked style for a 'Run' object, you will need to add the string 'Char' to the end of its name. For example, to set the 'Quote' linked style for a 'Paragraph' object, you would use "paragraphObj.style = 'Quote'", but for a 'Run' object, you would use "runObj.style = 'Quote Char'".

'Run' objects can be further styled using text attributes. Each attribute can be set to one of three values: 'True' (the attribute is always enabled, no matter what other styles are applied to the run), 'False' (the attribute is always disabled), or 'None' (defaults to whatever the run’s style is set to). Below are some of the text attributes that can be set on 'Run' objects:
from IPython.display import Image Image("ch13_snapshot_3.jpg")
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
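As a quick, hedged illustration of the 'True'/'False'/'None' behaviour just described (the document name and text here are made up for the example):

```python
import docx

doc = docx.Document()
run = doc.add_paragraph('Attribute demo: ').add_run('styled text')
run.bold = True        # True: always enabled, regardless of the run's style
run.italic = False     # False: always disabled
run.underline = None   # None: defer to whatever the run's style says
doc.save('attribute_demo.docx')
```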
For example, to change the styles of demo.docx, the following commands will help us get the styles for the document and change styles based on different 'Paragraph' objects and 'Run' objects. Here in the example below, we use the text and style attributes to easily see what’s in the paragraphs in our document. We can see that it’s simple to divide a paragraph into runs and access each run individually. So we get the first, second, and fourth runs in the second paragraph, style each run, and save the results to a new document.
doc = docx.Document('demo.docx')
print('doc: ', doc.paragraphs[0].text)  # 'Document Title'
print('The style of the paragraph: ', doc.paragraphs[0].style)  # 'Title'

tupleobject = (doc.paragraphs[1].runs[0].text, doc.paragraphs[1].runs[1].text,
               doc.paragraphs[1].runs[2].text, doc.paragraphs[1].runs[3].text)
print(tupleobject)

doc.paragraphs[0].style.name
print(doc.paragraphs[0].style.name)
doc.paragraphs[0].style = 'Body Text'
print(doc.paragraphs[0].style.name)

doc.paragraphs[1].runs[0].style = 'Quote Char'
doc.paragraphs[1].runs[1].underline = True
doc.paragraphs[1].runs[3].underline = True
doc.save('restyled.docx')
Title Body Text
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Now let's study how to write Word documents using Python. To do this, the most important method is docx.Document(), which returns a new, blank Word 'Document' object. In addition, the add_paragraph() document method adds a new paragraph of text to the document and returns a reference to the 'Paragraph' object that was added. When you’re done adding text, you may pass a filename string to the save() document method to save the 'Document' object to a file.

In a similar fashion, calling add_heading() adds a paragraph with one of the heading styles. The arguments to add_heading() are a string of the heading text and an integer from 0 to 4. The integer 0 makes the heading the 'Title' style, which is used for the top of the document. Integers 1 to 4 are for various heading levels, with 1 being the main heading and 4 the lowest subheading. The add_heading() function returns a 'Paragraph' object, saving you the step of extracting it from the 'Document' object separately.
doc = docx.Document()
doc.add_paragraph('Hello world!', 'Title')  # adding a title
paraObj1 = doc.add_paragraph('This is a second paragraph.')
paraObj2 = doc.add_paragraph('This is a yet another paragraph.')
paraObj1.add_run(' This text is being added to the second paragraph.')

doc.add_heading('Header 0', 0)
doc.add_heading('Header 1', 1)
doc.add_heading('Header 2', 2)
doc.add_heading('Header 3', 3)
doc.add_heading('Header 4', 4)

doc.save('multipleParagraphs.docx')
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
To add a line break (rather than starting a whole new paragraph), you can call the add_break() method on the 'Run' object you want to have the break appear after. To create a page break, you can use the 'docx.enum.text.WD_BREAK.PAGE' argument in the add_break() method. Details can be found here: https://stackoverflow.com/questions/37608315/python-attributeerror-module-object-has-no-attribute-wd-break

Last but not least, let's talk about inserting pictures. 'Document' objects have an add_picture() method that will let you add an image to the end of the document. Say you have an image file in the current working directory. You can add the picture file (say a PNG or JPG file) to the end of your document with a width and height (Word can use both imperial and metric units).

The example below creates a two-page Word document with 'This is on the first page!' on the first page and 'This is on the second page!' on the second. Finally, we add in a picture with a width of one inch and a height of 4 centimeters. Even though there was still plenty of space on the first page after the text 'This is on the first page!', we forced the next paragraph to begin on a new page by inserting a page break after the first run of the first paragraph:
doc = docx.Document()
doc.add_paragraph('This is on the first page!')
doc.paragraphs[0].runs[0].add_break(docx.enum.text.WD_BREAK.PAGE)  # adding a page break
doc.add_paragraph('This is on the second page!')
doc.add_picture("ch13_snapshot_3.jpg", width=docx.shared.Inches(1), height=docx.shared.Cm(4))  # width 1 inch and height 4 cm
doc.save('twoPage.docx')
_____no_output_____
MIT
Automate the Boring Stuff with Python Ch13.ipynb
pgaods/Automate-the-Boring-Stuff-with-Python
Overview:

In order to deal with accessing and storing the mounds of data associated with the matrix project, I have written a script called matrix_manager. The main workhorse of this script is a custom class called 'Database' that uses the 'shelve' package (https://docs.python.org/3.4/library/shelve.html). There is also an accessory function called filter_data in this script that I use a lot.

The purpose of the 'Database' class is to store relevant information related to the project (locations of coolers, location of analysis, location of dot calls), as well as to provide a set of methods to easily access the various types of features we extract from Hi-C maps (scalings, eigenvectors, pileups, etc.). This notebook is meant to be a tutorial to show you how to create and use these Database objects. All my other scripts rely on this class to access the data that I'm analyzing.
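For readers who have not used 'shelve' before: it is a standard-library module that behaves like a dictionary persisted to disk, which is what lets the Database class keep its tables between sessions. A generic sketch, with a made-up filename and keys rather than the ones matrix_manager actually uses:

```python
import shelve

# Write: a shelf behaves like a dict whose contents survive on disk.
with shelve.open('example_shelf') as shelf:
    shelf['metadata'] = {'lib_name': 'some-library', 'celltype': 'ESC'}

# Read the stored value back, e.g. in a later session.
with shelve.open('example_shelf') as shelf:
    print(shelf['metadata'])
```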
import matrix_manager as mm
import shelve

# Imports used by the cells below
import imp
import os
import numpy as np
import pandas as pd
from collections import defaultdict

%matplotlib notebook
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Code to create the matrix database

One can create an instance of the Database object by giving it the path to where the database files are or will be stored. Since the database is being made for the first time, all its attributes are set to either None, '' or [ ].
imp.reload(mm)

db_path = '/net/levsha/share/sameer/U54/matrix_shared/sameer/metadata/U54_matrix_info'
db = mm.Database(db_path)
print(db.metadata, db.keys, db.analysis_path, db.cooler_paths, db.dot_paths)
None []
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Now I will create the database for the matrix project. For this I will need to feed the Database object 4 things:
1) A list of paths that point to where the coolers are located.
2) The path to the base directory where all the analysis will be stored.
3) A list of paths for the dot calls.
4) A DataFrame that contains the metadata about the library. This table must contain 6 columns: lib_name, celltype, xlink, enzyme, cycle, seq.
   a) The 'lib_name' column contains the name of the library up to the first '.' in its name. So U54-ESC-DSG-DdeI-20161014-R1-T1__hg38.hg38.mapq_30.1000.mcool becomes U54-ESC-DSG-DdeI-20161014-R1-T1__hg38
   b) 'celltype', 'xlink' and 'enzyme' should be obvious.
   c) The 'cycle' column represents whether the library is synchronized or not. Most libraries will be classified as NS (non-synchronous) but the HelaS3 libraries will be split into NS, G1 and M.
   d) 'seq' refers to whether the library is deeply sequenced or not. This column can take 3 values: 'deep', 'control' or '-'. Libraries labelled 'deep' are deeply sequenced, while libraries called 'control' are not deeply sequenced but have the same ('celltype','xlink','enzyme') combination as a deep library. Libraries called '-' do not have a deep equivalent.
cooler_paths = ['/net/levsha/share/lab/U54/2019_mapping_hg38/U54_deep/cooler_library_group/',
                '/net/levsha/share/lab/U54/2019_mapping_hg38/U54_matrix/cooler_library/']
analysis_path = '/net/levsha/share/sameer/U54/hic_matrix/'
dot_paths = ['/net/levsha/share/lab/U54/2019_mapping_hg38/U54_matrix/snakedots/',
             '/net/levsha/share/lab/U54/2019_mapping_hg38/U54_deep/snakedots/']

## The details of this cell are not important. I'm just creating the metadata table from the cooler names.
df_dict = defaultdict(list)
for path in cooler_paths:
    for file in os.listdir(path):
        lib_name = file.split('.')[0]
        df_dict['lib_name'].append(lib_name)

        if '-END' in lib_name:
            df_dict['celltype'].append('END')
        elif ('-ESC' in lib_name) or ('H1ESC' in lib_name):
            df_dict['celltype'].append('ESC')
        elif '-HFF' in lib_name:
            df_dict['celltype'].append('HFF')
        else:
            df_dict['celltype'].append('HelaS3')

        if '-DSG-' in lib_name:
            df_dict['xlink'].append('DSG')
        elif '-EGS-' in lib_name:
            df_dict['xlink'].append('EGS')
        else:
            df_dict['xlink'].append('FA')

        if '-MNase-' in lib_name:
            df_dict['enzyme'].append('MNase')
        elif '-DdeI-DpnII-' in lib_name:
            df_dict['enzyme'].append('double')
        elif '-DdeI-' in lib_name:
            df_dict['enzyme'].append('DdeI')
        elif '-DpnII-' in lib_name:
            df_dict['enzyme'].append('DpnII')
        else:
            df_dict['enzyme'].append('HindIII')

        if 'deep' in path:
            df_dict['seq'].append('deep')
        else:
            df_dict['seq'].append('-')

        if '-G1-' in lib_name:
            df_dict['cycle'].append('G1')
        elif '-M-' in lib_name:
            df_dict['cycle'].append('M')
        else:
            df_dict['cycle'].append('NS')

df = pd.DataFrame(df_dict).sort_values(['celltype','xlink','enzyme','cycle','seq']).reset_index(drop=True)
df = df.drop([4, 5, 21, 22, 39, 40, 42])
metadata = df[['lib_name','seq','celltype','xlink','enzyme','cycle']].reset_index(drop=True)

deep_indices = metadata.loc[(metadata['seq']=='deep') & (metadata['enzyme'] != 'double')].index.values
metadata.loc[deep_indices-1, 'seq'] = 'control'
metadata
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Once we create the dataset, we see that most of the attributes of the object are now filled. Note: If you try to use the create_dataset method once you have already created the shelve object, it will raise an error.
db.create_dataset(metadata, cooler_paths, analysis_path, dot_paths)
display(db.metadata)
print(db.keys, db.analysis_path, db.cooler_paths, db.dot_paths)
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Accessing and modifying an already existing database

Now that we've created the database, you can access it by initializing the object with the right database_path.
imp.reload(mm)
db = mm.Database(db_path)
display(db.metadata)

# I can also alternatively access the metadata using the get_tables() method.
display(db.get_tables())
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Adding to the database

For each feature of Hi-C (pileups for example), I like to create various metrics that quantify that feature (dot enrichment score for example) and store these away permanently. I can do this by using the add_table method. The add_table method takes in a DataFrame. However, this DataFrame __must__ have a column named 'lib_name' that has identical entries to the 'lib_name' column in db.metadata.
df = db.get_tables()
df = df[['lib_name']].copy()
df.loc[:, 'dummy'] = 1
df

db.add_table('dummy', df)
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Now even if I reinitialize the object, it will retrieve the 'dummy' table in addition to the metadata.
db = mm.Database(db_path)
print(db.keys)

# I can give this function any list of keys that I know the database contains.
# It will append these tables to the metadata table and return it.
db.get_tables('dummy')
['dummy']
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Modifying the database

Modifying an existing table is done using the modify_table() method.
df['dummy'] = np.nan
db.modify_table('dummy', df)
db.get_tables('dummy')
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Removing from the database

Removing an existing table is done using the remove_table() method.
db.remove_table('dummy')
db.keys
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Accessing coolers from the database

I've created this database to allow easy access to the various data files associated with the matrix project. I've created methods for retrieving coolers, scalings, eigenvectors, pileups and insulation tracks. Here I will show you how to access cooler files. Storing the cooler objects in the dataframe allows me to easily iterate through the dataframe and apply my operation sequentially.
table = db.get_tables()
table = db.get_coolers(table, res=100000)
table
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
You may be wondering why I chose to feed the metadata table to the get_coolers() method, when the database object already has access to the metadata. The reason for this is that I can now chain several get_coolers() methods together as shown below. I use this methodology regularly for analysis that requires multiple types of data as input. For example, for making saddleplots, I would need coolers, expected curves and eigenvectors. Using this, I can easily pipe the result of get_coolers() into the get_scalings() method and further pipe the output of that into the get_eigendecomps() method. This allows me to access and keep track of all the required data to create saddleplots for the entire matrix project in one shot.
table = db.get_tables()
table = db.get_coolers(table, res=100000)
display(table.head())

table = db.get_coolers(table, res=1000)
display(table.head())
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
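Building on the chained get_coolers() calls above, a sketch of the saddleplot-style pipeline described earlier might look like the following. The get_scalings() and get_eigendecomps() calls are assumed here to take and return a table just like get_coolers() does; check their actual signatures in matrix_manager before relying on this.

```python
# Hypothetical chaining sketch: each step is assumed to accept the table
# produced by the previous step and return it with extra columns attached.
table = db.get_tables()
table = db.get_coolers(table, res=100000)   # coolers, as demonstrated above
table = db.get_scalings(table)              # assumed signature: expected/P(s) curves
table = db.get_eigendecomps(table)          # assumed signature: eigenvectors
table.head()
```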
I've tried to make the code as flexible as possible but there are some bottlenecks. For example, P(s) curves are expected to be stored in hdf5 format because it allows me to store P(s) as well as average trans interactions in the same location. Similarly, eigenvectors and eigenvalues are stored together in an hdf5 file. Pileups are expected to be stored in the .npy format and insulation tracks are .txt files with '\t' separation.

My various notebooks should show you how I use this database class to do all the analysis I've done. The last function that will be used often is the filter_data function. This is **NOT** a method of the Database class. It is used to filter the table using values of the metadata. For example, I will show you how I filter for libraries that are either 'HFF' or 'ESC' but only 'DSG'.
mm.filter_data(table, filter_dict={'celltype':['ESC','HFF'],'xlink':'DSG'})
_____no_output_____
MIT
sameer/database_construction_(README).ipynb
dekkerlab/matrix_shared
Self join

Edinburgh Buses

[Details of the database](https://sqlzoo.net/wiki/Edinburgh_Buses.)

Looking at the data
```
stops(id, name)
route(num, company, pos, stop)
```
library(tidyverse)
library(DBI)
library(getPass)

drv <- switch(Sys.info()['sysname'],
              Windows="PostgreSQL Unicode(x64)",
              Darwin="/usr/local/lib/psqlodbcw.so",
              Linux="PostgreSQL")
con <- dbConnect(
    odbc::odbc(),
    driver = drv,
    Server = "localhost",
    Database = "sqlzoo",
    UID = "postgres",
    PWD = getPass("Password?"),
    Port = 5432
)
options(repr.matrix.max.rows=20)
-- Attaching packages --------------------------------------- tidyverse 1.3.0 -- v ggplot2 3.3.0 v purrr  0.3.4 v tibble  3.0.1 v dplyr  0.8.5 v tidyr  1.0.2 v stringr 1.4.0 v readr  1.3.1 v forcats 0.5.0 -- Conflicts ------------------------------------------ tidyverse_conflicts() -- x dplyr::filter() masks stats::filter() x dplyr::lag() masks stats::lag()
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
1. How many **stops** are in the database?
stops <- dbReadTable(con, 'stops')
route <- dbReadTable(con, 'route')

stops %>% tally
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
2.Find the **id** value for the stop 'Craiglockhart'
stops %>%
    filter(name=='Craiglockhart') %>%
    select(id)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
3.Give the **id** and the **name** for the **stops** on the '4' 'LRT' service.
stops %>%
    inner_join(route, by=c(id="stop")) %>%
    filter(num=='4' & company=='LRT') %>%
    select(id, name)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
4. Routes and stops

The query shown gives the number of routes that visit either London Road (149) or Craiglockhart (53). Run the query and notice the two services that link these stops have a count of 2. Add a HAVING clause to restrict the output to these two routes.
route %>%
    filter(stop==149 | stop==53) %>%
    group_by(company, num) %>%
    summarise(n_route=n()) %>%
    filter(n_route==2)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
5.Execute the self join shown and observe that b.stop gives all the places you can get to from Craiglockhart, without changing routes. Change the query so that it shows the services from Craiglockhart to London Road.
route %>%
    inner_join(route, by=c(company="company", num="num")) %>%
    filter(stop.x==53 & stop.y==149) %>%
    select(company, num, stop.x, stop.y)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
6.The query shown is similar to the previous one, however by joining two copies of the **stops** table we can refer to **stops** by **name** rather than by number. Change the query so that the services between 'Craiglockhart' and 'London Road' are shown. If you are tired of these places try 'Fairmilehead' against 'Tollcross'
route %>%
    inner_join(stops, by=c(stop="id")) %>%
    inner_join(route %>% inner_join(stops, by=c(stop="id")),
               by=c(company="company", num="num")) %>%
    filter(name.x=='Craiglockhart' & name.y=='London Road') %>%
    select(company, num, name.x, name.y)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
7. [Using a self join](https://sqlzoo.net/wiki/Using_a_self_join)

Give a list of all the services which connect stops 115 and 137 ('Haymarket' and 'Leith')
route %>%
    inner_join(route, by=c(company="company", num="num")) %>%
    filter(stop.x==115 & stop.y==137) %>%
    distinct(company, num)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
8.Give a list of the services which connect the stops 'Craiglockhart' and 'Tollcross'
route %>%
    inner_join(stops, by=c(stop="id")) %>%
    inner_join(route %>% inner_join(stops, by=c(stop="id")),
               by=c(company="company", num="num")) %>%
    filter(name.x=='Craiglockhart' & name.y=='Tollcross') %>%
    distinct(company, num)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
9.Give a distinct list of the **stops** which may be reached from 'Craiglockhart' by taking one bus, including 'Craiglockhart' itself, offered by the LRT company. Include the company and bus no. of the relevant services.
route %>%
    inner_join(stops, by=c(stop="id")) %>%
    inner_join(route %>% inner_join(stops, by=c(stop="id")),
               by=c(company="company", num="num")) %>%
    filter(name.x=='Craiglockhart' & company=='LRT') %>%
    distinct(name.y, company, num)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
10. Find the routes involving two buses that can go from **Craiglockhart** to **Lochend**. Show the bus no. and company for the first bus, the name of the stop for the transfer, and the bus no. and company for the second bus.

> _Hint_
> Self-join twice to find buses that visit Craiglockhart and Lochend, then join those on matching stops.
bus1 <- route %>%
    inner_join(stops, by=c(stop="id")) %>%
    inner_join(route %>% inner_join(stops, by=c(stop="id")),
               by=c(company="company", num="num")) %>%
    filter(name.x=='Craiglockhart')

bus2 <- route %>%
    inner_join(stops, by=c(stop="id")) %>%
    inner_join(route %>% inner_join(stops, by=c(stop="id")),
               by=c(company="company", num="num")) %>%
    filter(name.y=='Lochend')

bus1 %>%
    inner_join(bus2, by=c(stop.y="stop.x")) %>%
    select(num.x, company.x, name.y.x, num.y, company.y) %>%
    `names<-`(c('num1', 'company1', 'transfer', 'num2', 'company2'))

dbDisconnect(con)
_____no_output_____
MIT
R/09 Self join.ipynb
madlogos/sqlzoo
Tarea 4 (Assignment 4)

Based on the methods seen in class, solve the following two questions.

(A) Integrals
* $\int_{0}^{1}x^{-1/2}\,\text{d}x$
* $\int_{0}^{\infty}e^{-x}\ln{x}\,\text{d}x$
* $\int_{0}^{\infty}\frac{\sin{x}}{x}\,\text{d}x$
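For reference when checking the numerical results below, these three integrals have known closed-form values (the second equals minus the Euler-Mascheroni constant $\gamma$):

$$\int_{0}^{1}x^{-1/2}\,\text{d}x = \Big[2\sqrt{x}\Big]_{0}^{1} = 2, \qquad \int_{0}^{\infty}e^{-x}\ln{x}\,\text{d}x = -\gamma \approx -0.5772, \qquad \int_{0}^{\infty}\frac{\sin{x}}{x}\,\text{d}x = \frac{\pi}{2} \approx 1.5708.$$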
# Imports needed by this cell
import math
import numpy as np
from scipy.integrate import quad
from scipy import integrate

def f(x):
    return x**(-0.5)

n = 1000000

def Integrando1(f):
    x, y = np.linspace(0, 1, num=n + 1, retstep=True)
    return (5/4)*y*f(x[0] + f(x[-1])) + y*np.sum(f(x[1:-1]))

Integrando1(f)

def f(x):
    return math.exp(-x)

def trapecio2(f, n, a, b):
    h = (b - a) / float(n)
    integrando = 0.5 * h * (f(a) + f(b))
    for i in range(1, int(n)):
        integrando = integrando + h * f(a + i * h)
    return integrando

a = 0
b = 10
n = 100
while abs(trapecio2(f, n, a, b) - trapecio2(f, n * 4, a * 2, b * 2)) > 1e-6:
    n *= 4
    a *= 2
    b *= 2
trapecio2(f, n, a, b)

def Integrando2(x):
    funcion = np.exp(-x)
    return funcion

solucion2 = quad(Integrando2, 0, np.inf)
solucion2

Integrando3 = integrate.quad(lambda x: (np.sin(x))/x, 0, np.inf)[0]
print("valor exacto de la integral 3:", Integrando3)

n = 100
x = np.linspace(0.000001, n, 1000001)
f = []
for i in range(len(x)):
    f.append(np.sin(x[i])/x[i])
f = np.array(f)

def integrate(in_x, ft_f) -> float:
    calculo = 0
    for i in range(len(x) - 1):
        calculo = calculo + ((ft_f[i+1]) + (ft_f[i]))*abs(in_x[i+1] - in_x[i])/2
    return calculo

integral_3 = integrate(x, f)
print(f" Integrando 3 {integral_3}")
Integrando 3 1.5622244668962069
MIT
soluciones/ce.rueda12/tarea4/solucion.ipynb
SamuelCanas/FISI2028-202120
(B) Fourier

Compute the fast Fourier transform of the function from **Tarea 3 (D)** on the interval $[0,4]$ (maximum $k$ of $2\pi n/L$ for $n=25$). Fit the Fourier transform to the data from **Tarea 3** using the exact regression method from **Tarea 3 (C)** and compare with the previous result. For both exercises, interpolate and plot to compare.
# Imports needed by this cell
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import integrate

df = pd.read_pickle(r"C:\Users\Camilo Rueda\Downloads\ex1.gz")
sns.scatterplot(x='x', y='y', data=df)
plt.show()
df

x = df["x"]
y = df["y"]
lx = []
ly = []
for i in range(len(x)):
    if x[i] <= 1.5:
        lx.append(x[i])
        ly.append(y[i])
x = np.array(lx)
y = np.array(ly)

def f(p, x):
    return (p[0])/((x - p[1])**2 + p[2])**p[3]

def L_ajuste(p, x, y):
    deltaY = f(p, x) - y
    return np.dot(deltaY, deltaY)/len(y)

Nf = 25

def a_j(j):
    global x, y
    k_j = 2*np.pi*j/4
    n_y = y*np.cos(k_j*x)
    return integrate.simpson(n_y, x)

def b_j(j):
    global x, y
    k_j = 2*np.pi*j/4
    n_y = y*np.sin(k_j*x)
    return integrate.simpson(n_y, x)

A_j = np.array([a_j(j) for j in range(Nf)])
B_j = np.array([b_j(j) for j in range(Nf)])

x_tilde = np.linspace(0, 4, 10000)
k_j = np.array([2*np.pi*j/4 for j in range(Nf)])
y_tilde = np.sum([(A_j[j]*np.cos(k_j[j]*x_tilde) + B_j[j]*np.sin(k_j[j]*x_tilde)) for j in range(Nf)], axis=0)

plt.plot(x, y)
plt.plot(x_tilde, y_tilde)
_____no_output_____
MIT
soluciones/ce.rueda12/tarea4/solucion.ipynb
SamuelCanas/FISI2028-202120
Cyber Literacy in the World of Cyberinfrastructure

Here you will learn about Cyber Literacy for GIScience.
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience

# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'

from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout

import getpass  # This library allows us to get the username (User agent string)

# import package for hourofci project
import sys
sys.path.append('../../supplementary')  # relative path (may change depending on the location of the lesson notebook)
import hourofci

# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<style>
.output_prompt{opacity:0;}
</style>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
_____no_output_____
BSD-3-Clause
gateway-lesson/gateway/gateway-6.ipynb
mohsen-gis/test2
ReminderContinue with the lessonBy continuing with this lesson you are granting your permission to take part in this research study for the Hour of Cyberinfrastructure: Developing Cyber Literacy for GIScience project. In this study, you will be learning about cyberinfrastructure and related concepts using a web-based platform that will take approximately one hour per lesson. Participation in this study is voluntary.Participants in this research must be 18 years or older. If you are under the age of 18 then please exit this webpage or navigate to another website such as the Hour of Code at https://hourofcode.com, which is designed for K-12 students.If you are not interested in participating please exit the browser or navigate to this website: http://www.umn.edu. Your participation is voluntary and you are free to stop the lesson at any time.For the full description please navigate to this website: Gateway Lesson Research Study Permission. What is in the World of Cyberinfrastructure?To become a user of cyberinfrastructure to solve geospatial problems you must first know what it is all about.You need to develop 'Cyber Literacy,' but what does that mean? Cyber Literacy for GIScience> The ability to understand and use established and emerging technologies to transform all forms and magnitudes of geospatial data into information for interdisciplinary problem solving. Literacy and the Three R's In the 18th and 19th centuries, general education was framed around gaining literacy in the Three R’s:1. Reading2. wRiting3. Reckoning (or, aRithmetic) Here, "literacy" meant the ability to decode and comprehend written language at a rudimentary level. Literacies Literacies outline essential abilities and foundational knowledge. In the 21st century we recognize many new literacies ... * Financial literacy * Health literacy * EcoliteracyAnd now * Cyber literacy Cyber Literacy Basically ... Cyber Literacy for GIScience helps us make sense of the data-rich world using geospatial technologies and cyberinfrastructure. Cyber Literacy: Breaking it down> “**the ability to understand and use > established and emerging technologies**> to transform all forms and magnitudes> of geospatial data into information> for interdisciplinary problem solving.” You mean like Jupyter Notebooks?! Like these lessons?! Yes! You have learned how to use Jupyter Notebooks and are currently using Cyberinfrastructure. You are using a National Science and Engineering Cloud resource called **'Jetstream'** led by the Indiana University Pervasive Technology Institute (PTI). Click hear to learn more about Jetstream including how you can use Jetstream for free. Cyber Literacy: Breaking it down> “the ability to understand and use > established and emerging technologies> **to transform all forms and magnitudes> of geospatial data into information**> for interdisciplinary problem solving.” You mean like mapping Covid-19 using Python?! Yes! You have learned how to use Python to transform geospatial data into a useful map. Cyber Literacy: Breaking it down> “the ability to understand and use > established and emerging technologies> to transform all forms and magnitudes> of geospatial data into information> **for interdisciplinary problem solving.**” You mean like combining Covid-19 cases (health science) and county geometry using geospatial technologies (geographic information science) and cyberinfrastructure (computational science)? Yes! Can you write? Are you a poet?Cyber Literacy is NOT about being a computer genius or a programming wizard. 
Most people have learned basic literacy--the ability to read and write--however most people are not poets or experts in modern Nepali literature. Similarly, many people can learn basic cyber literacy while not being a programming wizard or expert in high-performance computing resources.You are already on your way to learning cyber literacy. So let's take a closer look at the eight core areas of cyber literacy for GIScience. Eight core areas ![Eight core areas of cyber literacy for GIScience](supplementary/cyberliteracyareas.png) Here are the eight core areas of cyber literacy for GIScience. Eight core areas ![Eight core areas of cyber literacy for GIScience](supplementary/cyberliteracyareas.png) The left side represents three key knowledge areas in GIScience: 1. **Spatial Modeling and Analytics** 2. **Geospatial Data** 3. **Spatial Thinking** Eight core areas ![Eight core areas of cyber literacy for GIScience](supplementary/cyberliteracyareas.png) The right side represents three key knowledge areas in computational science: 1. **Parallel Computing** 2. **Big Data** 3. **Computational Thinking** Eight core areas ![Eight core areas of cyber literacy for GIScience](supplementary/cyberliteracyareas.png) The top center represents a knowledge area to help these two disciplines integrate technologically: 1. **Cyberinfrastructure** Eight core areas ![Eight core areas of cyber literacy for GIScience](supplementary/cyberliteracyareas.png) Just as important; the bottom center represents a knowledge area to help these two disciplines integrate on the people and problem solving side: 1. **Interdisciplinary Communication** Let's check inDo you have to be an expert in parallel computing to be cyber literate?
# Multiple choice question using a ToggleButton widget
# This code cell has tags "Init", "Hide", and "5A"
import sys
sys.path.append('../../supplementary')  # relative path (may change depending on the location of the lesson notebook)
import hourofci

widget1 = widgets.ToggleButtons(
    options=['Yes, absolutely!', 'No'],
    description='',
    disabled=False,
    button_style='',  # 'success', 'info', 'warning', 'danger' or ''
    tooltips=['Yes!', 'No way!'],
)

# Show the options.
display(widget1)

def out1():
    print("Go to the next slide to see if you were correct.")

hourofci.SubmitBtn(widget1, out1)
_____no_output_____
BSD-3-Clause
gateway-lesson/gateway/gateway-6.ipynb
mohsen-gis/test2
Nope

No! You do *not* need to be an expert in parallel programming to be cyber literate. Just like basic literacy, you can have a basic understanding of parallel programming and rely on other experts or tools to make use of the technology to advance your own research.

Let's check in

Find all of the core areas of cyber literacy for GIScience. Check all of them. Hold Ctrl to select multiple entries.
# Multiple choice question using a SelectMultiple widget
# This code cell has tags "Init", "Hide", and "5A"
widget2 = widgets.SelectMultiple(
    options=['Interdisciplinary communication', 'The Internet', 'Parallel Computing', 'Geospatial Data',
             'A Shark with a Laser', 'Computational Thinking', 'Cyberinfrastructure',
             'Spatial Modeling and Analytics', 'Cyber Security', 'Spatial Thinking',
             'Hip Po the Hippo', 'Big Data'],
    rows=12,
    description='',
    disabled=False
)

# Show the options.
display(widget2)

def out2():
    print("Go to the next slide to see if you were correct.")

hourofci.SubmitBtn(widget2, out2)
_____no_output_____
BSD-3-Clause
gateway-lesson/gateway/gateway-6.ipynb
mohsen-gis/test2
Core knowledge areas and your next step

What core knowledge area are you most excited about?
widget3 = widgets.RadioButtons(
    options=['Interdisciplinary communication', 'Parallel Computing', 'Geospatial Data',
             'Computational Thinking', 'Cyberinfrastructure', 'Spatial Modeling and Analytics',
             'Spatial Thinking', 'Big Data'],
    description='',
    disabled=False
)

# Show the options.
display(widget3)

def out3():
    print("Click the next slide to see how you can learn more about it!")

hourofci.SubmitBtn(widget3, out3)
_____no_output_____
BSD-3-Clause
gateway-lesson/gateway/gateway-6.ipynb
mohsen-gis/test2
Demo: Probabilistic neural network training for denoising of synthetic 2D data. This notebook demonstrates training a probabilistic CARE model for a 2D denoising task, using provided synthetic training data. Note that training a neural network for actual use should be done on more (representative) data and with more training time. More documentation is available at http://csbdeep.bioimagecomputing.com/doc/.
from __future__ import print_function, unicode_literals, absolute_import, division import numpy as np import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' from tifffile import imread from csbdeep.utils import download_and_extract_zip_file, axes_dict, plot_some, plot_history from csbdeep.utils.tf import limit_gpu_memory from csbdeep.io import load_training_data from csbdeep.models import Config, CARE
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
The TensorFlow backend uses all available GPU memory by default, hence it can be useful to limit it:
# limit_gpu_memory(fraction=1/2)
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
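The commented-out helper above is CSBDeep's own wrapper. If you are running a TensorFlow 2 backend, a minimal sketch of the equivalent idea, enabling on-demand GPU memory growth instead of reserving a fixed fraction, might look like this (plain TensorFlow API, not part of the original notebook; with no GPU present the loop simply does nothing):

```python
import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of grabbing it all up front.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```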
Training data: Download and read the provided training data, using 10% of it as validation data.
download_and_extract_zip_file ( url = 'http://csbdeep.bioimagecomputing.com/example_data/synthetic_disks.zip', targetdir = 'data', ) (X,Y), (X_val,Y_val), axes = load_training_data('data/synthetic_disks/data.npz', validation_split=0.1, verbose=True) c = axes_dict(axes)['C'] n_channel_in, n_channel_out = X.shape[c], Y.shape[c] plt.figure(figsize=(12,5)) plot_some(X_val[:5],Y_val[:5]) plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
CARE model: Before we construct the actual CARE model, we have to define its configuration via a `Config` object, which includes * the parameters of the underlying neural network, * the learning rate, * the number of parameter updates per epoch, * the loss function, and * whether the model is probabilistic or not. The defaults should be sensible in many cases, so a change should only be necessary if the training process fails. For a probabilistic model, we have to explicitly set `probabilistic=True`. --- Important: For this notebook we use a very small number of update steps per epoch for immediate feedback; this number should be increased considerably (e.g. `train_steps_per_epoch=400`) to obtain a well-trained model.
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=30) print(config) vars(config)
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
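Following the note above, a configuration closer to a real training run would simply raise the number of update steps; a minimal sketch using only the keyword arguments already shown in this notebook:

```python
# Same configuration as above, but with many more parameter updates per epoch,
# as recommended for an actual (non-demo) training run.
config_full = Config(axes, n_channel_in, n_channel_out,
                     probabilistic=True, train_steps_per_epoch=400)
```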
We now create a CARE model with the chosen configuration:
model = CARE(config, 'my_model', basedir='models')
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
Training: Training the model will likely take some time. We recommend monitoring the progress with [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard) (example below), which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can help you recognize problems early on. You can start TensorBoard from the current working directory with `tensorboard --logdir=.` and then connect to [http://localhost:6006/](http://localhost:6006/) with your browser. ![](http://csbdeep.bioimagecomputing.com/img/tensorboard_denoising2D_probabilistic.png)
history = model.train(X,Y, validation_data=(X_val,Y_val))
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
Plot final training history (available in TensorBoard during training):
print(sorted(list(history.history.keys()))) plt.figure(figsize=(16,5)) plot_history(history,['loss','val_loss'],['mse','val_mse','mae','val_mae']);
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
Evaluation: Example results for validation images.
plt.figure(figsize=(12,10)) _P = model.keras_model.predict(X_val[:5]) _P_mean = _P[...,:(_P.shape[-1]//2)] _P_scale = _P[...,(_P.shape[-1]//2):] plot_some(X_val[:5],Y_val[:5],_P_mean,_P_scale,pmax=99.5) plt.suptitle('5 example validation patches\n' 'first row: input (source), ' 'second row: target (ground truth), ' 'third row: predicted Laplace mean, ' 'forth row: predicted Laplace scale');
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
Export model to be used with CSBDeep **Fiji** plugins and **KNIME** workflows: See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.
model.export_TF()
_____no_output_____
BSD-3-Clause
examples/denoising2D_probabilistic/1_training.ipynb
uschmidt83/CSBDeep
Imports
import numpy as np from numpy import pi, cos, sin, array from scipy import signal from scipy.linalg import toeplitz, inv import matplotlib.pyplot as plt plt.style.use('dark_background') np.set_printoptions(precision=3, suppress=True) from warnings import filterwarnings filterwarnings('ignore', category=UserWarning)
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
Useful functions
def calculate_error_rms(x, x_est):
    # Error between the transmitted symbols and the equalized estimate.
    # Note: this returns ||error|| / N (a scaled error norm); a textbook RMS
    # would divide by sqrt(N) instead.
    error = x[:len(x_est)] - x_est
    error_rms = np.linalg.norm(error)/len(error)
    return error_rms.round(5)
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
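Note that `calculate_error_rms` divides the error norm by the number of samples, which is a scaled error norm rather than a textbook root-mean-square value. For comparison, a small sketch of the conventional RMS (a hypothetical helper, not used elsewhere in this notebook):

```python
def calculate_true_rms(x, x_est):
    # Conventional RMS error: sqrt of the mean squared error,
    # i.e. ||error|| / sqrt(N) rather than ||error|| / N.
    error = x[:len(x_est)] - x_est
    return np.sqrt(np.mean(np.abs(error)**2)).round(5)
```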
Zero Forcing Equalizer
x = np.random.choice([-1, 1], 40) channel = array([.1, -.1, 0.05, 1, .05]) y = np.convolve(x, channel) _, ax = plt.subplots(1, 2, figsize=(10, 4)) ax[0].stem(x) ax[1].stem(y); # 6 tap equalizer taps = 6 Y = toeplitz(y[taps-1:taps-1+taps], np.flip(y[:taps])) zerof = inv(Y)@x[:taps] zerof x_zerof = np.convolve(y, zerof, 'valid') # x_est is shorter than x because of 'valid' plt.stem(x_zerof); calculate_error_rms(x, x_zerof)
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
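A quick way to see what the zero-forcing taps are doing is to convolve them with the channel: the combined response should be close to a delayed unit impulse. This sanity check is not part of the original flow:

```python
# Combined channel-plus-equalizer impulse response; ideally a single spike of height ~1.
combined = np.convolve(channel, zerof)
print(combined.round(3))
plt.stem(np.abs(combined));
```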
Least Squares Error Equalizer
taps = 6 L = 30 # number of input samples used Y = toeplitz(y[taps-1: taps-1+L ] ,np.flip(y[:taps])) lse = inv(Y.T @ Y)@Y.T @ x[:L] #.T is fine because Y is a real matrix, else use Y.conj().T lse x_lse = np.convolve(y, lse, 'valid') plt.stem(x_lse); calculate_error_rms(x, x_lse)
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
Channel Estimation
L = 30      # number of observed samples used in the least-squares fit
taps = 6    # taps to estimate (one more than the true 5-tap channel; the extra tap should be ~0)
X = toeplitz(x[:L], np.zeros(taps))
h_est = inv(X.T@X)@X.T@y[:L]
h_est
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
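Since the true channel is known in this synthetic setup, we can compare the least-squares estimate against it directly; the extra sixth tap should come out close to zero (a small check added for illustration):

```python
# Compare the least-squares channel estimate with the true taps.
print('true channel :', channel)
print('estimate     :', h_est.round(4))
print('max abs error:', np.max(np.abs(h_est[:len(channel)] - channel)).round(6))
```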
MMSE Equalizer
x_var = 1 noise_var = 1e-4 # SNR = 40 dB channel = array([.1, -.1, .05, .9+0.1j, .05], complex) # L = 5 N = 5 # Taps in equalizer L = len(channel) D = N + L - 4 # Calculate H col = np.zeros(N, complex) col[0] = channel[0] row = np.zeros(N+L-1, complex) row[:L] = channel H = toeplitz(col, row) H.shape # Calculate R_y R_x = x_var * np.eye(N+L-1) R_v = noise_var * np.eye(N) R_y = H @ R_x @ H.conj().T + R_v xD_x = np.zeros(N+L-1) xD_x[D] = x_var C = xD_x @ H.conj().T @ inv(R_y) C corrected_channel = np.convolve(channel , C) corrected_channel plt.stem(channel.real, markerfmt='o', label='real') plt.stem(channel.imag, markerfmt='x', label='imag') plt.legend(); symbols = 2000 x = (np.random.choice([-1, 1], symbols) + 1j*np.random.choice([-1, 1], symbols))*np.sqrt(x_var/2) noise = (np.random.choice([-1, 1], symbols) + 1j*np.random.choice([-1, 1], symbols))*np.sqrt(noise_var/2) y = np.convolve(x, channel, 'full') # This is not the same as signal.lfilter y = y[:len(x)] observations = y + noise
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
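For reference, the design computed above is the standard linear MMSE (Wiener) solution. In the notation of that cell, with $\mathbf{H}$ the channel convolution matrix, $D$ the decision delay, and $\mathbf{e}_D$ the indicator vector implemented by `xD_x`:

$$
\mathbf{R}_y = \mathbf{H}\,\mathbf{R}_x\,\mathbf{H}^H + \mathbf{R}_v, \qquad
\mathbf{C} = \mathbf{e}_D^T\,\mathbf{R}_x\,\mathbf{H}^H\,\mathbf{R}_y^{-1}\;,
$$

which is exactly the line `C = xD_x @ H.conj().T @ inv(R_y)`.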
A quick note about how `np.convolve()` and `signal.lfilter()` are related: 1. `y_full = np.convolve(x, channel, 'full')` has size 2004. 2. `y_same = np.convolve(x, channel, 'same')` has size 2000 and is the same as `y_full[2:-2]` (drops 2 samples at the beginning and 2 at the end). 3. `y_filt = signal.lfilter(channel, 1, x)` has size 2000 and is the same as `y_full[:-4]` (drops 4 samples at the end).
estimation = signal.lfilter(C, 1, observations) error = estimation[D:] - x[:-D] evm_db = 10*np.log10(np.var(error)/x_var) snr_db = 10*np.log10(np.var(y)/noise_var) evm_rms = 100*np.sqrt( np.var(error)/x_var ) print(f'EVM: {evm_db:.2f} dB , {evm_rms:.4f} (% rms) \nSNR: {snr_db:.1f} dB') fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(10,8)) ax1.scatter(observations.real, observations.imag) ax1.grid(linestyle='dashed') ax1.set_title('Observations Y[n]') ax2.scatter(estimation[D:].real, estimation[D:].imag); ax2.grid(linestyle='dashed'); ax2.set_title('Estimate of x with EVM={:.2f}dB'.format(evm_db)); ax3.stem(corrected_channel.real, markerfmt='o', label='real') ax3.stem(corrected_channel.imag, markerfmt='x', label='imag') ax3.legend() ax3.set_title('Corrected channel') w, h = signal.freqz(channel, 1) ax4.plot(w/(2*pi), 20*np.log10(abs(h)), label='Original channel') w, h = signal.freqz(corrected_channel, 1) ax4.plot(w/(2*pi), 20*np.log10(abs(h)), label='Equalized channel'); ax4.set_title('Magnitude Responses') ax4.legend() ax4.grid(linestyle='dashed');
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
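The three relationships listed above are easy to verify numerically with the `x` and `channel` from the MMSE example (a small sanity check, not part of the original notebook):

```python
# Verify the convolve / lfilter relationships stated in the note above.
y_full = np.convolve(x, channel, 'full')   # length 2004
y_same = np.convolve(x, channel, 'same')   # length 2000, centred slice of y_full
y_filt = signal.lfilter(channel, 1, x)     # length 2000, causal: first 2000 samples of y_full
print(np.allclose(y_same, y_full[2:-2]))   # expect True
print(np.allclose(y_filt, y_full[:-4]))    # expect True
```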
Least Mean Squares (LMS)
x_len = 1500 x = np.random.choice([-1, 1], x_len) + 1j*np.random.choice([-1, 1],x_len) channel = array([-.19j, .14, 1+.1j, -.16, .11j+.03]) y = signal.lfilter(channel, 1, x) _, (ax1, ax2) = plt.subplots(2, 1) ax1.stem(x[:80].imag) ax2.stem(y[:80].imag); taps = 13 equalizer = np.zeros(taps, complex) equalizer[taps//2] = 1 # All pass filter, just delays the input channel_delay = np.argmax(channel) equalizer_delay = np.argmax(equalizer) approx_delay = channel_delay + equalizer_delay print(f'Initial approximate delay : {approx_delay} ({channel_delay} + {equalizer_delay})') u = 0.005 # Learning rate Y = np.zeros_like(equalizer, complex) estimate = np.zeros_like(y, complex) error = np.zeros_like(y, complex) # Adaptation loop for n, sample in enumerate(y): Y = np.roll(Y, 1) Y[0] = sample estimate[n] = np.dot(Y, equalizer) if n >= approx_delay+5: training_symbol = x[n-approx_delay] error[n] = training_symbol - estimate[n] equalizer += 2*u*error[n]*Y.conjugate() #print('{}: {} \t {:.2f} {}'.format(n, training_symbol, estimate[n], equalizer[:2])) plt.plot(abs(error)) plt.xlabel('LMS iterations') plt.title('Absolute value of error') plt.grid(); fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4)) ax1.scatter(y[10:].real, y[10:].imag) ax1.grid(linestyle='dashed') ax2.scatter(estimate[200:].real, estimate[200:].imag) ax2.grid(linestyle='dashed');
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
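The adaptation step in the loop above is the standard complex LMS update. With equalizer taps $\mathbf{w}[n]$, input vector $\mathbf{y}[n]$ (the `Y` pipeline), training symbol $d[n]$ and step size $\mu$ (`u` in the code):

$$
e[n] = d[n] - \mathbf{w}[n]^{T}\mathbf{y}[n], \qquad
\mathbf{w}[n+1] = \mathbf{w}[n] + 2\mu\, e[n]\, \mathbf{y}[n]^{*}\;,
$$

which is the `equalizer += 2*u*error[n]*Y.conjugate()` line, i.e. a stochastic gradient step on the instantaneous squared error $|e[n]|^2$.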
Decision feedback equalizers
# QPSK signal x_len = 200 qpsk = np.random.choice([-1, 1], x_len) + 1j*np.random.choice([-1, 1], x_len) channel = array([0.05, 0.1j, 1.0, 0.20, -0.15j, 0.1]) y = signal.lfilter(channel, 1, qpsk) # Feed forward filter # feed_forward_filter = array([0, 0, 0, 1, 0]) # Pass through feed_forward_filter = array([ 0.004, 0.011j, -0.06, -0.1j, 1]) # From MMSE, see next section feed_forward = signal.lfilter(feed_forward_filter, 1, y) # Decision feedback loop post_cursors = 3 feedback_coeffs = -channel[-post_cursors:] # Negative of post-cursors fb_pipe = np.zeros_like(feedback_coeffs, complex) estimate = np.zeros_like(y, complex) last_estimate = 0 for n, ff_out in enumerate(feed_forward): decision = np.sign(last_estimate.real) + 1j*np.sign(last_estimate.imag) fb_pipe = np.roll(fb_pipe, 1) fb_pipe[0] = decision last_estimate = ff_out + np.dot(fb_pipe, feedback_coeffs) estimate[n] = last_estimate # EVM ideal = np.sign(y[5:].real) + 1j*np.sign(y[5:].imag) error = y[5:] - ideal evm_observation = 10*np.log10(error.var()/ideal.var()) print('EVM of observation: {:.2f} dB'.format(evm_observation)) ideal = np.sign( estimate[5:].real) + 1j*np.sign(estimate[5:].imag) error = estimate[5:] - ideal evm_estimate = 10*np.log10(error.var()/ideal.var()) print('EVM of estimate: {:.2f} dB'.format(evm_estimate)) _, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4)) ax1.scatter(y[5:].real, y[5:].imag) ax1.set_title('Observation y[n]') ax1.grid() ax2.scatter(estimate[10:].real, estimate[10:].imag) ax2.set_title('Estimate of x[n]') ax2.grid();
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
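To make the feedback structure explicit: the feedback taps are the negated post-cursor taps of the channel, applied to past decisions $\hat d[n-m]$. Written out for this particular channel (6 taps $h_0,\dots,h_5$ with the main tap at $h_2$), the estimate formed in the loop is, schematically,

$$
\hat{x}[n] \;=\; \underbrace{\sum_{k} f_k\, y[n-k]}_{\text{feed-forward filter}} \;-\; \sum_{m=1}^{3} h_{2+m}\,\hat d[n-m]\;,
$$

so whenever the past decisions are correct, the post-cursor ISI is cancelled exactly.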
MMSE Optimal Coefficient Calculation
x_var = 1 noise_var = 0 # Noise-free channel = array([0.05, 0.1j, 1]) # After removing post-cursors (using feedback loop) L = len(channel) # number of taps in channel N = 5 # taps in feed forward FIR D = N + L - 2 # Calculate H col = np.zeros(N, complex) col[0] = channel[0] row = np.zeros(N+L-1, complex) row[:L] = channel H = toeplitz(col, row) H.shape # Calculate covariance matrices R_x = x_var*np.eye(N+L-1) R_v = noise_var*np.eye(N) R_y = H @ R_x @ H.conj().T + R_v xD_x = np.zeros(N+L-1) xD_x[D] = x_var equalizer = xD_x @ H.conj().T @ inv(R_y) equalizer
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
Combine LMS and Decision Feedback
x_len = 1000 qpsk = np.random.choice([1, -1], x_len) + 1j*np.random.choice([1, -1], x_len) channel = array([0.05, 0.1j, 1.0, 0.2, -0.15j, 0.1]) y = signal.lfilter(channel, 1, qpsk) # Initialize feed forward equalizer equalizer = array([ 0.004, 0.0109j, -0.06, -0.1j, 1.0]) # MMSE (see prev section) N = len(equalizer) channel_delay = np.argmax(channel) equalizer_delay = np.argmax(equalizer) total_delay = channel_delay + equalizer_delay Y = np.zeros_like(equalizer, complex) equalizer_output = np.zeros_like(y, complex) # Adaptive loop post_cursors = 3 fb_coeffs = -channel[-post_cursors:] fb_pipe = np.zeros_like(fb_coeffs, complex) error = np.zeros_like(y, complex) estimate = np.zeros_like(y, complex) u = 0.005 # LMS learning rate last_estimate = 0 for n, obs in enumerate(y): Y = np.roll(Y, 1) Y[0] = obs equalizer_output[n] = np.dot(Y, equalizer) decision = np.sign(last_estimate.real) + 1j*np.sign(last_estimate.imag) fb_pipe = np.roll(fb_pipe, 1) fb_pipe[0] = decision estimate[n] = equalizer_output[n] + np.dot(fb_coeffs, fb_pipe) last_estimate = estimate[n] if n > total_delay : training_symbol = qpsk[n-total_delay] error[n] = training_symbol - estimate[n] equalizer += 2*u*error[n]*Y.conj() fb_coeffs += 2*u*error[n]*fb_pipe.conj() plt.plot(abs(error)); # EVM ideal = np.sign(y[5:].real) + 1j*np.sign(y[5:].imag) y_error = y[5:] - ideal evm_observation = 10*np.log10(y_error.var()/ideal.var()) print('EVM of observation: {:.2f} dB'.format(evm_observation)) # Check EVM after error has gone down ideal = np.sign( estimate[600:].real) + 1j*np.sign(estimate[600:].imag) estimate_error = estimate[600:] - ideal evm_estimate = 10*np.log10(estimate_error.var()/ideal.var()) print('EVM of estimate: {:.2f} dB'.format(evm_estimate)) _, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4)) ax1.scatter(y[5:].real, y[5:].imag) ax1.set_title('Observation y[n]') ax1.grid() ax2.scatter(estimate[600:].real, estimate[600:].imag) ax2.set_title('Estimate of x[n]') ax2.grid();
_____no_output_____
MIT
chap4.ipynb
mohantyk/dsp_in_comm
0. get data demographics with pandas
import os
import zipfile

import numpy as np
import pandas as pd

datadir = '../data'
zipped_data = os.path.join(datadir, 'thickness.zip')
with zipfile.ZipFile(zipped_data, 'r') as zip_ref:
    zip_ref.extractall(datadir)

dfname = os.path.join(datadir, 'thickness.csv')
df = pd.read_csv(dfname)

idx = np.array(df["ID2"])
age = np.array(df["AGE"])
iq = np.array(df["IQ"])
_____no_output_____
MIT
code/analysis_01.ipynb
jsheunis/COSN-talk
1. load thickness data
import glob
import matplotlib.pyplot as plt
# Assumption: `mgh` is nibabel's reader for FreeSurfer .mgh files, e.g.
# from nibabel.freesurfer import mghformat as mgh

j = 0
for Sidx in idx:
    # load mgh files for left hem
    for file_L in glob.glob('../data/thickness/%s*_lh2fsaverage5_20.mgh' % (Sidx)):
        S_Lmgh = mgh.load(file_L).get_fdata()
        S_Larr = np.array(S_Lmgh)[:,:,0]
    # load mgh files for right hem
    for file_R in glob.glob('../data/thickness/%s*_rh2fsaverage5_20.mgh' % (Sidx)):
        S_Rmgh = mgh.load(file_R).get_fdata()
        S_Rarr = np.array(S_Rmgh)[:,:,0]
    # concatenate left & right thickness for each subject
    Sidx_thickness = np.concatenate((S_Larr, S_Rarr)).T
    if j == 0:
        thickness = Sidx_thickness
    else:
        thickness = np.concatenate((thickness, Sidx_thickness), axis=0)
    j += 1

Mean_thickness = thickness.mean(axis=0)
print("all thickness data loaded    shape: ", thickness.shape)
print("Mean thickness               shape: ", Mean_thickness.shape)

fig01 = plt.imshow(thickness, extent=[0,20484,0,259], aspect='auto')
_____no_output_____
MIT
code/analysis_01.ipynb
jsheunis/COSN-talk
2. plot mean thickness along the cortex
import nibabel as nb
from nibabel.freesurfer.io import read_geometry
from nilearn.surface import load_surf_data
# `myvis` (providing plot_surfstat) is a repo-local helper module, assumed to be importable alongside this notebook.

Fs_Mesh_L = read_geometry(os.path.join(datadir, 'fsaverage5/lh.pial'))
Fs_Mesh_R = read_geometry(os.path.join(datadir, 'fsaverage5/rh.pial'))
Fs_Bg_Map_L = load_surf_data(os.path.join(datadir, 'fsaverage5/lh.sulc'))
Fs_Bg_Map_R = load_surf_data(os.path.join(datadir, 'fsaverage5/rh.sulc'))

Mask_Left = nb.freesurfer.io.read_label(os.path.join(datadir, 'fsaverage5/lh.cortex.label'))
Mask_Right = nb.freesurfer.io.read_label(os.path.join(datadir, 'fsaverage5/rh.cortex.label'))

surf_mesh = {}
surf_mesh['coords'] = np.concatenate((Fs_Mesh_L[0], Fs_Mesh_R[0]))
surf_mesh['tri'] = np.concatenate((Fs_Mesh_L[1], Fs_Mesh_R[1]))

bg_map = np.concatenate((Fs_Bg_Map_L, Fs_Bg_Map_R))
medial_wall = np.concatenate((Mask_Left, 10242 + Mask_Right))

fig02 = myvis.plot_surfstat(surf_mesh, bg_map, Mean_thickness, mask = medial_wall, cmap = 'viridis', vmin = 1.5, vmax = 4)
_____no_output_____
MIT
code/analysis_01.ipynb
jsheunis/COSN-talk
3. build the stats model
# BrainStat imports for the model terms and the SLM fitter
# (assumed; these are the standard BrainStat import paths)
from brainstat.stats.terms import FixedEffect
from brainstat.stats.SLM import SLM

term_intercept = FixedEffect(1, names="intercept")
term_age = FixedEffect(age, "age")
term_iq = FixedEffect(iq, "iq")
model = term_intercept + term_age

slm = SLM(model, -age, surf=surf_mesh)
slm.fit(thickness)

tvals = slm.t.flatten()
pvals = slm.fdr()

print("t-values: ", tvals)  # per-vertex t-values of the fitted age contrast
print("p-values: ", pvals)  # corresponding p-values

fig03 = myvis.plot_surfstat(surf_mesh, bg_map, tvals, mask = medial_wall, cmap = 'gnuplot', vmin = tvals.min(), vmax = tvals.max())
fig04 = myvis.plot_surfstat(surf_mesh, bg_map, pvals, mask = medial_wall, cmap = 'YlOrRd', vmin = 0, vmax = 0.05)
_____no_output_____
MIT
code/analysis_01.ipynb
jsheunis/COSN-talk
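In plain terms, the model fitted above is a vertex-wise linear model of cortical thickness on age (the `iq` term is defined but not included in `model`), with the contrast `-age` testing for age-related thinning. Schematically, for each vertex $v$:

$$
\text{thickness}_{v} \;=\; \beta_{0,v} \;+\; \beta_{1,v}\,\text{age} \;+\; \varepsilon_{v}\;,
$$

and `tvals` holds the per-vertex t-statistics for the (negative) age effect, while `pvals` holds the corresponding FDR-adjusted p-values from the `slm.fdr()` call.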
This is a Level 1 Heading. This is some paragraph text that describes the code below:
print("this is the code the was describes above")
this is the code the was describes above
MIT
HelloJupyter.ipynb
fsslh007/artificial-intelligence-fundamentals-with-python-2021-class
SNOW partitioning parallel: This filter runs the SNOW algorithm in parallel or serial mode, to save computational time or memory respectively. The [SNOW](https://journals.aps.org/pre/abstract/10.1103/PhysRevE.96.023307) algorithm converts a binary image into partitioned regions while avoiding oversegmentation. SNOW_partitioning_parallel speeds this process up by decomposing the domain into several subdomains and either processing them on different cores in parallel (to save time) or one by one on a single core (to save memory). Import Modules
import numpy as np import porespy as ps from porespy.tools import randomize_colors import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec gs = gridspec.GridSpec(2, 4) gs.update(wspace=0.5) np.random.seed(10) ps.visualization.set_mpl_style()
_____no_output_____
MIT
examples/filters/tutorials/snow_partitioning_parallel.ipynb
jamesobutler/porespy
Create a random image of overlapping spheres
im = ps.generators.overlapping_spheres([1000, 1000], r=10, porosity=0.5) fig, ax = plt.subplots() ax.imshow(im, origin='lower');
_____no_output_____
MIT
examples/filters/tutorials/snow_partitioning_parallel.ipynb
jamesobutler/porespy
Apply SNOW_partitioning_parallel to the binary image
snow_out = ps.filters.snow_partitioning_parallel( im=im, divs=2, r_max=5, sigma=0.4)
_____no_output_____
MIT
examples/filters/tutorials/snow_partitioning_parallel.ipynb
jamesobutler/porespy
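Here `divs=2` controls how finely the image is decomposed into subdomains before the chunks are processed. For comparison, PoreSpy also provides a serial `snow_partitioning` filter; a rough sketch, assuming it accepts the same `r_max` and `sigma` arguments and returns a result with a `regions` attribute (check the docs of your PoreSpy version for the exact signature):

```python
# Sketch: plain (serial) SNOW on the same image, for comparison with the
# parallel result above. Argument names are assumed to match the parallel call.
snow_serial = ps.filters.snow_partitioning(im=im, r_max=5, sigma=0.4)
print(f"Number of regions (serial): {snow_serial.regions.max()}")
```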
Plot output results
fig, ax = plt.subplots(1, 3, figsize=[9, 3]) ax[0].imshow(snow_out.im); ax[1].imshow(snow_out.dt); ax[2].imshow(randomize_colors(snow_out.regions)); ax[0].set_title('Binary Image'); ax[1].set_title('Euclidean Distance Transform') ax[2].set_title('Segmented Image'); print(f"Number of regions: {snow_out.regions.max()}")
Number of regions: 1006
MIT
examples/filters/tutorials/snow_partitioning_parallel.ipynb
jamesobutler/porespy
Blog 4: Spectral Clustering. In this blog post, we'll explore several algorithms to cluster data points. Some notation: - Boldface capital letters like $\mathbf{A}$ are matrices. - Boldface lowercase letters like $\mathbf{v}$ are vectors. The clustering problem: We're aiming to solve the problem of assigning labels to observations based on the distribution of their features. In this exercise, the features are represented as the matrix $\mathbf{X}$, with each row representing a data point in $\mathbb{R}^2$. We are also dealing with the simple case of 2 clusters for now. What we do generalize is the shape of the distribution in $\mathbb{R}^2$. But first, let's look at a simple 2-cluster distribution.
import numpy as np from sklearn import datasets from matplotlib import pyplot as plt n = 200 np.random.seed(1111) X, y = datasets.make_blobs(n_samples=n, shuffle=True, random_state=None, centers = 2, cluster_std = 2.0) plt.scatter(X[:,0], X[:,1])
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
For this simple distribution, we can use k-means clustering. Intuitively, this minimizes the distances between each cluster's members and its "center of gravity", and works well for roughly circular blobs.
from sklearn.cluster import KMeans km = KMeans(n_clusters = 2) km.fit(X) plt.scatter(X[:,0], X[:,1], c = km.predict(X))
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Generalizing the distribution: Now let's look at this distribution of data points.
np.random.seed(1234) n = 200 X, y = datasets.make_moons(n_samples=n, shuffle=True, noise=0.05, random_state=None) plt.scatter(X[:,0], X[:,1])
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
The two clusters are still obvious by sight, but k-means clustering fails.
km = KMeans(n_clusters = 2) km.fit(X) plt.scatter(X[:,0], X[:,1], c = km.predict(X))
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Constructing a similarity matrix A: An important ingredient in all of the clustering algorithms in this post is the *similarity matrix* $\mathbf{A}$. A is a symmetric square matrix with n rows and columns. `A[i,j]` should be equal to `1` if and only if `X[i]` is close to `X[j]`. To quantify closeness, we use a threshold distance `epsilon`, set to 0.4 for now. The diagonal entries `A[i,i]` should all be equal to zero.
from sklearn.metrics import pairwise_distances epsilon = 0.4 dist = pairwise_distances(X) A = np.array(dist < epsilon).astype('int') np.fill_diagonal(A,0) A
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Norm cut objective function: Now that we have encoded the pairwise distances of the data points in $\mathbf{A}$, we can define the clustering problem as a minimization problem. Intuitively, the parameter space of this minimization problem should be the possible cluster assignments (a vector of n 0s and 1s), and the objective function should decrease when our assignment yields more proximity within each cluster, and increase when our assignment yields more proximity between different clusters. We thus define the *binary norm cut objective* of the similarity matrix $\mathbf{A}$: $$N_{\mathbf{A}}(C_0, C_1)\equiv \mathbf{cut}(C_0, C_1)\left(\frac{1}{\mathbf{vol}(C_0)} + \frac{1}{\mathbf{vol}(C_1)}\right)\;.$$ In this expression, - $C_0$ and $C_1$ are the clusters assigned. - $\mathbf{cut}(C_0, C_1) \equiv \sum_{i \in C_0, j \in C_1} a_{ij}$ (the "*cut*") is the number of ones in $\mathbf{A}$ connecting cluster $C_0$ to cluster $C_1$. - $\mathbf{vol}(C_0) \equiv \sum_{i \in C_0}d_i$, where $d_i = \sum_{j = 1}^n a_{ij}$ is the *degree* of row $i$. This volume term measures the size and connectedness within the cluster $C_0$. The cut term: The function `cut(A,y)` below computes the cut term. We first construct a `diff` matrix ($n$ by $n$) whose $i,j$-th entry is an indicator for points i and j being in different clusters under classification `y`. We then elementwise-multiply it with the similarity matrix.
def cut(A,y): # diff[i,j] is 1 iff y[i] is in a different group than y[j] diff = np.array([y != i for i in y], dtype='int') # return sum of entries in A where diff is 1, divide by 2 due to double counting return np.sum(np.multiply(diff,A))/2
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
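Before using `cut` on the real data, it can help to sanity-check it on a tiny hand-computable example (not part of the original post): four points in a chain, where only the middle edge crosses the proposed split.

```python
# Toy check: points 0-1 and 2-3 form two clusters; only the edge (1,2) crosses the cut.
A_toy = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
y_toy = np.array([0, 0, 1, 1])
cut(A_toy, y_toy)   # expect 1.0: exactly one edge connects the two clusters
```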
We first test it with the true labels, `y`.
cut(A,y)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
...and then with randomly generated labels
randn = np.random.randint(0,2,n) cut(A,randn)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
As expected, the true labels yield a lower cut term than the fake labels. The volume term: Now we compute the *volume* of each cluster, which is just the sum of the degrees of the cluster's elements. Remember that the norm cut objective contains $\frac{1}{\mathbf{vol}(C_0)}$, so larger (better-connected) cluster volumes make the objective smaller.
def vols(A,y): # sum the rows of A where the row belongs to the cluster v0 = np.sum(A[y==0,]) v1 = np.sum(A[y==1,]) return v0, v1
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Now we have all the ingredients for the norm cut objective.
def normcut(A,y): v0, v1 = vols(A,y) return cut(A,y) * (1/v0 + 1/v1) print(vols(A,y)) print(normcut(A,y))
(2299, 2217) 0.011518412331615225
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
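On the same toy chain from the earlier check, the whole objective is easy to do by hand: the cut is 1 and each cluster has volume $1 + 2 = 3$, so the norm cut is $1\cdot\left(\tfrac{1}{3} + \tfrac{1}{3}\right) = \tfrac{2}{3}$.

```python
# Continuing the toy example from above: cut = 1, vols = (3, 3), normcut = 2/3.
print(vols(A_toy, y_toy))      # (3, 3)
print(normcut(A_toy, y_toy))   # 0.666...
```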
Again, the true labels `y` yield a much lower norm cut value than the random labels.
normcut(A,randn)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Part C: Unfortunately, with what we have, the parameter space (the possible set of labels) is too large for a computationally efficient algorithm. This is why we need some linear algebra magic to give us another formula for the norm cut objective: $$\mathbf{N}_{\mathbf{A}}(C_0, C_1) = \frac{\mathbf{z}^T (\mathbf{D} - \mathbf{A})\mathbf{z}}{\mathbf{z}^T\mathbf{D}\mathbf{z}}\;,$$ where - $\mathbf{D}$ is the diagonal matrix with nonzero entries $d_{ii} = d_i$, where $d_i = \sum_{j = 1}^n a_{ij}$ is the degree, - and $\mathbf{z}$ is a vector such that $$z_i = \begin{cases} \frac{1}{\mathbf{vol}(C_0)} &\quad \text{if } y_i = 0 \\ -\frac{1}{\mathbf{vol}(C_1)} &\quad \text{if } y_i = 1 \\ \end{cases}$$ Since the volume term is just a function of `A` and `y`, $\mathbf{z}$ encodes all the information in `A` and `y` through the volume term and the sign. We define the function `transform(A,y)` to compute the appropriate $\mathbf{z}$ vector.
def transform(A,y):
    # compute volumes
    v0, v1 = vols(A,y)
    # initialize z to be array of same shape as y, then fill depending on y
    z = np.where(y==0, 1/v0, -1/v1)
    return z

z = transform(A,y)
# degree matrix: row sums placed on diagonal
# the "at" sign is the matrix product
D = np.diag(A@np.ones(n))
normcut_formula = (z@(D-A)@z)/(z@D@z)
normcut_formula
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
We check that the two ways of computing the norm cut objective give numerically close values.
np.isclose(normcut(A,y), normcut_formula)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
We can also check the identity $\mathbf{z}^T\mathbf{D}\mathbb{1} = 0$, where $\mathbb{1}$ is the vector of `n` ones. This identity says that the positive and negative entries of $\mathbf{z}$, weighted by their degrees, exactly cancel: the contribution of cluster $C_0$ balances that of cluster $C_1$.
D = np.diag(A@np.ones(n))
np.isclose(z@D@np.ones(n), 0)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
We denote the objective function $$ R_\mathbf{A}(\mathbf{z})\equiv \frac{\mathbf{z}^T (\mathbf{D} - \mathbf{A})\mathbf{z}}{\mathbf{z}^T\mathbf{D}\mathbf{z}} $$ We can minimize this function subject to the condition $\mathbf{z}^T\mathbf{D}\mathbb{1} = 0$, which says that the clusters are balanced. We can guarantee the condition holds if, instead of plugging $\mathbf{z}$ into the objective directly, we use the component of $\mathbf{z}$ orthogonal to $\mathbf{D}\mathbb{1}$. The `orth_obj` function computes this. Then we use the `minimize` function from `scipy.optimize` to minimize `orth_obj` with respect to $\mathbf{z}$.
def orth(u, v): return (u @ v) / (v @ v) * v e = np.ones(n) d = D @ e def orth_obj(z): z_o = z - orth(z, d) return (z_o @ (D - A) @ z_o)/(z_o @ D @ z_o) from scipy.optimize import minimize z_min = minimize(fun=orth_obj, x0=np.ones(n)).x
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
By construction, the sign of `z_min[i]` corresponds to the cluster label of data point `i`. We plot the points below, coloring them by the sign of `z_min`.
plt.scatter(X[:,0], X[:,1], c = (z_min >= 0))
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Part F: Explicitly minimizing the orthogonal objective is extremely slow, but thankfully we can find a solution using eigenvalues and eigenvectors. The Rayleigh-Ritz Theorem implies that the minimizing $\mathbf{z}$ is a solution to the eigenvalue problem $$ \mathbf{D}^{-1}(\mathbf{D} - \mathbf{A}) \mathbf{z} = \lambda \mathbf{z}\;, \quad \mathbf{z}^T\mathbb{1} = 0\;.$$ Since $\mathbb{1}$ is the eigenvector with smallest eigenvalue, the vector $\mathbf{z}$ that we want must be the eigenvector with the second-smallest eigenvalue. We construct the *Laplacian* matrix of $\mathbf{A}$, $\mathbf{L} = \mathbf{D}^{-1}(\mathbf{D} - \mathbf{A})$, and find the eigenvector corresponding to its second-smallest eigenvalue, `z_eig`.
# (Random-walk normalized) Laplacian of the similarity matrix
L = np.linalg.inv(D)@(D-A)
Lam, U = np.linalg.eig(L)
# np.linalg.eig does not sort its output, so pick the eigenvector
# of the second-smallest eigenvalue explicitly
order = np.argsort(Lam)
z_eig = U[:, order[1]]
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
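A quick remark on the matrix constructed above: since $\mathbf{D}^{-1}\mathbf{D} = \mathbf{I}$, we can rewrite

$$
\mathbf{L} \;=\; \mathbf{D}^{-1}(\mathbf{D} - \mathbf{A}) \;=\; \mathbf{I} - \mathbf{D}^{-1}\mathbf{A}\;,
$$

which is the random-walk normalized graph Laplacian; its smallest eigenvalue is $0$ with eigenvector $\mathbb{1}$, which is exactly why we skip to the eigenvector of the second-smallest eigenvalue.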
Now we color the points according to the sign of `z_eig`. Looks pretty good.
plt.scatter(X[:,0], X[:,1], c = z_eig<0)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Finally, we can define `spectral_clustering(X, epsilon)` which takes in the input data `X` and the distance threshold `epsilon`, performs spectral clustering, and returns an array of labels indicating whether data point `i` is in group `0` or group `1`.
def spectral_clustering(X, epsilon):
    '''
    Given input data X (n by 2 array) and distance threshold epsilon,
    performs spectral clustering and returns an array of n labels
    assigning each point to cluster 0 or 1.
    '''
    # similarity matrix
    A = np.array(pairwise_distances(X) < epsilon).astype('int')
    np.fill_diagonal(A, 0)
    # degree matrix and (random-walk normalized) Laplacian
    D = np.diag(A@np.ones(X.shape[0]))
    L = np.linalg.inv(D)@(D-A)
    # eigenvector of the second-smallest eigenvalue
    Lam, U = np.linalg.eig(L)
    order = np.argsort(Lam)
    z_eig = U[:, order[1]]
    labels = np.array(z_eig < 0, dtype='int')
    return labels

spectral_clustering(X, 0.4)
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io
Now we can run some experiments, making the problem harder by increasing the noise parameter, and increasing the computational load by increasing `n`.
np.random.seed(123) fig, axs = plt.subplots(3, figsize = (8,20)) noises = [0.05, 0.1, 0.2] for i in range(3): X, y = datasets.make_moons(n_samples=1000, shuffle=True, noise=noises[i], random_state=None) axs[i].scatter(X[:,0], X[:,1], c = spectral_clustering(X,0.4)) axs[i].set_title(label = "noise = " + str(noises[i]))
_____no_output_____
MIT
notebooks/blog4.ipynb
zhijianli9999/zhijianli9999.github.io