We can shift the data forward by passing a **negative** number of periods. Here we use **(-1)** to shift the data one day forward: | TS.shift(-1) | _____no_output_____ | MIT | 14.5 Shifting Data Through Time (Lagging and Leading).ipynb | SiamakMushakhian/Numpy-Pandas-Seaborn |
Usually when dealing with time series, we create a shifted copy of the data and attach it as a new column in the time series, like this: | TS['lag1'] = TS['apple'].shift(1)
TS | _____no_output_____ | MIT | 14.5 Shifting Data Through Time (Lagging and Leading).ipynb | SiamakMushakhian/Numpy-Pandas-Seaborn |
Shifting data in a time series generates **missing values**. We can delete these missing values using the **dropna()** function, and to make the changes take effect in the original time series we use the argument **inplace = True**: | TS.dropna(inplace = True)
TS | _____no_output_____ | MIT | 14.5 Shifting Data Through Time (Lagging and Leading).ipynb | SiamakMushakhian/Numpy-Pandas-Seaborn |
For example, we can calculate the daily percent change of stock prices using the **shift()** function like this: | TS['percent_change'] = ( TS['apple'] / TS['apple'].shift(1) ) - 1
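# note: pandas provides a built-in equivalent, TS['apple'].pct_change(),
# which computes the same (x_t / x_{t-1}) - 1 and should give identical values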
TS | _____no_output_____ | MIT | 14.5 Shifting Data Through Time (Lagging and Leading).ipynb | SiamakMushakhian/Numpy-Pandas-Seaborn |
Again, we can delete the missing values like this: | TS.dropna(inplace = True)
TS | _____no_output_____ | MIT | 14.5 Shifting Data Through Time (Lagging and Leading).ipynb | SiamakMushakhian/Numpy-Pandas-Seaborn |
Set Up | from pythonosc import dispatcher, osc_server
from pythonosc.udp_client import SimpleUDPClient
import time
from bitalino import BITalino
import biofeatures
bitalino_ip = '192.168.0.101'
bitalino_port = 31000
actuator_ip = '192.168.0.100'
actuator_port = 12000
osc_client = SimpleUDPClient(actuator_ip, actuator_port)
def process_riot_data(unused_addr, *values):
    global resp_data, last_update, client, inflated, inflating, deflating
    # index 12 of the incoming OSC values carries the respiration signal
    new_data = values[12]
    resp_data.append(new_data)
    # wait until at least 10 s of data (at 200 Hz) is buffered, and throttle
    # actuator updates to at most one per update_freq seconds
    if len(resp_data) > 200*10 and time.time() - last_update > update_freq:
        last_int, breathe_in = biofeatures.resp_intervals(resp_data, sampling_rate = 200, last_breath = True)
        if breathe_in:
            print("Breathing in")
            client.send_message("/actuator/inflate", 100.0)
            inflating = True
            deflating = False
        else:
            print("Breathing out")
            client.send_message("/actuator/inflate", -100.0)
            deflating = True
            inflating = False
        last_update = time.time()
    # only save the last 5 min of data
    if len(resp_data) > 200 * 60 * 5:
        resp_data = resp_data[-200*60*5:]
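# this handler relies on globals initialized elsewhere in the notebook; a
# minimal setup might look like the following -- update_freq (seconds between
# actuator updates) is an assumption, since it is never defined in the cells shown:
# resp_data = []
# last_update = time.time()
# update_freq = 1.0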
def handle_pressure(unused_addr, pressure):
    global client, inflated, inflating, deflating, stop_flag, pressure_readings, pressure_readings_wearable, t0, wearable
    print(pressure)
    # after 30 s, leave the stopped/logging state and switch to the non-wearable readings
    if time.time() - t0 > 30:
        stop_flag = False
        wearable = False
        t0 = time.time()
    # while stopped, only log the incoming pressure readings
    if stop_flag:
        if wearable:
            pressure_readings_wearable.append(pressure)
        else:
            pressure_readings.append(pressure)
        return
    if pressure < 800:
        print("Fully deflated!")
        client.send_message("/actuator/inflate", 0.0)
        deflating = False
    elif deflating and pressure > 1000:
        print("Squeeze!")
    elif pressure > 1150:
        print("Careful!")
        client.send_message("/actuator/inflate", 0.0)
        t0 = time.time()
        stop_flag = True
        # deflating = True
    elif not deflating:
        client.send_message("/actuator/inflate", 70.0)
import pythonosc
import time
dispatcher2 = pythonosc.dispatcher.Dispatcher()
dispatcher2.map("/sensor/pressure", handle_pressure)
client = SimpleUDPClient(actuator_ip, actuator_port)
inflated = False
inflating = False
deflating = False
stop_flag = False
pressure_readings_wearable = []
pressure_readings = []
wearable = True
t0 = time.time()
server = osc_server.ThreadingOSCUDPServer((bitalino_ip, bitalino_port), dispatcher2)
print("Serving on {}".format(server.server_address))
server.serve_forever()  # note: this call blocks until interrupted
client = SimpleUDPClient(actuator_ip, actuator_port)
client.send_message("/actuator/inflate", 0.0)
client = SimpleUDPClient('192.168.0.101', 32000)
client.send_message("/actuator/1/inflate", 0.0)
# deflating: 970
# deflating when empty: <800
# neutral empty: 1016
# neutral full: 1050 going down slowly to 1020
# inflating: 1068-1200
# squeezed: 1050 - 1200
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(pressure_readings_wearable)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(pressure_readings)
import numpy as np
# note: this elementwise subtraction assumes both lists captured the same number of readings
readings = np.array(pressure_readings_wearable) - np.array(pressure_readings)
plt.plot(readings[20:]) | _____no_output_____ | ISC | notebooks/alternative_resp_v1.ipynb | malfarasplux/biofeatures |
from torchvision import datasets, transforms
import numpy
transform = transforms.Compose([transforms.Lambda(lambda pil_im: numpy.array(pil_im))])
transform = None  # keep the PIL images: ColorDetector below works on PIL directly
testset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
from PIL import Image
import numpy as np
import cv2
import imutils
import matplotlib.pyplot as plt
%matplotlib inline
#https://en.wikipedia.org/wiki/Web_colors
class ColorDetector():
    COLORS = {
        'RED': [
            [255, 0, 0],
            [220, 20, 60],    # Crimson
            [255, 20, 147],   # Deep pink
        ],
        'ORANGE': [
            [255, 69, 0],     # Orange red
            [255, 140, 0],    # Dark orange
        ],
        'BLUE': [
            [0, 0, 255],
            [0, 255, 255],    # Aqua/Cyan
            [0, 206, 209],    # DarkTurquoise
            [0, 0, 128]       # Navy
        ],
        'GREEN': [
            [0, 128, 0],
            [0, 100, 0],      # Dark green
            #[154, 205, 50],  # Yellow green
            #[128, 128, 0]    # Olive
        ],
        'YELLOW': [
            [255, 255, 0],
            [255, 215, 0],    # Gold
        ],
        #'PURPLE': [
        #    [128, 0, 128],
        #    [255, 0, 255],   # Fuchsia
        #],
        'BROWN': [
            [165, 42, 42],    # Brown
            [210, 105, 30],   # Chocolate
            [128, 0, 0]       # Maroon
        ],
        'DARK': [
            [0, 0, 0],
            #[169, 169, 169], # Dim gray
        ],
        'LIGHT': [
            [255, 255, 255],
            [248, 248, 255],  # GhostWhite
            [255, 255, 240],  # Ivory
            [255, 248, 220],  # Cornsilk
            [224, 255, 255],  # LightCyan
            [192, 192, 192],  # Silver
            [176, 224, 230],  # PowderBlue
        ]
    }
    def __init__(self, pil_image):
        self.image = pil_image
        # build a palette ('P' mode) image holding the reference colors
        self.palimage = Image.new('P', (16, 16))
        self.palimage.putpalette(self.colors2vec())
        # quantize the input image to the reference palette
        self.png = self.image.quantize(method=0, kmeans=0, palette=self.palimage)
        self.px_count = self.image.size[0] * self.image.size[1]
    def colors2vec(self):
        # flatten the color groups into a single list of RGB values
        out = []
        for l in ColorDetector.COLORS.values():
            out += [item for sublist in l for item in sublist]
        # a PIL palette holds 256 RGB entries (256*3 values); repeat the base
        # colors to fill it, then truncate to the exact length
        vec2 = out * 32
        return vec2[:256 * 3]
    def get_colors(self, im_pil):
        names = []
        tmp_png = self.png.copy()
        # getcolors() returns (pixel_count, palette_index) pairs for the quantized image
        color_usage = np.array(tmp_png.getcolors())
        for key, cl in ColorDetector.COLORS.items():
            pixels_count = 0
            for c in cl:
                c_ind = tmp_png.palette.getcolor(tuple(c))
                cu = color_usage[color_usage[:, 1] == c_ind]
                if len(cu) > 0:
                    pixels_count += cu[0][0]
            if pixels_count > 0:
                # convert the pixel count to a fraction of the image
                names.append((key, pixels_count / self.px_count))
        names.sort(key=lambda tup: tup[1], reverse=True)
        return names
def show(pil, png,title = ""):
plt.rcParams["figure.figsize"] = (5,5)
fig = plt.figure()
plt.axis('off')
plt.subplot(1, 2, 1)
plt.imshow(im_pil)
plt.subplot(1, 2, 2)
plt.imshow(png)
plt.title(title)
for i in range(10):
    im_pil, cl_num = testset[i]
    cd = ColorDetector(im_pil)
    colors = cd.get_colors(im_pil)
    show(im_pil, cd.png, ", ".join((colors[0][0], colors[1][0], colors[2][0])))
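# a small follow-up sketch: get_colors returns (name, pixel_share) tuples sorted by
# share, so dominant colors can be filtered with a threshold, e.g.:
# dominant = [(name, share) for name, share in colors if share >= 0.10]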
#Another approach
#https://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/ | _____no_output_____ | MIT | extra/Base_colors_detection.ipynb | Gan4x4/hse-cv2019 |
Author: **Lee Surprenant** [email protected]

- [Section 1: The FHIR REST API](#Section-1.-The-FHIR-REST-API)
- [Section 2: FHIR Search](#Section-2:-FHIR-Search)
- [Section 3: Search parameter types](#Section-3:-Search-parameter-types)
- [Section 4: Chaining and includes](#Section-4:-Chaining-and-includes)
- [Section 5: Putting it together](#Section-5:-Putting-it-together)
- [Section 6: Bulk export](#Section-6:-Bulk-export) | # save the base url of the FHIR server
base = 'https://cluster1-573846-250babbbe4c3000e15508cd07c1d282b-0000.us-east.containers.appdomain.cloud/open'
# setup imports
import os
from requests import get
from requests import post
from requests import put
from requests import delete
from requests import head
from IPython.display import IFrame
# a function to print the top x rows and add a newline
def peek(string, line_count=25):
    print(os.linesep.join(string.split(os.linesep)[:line_count]) + '\n')
#(Optional)
# install jsonpointer
!pip install jsonpointer
from jsonpointer import resolve_pointer as resolve | /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
Requirement already satisfied: jsonpointer in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (2.1)
| Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Section 1. The FHIR REST API | # HL7 FHIR defines a set of "resources" for exchanging information.
IFrame('https://www.hl7.org/fhir/resourcelist.html#tabs', width=1200, height=330)
# Each resource type supports the same set of interactions, categorized in the spec into "instance-level" and "type-level" interactions.
IFrame('https://www.hl7.org/fhir/http.html#operations', width=1200, height=330)
# This notebook focuses on the FHIR Search API, but first we use the "capabilities" interaction to learn about our target server.
# retrieve the server "CapabilityStatement" and print the important bits
response = get(base + '/metadata')
print('Response code: ' + str(response.status_code))
result = response.json()
print('Server: ' + result['name'] + ' ' + result['version'])
print('Security: ' + str(result['rest'][0]['security']))
resources = result['rest'][0]['resource']
supported_types = {r['type']: [i['code'] for i in r['interaction']] for r in resources}
print('Supported types: ')
for k, v in supported_types.items():
    print(' ' + k + ': ' + str(v)) | Response code: 200
Server: IBM FHIR Server 4.8.0
Security: {'cors': True}
Supported types:
Measure: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductIndication: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Organization: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
EvidenceVariable: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Library: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CarePlan: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductAuthorization: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Account: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
OperationOutcome: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MeasureReport: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
PractitionerRole: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Binary: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ImmunizationEvaluation: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicationDispense: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DiagnosticReport: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductUndesirableEffect: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DeviceUseStatement: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
PlanDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Immunization: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
NutritionOrder: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Person: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProduct: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
AdverseEvent: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ClaimResponse: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DeviceDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
GraphDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MolecularSequence: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DeviceMetric: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MessageHeader: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Invoice: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Linkage: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicationKnowledge: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
EventDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ServiceRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ActivityDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SpecimenDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SubstanceReferenceInformation: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ChargeItem: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DetectedIssue: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
EffectEvidenceSynthesis: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
FamilyMemberHistory: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ImplementationGuide: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ClinicalImpression: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CoverageEligibilityRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
EnrollmentRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
OrganizationAffiliation: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
StructureMap: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
EnrollmentResponse: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
List: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SubstanceProtein: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ResearchStudy: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Condition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Practitioner: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
AppointmentResponse: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Task: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Provenance: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Coverage: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
InsurancePlan: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Slot: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Device: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Bundle: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ConceptMap: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
BodyStructure: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Location: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductContraindication: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
RequestGroup: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
BiologicallyDerivedProduct: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Medication: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
TerminologyCapabilities: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Group: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ValueSet: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductManufactured: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SupplyDelivery: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductInteraction: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CoverageEligibilityResponse: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ChargeItemDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ObservationDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
VerificationResult: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Basic: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CommunicationRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Parameters: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
RelatedPerson: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CompartmentDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
TestScript: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
AllergyIntolerance: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
GuidanceResponse: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MessageDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Substance: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Composition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ImmunizationRecommendation: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SearchParameter: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
AuditEvent: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ImagingStudy: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
NamingSystem: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SubstanceSourceMaterial: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ResearchElementDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
EpisodeOfCare: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Goal: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ResearchDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ExampleScenario: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SubstanceNucleicAcid: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Contract: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
QuestionnaireResponse: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Endpoint: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
StructureDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CodeSystem: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
PaymentReconciliation: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Flag: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
RiskEvidenceSynthesis: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CareTeam: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SubstanceSpecification: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Subscription: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ExplanationOfBenefit: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
VisionPrescription: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DeviceRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
HealthcareService: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Questionnaire: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
OperationDefinition: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Consent: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
RiskAssessment: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SubstancePolymer: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Encounter: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicationRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicationStatement: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CapabilityStatement: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Patient: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Specimen: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductPackaged: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
ResearchSubject: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductIngredient: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Evidence: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
PaymentNotice: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
CatalogEntry: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicinalProductPharmaceutical: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
SupplyRequest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
TestReport: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Appointment: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Communication: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Claim: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DocumentReference: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Observation: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Schedule: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Procedure: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
DocumentManifest: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
Media: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
MedicationAdministration: ['create', 'read', 'vread', 'update', 'patch', 'delete', 'history-instance', 'search-type']
| Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Section 2: FHIR Search | # Now that we know our server supports the "search-type" interaction on all resource types, let's start working with the Patient endpoint.
# query for all Patient resources, then print the HTTP status code and the first 25 lines of the response
response = get(base + '/Patient')
print('Response code: ' + str(response.status_code))
peek('Response body: \n' + response.text)
print('Number of entries: ' + str(len(response.json().get('entry'))))
# technically you've now performed your first FHIR "search" (just with no parameters)
# note that results are paged and the "link" field in the response Bundle contains links to the previous, current, and next page of results
for link in response.json().get('link'):
    if link.get('relation') == 'next':
        page2 = get(link.get('url'))
        peek('Second page: \n' + page2.text)
print('Number of entries: ' + str(len(response.json().get('entry'))))
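# a minimal sketch for walking every page (assumes the server keeps returning
# a 'next' link until the results are exhausted):
# bundle = response.json()
# while any(l.get('relation') == 'next' for l in bundle.get('link', [])):
#     next_url = next(l['url'] for l in bundle['link'] if l.get('relation') == 'next')
#     bundle = get(next_url).json()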
# we can control the number of resources on each page by passing the _count parameter
response = get(base + '/Patient?_count=1')
peek('Single resource per page: \n' + response.text)
print('Number of entries: ' + str(len(response.json().get('entry'))))
# if you're only interested in the count, you can specify that via either
# A. _count=0 (0 results per page); or
# B. _summary=count
print(get(base + '/Patient' + '?' + '_summary=count').text)
# if you want a lot of results per page, you can reduce the amount of data returned via the _summary or _elements parameters
# first, let's review the structure of the Patient resource
IFrame('https://hl7.org/fhir/patient.html#resource', width=1200, height=330)
# print the list of top-level elements in the first Patient resource returned
response = get(base + '/Patient')
peek('Normal: \n' + str(response.json().get('entry')[0].get('resource').keys()))
# look for the Σ flag in the Resource Content section of the resource page in the specification for what elements are considered "summary" elements
response = get(base + '/Patient?' + '_summary=true')
peek('Summary: \n' + str(response.json().get('entry')[0].get('resource').keys()))
# need more control?
# you can use the _elements parameter to ask for specific fields back (although the server should include required fields and modifier fields as well)
response = get(base + '/Patient?' + '_elements=id,gender')
peek('Elements: \n' + str(response.json().get('entry')[0].get('resource').keys()))
# this can add up!
response = get(base + '/Patient?_count=100')
print('Normal: \t' + str(len(response.content)) + ' bytes \t(' + str(response.elapsed.total_seconds()) + ' s)')
response = get(base + '/Patient?_count=100&_summary=true')
print('Summary: \t' + str(len(response.content)) + ' bytes \t(' + str(response.elapsed.total_seconds()) + ' s)')
response = get(base + '/Patient?_count=100&_elements=id,gender,birthDate')
print('Elements: \t' + str(len(response.content)) + ' bytes \t(' + str(response.elapsed.total_seconds()) + ' s)')
# now add some search parameters
# each FHIR resource type has its own set of parameters; find them toward the bottom of the page for that resource type in the specification
# for example, for the Patient resource type, see https://www.hl7.org/fhir/patient.html#search
IFrame('https://www.hl7.org/fhir/patient.html#search', width=1200, height=500)
# for example, let's use the search parameter named "gender"
response = get(base + '/Patient' + '?' + 'gender=male')
print('Response code: ' + str(response.status_code))
peek('Response body: \n' + response.text)
# pro tip: combine your search query with _summary=count to explore the data
print('male: \t' + str(get(base + '/Patient' + '?' + 'gender=male' + '&' + '_summary=count').json().get('total')))
print('female: \t' + str(get(base + '/Patient' + '?' + 'gender=female' + '&' + '_summary=count').json().get('total')))
# use the "missing" modifier to look for resources that do NOT have a value for the target parameter
response = get(base + '/Patient' + '?' + 'gender:missing=true' + '&' + '_summary=count')
print('missing gender: ' + str(response.json().get('total'))) | male: 15450
female: 17204
missing gender: 1
| Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Section 3: Search parameter types | # search parameters have types
# gender is considered a "token" search parameter
# Token search
# this parameter type is common for 'coded' values (Code, Coding, and CodeableConcept) and identifiers
# token values consist of a system and a code, although sometimes the system is implicit (like in the case of gender)
# users can search on the system and code (system|code), the code alone (code), system-less codes (|code), or even the system alone (system|)
response = get(base + '/Patient' + '?' + 'gender=http://hl7.org/fhir/administrative-gender|male' + '&_count=1&_elements=gender')
print('male:\n' + response.text)
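# the other token formats described above, for reference (result counts will
# depend on the data in the server):
# get(base + '/Patient?gender=male')                                        # code alone
# get(base + '/Patient?gender=|male')                                       # system-less code
# get(base + '/Patient?gender=http://hl7.org/fhir/administrative-gender|')  # system alone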
# there are also Number, Date/DateTime, String, Reference, Quantity, URI, and Composite parameter types
# String search
response = get(base + '/Patient' + '?' + 'family=Smith' + '&_elements=name')
print('Smiths:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(', '.join(map(lambda n: n.get('family'), resource.get('name'))))
# wait, "Smitham" !?
# string search performs a case-insensitive "begins-with" search by default!
# use the modifier ":exact" if you want exact matches (and improved performance)
response = get(base + '/Patient' + '?' + 'family:exact=Smith' + '&_elements=name')
print('Smiths:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(', '.join(map(lambda n: n.get('family'), resource.get('name'))))
print()
# string search also has a ":contains" modifier
response = get(base + '/Patient' + '?' + 'family:contains=ski' + '&_elements=name')
print('Skis:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(', '.join(map(lambda n: n.get('family'), resource.get('name'))))
# Date search
response = get(base + '/Patient' + '?' + 'birthdate=1984' + '&_elements=birthDate')
print('Born in 1984:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(resource.get('birthDate'))
# date searches support lt(<), le(<=), gt(>), ge(>=), sa(starts after), and eb(ends before) "prefixes"
response = get(base + '/Patient' + '?' + 'birthdate=eb1984' + '&_elements=birthDate')
print('Born before 1984:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(resource.get('birthDate'))
response = get(base + '/Patient' + '?' + 'birthdate=sa1984' + '&_elements=birthDate')
print('\n' + 'Born after 1984:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(resource.get('birthDate'))
# some servers support ap(approximately equal) as well, although the spec lets the server decide exactly what that means...
response = get(base + '/Patient' + '?' + 'birthdate=ap1984' + '&_elements=birthDate')
print('\n' + 'Born "around" 1984:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('id'), end=': ')
    print(resource.get('birthDate'))
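# prefixes can also be combined to express a range, e.g. everyone born in the 1980s:
# get(base + '/Patient?birthdate=ge1980-01-01&birthdate=le1989-12-31')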
# Reference search
response = get(base + '/Patient?general-practitioner:missing=false&_elements=generalPractitioner,link,managingOrganization&_count=1')
peek('Patients with a general-practitioner: \n' + response.text, 15)
# since our model doesn't have any reference fields on the Patient resources, let's look at Conditions instead | IFrame('https://hl7.org/fhir/condition.html#resource', width=1200, height=480)
IFrame('https://hl7.org/fhir/condition.html#resource', width=1200, height=480)
# get all conditions that reference a specific patient
response = get(base + '/Condition' + '?' + 'subject=Patient/17598beef3c-73a65dab-c8e5-4756-a60a-69bbc48cef4f' + '&_elements=code')
print('Conditions for patient 17598beef3c-73a65dab-c8e5-4756-a60a-69bbc48cef4f:')
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('code'))
# when the type of the reference is fixed to a single value, it can be omitted (Patient/x -> x)
response2 = get(base + '/Condition' + '?' + 'patient=17598beef3c-73a65dab-c8e5-4756-a60a-69bbc48cef4f' + '&_elements=code')
print('\n' + 'Result entries match? ' + str(response.json().get('entry') == response2.json().get('entry')))
# a reference to a resource's full URL on the server should be equivalent to the relative reference format mentioned above
response3 = get(base + '/Condition' + '?' + 'patient=' + base + '/Patient/17598beef3c-73a65dab-c8e5-4756-a60a-69bbc48cef4f' + '&_elements=code')
print('\n' + 'Result entries match? ' + str(response.json().get('entry') == response3.json().get('entry')))
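# a reference can even be an absolute URL pointing at a resource on another FHIR
# server; a hypothetical example (the host below is made up for illustration):
# get(base + '/Condition?patient=https://other-fhir.example.org/Patient/abc123')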
# references can also reference resources on other servers | Conditions for patient 17598beef3c-73a65dab-c8e5-4756-a60a-69bbc48cef4f:
{'coding': [{'system': 'http://snomed.info/sct', 'code': '10509002', 'display': 'Acute bronchitis (disorder)'}], 'text': 'Acute bronchitis (disorder)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '195662009', 'display': 'Acute viral pharyngitis (disorder)'}], 'text': 'Acute viral pharyngitis (disorder)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '68235000', 'display': 'Nasal congestion (finding)'}], 'text': 'Nasal congestion (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '267102003', 'display': 'Sore throat symptom (finding)'}], 'text': 'Sore throat symptom (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '248595008', 'display': 'Sputum finding (finding)'}], 'text': 'Sputum finding (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '84229001', 'display': 'Fatigue (finding)'}], 'text': 'Fatigue (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '267036007', 'display': 'Dyspnea (finding)'}], 'text': 'Dyspnea (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '56018004', 'display': 'Wheezing (finding)'}], 'text': 'Wheezing (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '43724002', 'display': 'Chill (finding)'}], 'text': 'Chill (finding)'}
{'coding': [{'system': 'http://snomed.info/sct', 'code': '386661006', 'display': 'Fever (finding)'}], 'text': 'Fever (finding)'}
Result entries match? True
Result entries match? True
| Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Section 4: Chaining and includes | # Chaining
# where reference parameters get really interesting is when you want to query one resource type based on a property of another resource to which it's linked
# for example, here is a search for Type II Diabetes in female patients
response = get(base + '/Condition' + '?' + 'code=http://snomed.info/sct|44054006' + '&' + 'patient:Patient.gender=female' + '&_count=1')
peek('Type II Diabetes in female patients: \n' + str(response.text), 100)
# Reverse chaining
# references can be searched the other way around via the "_has" parameter
response = get(base + '/Patient' + '?' + '_has:Condition:patient:code=http://snomed.info/sct|44054006' + '&_count=1')
peek('Patients with Type II Diabetes: \n' + response.text)
# Includes
# it's also possible to get a resource and its related resources back in a single query
response = get(base + '/Condition?code=http://snomed.info/sct|44054006' + '&' + '_include=Condition:patient' + '&_count=2')
peek('Response contains both Conditions and Patients, but only the Conditions are counted in the page size and total:')
print('Total: ' + str(response.json().get('total')))
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('resourceType'), end=': ')
    print(resource.get('id'))
# Reverse Includes
response = get(base + '/Patient?gender=female' + '&' + '_revinclude=Condition:patient' + '&_count=2')
peek('Response contains both Patients and Conditions, but only the Patients are counted in the page size and total:')
print('Total: ' + str(response.json().get('total')))
for entry in response.json().get('entry'):
    resource = entry.get('resource')
    print(resource.get('resourceType'), end=': ')
    print(resource.get('id')) | Response contains both Patients and Conditions, but only the Patients are counted in the page size and total:
Total: 17204
Patient: 17598bf36c7-fedcedc3-b78c-4688-82ef-622e0cc71b22
Patient: 17598bf4a5d-185e291b-d0e3-42d3-9b70-eeae60debeec
Condition: 17598bf36c8-5800bd67-4c08-4a68-8968-054c670ef2a6
Condition: 17598bf36c8-7d2861ad-e607-4122-90b3-74265240aaca
Condition: 17598bf36c9-d53628c0-8fc4-4570-bb9a-25af21328712
Condition: 17598bf36c9-9dec87f6-1a92-4b8a-b02a-ad1ab3dabd30
Condition: 17598bf36c9-5c6f054a-eb02-4eac-ae3e-f02d3472b15c
Condition: 17598bf36c9-4fb95111-885e-421f-8a5b-a9942a759020
Condition: 17598bf36c9-a8f8bb3d-c403-4fb9-8f18-5a8c0cbe2b78
Condition: 17598bf36c9-b1f1fc40-ef31-49b5-8579-4a45cca657b2
Condition: 17598bf4a5e-b9e99052-25a6-4400-a8b8-da48b8cbb22d
Condition: 17598bf4a65-ee99ca87-65c3-4a14-900f-02e90ec4db4a
Condition: 17598bf4a66-49d7afde-a7f1-42e4-858d-990f49a2f1ca
Condition: 17598bf4a66-39d8080e-27ee-488d-9e35-59275a486457
Condition: 17598bf4a6f-b87ac7b6-6faf-49b5-95fd-3eddf4f7494c
Condition: 17598bf4a6f-ec9abd6f-ee59-4f71-8fa5-c737ac1be63b
Condition: 17598bf4a70-b98f87d7-d469-4b3d-987e-02b78336b82e
Condition: 17598bf4a70-6e7537b4-1a90-4a3c-842c-6c5c1db45ad3
Condition: 17598bf4a70-ca048f40-02a3-4c24-a1b6-f53ffb3c1667
| Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Section 5: Putting it together | response = get(base + '/Condition' + '?' + 'code=http://snomed.info/sct|44054006' + '&_count=1')
print('Condition resources coded as Type II Diabetes: ' + str(response.json().get('total')))
# SNOMED concepts for comorbidities of Type II Diabetes
#coronary heart disease (CHD), 53741008
#chronic kidney disease (CKD), 709044004
#atrial fibrillation, 49436004
#stroke, 230690007
#hypertension, 38341003
#heart failure, 84114007
#peripheral vascular disease (PVD), 400047006
#rheumatoid arthritis, 69896004
#Malignant neoplasm, primary (morphologic abnormality), 86049000
#Malignant neoplastic disease (disorder), 363346000
#osteoporosis, 64859006
#depression, 35489007
#asthma, 195967001
#chronic obstructive pulmonary disease (COPD), 13645005
#dementia, 52448006
#severe mental illness (SMI), 391193001
#epilepsy, 84757009
#hypothyroidism, 40930008
#learning disability, 1855002
print('CHD: \t\t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '53741008').json().get('total')))
print('CKD: \t\t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '709044004').json().get('total')))
print('AFib: \t\t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '49436004').json().get('total')))
print('stroke: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '230690007').json().get('total')))
print('hypertension: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '38341003').json().get('total')))
print('heart failure: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '84114007').json().get('total')))
print('PVD: \t\t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '400047006').json().get('total')))
print('arthritis: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '69896004').json().get('total')))
print('cancer: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '86049000').json().get('total') + get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '363346000').json().get('total')))
print('osteoporosis: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '64859006').json().get('total')))
print('depression: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '35489007').json().get('total')))
print('asthma: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '195967001').json().get('total')))
print('COPD: \t\t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '13645005').json().get('total')))
print('dementia: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '52448006').json().get('total')))
print('SMI: \t\t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '391193001').json().get('total')))
print('epilepsy: \t\t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '84757009').json().get('total')))
print('hypothyroidism: \t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '40930008').json().get('total')))
print('learning disability: \t' + str(get(base + '/Condition?_summary=count&code=http://snomed.info/sct|' + '1855002').json().get('total')))
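# the block above could also be written more compactly (a sketch using the same queries):
# snomed = 'http://snomed.info/sct|'
# def condition_count(code):
#     return get(base + '/Condition?_summary=count&code=' + snomed + code).json().get('total')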
# Patients with Type II Diabetes *and* comorbidities
hasDiabetes = base + '/Patient?_elements=id&_has:Condition:patient:code=http://snomed.info/sct|44054006'
def printPatientsWithComorbidity(conceptId):
    responseJSON = get(hasDiabetes + '&_has:Condition:patient:code=http://snomed.info/sct|' + conceptId).json()
    print('Total: ' + str(responseJSON.get('total')))
    if 'entry' in responseJSON:
        for entry in responseJSON.get('entry'):
            print(entry.get('resource').get('id'), end=", ")
    print('\n')
print('CHD:')
printPatientsWithComorbidity('53741008')
print('AFib:')
printPatientsWithComorbidity('49436004')
print('stroke:')
printPatientsWithComorbidity('230690007')
print('heart failure:')
printPatientsWithComorbidity('84114007')
print('arthritis:')
printPatientsWithComorbidity('69896004')
print('osteoporosis:')
printPatientsWithComorbidity('64859006')
print('asthma:')
printPatientsWithComorbidity('195967001')
print('epilepsy:')
printPatientsWithComorbidity('84757009') | CHD:
Total: 30
1759a06f11a-77b7ee76-ae8d-47a8-9a8e-17e7278e4137, 1759bccd5dc-a4192660-b224-4d88-acf6-bfb538fc0052, 1759c042626-da786c63-fcd4-4a86-aaec-54ec28458861, 1759c7179ab-cdc927fb-ec72-4c10-961e-a424bb8241f3, 1759c906bce-6fcc151d-240e-441f-aa3f-3a8123222d0c, 1759c9d0e32-10946dda-2ac9-4f8e-a6ba-eade9aba6ac8, 175b90993dc-7568b0d3-91ce-4862-be00-7cb99389f323, 175ba73ffc5-86182577-c52f-4c24-b09c-acfdc0043b8c, 175babfca99-31bcaafc-876a-453a-905c-b0482607b2c5, 175bb745e4f-1f3f83a8-6f6c-43da-adf1-fdc5440d88b5,
AFib:
Total: 23
1759b62a9bc-650fc722-8bb1-4a81-827c-95b73a83d7b4, 1759be696c1-fd6cfa13-8e69-454f-ad73-d3b27a051f6f, 1759c1d61ec-500b2890-fcc9-428c-86c2-bcb15a5cbaa5, 1759c4dfa60-d50fbfcb-dda0-4669-82c8-42e73c5bb239, 1759c76bdcd-dfc226a9-37b6-4a72-b303-1613ed7ec838, 175b9f4c2a0-e971ee19-f55e-4319-85bd-57737b23ba26, 175ba948672-bc9ba458-81df-4def-8064-da02406e9f62, 175ba9518ee-29c6291a-ae29-4c62-8283-6c5a1264c404, 175ba9ca813-0a7f0565-f701-4c66-9d11-7b2754dddf6d, 175baa492a2-c7306451-8ede-4d55-9a21-2da3932f036c,
stroke:
Total: 36
175998f05f8-29eebaa9-2179-4d7b-9603-ca9c7cf25e23, 1759b62a9bc-650fc722-8bb1-4a81-827c-95b73a83d7b4, 1759b7179b6-d3d28532-15b8-41db-8781-fbde5c781fab, 1759b91f0bf-5a8bf1c3-7aa4-4d8b-a4d3-83d5097f18c4, 1759bcaf1a7-188d7217-1e1d-4664-bc7a-e07bc658539d, 1759c048c78-1e94db2f-f936-4a12-b427-a89667db2ee8, 1759c363589-398bc35f-8c6c-41f7-8840-572e653a30e5, 1759c3a8255-0131fdfe-0db6-47e3-9cb9-d770efee0012, 1759c4dfa60-d50fbfcb-dda0-4669-82c8-42e73c5bb239, 1759c51e871-214856d2-a5be-47ca-99a7-e9b43b21dc26,
heart failure:
Total: 5
1759b62a9bc-650fc722-8bb1-4a81-827c-95b73a83d7b4, 175b9f4c2a0-e971ee19-f55e-4319-85bd-57737b23ba26, 175c0fd9b65-022a8b08-76b1-481e-bef9-82866c621bd9, 175c1151cca-345c1f27-af3e-403f-8aad-42be1a128b73, 175c2e6aa49-7a8d5bae-e376-41fc-8d51-659148290cbe,
arthritis:
Total: 0
osteoporosis:
Total: 25
175998be983-6278fae0-a059-467f-8637-3cbff69d9b4e, 1759b7afa3c-e89ecf98-c0b2-4e69-866b-ef6e89f6fba0, 1759bfa2d15-72ddae5b-292a-40f5-8558-8c062211dd42, 1759c0c9587-ab5bdbf5-56e3-4328-b347-089d17ea59b8, 1759c14d16a-fa7ed5df-d034-4b8c-bc74-296cc4b32c31, 1759c51e871-214856d2-a5be-47ca-99a7-e9b43b21dc26, 1759c64e1fd-de32b308-29ab-4a04-8850-564e574b20fd, 1759c685bbc-bff3331b-d71e-4987-8246-657b653a77c2, 175b98dc155-70c909ff-7125-414d-beff-ff17bd202c40, 175ba10c839-3767563a-d7f3-4dbd-88a4-cf54a587d310,
asthma:
Total: 1
175c2b7c38a-61ab893b-7b40-45ae-9955-6f26107de9cc,
epilepsy:
Total: 8
1759ba18cd9-617df281-8a07-4a25-919d-d320d53f4da2, 1759c181967-bd8686d0-bcf5-475c-932d-e41f1f3769bc, 175ba3d7afe-c2a50287-ebb2-48b0-90f8-3654e28244c1, 175bae212ec-e15e067c-7a7a-4d60-9635-213cdfcae928, 175bae68708-0423d3ed-55d0-4f64-ac98-717649009b04, 175c0c66c0a-f500462c-e568-47ac-ad35-045a11eb8b8f, 175c15e0a00-c2dcdc18-a2f6-4cd1-b9f4-e78a940e8064, 175c2ac659d-cc743967-f254-43bb-b345-17a5282088fe,
| Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Section 6: Bulk export | # To perform deeper analysis of the data, it can be useful to export some or all of the data into "bulk FHIR" (NDJSON) format
export_response = get(base + '/$export' + '?' + '_type=Patient,Condition')
print('Response code: ' + str(export_response.status_code))
print(export_response.headers)
import time
# poll the status endpoint (returned in the Content-Location header of the $export response)
status_response = get(export_response.headers['Content-Location'])
print('Response code: ' + str(status_response.status_code))
while status_response.status_code != 200:
    time.sleep(20)
    status_response = get(export_response.headers['Content-Location'])
    print('Response code: ' + str(status_response.status_code))
print('Response body: ' + str(status_response.text))
# retrieve one of the NDJSON files and view the first 25 rows
ndjson = get(status_response.json().get('output')[1].get('url'))
peek(ndjson.text)
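# each line of an NDJSON file is one resource; a minimal sketch for loading them
# into Python dicts (assuming the export above completed successfully):
# import json
# resources = [json.loads(line) for line in ndjson.text.splitlines() if line.strip()]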
%%html
<style>
div.output_area pre {
white-space: pre;
}
</style> | _____no_output_____ | Apache-2.0 | Notebook 1 - The FHIR API.ipynb | Alvearie/FHIR-from-Jupyter |
Tabular data preprocessing | from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
from fastai import * | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Overview This package contains the basic class to define a transformation for preprocessing dataframes of tabular data, as well as basic [`TabularTransform`](/tabular.transform.html#TabularTransform). Preprocessing includes things like:

- replacing non-numerical variables by categories, then their ids,
- filling missing values,
- normalizing continuous variables.

In all those steps we have to be careful to apply the correspondence we decide on our training set (which id we give to each category, what value we put for missing data, or which mean/std we use to normalize) to our validation or test set. To deal with this, we use a special class called [`TabularTransform`](/tabular.transform.html#TabularTransform).

The data used in this document page is a subset of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult). It gives a certain amount of data on individuals to train a model to predict whether their salary is greater than \$50k or not. | path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
train_df, valid_df = df[:800].copy(),df[800:].copy()
train_df.head() | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
We see it contains numerical variables (like `age` or `education-num`) as well as categorical ones (like `workclass` or `relationship`). The original dataset is clean, but we removed a few values to give examples of dealing with missing variables. | cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week'] | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Transforms for tabular data | show_doc(TabularTransform, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Base class for creating transforms for dataframes with categorical variables `cat_names` and continuous variables `cont_names`. Note that any column not in one of those lists won't be touched. | show_doc(TabularTransform.__call__) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
This simply calls `apply_test` if `test` is set, and `apply_train` otherwise. Those functions apply the changes in place. | show_doc(TabularTransform.apply_train, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Must be implemented by an inherited class with the desired transformation logic. | show_doc(TabularTransform.apply_test, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
If not implemented by an inherited class, defaults to calling `apply_train`. The following [`TabularTransform`](/tabular.transform.html#TabularTransform) are implemented in the fastai library. Note that the replacement from categories to codes as well as the normalization of continuous variables are automatically done in a [`TabularDataset`](/tabular.data.html#TabularDataset). | show_doc(Categorify, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Changes the categorical variables in `cat_names` into categories. Variables in `cont_names` aren't affected. | show_doc(Categorify.apply_train, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Transforms the variables in the `cat_names` columns into categories. The category codes are the unique values in these columns. | show_doc(Categorify.apply_test, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Transforms the variables in the `cat_names` columns into categories. The category codes are the ones used for the training set; new categories are replaced by NaN. | tfm = Categorify(cat_names, cont_names)
tfm(train_df)
tfm(valid_df, test=True) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Since we haven't yet replaced the categories by their codes, nothing visible has changed in the dataframe, but we can check that the variables are now categorical and view their corresponding codes. | train_df['workclass'].cat.categories | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
The test set will be given the same category codes as the training set. | valid_df['workclass'].cat.categories
show_doc(FillMissing, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Transform that fills the missing values in `cont_names`. `cat_names` variables are left untouched (their missing values will be replaced by code 0 in the [`TabularDataset`](/tabular.data.html#TabularDataset)). The chosen [`fill_strategy`](#FillStrategy) determines how those nans are replaced, and if `add_col` is True, whenever a column `c` has missing values, a column named `c_nan` is added that flags the lines where the value was missing. | show_doc(FillMissing.apply_train, doc_string=False)
Fills the missing values in the `cont_names` columns. | show_doc(FillMissing.apply_test, doc_string=False) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Fills the missing values in the `cont_names` columns with the values picked during training. | train_df[cont_names].head()
tfm = FillMissing(cat_names, cont_names)
tfm(train_df)
tfm(valid_df, test=True)
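# a quick sanity check (assuming 'education-num' was the column with missing values):
# the filled value should equal the training-set median
# train_df['education-num'].median()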
train_df[cont_names].head() | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Values missing in the `education-num` column are replaced by 10, which is the median of the column in `train_df`. Categorical variables are not changed, since `nan` is simply used as another category. | valid_df[cont_names].head()
%reload_ext autoreload
%autoreload 2
%matplotlib inline
show_doc(FillStrategy, alt_doc_string='Enum flag that determines how `FillMissing` should handle missing/nan values', arg_comments={
'MEDIAN':'nans are replaced by the median value of the column',
'COMMON': 'nans are replaced by the most common value of the column',
'CONSTANT': 'nans are replaced by `fill_val`'
}) | _____no_output_____ | Apache-2.0 | docs_src/tabular.transform.ipynb | dienhoa/fastai_docs |
Automatic Differentiation of PDE solvers with JAX and dolfin-adjoint

Derivative information is the crucial requirement for effective algorithms for design optimization, parameter estimation, optimal control, model reduction, experimental design, and other tasks. This notebook gives an example of how to use JAX together with FEniCS and dolfin-adjoint for computing derivatives.

Poisson equation

The Poisson equation is the canonical elliptic partial differential equation. For a domain $\Omega \subset \mathbb{R}^n$ with boundary $\partial \Omega = \Gamma_{D}$, the Poisson equation with particular boundary conditions reads:

$$\begin{align*} - \nabla^{2} u &= f \quad {\rm in} \ \Omega, \\ u &= 0 \quad {\rm on} \ \Gamma_{D}. \\ \end{align*}$$

Here, $f$ is the input data. The most standard variational form of the Poisson equation reads: find $u \in V$ such that

$$\begin{equation*} a(u, v) = L(v) \quad \forall \ v \in V, \end{equation*}$$

where $V$ is a suitable function space and

$$\begin{align*} a(u, v) &= \int_{\Omega} \nabla u \cdot \nabla v \, {\rm d} x, \\ L(v) &= \int_{\Omega} f v \, {\rm d} x. \end{align*}$$ | # Let's import all needed stuff
import jax
from jax.config import config
import jax.numpy as np
import numpy as onp
from scipy.optimize import minimize
# Library for automated PDE solution
import fenics
# Library for automated derivative computation of FEniCS programs
import fenics_adjoint
# UFL is domain specific language (DSL) for declaration of finite element discretizations of variational forms
import ufl
# Suppress JIT compile message from FEniCS
import logging
logging.getLogger('FFC').setLevel(logging.WARNING)
logging.getLogger('UFL').setLevel(logging.WARNING)
# This is the core function here
from jaxfenics_adjoint import build_jax_fem_eval
from jaxfenics_adjoint import from_jax
import matplotlib.pyplot as plt
config.update("jax_enable_x64", True)
# fenics.set_log_level(fenics.LogLevel.ERROR) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
PDEs are spatial models and require a domain on which to be defined. Here we choose the domain to be a unit square, triangulated for the finite element discretization. | # Create mesh
n = 16
mesh = fenics_adjoint.UnitSquareMesh(n, n)
fenics.plot(mesh) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
Another important part of the Poisson variational problem is the function space $V$. This object is responsible for representing the discretized functions on the chosen mesh. Here we choose the function space to consist of piece-wise linear functions (P1 in finite element terminology, or CG1 for Continuous Galerkin of degree 1). DG corresponds to a Discontinuous Galerkin function space; DG0 is a piece-wise constant function on each mesh element. If the mesh consisted of quads, this would be similar to a pixel image. | # Define discrete function spaces and functions
V = fenics.FunctionSpace(mesh, "CG", 1)
W = fenics.FunctionSpace(mesh, "DG", 0) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
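As a quick sanity check of the discretization sizes, we can count degrees of freedom (a small sketch; CG1 has one degree of freedom per mesh vertex and DG0 one per cell): | # CG1: one dof per vertex; DG0: one dof per cell
print("V dofs:", V.dim())
print("W dofs:", W.dim()) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint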
The JAX-FEniCS interface needs auxiliary information to be able to freely convert data between the libraries; therefore we need "templates" that represent the expected input to the FEniCS function. | solve_templates = (fenics_adjoint.Function(W),) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint
Now we define the `fenics_solve` function which takes a function `f` which lives in the function space `W` and outputs the solution `u` to the Poisson equation.`build_jax_fem_eval` is a wrapper decorator that registers `fenics_solve` for JAX. | # Define and solve the Poisson equation
@build_jax_fem_eval(solve_templates)
def fenics_solve(f):
u = fenics.TrialFunction(V)
v = fenics.TestFunction(V)
inner, grad, dx = ufl.inner, ufl.grad, ufl.dx
# Compare this code to the mathematical formulation above
a = inner(grad(u), grad(v)) * dx
L = f * v * dx
bcs = fenics_adjoint.DirichletBC(V, 0.0, "on_boundary")
u = fenics_adjoint.Function(V, name="PDE Solution")
fenics_adjoint.solve(a == L, u, bcs)
return u
# Let's create a vector of ones with size equal to the number of cells in the mesh
f = np.ones(W.dim())
# and solve the Poisson equation for given `f`
u = fenics_solve(f) # u is JAX's array
# We need to explicitly provide the template function for conversion to FEniCS
u_fenics = from_jax(u, fenics.Function(V))
c = fenics.plot(u_fenics)
plt.colorbar(c) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
Here comes the JAX-specific part. Having defined a mapping from $f$ to $u$, we can differentiate it, for example by calculating a vector-Jacobian product with `jax.vjp`: | %%time
u, vjp_fun = jax.vjp(fenics_solve, f)
g = np.ones_like(u)
vjp_result = vjp_fun(g)
vjp_result_fenics = from_jax(*vjp_result, fenics.Function(W))
c = fenics.plot(vjp_result_fenics)
plt.colorbar(c) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
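Because `jax.grad` is reverse-mode differentiation built on top of `vjp`, gradients of any scalar functional of the solution come essentially for free. A minimal sketch, reusing the `f` from above (the quadratic functional here is just an illustrative choice): | # Scalar functional of the PDE solution; jax.grad reuses the vjp registered above
loss = lambda f: np.sum(fenics_solve(f) ** 2)
dloss_df = jax.grad(loss)(f)
print(dloss_df.shape)  # one sensitivity value per cell of W | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint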
It is also possible to calculate the full (dense) Jacobian matrix $\frac{du}{df}$ with `jax.jacrev`: | %%time
dudf = jax.jacrev(fenics_solve)(f)
# function `fenics_solve` maps R^512 (dimension of W) to R^289 (dimension of V)
# therefore the Jacobian matrix dimension is dim V x dim W
assert dudf.shape == (V.dim(), W.dim()) | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
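As a sanity check, a single column of this Jacobian can be approximated with plain forward differences. A minimal sketch (the helper below is hypothetical and assumes the `f`, `fenics_solve`, and `dudf` defined above); each column costs one extra PDE solve, so a full finite-difference Jacobian would need `W.dim()` solves: | # Hedged sketch: forward-difference approximation of one Jacobian column
def fd_jacobian_column(fun, x, j, eps=1e-6):
    e = onp.zeros(len(x))  # perturb only coordinate j
    e[j] = eps
    return (fun(x + e) - fun(x)) / eps

col0 = fd_jacobian_column(fenics_solve, f, 0)
print(np.max(np.abs(col0 - dudf[:, 0])))  # should be close to zero | _____no_output_____ | MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint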
The same Jacobian matrix can also be calculated using finite differences, for example with the `fdm` library. However, it takes considerably more time because it requires a `fenics_solve` call per input dimension, and the difference is even more drastic for larger models. The `jaxfenics_adjoint` library makes it possible to combine FEniCS programs with arbitrary JAX programs. In the above example, `f` could be the output of a neural network passed as an input to the PDE solver, or `u` could be further processed in JAX to evaluate functionals that are not integrals over the domain (integral functionals can already be handled within FEniCS). | # stax is JAX's mini-library for neural networks
from jax.experimental import stax
from jax.experimental.stax import Dense, Relu
from jax import random
# Use stax to set up network initialization and evaluation functions
# Define R^2 -> R^1 function
net_init, net_apply = stax.serial(
Dense(2), Relu,
Dense(10), Relu,
Dense(1),
)
# Initialize parameters, not committing to a batch shape
rng = random.PRNGKey(0)
in_shape = (-1, 2)
out_shape, net_params = net_init(rng, in_shape)
# Apply network to dummy inputs
predictions = net_apply(net_params, W.tabulate_dof_coordinates())
source_nn = from_jax(predictions, fenics.Function(W))
# Plot neural network prediction
c = fenics.plot(source_nn)
plt.colorbar(c)
def eval_nn(net_params):
f_nn = np.ravel(net_apply(net_params, W.tabulate_dof_coordinates()))
u = fenics_solve(f_nn)
norm_u = np.linalg.norm(u)
return norm_u
%%time
jax.grad(eval_nn)(net_params) | CPU times: user 340 ms, sys: 7.82 ms, total: 348 ms
Wall time: 337 ms
| MIT | notebooks/poisson-intro.ipynb | shyams2/jax-fenics-adjoint |
© Copyright Quantopian Inc. © Modifications Copyright QuantRocket LLC. Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode). Disclaimer. Introduction to pandas, by Maxwell Margenot. pandas is a Python library that provides a collection of powerful data structures to better help you manage data. In this lecture, we will cover how to use the `Series` and `DataFrame` objects to handle data. These objects integrate tightly with NumPy, allowing us to easily do the statistical and mathematical calculations that we need for finance. | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
With pandas, it is easy to store, visualize, and perform calculations on your data, and to present it in an easily understandable way. Here we simulate some returns in NumPy, put them into a pandas `DataFrame`, and perform calculations to turn them into prices and plot them, all in only a few lines of code. | returns = pd.DataFrame(np.random.normal(1.0, 0.03, (100, 10)))
prices = returns.cumprod()
prices.plot()
plt.title('Randomly-generated Prices')
plt.xlabel('Time')
plt.ylabel('Price')
plt.legend(loc=0); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
So let's have a look at how we actually build up to this point! pandas Data Structures: `Series`. A pandas `Series` is a 1-dimensional array with labels that can contain any data type. We primarily use them for handling time series data. Creating a `Series` is as easy as calling `pandas.Series()` on a Python list or NumPy array. | s = pd.Series([1, 2, np.nan, 4, 5])
print(s) | 0 1.0
1 2.0
2 NaN
3 4.0
4 5.0
dtype: float64
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Every `Series` has a name. We can give the series a name as a parameter or we can define it afterwards by directly accessing the name attribute. In this case, we have given our time series no name so the attribute should be empty. | print(s.name) | None
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
This name can be directly modified with no repercussions. | s.name = "Toy Series"
print(s.name) | Toy Series
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We call the collected axis labels of a `Series` its index. An index can either be passed to a `Series` as a parameter or added later, similarly to its name. In the absence of an index, a `Series` will simply contain an index composed of integers, starting at $0$, as in the case of our "Toy Series". | print(s.index)
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
pandas has a built-in function specifically for creating date indices, `date_range()`. We use the function here to create a new index for `s`. | new_index = pd.date_range("2016-01-01", periods=len(s), freq="D")
print(new_index) | DatetimeIndex(['2016-01-01', '2016-01-02', '2016-01-03', '2016-01-04',
'2016-01-05'],
dtype='datetime64[ns]', freq='D')
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
An index must be exactly the same length as the `Series` itself. Each index must match one-to-one with each element of the `Series`. Once this is satisfied, we can directly modify the `Series` index, as with the name, to use our new and more informative index (relatively speaking). | s.index = new_index
print(s.index) | DatetimeIndex(['2016-01-01', '2016-01-02', '2016-01-03', '2016-01-04',
'2016-01-05'],
dtype='datetime64[ns]', freq='D')
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
The index of the `Series` is crucial for handling time series, which we will get into a little later. Accessing `Series` Elements. `Series` are typically accessed using the `iloc[]` and `loc[]` methods. We use `iloc[]` to access elements by integer position and we use `loc[]` to access elements by the labels of the index. | print("First element of the series:", s.iloc[0])
print("Last element of the series:", s.iloc[len(s)-1]) | First element of the series: 1.0
Last element of the series: 5.0
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can slice a `Series` similarly to our favorite collections, Python lists and NumPy arrays. We use the colon operator to indicate the slice. | s.iloc[:2] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
When creating a slice, we have the options of specifying a beginning, an end, and a step. The slice will begin at the start index, and take steps of size `step` until it passes the end index, not including the end. | start = 0
end = len(s) - 1
step = 1
s.iloc[start:end:step] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can even reverse a `Series` by specifying a negative step size. Similarly, we can index the start and end with a negative integer value. | s.iloc[::-1] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
This returns a slice of the series that starts from the second to last element and ends at the third to last element (because the fourth to last is not included, taking backward steps of size $1$). | s.iloc[-2:-4:-1]
We can also access a series by using the values of its index. Since we indexed `s` with a collection of dates (`Timestamp` objects) we can look at the value contained in `s` for a particular date. | s.loc['2016-01-01'] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Or even for a range of dates! | s.loc['2016-01-02':'2016-01-04'] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
With `Series`, we *can* just use the brackets (`[]`) to access elements, but this is not best practice. The brackets are ambiguous because they can be used to access `Series` (and `DataFrames`) using both index and integer values and the results will change based on context (especially with `DataFrames`). Boolean Indexing. In addition to the above-mentioned access methods, you can filter `Series` using boolean arrays. `Series` are compatible with your standard comparators. Once compared with whatever condition you like, you get back yet another `Series`, this time filled with boolean values. | print(s < 3)
2016-01-02 True
2016-01-03 False
2016-01-04 False
2016-01-05 False
Freq: D, Name: Toy Series, dtype: bool
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can pass *this* `Series` back into the original `Series` to filter out only the elements for which our condition is `True`. | print(s.loc[s < 3]) | 2016-01-01 1.0
2016-01-02 2.0
Freq: D, Name: Toy Series, dtype: float64
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
If we so desire, we can group multiple conditions together using the logical operators `&`, `|`, and `~` (and, or, and not, respectively). | print(s.loc[(s < 3) & (s > 1)]) | 2016-01-02 2.0
Freq: D, Name: Toy Series, dtype: float64
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
This is very convenient for getting only the elements of a `Series` that fulfill the specific criteria we need. It gets even more convenient when we are handling `DataFrames`. Indexing and Time Series. Since we use `Series` for handling time series, it's worth covering a little bit of how we handle the time component. For our purposes we use pandas `Timestamp` objects. Let's pull a full time series, complete with all the appropriate labels, by using our `get_prices()` function. All data pulled with `get_prices()` will be in `DataFrame` format. We can modify this index however we like. | from quantrocket.master import get_securities
securities = get_securities(symbols='XOM', fields=['Sid','Symbol','Exchange'], vendors='usstock')
securities
from quantrocket import get_prices
XOM = securities.index[0]
start = "2012-01-01"
end = "2016-01-01"
prices = get_prices("usstock-free-1min", data_frequency="daily", sids=XOM, start_date=start, end_date=end, fields="Close")
prices = prices.loc["Close"][XOM] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can display the first few elements of our series by using the `head()` method and specifying the number of elements that we want. The analogous method for the last few elements is `tail()`. | print(type(prices))
prices.head(5) | <class 'pandas.core.series.Series'>
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
As with our toy example, we can specify a name for our time series, if only to clarify the name that `get_prices()` provides us. | print('Old name:', prices.name)
prices.name = "XOM"
print('New name:', prices.name) | Old name: FIBBG000GZQ728
New name: XOM
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Let's take a closer look at the `DatetimeIndex` of our `prices` time series. | print(prices.index)
print("tz:", prices.index.tz) | DatetimeIndex(['2012-01-03', '2012-01-04', '2012-01-05', '2012-01-06',
'2012-01-09', '2012-01-10', '2012-01-11', '2012-01-12',
'2012-01-13', '2012-01-17',
...
'2015-12-17', '2015-12-18', '2015-12-21', '2015-12-22',
'2015-12-23', '2015-12-24', '2015-12-28', '2015-12-29',
'2015-12-30', '2015-12-31'],
dtype='datetime64[ns]', name='Date', length=1006, freq=None)
tz: None
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Notice that this `DatetimeIndex` has a collection of associated information. In particular it has an associated frequency (`freq`) and an associated timezone (`tz`). The frequency indicates whether the data is daily vs monthly vs some other period while the timezone indicates what locale this index is relative to. We can modify all of this extra information!If we resample our `Series`, we can adjust the frequency of our data. We currently have daily data (excluding weekends). Let's downsample from this daily data to monthly data using the `resample()` method. | monthly_prices = prices.resample('M').last()
monthly_prices.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
In the above example we use the last value of the lower level data to create the higher level data. We can specify how else we might want the down-sampling to be calculated, for example using the median. | monthly_prices_med = prices.resample('M').median()
monthly_prices_med.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can even specify how we want the calculation of the new period to be done. Here we create a `custom_resampler()` function that will return the first value of the period. In our specific case, this will return a `Series` where the monthly value is the first value of that month. | def custom_resampler(array_like):
""" Returns the first value of the period """
return array_like[0]
first_of_month_prices = prices.resample('M').apply(custom_resampler)
first_of_month_prices.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can also adjust the timezone of a `Series` to adapt the time of real-world data. In our case, our time series isn't localized to a timezone, but let's say that we want to localize the time to be 'America/New_York'. In this case we use the `tz_localize()` method, since the time isn't already localized. | eastern_prices = prices.tz_localize('America/New_York')
eastern_prices.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
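Once a series has been localized, the related `tz_convert()` method translates it to another timezone (a quick sketch using the `eastern_prices` from above): | utc_prices = eastern_prices.tz_convert('UTC')
utc_prices.head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures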
In addition to the capacity for timezone and frequency management, each time series has a built-in `reindex()` method that we can use to realign the existing data according to a new set of index labels. If data does not exist for a particular label, the data will be filled with a placeholder value. This is typically `np.nan`, though we can provide a fill method.The data that we get from `get_prices()` only includes market days. But what if we want prices for every single calendar day? This will include holidays and weekends, times when you normally cannot trade equities. First let's create a new `DatetimeIndex` that contains all that we want. | calendar_dates = pd.date_range(start=start, end=end, freq='D')
print(calendar_dates) | DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03', '2012-01-04',
'2012-01-05', '2012-01-06', '2012-01-07', '2012-01-08',
'2012-01-09', '2012-01-10',
...
'2015-12-23', '2015-12-24', '2015-12-25', '2015-12-26',
'2015-12-27', '2015-12-28', '2015-12-29', '2015-12-30',
'2015-12-31', '2016-01-01'],
dtype='datetime64[ns]', length=1462, freq='D')
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Now let's use this new set of dates to reindex our time series. We tell the function that the fill method that we want is `ffill`. This denotes "forward fill". Any `NaN` values will be filled by the *last value* listed. So the price on the weekend or on a holiday will be listed as the price on the last market day that we know about. | calendar_prices = prices.reindex(calendar_dates, method='ffill')
calendar_prices.head(15) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
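To see how many placeholder values the reindex introduced, we can count the `NaN`s (a quick sketch): | calendar_prices.isnull().sum() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures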
You'll notice that we still have a couple of `NaN` values right at the beginning of our time series. This is because the first of January in 2012 was a Sunday and the second was a market holiday! Because these are the earliest data points and we don't have any information from before them, they cannot be forward-filled. We will take care of these `NaN` values in the next section, when we deal with missing data. Missing Data. Whenever we deal with real data, there is a very real possibility of encountering missing values. Real data is riddled with holes, and resampling or reindexing can create `NaN` values as well. Fortunately, pandas provides us with ways to handle them. We have two primary means of coping with missing data. The first of these is filling in the missing data with `fillna()`. For example, say that we want to fill in the missing days with the mean price of all days. | meanfilled_prices = calendar_prices.fillna(calendar_prices.mean())
meanfilled_prices.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Using `fillna()` is fairly easy. It is just a matter of indicating the value that you want to fill the spaces with. Unfortunately, this particular case doesn't make a whole lot of sense, for reasons discussed in the lecture on stationarity in the Lecture series. We could simply fill them with $0$, but that's similarly uninformative. Rather than filling in specific values, we can use the `method` parameter. We could use "backward fill", where `NaN`s are filled with the *next* filled value (instead of forward fill's *last* filled value), like so: | bfilled_prices = calendar_prices.fillna(method='bfill')
bfilled_prices.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
But again, this is a bad idea for the same reasons as the previous option. Both of these so-called solutions take into account *future data* that was not available at the time of the data points that we are trying to fill. In the case of using the mean or the median, these summary statistics are calculated by taking into account the entire time series. Backward filling is equivalent to saying that the price of a particular security today, right now, is tomorrow's price. This also makes no sense. These two options are both examples of look-ahead bias, using data that would be unknown or unavailable at the desired time, and should be avoided. Our next option is significantly more appealing. We could simply drop the missing data using the `dropna()` method. This is a much better alternative to filling in `NaN` values with arbitrary numbers. | dropped_prices = calendar_prices.dropna()
dropped_prices.head(10) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Now our time series is cleaned for the calendar year, with all of our `NaN` values properly handled. It is time to talk about how to actually do time series analysis with pandas data structures. Time Series Analysis with pandas. Let's do some basic time series analysis on our original prices. Each pandas `Series` has a built-in plotting method. | prices.plot();
# We still need to add the axis labels and title ourselves
plt.title("XOM Prices")
plt.ylabel("Price")
plt.xlabel("Date"); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
As well as some built-in descriptive statistics. We can either calculate these individually or use the `describe()` method. | print("Mean:", prices.mean())
print("Standard deviation:", prices.std())
print("Summary Statistics")
print(prices.describe()) | Summary Statistics
count 1006.000000
mean 86.777275
std 6.800729
min 68.116000
25% 82.356500
50% 85.377000
75% 91.559500
max 102.762000
Name: XOM, dtype: float64
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can easily modify `Series` with scalars using our basic mathematical operators. | modified_prices = prices * 2 - 10
modified_prices.head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
And we can create linear combinations of `Series` themselves using the basic mathematical operators. pandas will group up matching indices and perform the calculations elementwise to produce a new `Series`. | noisy_prices = prices + 5 * pd.Series(np.random.normal(0, 5, len(prices)), index=prices.index) + 20
noisy_prices.head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
If there are no matching indices, however, the result will contain only `NaN` values, since no labels line up. | empty_series = prices + pd.Series(np.random.normal(0, 1, len(prices)))
empty_series.head(5) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Rather than looking at a time series itself, we may want to look at its first-order differences or percent change (in order to get additive or multiplicative returns, in our particular case). Both of these are built-in methods. | add_returns = prices.diff()[1:]
mult_returns = prices.pct_change()[1:]
plt.title("Multiplicative returns of XOM")
plt.xlabel("Date")
plt.ylabel("Percent Returns")
mult_returns.plot(); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
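We can recover a normalized price series from the multiplicative returns by compounding them, mirroring the simulation at the top of the lecture (a quick sketch): | normalized_prices = (1 + mult_returns).cumprod()
normalized_prices.plot()
plt.title("XOM Prices Normalized to the First Day")
plt.xlabel("Date")
plt.ylabel("Growth of $1"); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures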
pandas has convenient functions for calculating rolling means and standard deviations, as well! | rolling_mean = prices.rolling(30).mean()
rolling_mean.name = "30-day rolling mean"
prices.plot()
rolling_mean.plot()
plt.title("XOM Price")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
rolling_std = prices.rolling(30).std()
rolling_std.name = "30-day rolling volatility"
rolling_std.plot()
plt.title(rolling_std.name);
plt.xlabel("Date")
plt.ylabel("Standard Deviation"); | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Many NumPy functions will work on `Series` the same way that they work on 1-dimensional NumPy arrays. | print(np.median(mult_returns)) | -0.000332104546511
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
The majority of these functions, however, are already implemented directly as `Series` and `DataFrame` methods. | print(mult_returns.median()) | -0.0003321045465112249
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
In every case, using the built-in pandas method will be better than using the NumPy function on a pandas data structure due to improvements in performance. Make sure to check out the `Series` [documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html) before resorting to other calculations of common functions. `DataFrames`. Many of the aspects of working with `Series` carry over into `DataFrames`. pandas `DataFrames` allow us to easily manage our data with their intuitive structure. Like `Series`, `DataFrames` can hold multiple types of data, but `DataFrames` are 2-dimensional objects, unlike `Series`. Each `DataFrame` has an index and a columns attribute, which we will cover more in-depth when we start actually playing with an object. The index attribute is like the index of a `Series`, though indices in pandas have some extra features that we will unfortunately not be able to cover here. If you are interested in this, check out the [pandas documentation](https://pandas.pydata.org/docs/user_guide/advanced.html) on advanced indexing. The columns attribute is what provides the second dimension of our `DataFrames`, allowing us to combine named columns (all `Series`) into a cohesive object with the index lined up. We can create a `DataFrame` by calling `pandas.DataFrame()` on a dictionary or NumPy `ndarray`. We can also concatenate a group of pandas `Series` into a `DataFrame` using `pandas.concat()`. | dict_data = {
'a' : [1, 2, 3, 4, 5],
'b' : ['L', 'K', 'J', 'M', 'Z'],
'c' : np.random.normal(0, 1, 5)
}
print(dict_data) | {'a': [1, 2, 3, 4, 5], 'b': ['L', 'K', 'J', 'M', 'Z'], 'c': array([-0.56478731, -0.54468815, -0.97128315, 0.73563591, -0.02876649])}
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Each `DataFrame` has a few key attributes that we need to keep in mind. The first of these is the index attribute. We can easily include an index of `Timestamp` objects like we did with `Series`. | frame_data = pd.DataFrame(dict_data, index=pd.date_range('2016-01-01', periods=5))
print(frame_data) | a b c
2016-01-01 1 L -0.564787
2016-01-02 2 K -0.544688
2016-01-03 3 J -0.971283
2016-01-04 4 M 0.735636
2016-01-05 5 Z -0.028766
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
As mentioned above, we can combine `Series` into `DataFrames`. Concatenating `Series` like this will match elements up based on their corresponding index. As the following `Series` do not have an index assigned, they each default to an integer index. | s_1 = pd.Series([2, 4, 6, 8, 10], name='Evens')
s_2 = pd.Series([1, 3, 5, 7, 9], name="Odds")
numbers = pd.concat([s_1, s_2], axis=1)
print(numbers) | Evens Odds
0 2 1
1 4 3
2 6 5
3 8 7
4 10 9
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We will use `pandas.concat()` again later to combine multiple `DataFrame`s into one. Each `DataFrame` also has a `columns` attribute. These can either be assigned when we call `pandas.DataFrame` or they can be modified directly like the index. Note that when we concatenated the two `Series` above, the column names were the names of those `Series`. | print(numbers.columns) | Index(['Evens', 'Odds'], dtype='object')
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
To modify the columns after object creation, we need only do the following: | numbers.columns = ['Shmevens', 'Shmodds']
print(numbers) | Shmevens Shmodds
0 2 1
1 4 3
2 6 5
3 8 7
4 10 9
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
In the same vein, the index of a `DataFrame` can be changed after the fact. | print(numbers.index)
numbers.index = pd.date_range("2016-01-01", periods=len(numbers))
print(numbers) | Shmevens Shmodds
2016-01-01 2 1
2016-01-02 4 3
2016-01-03 6 5
2016-01-04 8 7
2016-01-05 10 9
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Separate from the columns and index of a `DataFrame`, we can also directly access the values they contain by looking at the values attribute. | numbers.values | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
This returns a NumPy array. | type(numbers.values) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Accessing `DataFrame` elements. Again we see a lot of carryover from `Series` in how we access the elements of `DataFrames`. The key sticking point here is that everything now has to take multiple dimensions into account. The main way that this happens is through accessing the columns of a `DataFrame`, either individually or in groups. We can do this either by directly accessing the attributes or by using the methods we are already familiar with. Let's start by loading price data for several securities: | securities = get_securities(symbols=['XOM', 'JNJ', 'MON', 'KKD'], vendors='usstock')
securities | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Since `get_securities` returns sids in the index, we can call the index's `tolist()` method to pass a list of sids to `get_prices`: | start = "2012-01-01"
end = "2017-01-01"
prices = get_prices("usstock-free-1min", data_frequency="daily", sids=securities.index.tolist(), start_date=start, end_date=end, fields="Close")
prices = prices.loc["Close"]
prices.head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
For the purpose of this tutorial, it will be more convenient to reference the data by symbol instead of sid. To do this, we can create a Python dictionary mapping sid to symbol, and use the dictionary to rename the columns, using the DataFrame's `rename` method: | sids_to_symbols = securities.Symbol.to_dict()
prices = prices.rename(columns=sids_to_symbols)
prices.head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Here we directly access the `XOM` column. Note that this style of access will only work if your column name has no spaces or unfriendly characters in it. | prices.XOM.head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can also access the column using the column name in brackets: | prices["XOM"].head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can also use `loc[]` to access an individual column like so. | prices.loc[:, 'XOM'].head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Accessing an individual column will return a `Series`, regardless of how we get it. | print(type(prices.XOM))
print(type(prices.loc[:, 'XOM'])) | <class 'pandas.core.series.Series'>
<class 'pandas.core.series.Series'>
| CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Notice how we pass a tuple into the `loc[]` method? This is a key difference between accessing a `Series` and accessing a `DataFrame`, grounded in the fact that a `DataFrame` has multiple dimensions. When you pass a 2-dimensional tuple into a `DataFrame`, the first element of the tuple is applied to the rows and the second is applied to the columns. So, to break it down, the above line of code tells the `DataFrame` to return every single row of the column with label `'XOM'`. Lists of columns are also supported. | prices.loc[:, ['XOM', 'JNJ']].head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
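The same tuple form can also pin down a single value, with one row label and one column label (a quick sketch; this date is a market day that also appears in the slice below): | prices.loc['2015-12-15', 'XOM'] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures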
We can also simply access the `DataFrame` by index value using `loc[]`, as with `Series`. | prices.loc['2015-12-15':'2015-12-22'] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
This plays nicely with lists of columns, too. | prices.loc['2015-12-15':'2015-12-22', ['XOM', 'JNJ']] | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Using `iloc[]` also works similarly, allowing you to access parts of the `DataFrame` by integer index. | prices.iloc[0:2, 1]
# Access prices with integer index in
# [1, 3, 5, 7, 9, 11, 13, ..., 99]
# and in columns 0 and 2
prices.iloc[[1, 3, 5] + list(range(7, 100, 2)), [0, 2]].head(20) | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
Boolean indexingAs with `Series`, sometimes we want to filter a `DataFrame` according to a set of criteria. We do this by indexing our `DataFrame` with boolean values. | prices.loc[prices.MON > prices.JNJ].head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |
We can add multiple boolean conditions by using the logical operators `&`, `|`, and `~` (and, or, and not, respectively) again! | prices.loc[(prices.MON > prices.JNJ) & ~(prices.XOM > 66)].head() | _____no_output_____ | CC-BY-4.0 | quant_finance_lectures/Lecture04-Introduction-to-Pandas.ipynb | jonrtaylor/quant-finance-lectures |