body_hash | path | name | repository_name | repository_stars | lang | body
5fbac1222e7e65440aacf634a83faffceb43c5f0de6a06797ed53ff9bc3e7b35 | src/palpable/procedures/procedure.py | run | XiaoMutt/palpable | 0 | python

def run(self, messenger):
    """
    This method will be called by the Worker to execute in a process.

    Override this method.
    Use __init__ to set any params needed for this call.
    The messenger parameter is a Messenger instance.

    Use messenger.debug/info/warning/error to send logs.
    Use messenger.submit_tasks to submit subtasks to the server.
    Use messenger.query_results to query the results of the submitted subtasks.

    If you call predefined functions in this method, catch any possible `print`
    inside them with:
        predefined_function.__globals__["print"] = messenger.print  # inject messenger.print as print
    See the RunFunction procedure as an example.

    ATTENTION: do not use multiprocessing in this method.

    :param messenger: Messenger
    :return: the data if the task is successful. The data is wrapped into a
        successful TaskResult by the TaskWorker.
    :raise: TaskFailed with the failure data if the task is unsuccessful, e.g.
        raise TaskFailed("ID not found"); "ID not found" is wrapped into a
        failed TaskResult. Other exceptions are caught by the Worker and
        wrapped into an unsuccessful TaskResult with the Exception instance
        as data.
    """
    raise NotImplementedError
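The override contract above can be sketched as a minimal subclass. The `Procedure`, `Messenger`, and `TaskFailed` stubs below only mimic the interface described in the docstring, they are not palpable's real classes, and `LookupId` is a hypothetical example task:

```python
class TaskFailed(Exception):
    """Raised with the failure data when a task is unsuccessful."""

class Messenger:
    def info(self, msg):
        # stand-in for messenger.info: just send the log line to stdout
        print(msg)

class Procedure:
    def run(self, messenger):
        raise NotImplementedError

class LookupId(Procedure):
    def __init__(self, record_id, table):
        # __init__ holds the params needed by run()
        self.record_id = record_id
        self.table = table

    def run(self, messenger):
        messenger.info(f"looking up {self.record_id}")
        if self.record_id not in self.table:
            # the failure data becomes a failed TaskResult on the worker side
            raise TaskFailed("ID not found")
        return self.table[self.record_id]  # success data
```

On the worker side, the return value of `run` would be wrapped into a successful TaskResult, while the `TaskFailed` payload would become a failed one.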
ecf62cc1ea9f0fa2947e86dbd3c3096956d7d15453804987fa3605ecdac0f258 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | comprep | Rohde-Schwarz/examples | 0 | python

def comprep():
    """Preparation of the communication (termination, etc...)"""
    print(f'VISA Manufacturer: {Instrument.visa_manufacturer}')
    Instrument.visa_timeout = 5000  # timeouts in ms
    Instrument.opc_timeout = 5000
    Instrument.instrument_status_checking = True
    Instrument.clear_status()
f544a7d881d6d549903c86d32ee8ac85892e42f64389c7f5f13264bbd342aa21 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | close | Rohde-Schwarz/examples | 0 | python

def close():
    """Close the VISA session"""
    Instrument.close()
9bea74ac8e321f50595cc7a0e895ba4a3cd9d732cedf641a55af3182598af6ba | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | comcheck | Rohde-Schwarz/examples | 0 | python

def comcheck():
    """Check communication with the device"""
    idnResponse = Instrument.query_str('*IDN?')
    sleep(1)
    print('Hello, I am ' + idnResponse)
ba61186c1f9e39d0df675e76d2adf2db0488f2c68dd23057b657e007b52401df | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | meassetup | Rohde-Schwarz/examples | 0 | python

def meassetup():
    """Prepare measurement setup and define calkit"""
    Instrument.write_str_with_opc('SYSTEM:DISPLAY:UPDATE ON')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe1:SWEep:POINts 501')
    Instrument.write_str_with_opc('CALCulate1:PARameter:MEAsure "Trc1", "S11"')
    Instrument.write_str_with_opc('SENSe1:CORRection:CKIT:PC292:SELect "ZN-Z229"')
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:CONN PC292MALE')
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:METHod:DEFine "NewCal", FOPort, 1')
    Instrument.write_str_with_opc('SENSe:CORRection:COLLect:ACQuire:RSAVe:DEFault OFF')
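As a quick sanity check on the sweep grid configured above (a back-of-the-envelope aside, not part of the Rohde & Schwarz example):

```python
# The sweep in meassetup() spans 1 GHz to 2 GHz over 501 points,
# so adjacent sweep points are (stop - start) / (points - 1) apart.
start_hz, stop_hz, points = 1e9, 2e9, 501
step_hz = (stop_hz - start_hz) / (points - 1)
print(step_hz)  # -> 2000000.0, i.e. 2 MHz per point
```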
1a174ca8a92d6f72dea95e752a05bb690fafa023c79127a202c0dbc23104a5ae | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | calopen | Rohde-Schwarz/examples | 0 | python

def calopen():
    """Perform calibration with open element"""
    print()
    print('Please connect OPEN to port 1 and confirm')
    _ = input()
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:ACQuire:SELected OPEN, 1')
5246185cfcb1fd96f657be57e32b4e4e58ec72c9fc14b5659b7baa3f9190c0fc | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | calshort | Rohde-Schwarz/examples | 0 | python

def calshort():
    """Perform calibration with short element"""
    print('Please connect SHORT to port 1 and confirm')
    _ = input()
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:ACQuire:SELected SHORT, 1')
0ea980840a620f5ad8ba6bb680ddebd9c27ab8cb1869f0b5bb9d6b07de6a9b60 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | calmatch | Rohde-Schwarz/examples | 0 | python

def calmatch():
    """Perform calibration with match element"""
    print('Please connect MATCH to port 1 and confirm')
    _ = input()
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:ACQuire:SELected MATCH, 1')
878fb1ef804542e23bfc82957d210873713d410be5cd1ec95d1c37d78abf1005 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | applycal | Rohde-Schwarz/examples | 0 | python

def applycal():
    """Apply calibration after it is finished and save the calfile"""
    sleep(2)
    Instrument.write_str_with_opc('SENSe1:CORRection:COLLect:SAVE:SELected')
bacbe8468b363fdae186b5da7d9a47b110550b063a2c6580b26fd411d0103ff8 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | savecal | Rohde-Schwarz/examples | 0 | python

def savecal():
    """Save the calibration file to the pool"""
    print('Now saving the calibration to the pool')
    Instrument.write('MMEMory:STORE:CORRection 1,"P1_OSM_1-2GHz"')
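The helpers above issue a fixed open-short-match (OSM) sequence. As a rough sketch of that flow, testable without ZNB hardware, the same SCPI strings can be driven against a stand-in instrument that merely records commands (`FakeInstrument` and `run_osm_cal` are assumptions of this sketch, not part of RsInstrument):

```python
class FakeInstrument:
    """Records SCPI commands instead of talking to a real ZNB."""
    def __init__(self):
        self.commands = []
    def write_str_with_opc(self, cmd):
        self.commands.append(cmd)
    def write(self, cmd):
        self.commands.append(cmd)

def run_osm_cal(inst):
    # same command order as calopen() -> calshort() -> calmatch()
    # -> applycal() -> savecal(), without the interactive prompts
    for standard in ("OPEN", "SHORT", "MATCH"):
        inst.write_str_with_opc(
            f"SENSe1:CORRection:COLLect:ACQuire:SELected {standard}, 1")
    inst.write_str_with_opc("SENSe1:CORRection:COLLect:SAVE:SELected")
    inst.write('MMEMory:STORE:CORRection 1,"P1_OSM_1-2GHz"')
```

Swapping `FakeInstrument` for the real RsInstrument session should reproduce the sequence the interactive functions send.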
d7da93e442948bd267b4ff1cee4a8225015c47bb1845fc1e4a35de06cf1efe83 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | loadprep | Rohde-Schwarz/examples | 0 | python

def loadprep():
    """Reset the instrument and set up three channels with identical sweep settings"""
    print()
    print('Resetting the instrument, assign three channels with adequate settings')
    Instrument.write_str_with_opc('*RST')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe1:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe1:SWEep:POINts 501')
    Instrument.write_str_with_opc("CALCULATE2:PARAMETER:SDEFINE 'Trc2', 'S11'")
    Instrument.write_str_with_opc("CALCULATE2:PARAMETER:SELECT 'Trc2'")
    Instrument.write_str_with_opc('DISPLAY:WINDOW2:STATE ON')
    Instrument.write_str_with_opc("DISPLAY:WINDOW2:TRACE1:FEED 'Trc2'")
    Instrument.write_str_with_opc('SENSe2:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe2:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe2:SWEep:POINts 501')
    Instrument.write_str_with_opc("CALCULATE3:PARAMETER:SDEFINE 'Trc3', 'S11'")
    Instrument.write_str_with_opc("CALCULATE3:PARAMETER:SELECT 'Trc3'")
    Instrument.write_str_with_opc('DISPLAY:WINDOW3:STATE ON')
    Instrument.write_str_with_opc("DISPLAY:WINDOW3:TRACE1:FEED 'Trc3'")
    Instrument.write_str_with_opc('SENSe3:FREQuency:Start 1e9')
    Instrument.write_str_with_opc('SENSe3:FREQuency:Stop 2e9')
    Instrument.write_str_with_opc('SENSe3:SWEep:POINts 501')
05d018a3906438e60511401bb6b55027105fc08dc20fe444d27bbf0eb3a81247 | VectorNetworkAnalyzers/Python/RsInstrument/RsInstrument_ZNB_CAL_P1_Save_Reload.py | loadcal | Rohde-Schwarz/examples | 0 | python

def loadcal():
    """Load the saved calibration file into each of the three channels"""
    print()
    print('Load the calibration to all three channels')
    Instrument.write('MMEMory:LOAD:CORRection 1,"P1_OSM_1-2GHz"')
    Instrument.write('MMEMory:LOAD:CORRection 2,"P1_OSM_1-2GHz"')
    Instrument.write('MMEMory:LOAD:CORRection 3,"P1_OSM_1-2GHz"')
3ff8b9203d34a658b8d6fbb38c1b4711d16c93ff7fbd743ce52f1d65d220ff9a | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count | RyanWei/milvus | 3 | python

def test_collection_count(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection and add vectors in it,
            assert the value returned by the count_entities method is equal
            to the length of vectors
    expected: the count is equal to the length of vectors
    """
    entities = gen_entities(insert_count)
    connect.insert(collection, entities)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
be8bf77bff628fe01eec36d76adf1c2e5399e41041412a592a347fc75cba9eb2 | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_partition | RyanWei/milvus | 3 | python

def test_collection_count_partition(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create a partition and add vectors in it,
            assert the value returned by the count_entities method is equal
            to the length of vectors
    expected: the count is equal to the length of vectors
    """
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    connect.insert(collection, entities, partition_tag=tag)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
b5920042c419c97a35a951c165a210d46ec2cad2fe7513107e0098ce80f976bb | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_A | RyanWei/milvus | 3 | python

def test_collection_count_multi_partitions_A(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create partitions and add entities to the
            default partition, assert the value returned by the
            count_entities method is equal to the length of entities
    expected: the count is equal to the length of entities
    """
    new_tag = 'new_tag'
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    connect.create_partition(collection, new_tag)
    connect.insert(collection, entities)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
272bebdbd5c9cd15c8052c3e53881740088089d3e7d213a941584aa0ea9556e2 | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_B | RyanWei/milvus | 3 | python

def test_collection_count_multi_partitions_B(self, connect, collection, insert_count):
    """
    target: test collection rows_count is correct or not
    method: create collection, create partitions and add entities in one of
            the partitions, assert the value returned by the count_entities
            method is equal to the length of entities
    expected: the count is equal to the length of entities
    """
    new_tag = 'new_tag'
    entities = gen_entities(insert_count)
    connect.create_partition(collection, tag)
    connect.create_partition(collection, new_tag)
    connect.insert(collection, entities, partition_tag=tag)
    connect.flush([collection])
    res = connect.count_entities(collection)
    assert res == insert_count
00f1ab5395ac0bd32cec7ed3d8ab7fa0b8b288a3d202442c8a83d8066a8cfda6 | def test_collection_count_multi_partitions_C(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of vectors\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities)
res_ids_2 = connect.insert(collection, entities, partition_tag=tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of vectors | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_C | RyanWei/milvus | 3 | python | def test_collection_count_multi_partitions_C(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of vectors\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities)
res_ids_2 = connect.insert(collection, entities, partition_tag=tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2)) | def test_collection_count_multi_partitions_C(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the count is equal to twice the length of entities\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities)
res_ids_2 = connect.insert(collection, entities, partition_tag=tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2))<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the count is equal to twice the length of entities<|endoftext|>
44e5a31f5f9477a111a82c0e28bd987d7fb6d1f958d33c358c33d8fe0a7fdc92 | def test_collection_count_multi_partitions_D(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities, partition_tag=tag)
res_ids2 = connect.insert(collection, entities, partition_tag=new_tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the collection count is equal to twice the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_D | RyanWei/milvus | 3 | python | def test_collection_count_multi_partitions_D(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities, partition_tag=tag)
res_ids2 = connect.insert(collection, entities, partition_tag=new_tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2)) | def test_collection_count_multi_partitions_D(self, connect, collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
entities = gen_entities(insert_count)
connect.create_partition(collection, tag)
connect.create_partition(collection, new_tag)
res_ids = connect.insert(collection, entities, partition_tag=tag)
res_ids2 = connect.insert(collection, entities, partition_tag=new_tag)
connect.flush([collection])
res = connect.count_entities(collection)
assert (res == (insert_count * 2))<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the collection count is equal to twice the length of entities<|endoftext|>
2cdb648a12c07332d4e0b0fd5801410c88a58b429306bae2303396c9dff41a46 | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count) | target: test count_entities, after an index has been created
method: add vectors in db, and create index, then call count_entities with correct params
expected: the count is equal to the length of inserted entities | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count) | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, default_float_vec_field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count)<|docstring|>target: test count_entities, after an index has been created
method: add vectors in db, and create index, then call count_entities with correct params
expected: the count is equal to the length of inserted entities<|endoftext|>
fad252f5927a259eb49255e0d517edd6b375822afd6bbed1958140078b404508 | def test_count_without_connection(self, collection, dis_connect):
'\n target: test count_entities, without connection\n method: calling count_entities with correct params, with a disconnected instance\n expected: count_entities raise exception\n '
with pytest.raises(Exception) as e:
dis_connect.count_entities(collection) | target: test count_entities, without connection
method: calling count_entities with correct params, with a disconnected instance
expected: count_entities raise exception | tests/milvus_python_test/collection/test_collection_count.py | test_count_without_connection | RyanWei/milvus | 3 | python | def test_count_without_connection(self, collection, dis_connect):
'\n target: test count_entities, without connection\n method: calling count_entities with correct params, with a disconnected instance\n expected: count_entities raise exception\n '
with pytest.raises(Exception) as e:
dis_connect.count_entities(collection) | def test_count_without_connection(self, collection, dis_connect):
'\n target: test count_entities, without connection\n method: calling count_entities with correct params, with a disconnected instance\n expected: count_entities raise exception\n '
with pytest.raises(Exception) as e:
dis_connect.count_entities(collection)<|docstring|>target: test count_entities, without connection
method: calling count_entities with correct params, with a disconnected instance
expected: count_entities raise exception<|endoftext|> |
8cd24880cb7b6ccf597167cc40e7109ac350ea8a7985433ac892171c95548b38 | def test_collection_count_no_vectors(self, connect, collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(collection)
assert (res == 0) | target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0 | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_no_vectors | RyanWei/milvus | 3 | python | def test_collection_count_no_vectors(self, connect, collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(collection)
assert (res == 0) | def test_collection_count_no_vectors(self, connect, collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(collection)
assert (res == 0)<|docstring|>target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0<|endoftext|> |
bf479c69039350542da1a3dbda5ad214bfc1765674ee1f4680e4879aa849e55b | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count) | target: test count_entities, after an index has been created
method: add vectors in db, and create index, then call count_entities with correct params
expected: the count is equal to the length of inserted entities | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count) | def _test_collection_count_after_index_created(self, connect, collection, get_simple_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
entities = gen_entities(insert_count)
res = connect.insert(collection, entities)
connect.flush([collection])
connect.create_index(collection, field_name, get_simple_index)
res = connect.count_entities(collection)
assert (res == insert_count)<|docstring|>target: test count_entities, after an index has been created
method: add vectors in db, and create index, then call count_entities with correct params
expected: the count is equal to the length of inserted entities<|endoftext|>
d7915203feac9e3756bf67cb58d280b08395db2ff74b94246e7d41377c474472 | def test_collection_count(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
logging.getLogger().info(len(res))
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count | RyanWei/milvus | 3 | python | def test_collection_count(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
logging.getLogger().info(len(res))
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | def test_collection_count(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
logging.getLogger().info(len(res))
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count)<|docstring|>target: test collection rows_count is correct or not
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
5b4e0c2f85689b6a90d941758c5cc3ad6fa65a0a5677ce991414c5600d7aedd7 | def test_collection_count_partition(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partition and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection, create partition and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_partition | RyanWei/milvus | 3 | python | def test_collection_count_partition(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partition and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | def test_collection_count_partition(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partition and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count)<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partition and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
9a8d4fa1c99ba833d2357767eb22394f9b78c9a3f54e8fba8742c0079b10bc64 | @pytest.mark.level(2)
def test_collection_count_multi_partitions_A(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_A | RyanWei/milvus | 3 | python | @pytest.mark.level(2)
def test_collection_count_multi_partitions_A(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | @pytest.mark.level(2)
def test_collection_count_multi_partitions_A(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count)<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
a141514a7c944e8afc7fd4cf215584a7e3fd272263f44d9679e438737da77bbf | @pytest.mark.level(2)
def test_collection_count_multi_partitions_B(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_B | RyanWei/milvus | 3 | python | @pytest.mark.level(2)
def test_collection_count_multi_partitions_B(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count) | @pytest.mark.level(2)
def test_collection_count_multi_partitions_B(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == insert_count)<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
49b11d7d45cd31fee07a8dce5ac73b2193bffe1e5b58ea2120532b9768e65f8b | def test_collection_count_multi_partitions_C(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
res_ids_2 = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the count is equal to twice the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_C | RyanWei/milvus | 3 | python | def test_collection_count_multi_partitions_C(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
res_ids_2 = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2)) | def test_collection_count_multi_partitions_C(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities)
res_ids_2 = connect.insert(binary_collection, entities, partition_tag=tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2))<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the count is equal to twice the length of entities<|endoftext|>
058a27c1ade68860abe2630065d10cbb5c2c32e412e2282b77480b3130e51b73 | @pytest.mark.level(2)
def test_collection_count_multi_partitions_D(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
res_ids2 = connect.insert(binary_collection, entities, partition_tag=new_tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2)) | target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the collection count is equal to twice the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_partitions_D | RyanWei/milvus | 3 | python | @pytest.mark.level(2)
def test_collection_count_multi_partitions_D(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
res_ids2 = connect.insert(binary_collection, entities, partition_tag=new_tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2)) | @pytest.mark.level(2)
def test_collection_count_multi_partitions_D(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not\n method: create collection, create partitions and add entities in one of the partitions,\n assert the value returned by count_entities method is equal to the length of entities\n expected: the collection count is equal to twice the length of entities\n '
new_tag = 'new_tag'
(raw_vectors, entities) = gen_binary_entities(insert_count)
connect.create_partition(binary_collection, tag)
connect.create_partition(binary_collection, new_tag)
res_ids = connect.insert(binary_collection, entities, partition_tag=tag)
res_ids2 = connect.insert(binary_collection, entities, partition_tag=new_tag)
connect.flush([binary_collection])
res = connect.count_entities(binary_collection)
assert (res == (insert_count * 2))<|docstring|>target: test collection rows_count is correct or not
method: create collection, create partitions and add entities in one of the partitions,
assert the value returned by count_entities method is equal to the length of entities
expected: the collection count is equal to twice the length of entities<|endoftext|>
3d5a047123c96bac76431ad3bab6352d3980074083d2451a60078418aa4c247e | def _test_collection_count_after_index_created(self, connect, binary_collection, get_jaccard_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_jaccard_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test count_entities, after an index has been created
method: add vectors in db, and create index, then call count_entities with correct params
expected: the count is equal to the length of inserted entities | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python | def _test_collection_count_after_index_created(self, connect, binary_collection, get_jaccard_index, insert_count):
'\n target: test count_entities, after an index has been created\n method: add vectors in db, and create index, then call count_entities with correct params\n expected: the count is equal to the length of inserted entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_jaccard_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count) | def _test_collection_count_after_index_created(self, connect, binary_collection, get_jaccard_index, insert_count):
'\n target: test count_entities, after index have been created\n method: add vectors in db, and create index, then calling count_entities with correct params \n expected: count_entities raise exception\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_jaccard_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count)<|docstring|>target: test count_entities, after index have been created
method: add vectors in db, and create index, then calling count_entities with correct params
expected: count_entities raise exception<|endoftext|> |
4c0e83a981d71177f638fb5a4d4b3a825862d8ffd089741416ac16cebe3e10de | def _test_collection_count_after_index_created(self, connect, binary_collection, get_hamming_index, insert_count):
'\n target: test count_entities, after index have been created\n method: add vectors in db, and create index, then calling count_entities with correct params \n expected: count_entities raise exception\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_hamming_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count) | target: test count_entities, after index have been created
method: add vectors in db, and create index, then calling count_entities with correct params
expected: count_entities raise exception | tests/milvus_python_test/collection/test_collection_count.py | _test_collection_count_after_index_created | RyanWei/milvus | 3 | python | def _test_collection_count_after_index_created(self, connect, binary_collection, get_hamming_index, insert_count):
'\n target: test count_entities, after index have been created\n method: add vectors in db, and create index, then calling count_entities with correct params \n expected: count_entities raise exception\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_hamming_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count) | def _test_collection_count_after_index_created(self, connect, binary_collection, get_hamming_index, insert_count):
'\n target: test count_entities, after index have been created\n method: add vectors in db, and create index, then calling count_entities with correct params \n expected: count_entities raise exception\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
connect.flush([binary_collection])
connect.create_index(binary_collection, field_name, get_hamming_index)
res = connect.count_entities(binary_collection)
assert (res == insert_count)<|docstring|>target: test count_entities, after index have been created
method: add vectors in db, and create index, then calling count_entities with correct params
expected: count_entities raise exception<|endoftext|> |
a24a30860e0f364d690372fb1f4c159f1ed2570ec92578cd4bbe59b481126796 | def test_collection_count_no_entities(self, connect, binary_collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(binary_collection)
assert (res == 0) | target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0 | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_no_entities | RyanWei/milvus | 3 | python | def test_collection_count_no_entities(self, connect, binary_collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(binary_collection)
assert (res == 0) | def test_collection_count_no_entities(self, connect, binary_collection):
'\n target: test collection rows_count is correct or not, if collection is empty\n method: create collection and no vectors in it,\n assert the value returned by count_entities method is equal to 0\n expected: the count is equal to 0\n '
res = connect.count_entities(binary_collection)
assert (res == 0)<|docstring|>target: test collection rows_count is correct or not, if collection is empty
method: create collection and no vectors in it,
assert the value returned by count_entities method is equal to 0
expected: the count is equal to 0<|endoftext|> |
ad73705c4510671dba0039868bb12878060608b113ec70984d48560d03ce46b7 | def test_collection_count_multi_collections_l2(self, connect, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of L2\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
entities = gen_entities(insert_count)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count) | target: test collection rows_count is correct or not with multiple collections of L2
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_collections_l2 | RyanWei/milvus | 3 | python | def test_collection_count_multi_collections_l2(self, connect, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of L2\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
entities = gen_entities(insert_count)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count) | def test_collection_count_multi_collections_l2(self, connect, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of L2\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
entities = gen_entities(insert_count)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count)<|docstring|>target: test collection rows_count is correct or not with multiple collections of L2
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
811428e6bf687da6052e5ba2e4799f26d7adc67f3cad04cd312e8417f3d50986 | @pytest.mark.level(2)
def test_collection_count_multi_collections_binary(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count) | target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_collections_binary | RyanWei/milvus | 3 | python | @pytest.mark.level(2)
def test_collection_count_multi_collections_binary(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count) | @pytest.mark.level(2)
def test_collection_count_multi_collections_binary(self, connect, binary_collection, insert_count):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
(raw_vectors, entities) = gen_binary_entities(insert_count)
res = connect.insert(binary_collection, entities)
collection_list = []
collection_num = 20
for i in range(collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == insert_count)<|docstring|>target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
b613c502350e3d6226c854117fbe6ad761cb98ef1873e21e6998896e4eab2c06 | @pytest.mark.level(2)
def test_collection_count_multi_collections_mix(self, connect):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
collection_list = []
collection_num = 20
for i in range(0, int((collection_num / 2))):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, default_entities)
for i in range(int((collection_num / 2)), collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, default_binary_entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == default_nb) | target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities | tests/milvus_python_test/collection/test_collection_count.py | test_collection_count_multi_collections_mix | RyanWei/milvus | 3 | python | @pytest.mark.level(2)
def test_collection_count_multi_collections_mix(self, connect):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
collection_list = []
collection_num = 20
for i in range(0, int((collection_num / 2))):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, default_entities)
for i in range(int((collection_num / 2)), collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, default_binary_entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == default_nb) | @pytest.mark.level(2)
def test_collection_count_multi_collections_mix(self, connect):
'\n target: test collection rows_count is correct or not with multiple collections of JACCARD\n method: create collection and add entities in it,\n assert the value returned by count_entities method is equal to length of entities\n expected: the count is equal to the length of entities\n '
collection_list = []
collection_num = 20
for i in range(0, int((collection_num / 2))):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_fields)
res = connect.insert(collection_name, default_entities)
for i in range(int((collection_num / 2)), collection_num):
collection_name = gen_unique_str(uid)
collection_list.append(collection_name)
connect.create_collection(collection_name, default_binary_fields)
res = connect.insert(collection_name, default_binary_entities)
connect.flush(collection_list)
for i in range(collection_num):
res = connect.count_entities(collection_list[i])
assert (res == default_nb)<|docstring|>target: test collection rows_count is correct or not with multiple collections of JACCARD
method: create collection and add entities in it,
assert the value returned by count_entities method is equal to length of entities
expected: the count is equal to the length of entities<|endoftext|> |
1451e621da424d5884a0589e168784a348aa8f2e4ab972e04705217f4be8b5b3 | def extractKuronochandesuyoWordpressCom(item):
"\n\tParser for 'kuronochandesuyo.wordpress.com'\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ('Since I reincarnated・・・・' in item['tags']):
return buildReleaseMessageWithType(item, 'Since I reincarnated・・・・', vol, chp, frag=frag, postfix=postfix)
return False | Parser for 'kuronochandesuyo.wordpress.com' | WebMirror/management/rss_parser_funcs/feed_parse_extractKuronochandesuyoWordpressCom.py | extractKuronochandesuyoWordpressCom | fake-name/ReadableWebProxy | 193 | python | def extractKuronochandesuyoWordpressCom(item):
"\n\t\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ('Since I reincarnated・・・・' in item['tags']):
return buildReleaseMessageWithType(item, 'Since I reincarnated・・・・', vol, chp, frag=frag, postfix=postfix)
return False | def extractKuronochandesuyoWordpressCom(item):
"\n\t\n\t"
(vol, chp, frag, postfix) = extractVolChapterFragmentPostfix(item['title'])
if ((not (chp or vol)) or ('preview' in item['title'].lower())):
return None
if ('Since I reincarnated・・・・' in item['tags']):
return buildReleaseMessageWithType(item, 'Since I reincarnated・・・・', vol, chp, frag=frag, postfix=postfix)
return False<|docstring|>Parser for 'kuronochandesuyo.wordpress.com'<|endoftext|> |
2d02ed9b9acee1e940a18bbe08bd3a25919a069a553f38ed3b60c2bd4a5a8ae6 | def fetch_videos(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_video_directory='./videos', video_filename_extension='mp4', download_workers=4):
"\n Downloads videos that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_video_metadata() and\n download_video_files(). See documentation of those functions for details.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_video_directory (str): Base of local video tree (default is './videos')\n video_filename_extension (str): Filename 
extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for videos with local path information appended\n "
logger.info('Fetching metadata for videos that match specified parameters')
video_metadata = fetch_video_metadata(start=start, end=end, video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading video files')
video_metadata_with_local_paths = download_video_files(video_metadata=video_metadata, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension, download_workers=download_workers)
return video_metadata_with_local_paths | Downloads videos that match search parameters and returns their metadata.
This function simply combines the operations of fetch_video_metadata() and
download_video_files(). See documentation of those functions for details.
Args:
start (datetime): Start of time period to fetch (default is None)
end (datetime): End of time period to fetch (default is None)
video_timestamps (list of datetime): List of video start times to fetch (default is None)
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_video_directory (str): Base of local video tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for videos with local path information appended | video_io/core.py | fetch_videos | optimuspaul/wf-video-io | 0 | python | def fetch_videos(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_video_directory='./videos', video_filename_extension='mp4', download_workers=4):
"\n Downloads videos that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_video_metadata() and\n download_video_files(). See documentation of those functions for details.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_video_directory (str): Base of local video tree (default is './videos')\n video_filename_extension (str): Filename 
extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for videos with local path information appended\n "
logger.info('Fetching metadata for videos that match specified parameters')
video_metadata = fetch_video_metadata(start=start, end=end, video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading video files')
video_metadata_with_local_paths = download_video_files(video_metadata=video_metadata, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension, download_workers=download_workers)
return video_metadata_with_local_paths | def fetch_videos(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_video_directory='./videos', video_filename_extension='mp4', download_workers=4):
"\n Downloads videos that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_video_metadata() and\n download_video_files(). See documentation of those functions for details.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_video_directory (str): Base of local video tree (default is './videos')\n video_filename_extension (str): Filename 
extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for videos with local path information appended\n "
logger.info('Fetching metadata for videos that match specified parameters')
video_metadata = fetch_video_metadata(start=start, end=end, video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading video files')
video_metadata_with_local_paths = download_video_files(video_metadata=video_metadata, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension, download_workers=download_workers)
return video_metadata_with_local_paths<|docstring|>Downloads videos that match search parameters and returns their metadata.
This function simply combines the operations of fetch_video_metadata() and
download_video_files(). See documentation of those functions for details.
Args:
start (datetime): Start of time period to fetch (default is None)
end (datetime): End of time period to fetch (default is None)
video_timestamps (list of datetime): List of video start times to fetch (default is None)
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_video_directory (str): Base of local video tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for videos with local path information appended<|endoftext|> |
f604327861369202603d278580ab22d60679b560735fa9056d7e5287c44834db | def fetch_images(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
"\n Downloads images that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_image_metadata() and\n download_image_files(). See documentation of those functions for details.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_image_directory (str): Base of local image file tree (default is './images')\n image_filename_extension (str): Filename extension for image files (default is 'png')\n local_video_directory (str): Base of local video file tree (default is './videos')\n 
video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for images with local path information appended\n "
logger.info('Fetching metadata for images that match specified parameters')
image_metadata = fetch_image_metadata(image_timestamps=image_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading image files')
image_metadata_with_local_paths = download_image_files(image_metadata=image_metadata, local_image_directory=local_image_directory, image_filename_extension=image_filename_extension, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension)
return image_metadata_with_local_paths | Downloads images that match search parameters and returns their metadata.
This function simply combines the operations of fetch_image_metadata() and
download_image_files(). See documentation of those functions for details.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_image_directory (str): Base of local image file tree (default is './images')
image_filename_extension (str): Filename extension for image files (default is 'png')
local_video_directory (str): Base of local video file tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for images with local path information appended | video_io/core.py | fetch_images | optimuspaul/wf-video-io | 0 | python | def fetch_images(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
"\n Downloads images that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_image_metadata() and\n download_image_files(). See documentation of those functions for details.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_image_directory (str): Base of local image file tree (default is './images')\n image_filename_extension (str): Filename extension for image files (default is 'png')\n local_video_directory (str): Base of local video file tree (default is './videos')\n 
video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for images with local path information appended\n "
logger.info('Fetching metadata for images that match specified parameters')
image_metadata = fetch_image_metadata(image_timestamps=image_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading image files')
image_metadata_with_local_paths = download_image_files(image_metadata=image_metadata, local_image_directory=local_image_directory, image_filename_extension=image_filename_extension, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension)
return image_metadata_with_local_paths | def fetch_images(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
"\n Downloads images that match search parameters and returns their metadata.\n\n This function simply combines the operations of fetch_image_metadata() and\n download_image_files(). See documentation of those functions for details.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n local_image_directory (str): Base of local image file tree (default is './images')\n image_filename_extension (str): Filename extension for image files (default is 'png')\n local_video_directory (str): Base of local video file tree (default is './videos')\n 
video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for images with local path information appended\n "
logger.info('Fetching metadata for images that match specified parameters')
image_metadata = fetch_image_metadata(image_timestamps=image_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Downloading image files')
image_metadata_with_local_paths = download_image_files(image_metadata=image_metadata, local_image_directory=local_image_directory, image_filename_extension=image_filename_extension, local_video_directory=local_video_directory, video_filename_extension=video_filename_extension)
return image_metadata_with_local_paths<|docstring|>Downloads images that match search parameters and returns their metadata.
This function simply combines the operations of fetch_image_metadata() and
download_image_files(). See documentation of those functions for details.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
local_image_directory (str): Base of local image file tree (default is './images')
image_filename_extension (str): Filename extension for image files (default is 'png')
local_video_directory (str): Base of local video file tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for images with local path information appended<|endoftext|> |
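The two-step pattern fetch_images() documents — fetch metadata, then download files and append local path information — can be sketched with hypothetical stand-ins (the stub names and the image_local_path key are illustrative assumptions, not the library's actual schema):

```python
# Hypothetical stand-ins for the Honeycomb-backed calls; the real functions
# take many more parameters (see the docstring above).
def fetch_image_metadata_stub(image_timestamps):
    # One metadata dict per requested timestamp
    return [{'data_id': 'id-{}'.format(i)} for i, _ in enumerate(image_timestamps)]

def download_image_files_stub(image_metadata, local_image_directory='./images',
                              image_filename_extension='png'):
    # Append local path information to each record, as download_image_files() does
    for record in image_metadata:
        record['image_local_path'] = '{}/{}.{}'.format(
            local_image_directory, record['data_id'], image_filename_extension)
    return image_metadata

records = download_image_files_stub(fetch_image_metadata_stub(['t0', 't1']))
```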
3fe1e452e5a6a5d9dc5d01f3a9b0c3e6e583a704d2b63764c48ea1fe0517685b | def fetch_video_metadata(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos that match specified search parameters and\n returns their metadata.\n\n Videos must match all specified search parameters (i.e., the function\n performs a logical AND of all of the queries). If camera information is not\n specified, returns results for all devices that have one of the specified\n camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).\n Redundant combinations of search terms will generate an error (e.g., user\n cannot specify environment name and environment ID, camera assignment IDs\n and camera device IDs, etc.)\n\n If start and end are specified, returns all videos that overlap with\n specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,\n returns videos starting at 10:32:50, 10:33:00 and 10:33:10).\n\n Returned metadata is a list of dictionaries, one for each video. Each\n dictionary has the following fields: data_id, video_timestamp,\n environment_id, assignment_id, device_id, bucket, key.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing 
Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for videos that match search parameters\n "
if (((start is not None) or (end is not None)) and (video_timestamps is not None)):
raise ValueError('Cannot specify start/end and list of video timestamps')
if ((video_timestamps is None) and ((start is None) or (end is None))):
raise ValueError('If not specifying specific timestamps, must specify both start and end times')
if ((camera_assignment_ids is not None) and ((environment_id is not None) or (environment_name is not None))):
raise ValueError('Cannot specify camera assignment IDs and environment')
if ((camera_assignment_ids is not None) and ((camera_device_ids is not None) or (camera_part_numbers is not None) or (camera_names is not None) or (camera_serial_numbers is not None))):
raise ValueError('Cannot specify camera assignment IDs and camera device properties')
if ((environment_id is not None) and (environment_name is not None)):
raise ValueError('Cannot specify environment ID and environment name')
if (video_timestamps is not None):
video_timestamps_utc = [video_timestamp.astimezone(datetime.timezone.utc) for video_timestamp in video_timestamps]
video_timestamp_min_utc = min(video_timestamps_utc)
video_timestamp_max_utc = max(video_timestamps_utc)
start_utc = video_timestamp_min_utc
end_utc = (video_timestamp_max_utc + VIDEO_DURATION)
video_timestamps_utc_honeycomb = [honeycomb_io.to_honeycomb_datetime(video_timestamp_utc) for video_timestamp_utc in video_timestamps_utc]
else:
start_utc = start.astimezone(datetime.timezone.utc)
end_utc = end.astimezone(datetime.timezone.utc)
video_timestamp_min_utc = video_timestamp_min(start_utc)
video_timestamp_max_utc = video_timestamp_max(end_utc)
start_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(start_utc)
end_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(end_utc)
if (environment_name is not None):
environment_id = honeycomb_io.fetch_environment_id(environment_name=environment_name, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_environment = honeycomb_io.fetch_camera_assignment_ids_from_environment(start=start_utc, end=end_utc, environment_id=environment_id, camera_device_types=camera_device_types, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_camera_properties = honeycomb_io.fetch_camera_assignment_ids_from_camera_properties(start=start_utc, end=end_utc, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Building query list for video metadata search')
query_list = list()
if (start is not None):
query_list.append({'field': 'timestamp', 'operator': 'GTE', 'value': honeycomb_io.to_honeycomb_datetime(video_timestamp_min_utc)})
if (end is not None):
query_list.append({'field': 'timestamp', 'operator': 'LTE', 'value': honeycomb_io.to_honeycomb_datetime(video_timestamp_max_utc)})
if (video_timestamps is not None):
query_list.append({'field': 'timestamp', 'operator': 'IN', 'values': video_timestamps_utc_honeycomb})
if (camera_assignment_ids is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids})
if (camera_assignment_ids_from_environment is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_environment})
if (camera_assignment_ids_from_camera_properties is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_camera_properties})
return_data = ['data_id', 'timestamp', {'source': [{'... on Assignment': [{'environment': ['environment_id']}, 'assignment_id', {'assigned': [{'... on Device': ['device_id']}]}]}]}, {'file': ['bucketName', 'key']}]
result = honeycomb_io.search_datapoints(query_list=query_list, return_data=return_data, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
video_metadata = list()
logger.info('Parsing {} returned camera datapoints'.format(len(result)))
for datum in result:
source = (datum.get('source') if (datum.get('source') is not None) else {})
file = (datum.get('file') if (datum.get('file') is not None) else {})
video_metadata.append({'data_id': datum.get('data_id'), 'video_timestamp': honeycomb_io.from_honeycomb_datetime(datum.get('timestamp')), 'environment_id': (source.get('environment') if (source.get('environment') is not None) else {}).get('environment_id'), 'assignment_id': source.get('assignment_id'), 'device_id': (source.get('assigned') if (source.get('assigned') is not None) else {}).get('device_id'), 'bucket': file.get('bucketName'), 'key': file.get('key')})
return video_metadata | Searches Honeycomb for videos that match specified search parameters and
returns their metadata.
Videos must match all specified search parameters (i.e., the function
performs a logical AND of all of the queries). If camera information is not
specified, returns results for all devices that have one of the specified
camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).
Redundant combinations of search terms will generate an error (e.g., user
cannot specify environment name and environment ID, camera assignment IDs
and camera device IDs, etc.).
If start and end are specified, returns all videos that overlap with
specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,
returns videos starting at 10:32:50, 10:33:00 and 10:33:10).
Returned metadata is a list of dictionaries, one for each video. Each
dictionary has the following fields: data_id, video_timestamp,
environment_id, assignment_id, device_id, bucket, key.
Args:
start (datetime): Start of time period to fetch (default is None)
end (datetime): End of time period to fetch (default is None)
video_timestamps (list of datetime): List of video start times to fetch (default is None)
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
Returns:
(list of dict): Metadata for videos that match search parameters | video_io/core.py | fetch_video_metadata | optimuspaul/wf-video-io | 0 | python | def fetch_video_metadata(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
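The overlap behavior described above (start 10:32:56, end 10:33:20 yields videos starting at 10:32:50, 10:33:00 and 10:33:10) implies rounding down to 10-second video boundaries. A hypothetical reconstruction of the video_timestamp_min()/video_timestamp_max() helpers the code calls (their real implementations are not shown in this chunk):

```python
import datetime

VIDEO_DURATION = datetime.timedelta(seconds=10)

# Hypothetical reconstruction: videos start on 10-second boundaries, so a
# query window is expanded to the start times of the videos it overlaps.
def video_timestamp_min(start):
    """Floor start to the start time of the video that contains it."""
    return start - datetime.timedelta(seconds=start.second % 10,
                                      microseconds=start.microsecond)

def video_timestamp_max(end):
    """Start time of the last video that begins strictly before end."""
    floored = end - datetime.timedelta(seconds=end.second % 10,
                                       microseconds=end.microsecond)
    if floored == end:
        # end falls exactly on a boundary: the video starting at end
        # has zero overlap with the window, so step back one video
        floored -= VIDEO_DURATION
    return floored

start = datetime.datetime(2020, 1, 1, 10, 32, 56, tzinfo=datetime.timezone.utc)
end = datetime.datetime(2020, 1, 1, 10, 33, 20, tzinfo=datetime.timezone.utc)
```

Under these assumptions, video_timestamp_min(start) is 10:32:50 and video_timestamp_max(end) is 10:33:10, matching the docstring's example.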
"\n Searches Honeycomb for videos that match specified search parameters and\n returns their metadata.\n\n Videos must match all specified search parameters (i.e., the function\n performs a logical AND of all of the queries). If camera information is not\n specified, returns results for all devices that have one of the specified\n camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).\n Redundant combinations of search terms will generate an error (e.g., user\n cannot specify environment name and environment ID, camera assignment IDs\n and camera device IDs, etc.)\n\n If start and end are specified, returns all videos that overlap with\n specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,\n returns videos starting at 10:32:50, 10:33:00 and 10:33:10).\n\n Returned metadata is a list of dictionaries, one for each video. Each\n dictionary has the following fields: data_id, video_timestamp,\n environment_id, assignment_id, device_id, bucket, key.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing 
Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for videos that match search parameters\n "
if (((start is not None) or (end is not None)) and (video_timestamps is not None)):
raise ValueError('Cannot specify start/end and list of video timestamps')
if ((video_timestamps is None) and ((start is None) or (end is None))):
raise ValueError('If not specifying specific timestamps, must specify both start and end times')
if ((camera_assignment_ids is not None) and ((environment_id is not None) or (environment_name is not None))):
raise ValueError('Cannot specify camera assignment IDs and environment')
if ((camera_assignment_ids is not None) and ((camera_device_ids is not None) or (camera_part_numbers is not None) or (camera_names is not None) or (camera_serial_numbers is not None))):
raise ValueError('Cannot specify camera assignment IDs and camera device properties')
if ((environment_id is not None) and (environment_name is not None)):
raise ValueError('Cannot specify environment ID and environment name')
if (video_timestamps is not None):
video_timestamps_utc = [video_timestamp.astimezone(datetime.timezone.utc) for video_timestamp in video_timestamps]
video_timestamp_min_utc = min(video_timestamps)
video_timestamp_max_utc = max(video_timestamps)
def fetch_video_metadata(start=None, end=None, video_timestamps=None, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos that match specified search parameters and\n returns their metadata.\n\n Videos must match all specified search parameters (i.e., the function\n performs a logical AND of all of the queries). If camera information is not\n specified, returns results for all devices that have one of the specified\n camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).\n Redundant combinations of search terms will generate an error (e.g., user\n cannot specify environment name and environment ID, camera assignment IDs\n and camera device IDs, etc.)\n\n If start and end are specified, returns all videos that overlap with\n specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,\n returns videos starting at 10:32:50, 10:33:00 and 10:33:10).\n\n Returned metadata is a list of dictionaries, one for each video. Each\n dictionary has the following fields: data_id, video_timestamp,\n environment_id, assignment_id, device_id, bucket, key.\n\n Args:\n start (datetime): Start of time period to fetch (default is None)\n end (datetime): End of time period to fetch (default is None)\n video_timestamps (list of datetime): List of video start times to fetch (default is None)\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing 
Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for videos that match search parameters\n "
if (((start is not None) or (end is not None)) and (video_timestamps is not None)):
raise ValueError('Cannot specify start/end and list of video timestamps')
if ((video_timestamps is None) and ((start is None) or (end is None))):
raise ValueError('If not specifying specific timestamps, must specify both start and end times')
if ((camera_assignment_ids is not None) and ((environment_id is not None) or (environment_name is not None))):
raise ValueError('Cannot specify camera assignment IDs and environment')
if ((camera_assignment_ids is not None) and ((camera_device_ids is not None) or (camera_part_numbers is not None) or (camera_names is not None) or (camera_serial_numbers is not None))):
raise ValueError('Cannot specify camera assignment IDs and camera device properties')
if ((environment_id is not None) and (environment_name is not None)):
raise ValueError('Cannot specify environment ID and environment name')
if (video_timestamps is not None):
video_timestamps_utc = [video_timestamp.astimezone(datetime.timezone.utc) for video_timestamp in video_timestamps]
video_timestamp_min_utc = min(video_timestamps)
video_timestamp_max_utc = max(video_timestamps)
start_utc = video_timestamp_min_utc
end_utc = (video_timestamp_max_utc + VIDEO_DURATION)
video_timestamps_utc_honeycomb = [honeycomb_io.to_honeycomb_datetime(video_timestamp_utc) for video_timestamp_utc in video_timestamps_utc]
else:
start_utc = start.astimezone(datetime.timezone.utc)
end_utc = end.astimezone(datetime.timezone.utc)
video_timestamp_min_utc = video_timestamp_min(start_utc)
video_timestamp_max_utc = video_timestamp_max(end_utc)
start_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(start_utc)
end_utc_honeycomb = honeycomb_io.to_honeycomb_datetime(end_utc)
if (environment_name is not None):
environment_id = honeycomb_io.fetch_environment_id(environment_name=environment_name, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_environment = honeycomb_io.fetch_camera_assignment_ids_from_environment(start=start_utc, end=end_utc, environment_id=environment_id, camera_device_types=camera_device_types, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
camera_assignment_ids_from_camera_properties = honeycomb_io.fetch_camera_assignment_ids_from_camera_properties(start=start_utc, end=end_utc, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
logger.info('Building query list for video metadata search')
query_list = list()
if (start is not None):
query_list.append({'field': 'timestamp', 'operator': 'GTE', 'value': video_timestamp_min_utc})
if (end is not None):
query_list.append({'field': 'timestamp', 'operator': 'LTE', 'value': video_timestamp_max_utc})
if (video_timestamps is not None):
query_list.append({'field': 'timestamp', 'operator': 'IN', 'values': video_timestamps_utc_honeycomb})
if (camera_assignment_ids is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids})
if (camera_assignment_ids_from_environment is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_environment})
if (camera_assignment_ids_from_camera_properties is not None):
query_list.append({'field': 'source', 'operator': 'IN', 'values': camera_assignment_ids_from_camera_properties})
return_data = ['data_id', 'timestamp', {'source': [{'... on Assignment': [{'environment': ['environment_id']}, 'assignment_id', {'assigned': [{'... on Device': ['device_id']}]}]}]}, {'file': ['bucketName', 'key']}]
result = honeycomb_io.search_datapoints(query_list=query_list, return_data=return_data, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
video_metadata = list()
logger.info('Parsing {} returned camera datapoints'.format(len(result)))
for datum in result:
source = (datum.get('source') if (datum.get('source') is not None) else {})
file = (datum.get('file') if (datum.get('file') is not None) else {})
video_metadata.append({'data_id': datum.get('data_id'), 'video_timestamp': honeycomb_io.from_honeycomb_datetime(datum.get('timestamp')), 'environment_id': (source.get('environment') if (source.get('environment') is not None) else {}).get('environment_id'), 'assignment_id': source.get('assignment_id'), 'device_id': (source.get('assigned') if (source.get('assigned') is not None) else {}).get('device_id'), 'bucket': file.get('bucketName'), 'key': file.get('key')})
return video_metadata<|docstring|>Searches Honeycomb for videos that match specified search parameters and
returns their metadata.
Videos must match all specified search parameters (i.e., the function
performs a logical AND of all of the queries). If camera information is not
specified, returns results for all devices that have one of the specified
camera device types ('PI3WITHCAMERA' and 'PIZEROWITHCAMERA' by default).
Redundant combinations of search terms will generate an error (e.g., user
cannot specify environment name and environment ID, camera assignment IDs
and camera device IDs, etc.)
If start and end are specified, returns all videos that overlap with
specified start and end (e.g., if start is 10:32:56 and end is 10:33:20,
returns videos starting at 10:32:50, 10:33:00 and 10:33:10).
Returned metadata is a list of dictionaries, one for each video. Each
dictionary has the following fields: data_id, video_timestamp,
environment_id, assignment_id, device_id, bucket, key.
Args:
start (datetime): Start of time period to fetch (default is None)
end (datetime): End of time period to fetch (default is None)
video_timestamps (list of datetime): List of video start times to fetch (default is None)
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
Returns:
(list of dict): Metadata for videos that match search parameters<|endoftext|> |
7573517164918a66d27e76d61c49c21a1342131a5af838d7931148cdb15668fe | def download_video_files(video_metadata, local_video_directory='./videos', video_filename_extension='mp4', download_workers=4):
"\n Downloads videos from S3 to local directory tree and returns metadata with\n local path information added.\n\n Videos are specified as a list of dictionaries, as returned by the function\n fetch_video_metadata(). Each dictionary is assumed to have the following\n fields: data_id, video_timestamp, environment_id, assignment_id, device_id,\n bucket, and key (though only a subset of these are currently used).\n\n Structure of resulting tree is [base directory]/[environment ID]/[camera\n assignment ID]/[year]/[month]/[day]. Filenames are in the form\n [hour]-[minute]-[second].[filename extension]. Videos are only downloaded if\n they don't already exist in the local directory tree. Directories are\n created as necessary.\n\n Function returns the metadata with local path information appended to each\n record (in the field video_local_path).\n\n Args:\n video_metadata (list of dict): Metadata in the format output by fetch_video_metadata()\n local_video_directory (str): Base of local video file tree (default is './videos')\n video_filename_extension (str): Filename extension for video files (default is 'mp4')\n\n Returns:\n (list of dict): Metadata for videos with local path information appended\n "
video_metadata_with_local_paths = []
executor = ProcessPoolExecutor(max_workers=download_workers)
futures = [executor.submit(_download_video, video, local_video_directory, video_filename_extension) for video in video_metadata]
for future in as_completed(futures):
video_metadata_with_local_paths.append(future.result())
return video_metadata_with_local_paths | Downloads videos from S3 to local directory tree and returns metadata with
local path information added.
Videos are specified as a list of dictionaries, as returned by the function
fetch_video_metadata(). Each dictionary is assumed to have the following
fields: data_id, video_timestamp, environment_id, assignment_id, device_id,
bucket, and key (though only a subset of these are currently used).
Structure of resulting tree is [base directory]/[environment ID]/[camera
assignment ID]/[year]/[month]/[day]. Filenames are in the form
[hour]-[minute]-[second].[filename extension]. Videos are only downloaded if
they don't already exist in the local directory tree. Directories are
created as necessary.
Function returns the metadata with local path information appended to each
record (in the field video_local_path).
Args:
video_metadata (list of dict): Metadata in the format output by fetch_video_metadata()
local_video_directory (str): Base of local video file tree (default is './videos')
video_filename_extension (str): Filename extension for video files (default is 'mp4')
Returns:
(list of dict): Metadata for videos with local path information appended | video_io/core.py | download_video_files | optimuspaul/wf-video-io | 0 | python
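The directory scheme in the download_video_files docstring above ([base directory]/[environment ID]/[camera assignment ID]/[year]/[month]/[day] with [hour]-[minute]-[second] filenames) can be sketched as follows. `local_video_path` is a hypothetical helper for illustration; the repository's actual `_download_video` is not shown in this chunk:

```python
import datetime
import os

def local_video_path(base_dir, environment_id, assignment_id, video_timestamp, extension='mp4'):
    # [base]/[environment]/[assignment]/[year]/[month]/[day]/[HH-MM-SS].[ext]
    return os.path.join(
        base_dir,
        environment_id,
        assignment_id,
        '{:04d}'.format(video_timestamp.year),
        '{:02d}'.format(video_timestamp.month),
        '{:02d}'.format(video_timestamp.day),
        '{}.{}'.format(video_timestamp.strftime('%H-%M-%S'), extension),
    )

path = local_video_path(
    './videos', 'env0', 'assign0',
    datetime.datetime(2021, 5, 4, 10, 32, 50, tzinfo=datetime.timezone.utc),
)
print(path)  # on POSIX: ./videos/env0/assign0/2021/05/04/10-32-50.mp4
```

Checking for an existing file at this path before fetching gives the "only downloaded if they don't already exist" behavior the docstring describes.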
622b22814916798db87451f451a18212d5d5ffe00e816a0ab0e674ed8f46d326 | def fetch_image_metadata(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos containing images that match specified search\n parameters and returns video/image metadata.\n\n Image timestamps are rounded to the nearest tenth of a second to synchronize\n with video frames. Videos containing these images must match all specified\n search parameters (i.e., the function performs a logical AND of all of the\n queries). If camera information is not specified, returns results for all\n devices that have one of the specified camera device types ('PI3WITHCAMERA'\n and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms\n will generate an error (e.g., user cannot specify environment name and\n environment ID, camera assignment IDs and camera device IDs, etc.)\n\n Returned metadata is a list of dictionaries, one for each image. Each\n dictionary contains information both about the image and the video that\n contains the image: data_id, video_timestamp, environment_id, assignment_id,\n device_id, bucket, key, and image_timestamp, and frame_number.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of 
HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for images that match search parameters\n "
image_metadata_by_video_timestamp = dict()
for image_timestamp in image_timestamps:
image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
timestamp_floor = image_timestamp.replace(second=0, microsecond=0)
video_timestamp = (timestamp_floor + (math.floor(((image_timestamp - timestamp_floor) / datetime.timedelta(seconds=10))) * datetime.timedelta(seconds=10)))
frame_number = round(((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100)))
if (video_timestamp not in image_metadata_by_video_timestamp.keys()):
image_metadata_by_video_timestamp[video_timestamp] = list()
image_metadata_by_video_timestamp[video_timestamp].append({'image_timestamp': image_timestamp, 'frame_number': frame_number})
video_timestamps = list(image_metadata_by_video_timestamp.keys())
video_metadata = fetch_video_metadata(video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
image_metadata = list()
for video in video_metadata:
for image in image_metadata_by_video_timestamp[video['video_timestamp']]:
image_metadata.append({**video, **image})
return image_metadata | Searches Honeycomb for videos containing images that match specified search
parameters and returns video/image metadata.
Image timestamps are rounded to the nearest tenth of a second to synchronize
with video frames. Videos containing these images must match all specified
search parameters (i.e., the function performs a logical AND of all of the
queries). If camera information is not specified, returns results for all
devices that have one of the specified camera device types ('PI3WITHCAMERA'
and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms
will generate an error (e.g., user cannot specify environment name and
environment ID, camera assignment IDs and camera device IDs, etc.)
Returned metadata is a list of dictionaries, one for each image. Each
dictionary contains information both about the image and the video that
contains the image: data_id, video_timestamp, environment_id, assignment_id,
device_id, bucket, key, image_timestamp, and frame_number.
Args:
image_timestamps (list of datetime): List of image timestamps to fetch
camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)
environment_id (str): Honeycomb environment ID (default is None)
environment_name (str): Honeycomb environment name (default is None)
camera_device_types (list of str): Honeycomb device types (default is None)
camera_device_ids (list of str): Honeycomb device IDs (default is None)
camera_part_numbers (list of str): Honeycomb device part numbers (default is None)
camera_names (list of str): Honeycomb device names (default is None)
camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)
chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)
client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)
uri (str): Server URI for creating Honeycomb client (default is value of HONEYCOMB_URI environment variable)
token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)
audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)
client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)
client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)
Returns:
(list of dict): Metadata for images that match search parameters | video_io/core.py | fetch_image_metadata | optimuspaul/wf-video-io | 0 | python | def fetch_image_metadata(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos containing images that match specified search\n parameters and returns video/image metadata.\n\n Image timestamps are rounded to the nearest tenth of a second to synchronize\n with video frames. Videos containing these images must match all specified\n search parameters (i.e., the function performs a logical AND of all of the\n queries). If camera information is not specified, returns results for all\n devices that have one of the specified camera device types ('PI3WITHCAMERA'\n and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms\n will generate an error (e.g., user cannot specify environment name and\n environment ID, camera assignment IDs and camera device IDs, etc.)\n\n Returned metadata is a list of dictionaries, one for each image. Each\n dictionary contains information both about the image and the video that\n contains the image: data_id, video_timestamp, environment_id, assignment_id,\n device_id, bucket, key, and image_timestamp, and frame_number.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of 
HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for images that match search parameters\n "
image_metadata_by_video_timestamp = dict()
for image_timestamp in image_timestamps:
image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
timestamp_floor = image_timestamp.replace(second=0, microsecond=0)
video_timestamp = (timestamp_floor + (math.floor(((image_timestamp - timestamp_floor) / datetime.timedelta(seconds=10))) * datetime.timedelta(seconds=10)))
frame_number = round(((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100)))
if (video_timestamp not in image_metadata_by_video_timestamp.keys()):
image_metadata_by_video_timestamp[video_timestamp] = list()
image_metadata_by_video_timestamp[video_timestamp].append({'image_timestamp': image_timestamp, 'frame_number': frame_number})
video_timestamps = list(image_metadata_by_video_timestamp.keys())
video_metadata = fetch_video_metadata(video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
image_metadata = list()
for video in video_metadata:
for image in image_metadata_by_video_timestamp[video['video_timestamp']]:
image_metadata.append({**video, **image})
return image_metadata | def fetch_image_metadata(image_timestamps, camera_assignment_ids=None, environment_id=None, environment_name=None, camera_device_types=None, camera_device_ids=None, camera_part_numbers=None, camera_names=None, camera_serial_numbers=None, chunk_size=100, client=None, uri=None, token_uri=None, audience=None, client_id=None, client_secret=None):
"\n Searches Honeycomb for videos containing images that match specified search\n parameters and returns video/image metadata.\n\n Image timestamps are rounded to the nearest tenth of a second to synchronize\n with video frames. Videos containing these images must match all specified\n search parameters (i.e., the function performs a logical AND of all of the\n queries). If camera information is not specified, returns results for all\n devices that have one of the specified camera device types ('PI3WITHCAMERA'\n and 'PIZEROWITHCAMERA' by default). Redundant combinations of search terms\n will generate an error (e.g., user cannot specify environment name and\n environment ID, camera assignment IDs and camera device IDs, etc.)\n\n Returned metadata is a list of dictionaries, one for each image. Each\n dictionary contains information both about the image and the video that\n contains the image: data_id, video_timestamp, environment_id, assignment_id,\n device_id, bucket, key, and image_timestamp, and frame_number.\n\n Args:\n image_timestamps (list of datetime): List of image timestamps to fetch\n camera_assignment_ids (list of str): Honeycomb assignment IDs (default is None)\n environment_id (str): Honeycomb environment ID (default is None)\n environment_name (str): Honeycomb environment name (default is None)\n camera_device_types (list of str): Honeycomb device types (default is None)\n camera_device_ids (list of str): Honeycomb device IDs (default is None)\n camera_part_numbers (list of str): Honeycomb device part numbers (default is None)\n camera_names (list of str): Honeycomb device names (default is None)\n camera_serial_numbers (list of str): Honeycomb device serial numbers (default is None)\n chunk_size (int): Maximum number of data points to be returned by each Honeycomb query (default is 100)\n client (MinimalHoneycombClient): Existing Honeycomb client (otherwise will create one)\n uri (str): Server URI for creating Honeycomb client (default is value of 
HONEYCOMB_URI environment variable)\n token_uri (str): Token URI for creating Honeycomb client (default is value of HONEYCOMB_TOKEN_URI environment variable)\n audience (str): Audience for creating Honeycomb client (default is value of HONEYCOMB_AUDIENCE environment variable)\n client_id: Client ID for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_ID environment variable)\n client_secret: Client secret for creating Honeycomb client (default is value of HONEYCOMB_CLIENT_SECRET environment variable)\n\n Returns:\n (list of dict): Metadata for images that match search parameters\n "
image_metadata_by_video_timestamp = dict()
for image_timestamp in image_timestamps:
image_timestamp = image_timestamp.astimezone(datetime.timezone.utc)
timestamp_floor = image_timestamp.replace(second=0, microsecond=0)
video_timestamp = (timestamp_floor + (math.floor(((image_timestamp - timestamp_floor) / datetime.timedelta(seconds=10))) * datetime.timedelta(seconds=10)))
frame_number = round(((image_timestamp - video_timestamp) / datetime.timedelta(milliseconds=100)))
if (video_timestamp not in image_metadata_by_video_timestamp.keys()):
image_metadata_by_video_timestamp[video_timestamp] = list()
image_metadata_by_video_timestamp[video_timestamp].append({'image_timestamp': image_timestamp, 'frame_number': frame_number})
video_timestamps = list(image_metadata_by_video_timestamp.keys())
video_metadata = fetch_video_metadata(video_timestamps=video_timestamps, camera_assignment_ids=camera_assignment_ids, environment_id=environment_id, environment_name=environment_name, camera_device_types=camera_device_types, camera_device_ids=camera_device_ids, camera_part_numbers=camera_part_numbers, camera_names=camera_names, camera_serial_numbers=camera_serial_numbers, chunk_size=chunk_size, client=client, uri=uri, token_uri=token_uri, audience=audience, client_id=client_id, client_secret=client_secret)
image_metadata = list()
for video in video_metadata:
for image in image_metadata_by_video_timestamp[video['video_timestamp']]:
image_metadata.append({**video, **image})
return image_metadata<|docstring|>Searches Honeycomb for videos containing images that match specified search
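The bucketing arithmetic in fetch_image_metadata (floor to the start of the containing 10-second video, then a 100 ms frame index) can be checked in isolation with the standard library. The sketch below re-derives the same two quantities; the helper name video_bucket is illustrative, not part of the library:

```python
import datetime
import math

def video_bucket(image_timestamp):
    # Same arithmetic as fetch_image_metadata: floor to the minute, then to
    # the start of the containing 10-second video.
    ts = image_timestamp.astimezone(datetime.timezone.utc)
    timestamp_floor = ts.replace(second=0, microsecond=0)
    video_timestamp = timestamp_floor + math.floor(
        (ts - timestamp_floor) / datetime.timedelta(seconds=10)
    ) * datetime.timedelta(seconds=10)
    # Frame index: offset from the video start in 100 ms steps.
    frame_number = round((ts - video_timestamp) / datetime.timedelta(milliseconds=100))
    return video_timestamp, frame_number

ts = datetime.datetime(2020, 5, 1, 14, 3, 27, 300000, tzinfo=datetime.timezone.utc)
video_ts, frame = video_bucket(ts)
# 14:03:27.3 falls in the video starting at 14:03:20; 7.3 s offset -> frame 73
```

Note that timedelta division returns a float, so the floor/round operations work without converting to seconds by hand.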
def download_image_files(image_metadata, local_image_directory='./images', image_filename_extension='png', local_video_directory='./videos', video_filename_extension='mp4'):
    """Downloads videos from S3 to a local directory tree, extracts images,
    saves the images to a local directory tree, and returns metadata with
    local path information added.

    Images are specified as a list of dictionaries, as returned by the
    function fetch_image_metadata(). Each dictionary is expected to contain
    information both about the image and the video that contains the image and
    is assumed to have the following fields: data_id, video_timestamp,
    environment_id, assignment_id, device_id, bucket, key, image_timestamp,
    and frame_number (though only a subset of these are currently used).

    Structure of the resulting video file tree is as described in the
    documentation for download_video_files(). Structure of the resulting image
    file tree is [base directory]/[environment ID]/[camera assignment
    ID]/[year]/[month]/[day]. Filenames contain the timestamp for the start of
    the containing video and the frame number of the image in the form
    [hour]-[minute]-[second]_[frame number].[filename extension]. Videos and
    images are only downloaded if they don't already exist in the local
    directory trees. Directories are created as necessary.

    Function returns the metadata with local path information appended to each
    record (in the fields video_local_path and image_local_path).

    Args:
        image_metadata (list of dict): Metadata in the format output by fetch_image_metadata()
        local_image_directory (str): Base of local image file tree (default is './images')
        image_filename_extension (str): Filename extension for image files (default is 'png')
        local_video_directory (str): Base of local video file tree (default is './videos')
        video_filename_extension (str): Filename extension for video files (default is 'mp4')

    Returns:
        (list of dict): Metadata for images with local path information appended
    """
    image_metadata_with_local_video_paths = download_video_files(
        image_metadata,
        local_video_directory=local_video_directory,
        video_filename_extension=video_filename_extension
    )
    image_metadata_with_local_paths = list()
    for image in image_metadata_with_local_video_paths:
        download_path = image_local_path(
            local_image_directory=local_image_directory,
            environment_id=image.get('environment_id'),
            assignment_id=image.get('assignment_id'),
            video_timestamp=image.get('video_timestamp'),
            frame_number=image.get('frame_number'),
            image_filename_extension=image_filename_extension
        )
        if not os.path.exists(download_path):
            video_input = cv_utils.VideoInput(image.get('video_local_path'))
            image_data = video_input.get_frame_by_frame_number(image.get('frame_number'))
            os.makedirs(os.path.dirname(download_path), exist_ok=True)
            cv.imwrite(download_path, image_data)
        else:
            logger.info('File {} already exists'.format(download_path))
        image['image_local_path'] = download_path
        image_metadata_with_local_paths.append(image)
    return image_metadata_with_local_paths
video_io/core.py | download_image_files | optimuspaul/wf-video-io | 0 | python
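The image-tree layout documented above ([base]/[environment ID]/[assignment ID]/[year]/[month]/[day]/[hour]-[minute]-[second]_[frame].[ext]) is produced by an image_local_path helper that is not part of this excerpt. The following is a hypothetical reconstruction of that layout for illustration only; the real helper's behavior may differ:

```python
import datetime
import os

def sketch_image_path(base, environment_id, assignment_id, video_timestamp,
                      frame_number, extension='png'):
    # Hypothetical path builder following the documented layout; the real
    # image_local_path helper is defined elsewhere in video_io.
    return os.path.join(
        base,
        environment_id,
        assignment_id,
        '{:04d}'.format(video_timestamp.year),
        '{:02d}'.format(video_timestamp.month),
        '{:02d}'.format(video_timestamp.day),
        '{:02d}-{:02d}-{:02d}_{}.{}'.format(
            video_timestamp.hour, video_timestamp.minute,
            video_timestamp.second, frame_number, extension))

path = sketch_image_path('./images', 'env-1', 'asgn-1',
                         datetime.datetime(2020, 5, 1, 14, 3, 20), 73)
```

Zero-padding the date and time components keeps lexicographic directory listings in chronological order.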
@wrap_input(0)
def _surface_selection(surf, array, low=-np.inf, upp=np.inf, use_cell=False):
    """Selection of points or cells meeting some thresholding criteria.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    array : str or ndarray
        Array used to perform selection.
    low : float or -np.inf
        Lower threshold. Default is -np.inf.
    upp : float or np.inf
        Upper threshold. Default is +np.inf.
    use_cell : bool, optional
        If True, apply selection to cells. Otherwise, use points.
        Default is False.

    Returns
    -------
    surf_selected : BSPolyData
        Surface after thresholding.

    """
    if low > upp:
        raise ValueError('Threshold not valid: [{},{}]'.format(low, upp))
    at = 'c' if use_cell else 'p'
    if isinstance(array, np.ndarray):
        drop_array = True
        array_name = surf.append_array(array, at=at)
    else:
        drop_array = False
        array_name = array
        array = surf.get_array(name=array, at=at, return_name=False)
    if array.ndim > 1:
        raise ValueError('Array has more than one dimension.')
    if low == -np.inf:
        low = array.min()
    if upp == np.inf:
        upp = array.max()
    tf = wrap_vtk(vtkThreshold, allScalars=True)
    tf.ThresholdBetween(low, upp)
    if use_cell:
        tf.SetInputArrayToProcess(0, 0, 0, ASSOC_CELLS, array_name)
    else:
        tf.SetInputArrayToProcess(0, 0, 0, ASSOC_POINTS, array_name)
    gf = wrap_vtk(vtkGeometryFilter(), merging=False)
    surf_sel = serial_connect(surf, tf, gf)
    n_exp = np.logical_and(array >= low, array <= upp).sum()
    n_sel = surf_sel.n_cells if use_cell else surf_sel.n_points
    if n_exp != n_sel:
        element = 'cells' if use_cell else 'points'
        warnings.warn('Number of selected {}={}. Expected {}. '
                      'This may be due to the topology after selection.'
                      .format(element, n_sel, n_exp))
    if drop_array:
        surf.remove_array(name=array_name, at=at)
        surf_sel.remove_array(name=array_name, at=at)
    return surf_sel
brainspace/mesh/mesh_operations.py | _surface_selection | anibalsolon/BrainSpace | 100 | python
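_surface_selection warns when VTK's output size disagrees with the count expected from the threshold alone. That expectation is plain elementwise logic; here is a minimal pure-Python sketch of it (no VTK or numpy), with an illustrative helper name:

```python
def expected_selection_count(values, low=float('-inf'), upp=float('inf')):
    # Count values inside the closed interval [low, upp] -- the expectation
    # (n_exp) that _surface_selection compares against the VTK output.
    if low > upp:
        raise ValueError('Threshold not valid: [{},{}]'.format(low, upp))
    return sum(1 for v in values if low <= v <= upp)

count = expected_selection_count([0.2, 1.5, 3.0, -1.0, 2.2], low=0.0, upp=2.5)
# 0.2, 1.5 and 2.2 lie inside [0.0, 2.5], so count == 3
```

A mismatch against this count does not necessarily indicate a bug: the geometry filter can drop additional points or cells depending on the topology of the selected region, which is exactly what the warning says.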
@wrap_input(0)
def _surface_mask(surf, mask, use_cell=False):
    """Selection of points or cells meeting some criteria.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    mask : str or ndarray
        Binary boolean or integer array. Zero or False elements are
        discarded.
    use_cell : bool, optional
        If True, apply selection to cells. Otherwise, use points.
        Default is False.

    Returns
    -------
    surf_masked : BSPolyData
        PolyData after masking.

    """
    if isinstance(mask, np.ndarray):
        if np.issubdtype(mask.dtype, np.bool_):
            mask = mask.astype(np.uint8)
    else:
        mask = surf.get_array(name=mask, at='c' if use_cell else 'p')
    if np.any(np.unique(mask) > 1):
        raise ValueError('Cannot work with non-binary mask.')
    return _surface_selection(surf, mask, low=1, upp=1, use_cell=use_cell)
brainspace/mesh/mesh_operations.py | _surface_mask | anibalsolon/BrainSpace | 100 | python
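_surface_mask accepts boolean or 0/1 integer masks and rejects anything else before delegating to _surface_selection with low=upp=1. The same normalization and validation, sketched without numpy (the helper name is illustrative):

```python
def normalize_binary_mask(mask):
    # Coerce booleans to 0/1 and reject non-binary values, mirroring the
    # bool -> uint8 cast and the np.unique(mask) > 1 check in _surface_mask.
    out = [int(m) for m in mask]
    if any(v not in (0, 1) for v in out):
        raise ValueError('Cannot work with non-binary mask.')
    return out

mask01 = normalize_binary_mask([True, False, True])
# -> [1, 0, 1]; elements equal to 1 are the ones the selection keeps
```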
def drop_points(surf, array, low=-np.inf, upp=np.inf):
    """Remove surface points whose values fall within the threshold.

    Cells corresponding to these points are also removed.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    array : str or 1D ndarray
        Array used to perform selection. If str, it must be an array in
        the PointData attributes of the PolyData.
    low : float or -np.inf
        Lower threshold. Default is -np.inf.
    upp : float or np.inf
        Upper threshold. Default is np.inf.

    Returns
    -------
    surf_selected : vtkPolyData or BSPolyData
        PolyData after thresholding.

    See Also
    --------
    :func:`drop_cells`
    :func:`select_points`
    :func:`mask_points`

    """
    if isinstance(array, str):
        array = surf.get_array(name=array, at='p')
    mask = np.logical_or(array < low, array > upp)
    return mask_points(surf, mask)
brainspace/mesh/mesh_operations.py | drop_points | anibalsolon/BrainSpace | 100 | python
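drop_points is the complement of a keep-selection: the mask it passes to mask_points is True exactly where a value falls strictly outside [low, upp], so the points whose values fall within the threshold are the ones removed. A pure-Python sketch of that mask (illustrative helper name):

```python
def drop_points_mask(values, low=float('-inf'), upp=float('inf')):
    # True where the point survives drop_points: strictly outside [low, upp],
    # matching np.logical_or(array < low, array > upp).
    return [(v < low) or (v > upp) for v in values]

keep = drop_points_mask([0.5, 1.5, 2.5], low=1.0, upp=2.0)
# 0.5 and 2.5 survive; 1.5 falls within the threshold and is dropped
```

drop_cells below follows the same pattern, reading the array from CellData (`at='c'`) and delegating to mask_cells instead.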
def drop_cells(surf, array, low=-np.inf, upp=np.inf):
    """Remove surface cells whose values fall within the threshold.

    Points corresponding to these cells are also removed.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    array : str or 1D ndarray
        Array used to perform selection. If str, it must be an array in
        the CellData attributes of the PolyData.
    low : float or -np.inf
        Lower threshold. Default is -np.inf.
    upp : float or np.inf
        Upper threshold. Default is np.inf.

    Returns
    -------
    surf_selected : vtkPolyData or BSPolyData
        PolyData after thresholding.

    See Also
    --------
    :func:`drop_points`
    :func:`select_cells`
    :func:`mask_cells`

    """
    if isinstance(array, str):
        array = surf.get_array(name=array, at='c')
    mask = np.logical_or(array < low, array > upp)
    return mask_cells(surf, mask)
Points corresponding to these cells are also removed.
Parameters
----------
surf : vtkPolyData or BSPolyData
Input surface.
array : str or 1D ndarray
Array used to perform selection. If str, it must be an array in
the CellData attributes of the PolyData.
low : float or -np.inf
Lower threshold. Default is -np.inf.
upp : float or np.inf
Upper threshold. Default is np.inf.
Returns
-------
surf_selected : vtkPolyData or BSPolyData
PolyData after thresholding.
See Also
--------
:func:`drop_points`
:func:`select_cells`
:func:`mask_cells` | brainspace/mesh/mesh_operations.py | drop_cells | anibalsolon/BrainSpace | 100 | python | def drop_cells(surf, array, low=(- np.inf), upp=np.inf):
'Remove surface cells whose values fall within the threshold.\n\n Points corresponding to these cells are also removed.\n\n Parameters\n ----------\n surf : vtkPolyData or BSPolyData\n Input surface.\n array : str or 1D ndarray\n Array used to perform selection. If str, it must be an array in\n the CellData attributes of the PolyData.\n low : float or -np.inf\n Lower threshold. Default is -np.inf.\n upp : float or np.inf\n Upper threshold. Default is np.inf.\n\n Returns\n -------\n surf_selected : vtkPolyData or BSPolyData\n PolyData after thresholding.\n\n See Also\n --------\n :func:`drop_points`\n :func:`select_cells`\n :func:`mask_cells`\n\n '
if isinstance(array, str):
array = surf.get_array(name=array, at='c')
mask = np.logical_or((array < low), (array > upp))
return mask_cells(surf, mask) | def drop_cells(surf, array, low=(- np.inf), upp=np.inf):
'Remove surface cells whose values fall within the threshold.\n\n Points corresponding to these cells are also removed.\n\n Parameters\n ----------\n surf : vtkPolyData or BSPolyData\n Input surface.\n array : str or 1D ndarray\n Array used to perform selection. If str, it must be an array in\n the CellData attributes of the PolyData.\n low : float or -np.inf\n Lower threshold. Default is -np.inf.\n upp : float or np.inf\n Upper threshold. Default is np.inf.\n\n Returns\n -------\n surf_selected : vtkPolyData or BSPolyData\n PolyData after thresholding.\n\n See Also\n --------\n :func:`drop_points`\n :func:`select_cells`\n :func:`mask_cells`\n\n '
if isinstance(array, str):
array = surf.get_array(name=array, at='c')
mask = np.logical_or((array < low), (array > upp))
return mask_cells(surf, mask)<|docstring|>Remove surface cells whose values fall within the threshold.
Points corresponding to these cells are also removed.
Parameters
----------
surf : vtkPolyData or BSPolyData
Input surface.
array : str or 1D ndarray
Array used to perform selection. If str, it must be an array in
the CellData attributes of the PolyData.
low : float or -np.inf
Lower threshold. Default is -np.inf.
upp : float or np.inf
Upper threshold. Default is np.inf.
Returns
-------
surf_selected : vtkPolyData or BSPolyData
PolyData after thresholding.
See Also
--------
:func:`drop_points`
:func:`select_cells`
:func:`mask_cells`<|endoftext|> |
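Both `drop_points` and `drop_cells` reduce to the same boolean-mask construction before delegating to the masking helpers. A minimal sketch of that keep-mask in plain NumPy, with a toy array standing in for real per-point surface data:

```python
import numpy as np

# Toy stand-in for a per-point data array; in practice this would come
# from surf.get_array(name=..., at='p') on a real surface.
array = np.array([-2.0, 0.5, 1.5, 3.0, 10.0])
low, upp = 0.0, 2.0

# Values inside [low, upp] are dropped, so the keep-mask marks values
# strictly outside the band.
keep = np.logical_or(array < low, array > upp)

print(keep)          # [ True False False  True  True]
print(array[keep])   # [-2.  3. 10.]
```

The mask is then handed to `mask_points`/`mask_cells`, which discard the zero entries.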
# brainspace/mesh/mesh_operations.py | select_points | anibalsolon/BrainSpace | 100 | python
def select_points(surf, array, low=-np.inf, upp=np.inf):
    """Select surface points whose values fall within the threshold.

    Cells corresponding to these points are also kept.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    array : str or 1D ndarray
        Array used to perform selection. If str, it must be an array in
        the PointData attributes of the PolyData.
    low : float or -np.inf
        Lower threshold. Default is -np.inf.
    upp : float or np.inf
        Upper threshold. Default is np.inf.

    Returns
    -------
    surf_selected : vtkPolyData or BSPolyData
        PolyData after selection.

    See Also
    --------
    :func:`select_cells`
    :func:`drop_points`
    :func:`mask_points`

    """
    return _surface_selection(surf, array, low=low, upp=upp)
# brainspace/mesh/mesh_operations.py | select_cells | anibalsolon/BrainSpace | 100 | python
def select_cells(surf, array, low=-np.inf, upp=np.inf):
    """Select surface cells whose values fall within the threshold.

    Points corresponding to these cells are also kept.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    array : str or 1D ndarray
        Array used to perform selection. If str, it must be an array in
        the CellData attributes of the PolyData.
    low : float or -np.inf
        Lower threshold. Default is -np.inf.
    upp : float or np.inf
        Upper threshold. Default is np.inf.

    Returns
    -------
    surf_selected : vtkPolyData or BSPolyData
        PolyData after selection.

    See Also
    --------
    :func:`select_points`
    :func:`drop_cells`
    :func:`mask_cells`

    """
    return _surface_selection(surf, array, low=low, upp=upp, use_cell=True)
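The `select_*` functions are the complement of the `drop_*` functions. Assuming `_surface_selection` (not shown in this chunk) keeps values inside the `[low, upp]` band with inclusive bounds, the two masks partition the data; a toy check:

```python
import numpy as np

array = np.array([0.1, 0.5, 0.9, 1.5])
low, upp = 0.4, 1.0

# What select_* keeps (assuming inclusive bounds) vs. what drop_* keeps.
inside = np.logical_and(array >= low, array <= upp)
outside = np.logical_or(array < low, array > upp)

# Every entry lands in exactly one of the two masks.
assert np.all(inside ^ outside)
```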
# brainspace/mesh/mesh_operations.py | mask_points | anibalsolon/BrainSpace | 100 | python
def mask_points(surf, mask):
    """Mask surface points.

    Cells corresponding to these points are also kept.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    mask : 1D ndarray
        Binary boolean array. Zero elements are discarded.

    Returns
    -------
    surf_masked : vtkPolyData or BSPolyData
        PolyData after masking.

    See Also
    --------
    :func:`mask_cells`
    :func:`drop_points`
    :func:`select_points`

    """
    return _surface_mask(surf, mask)
# brainspace/mesh/mesh_operations.py | mask_cells | anibalsolon/BrainSpace | 100 | python
def mask_cells(surf, mask):
    """Mask surface cells.

    Points corresponding to these cells are also kept.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    mask : 1D ndarray
        Binary boolean array. Zero elements are discarded.

    Returns
    -------
    surf_masked : vtkPolyData or BSPolyData
        PolyData after masking.

    See Also
    --------
    :func:`mask_points`
    :func:`drop_cells`
    :func:`select_cells`

    """
    return _surface_mask(surf, mask, use_cell=True)
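Both masking helpers delegate to `_surface_mask`, whose implementation is not in this chunk. The sketch below is a hypothetical stand-in that illustrates the bookkeeping involved when points are masked — keeping only cells whose points all survive, then renumbering point ids — on a toy two-triangle mesh:

```python
import numpy as np

# Tiny triangle mesh: 4 points, 2 triangles sharing an edge.
points = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
cells = np.array([[0, 1, 2], [1, 3, 2]])

mask = np.array([True, True, True, False])  # drop point 3

# Keep only cells whose points are all inside the mask, then renumber
# the surviving points consecutively.
keep_cell = mask[cells].all(axis=1)
new_id = np.cumsum(mask) - 1
masked_cells = new_id[cells[keep_cell]]

print(masked_cells)  # [[0 1 2]]
```

The actual VTK-backed implementation may differ; this only shows why dropping points implies dropping their cells.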
# brainspace/mesh/mesh_operations.py | combine_surfaces | anibalsolon/BrainSpace | 100 | python
def combine_surfaces(*surfs):
    """Combine surfaces.

    Parameters
    ----------
    surfs : sequence of vtkPolyData and/or BSPolyData
        Input surfaces.

    Returns
    -------
    res : BSPolyData
        Combination of input surfaces.

    See Also
    --------
    :func:`split_surface`

    """
    alg = vtkAppendPolyData()
    for s in surfs:
        alg = connect(s, alg, add_conn=True)
    return get_output(alg)
# brainspace/mesh/mesh_operations.py | get_connected_components | anibalsolon/BrainSpace | 100 | python
@append_vtk(to='point')
def get_connected_components(surf, labeling=None, mask=None, fill=0,
                             append=False, key='components'):
    """Get connected components.

    Connected components are based on connectivity (and same label if
    `labeling` is provided).

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    labeling : str or 1D ndarray, optional
        Array with labels. If str, it must be in the point data
        attributes of `surf`. Default is None. If provided, connectivity is
        based on neighboring points with the same label.
    mask : str or 1D ndarray, optional
        Boolean mask. If str, it must be in the point data
        attributes of `surf`. Default is None. If specified, only consider
        points within the mask.
    fill : int or float, optional
        Value used for entries out of the mask. Only used if `mask` is
        provided. Default is 0.
    append : bool, optional
        If True, append array to point data attributes of input surface and
        return surface. Otherwise, only return array. Default is False.
    key : str, optional
        Array name to append to surface's point data attributes. Only used
        if ``append == True``. Default is 'components'.

    Returns
    -------
    output : vtkPolyData, BSPolyData or ndarray
        1D array with different labels for each connected component.
        Return ndarray if ``append == False``. Otherwise, return input
        surface with the new array.

    Notes
    -----
    VTK point data does not accept boolean arrays. If the mask is provided
    as a string, the mask is built from the corresponding array such that
    any value larger than 0 is True.

    """
    if isinstance(mask, str):
        mask = surf.get_array(name=mask, at='p') > 0

    if labeling is None:
        alg = wrap_vtk(vtkPolyDataConnectivityFilter, colorRegions=True,
                       extractionMode='AllRegions')
        cc = serial_connect(surf, alg).PointData['RegionId'] + 1
        if mask is not None:
            cc[~mask] = 0
        return cc

    if isinstance(labeling, str):
        labeling = surf.get_array(name=labeling, at='p')
    mlab = labeling if mask is None else labeling[mask]

    # Keep only edges between points that share a label, then find the
    # weakly connected components of the pruned adjacency graph.
    adj = get_immediate_adjacency(surf, mask=mask)
    adj = ssp.triu(adj, 1)
    mask_remove = mlab[adj.row] != mlab[adj.col]
    adj.data[mask_remove] = 0
    adj.eliminate_zeros()

    nc, cc = csg.connected_components(adj, directed=True, connection='weak')
    cc += 1
    if mask is not None:
        cc = map_to_mask(cc, mask=mask, fill=fill)
    return cc
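The labeled branch above prunes edges that cross a label boundary before running SciPy's component search. The same steps on a toy path graph (assumes SciPy is available, as the module itself does):

```python
import numpy as np
from scipy import sparse
from scipy.sparse import csgraph

# 4-point path graph 0-1-2-3; points 0,1 share one label, 2,3 another.
labeling = np.array([7, 7, 9, 9])
rows = np.array([0, 1, 2])
cols = np.array([1, 2, 3])
adj = sparse.coo_matrix((np.ones(3), (rows, cols)), shape=(4, 4))

# Drop edges whose endpoints have different labels, as the function does.
same = labeling[adj.row] == labeling[adj.col]
adj.data[~same] = 0
adj.eliminate_zeros()

n_comp, cc = csgraph.connected_components(adj, directed=True,
                                          connection='weak')
print(n_comp, cc + 1)  # 2 components: [1 1 2 2]
```

The `+ 1` matches the function's convention of reserving 0 for points outside the mask.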
# brainspace/mesh/mesh_operations.py | split_surface | anibalsolon/BrainSpace | 100 | python
@wrap_input(0)
def split_surface(surf, labeling=None):
    """Split surface according to the labeling.

    Parameters
    ----------
    surf : vtkPolyData or BSPolyData
        Input surface.
    labeling : str, 1D ndarray or None, optional
        Array used to perform the splitting. If str, it must be an array in
        the PointData attributes of `surf`. If None, split surface in its
        connected components. Default is None.

    Returns
    -------
    res : dict[int, BSPolyData]
        Dictionary of sub-surfaces for each label.

    See Also
    --------
    :func:`combine_surfaces`
    :func:`mask_points`

    """
    if labeling is None:
        labeling = get_connected_components(surf)
    elif isinstance(labeling, str):
        labeling = surf.get_array(labeling, at='p')
    ulab = np.unique(labeling)
    return {lab: mask_points(surf, labeling == lab) for lab in ulab}
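The split itself is one boolean mask per unique label fed to `mask_points`. The same comprehension with plain arrays standing in for surfaces:

```python
import numpy as np

labeling = np.array([1, 1, 2, 2, 2])
data = np.arange(5) * 10.0  # stand-in for per-point surface data

# One mask per unique label, exactly as split_surface builds its dict.
parts = {lab: data[labeling == lab] for lab in np.unique(labeling)}

print(parts[1])  # [ 0. 10.]
print(parts[2])  # [20. 30. 40.]
```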
da77e6d42a7be8d764b51733e87d85d7465d30cece9a80ca63dae2eac62135d8 | @wrap_input(0)
def downsample_with_parcellation(surf, labeling, name='parcel', check_connected=True):
" Downsample surface according to the labeling.\n\n Such that, each parcel centroid is used as a point in the new donwsampled\n surface. Connectivity is based on neighboring parcels.\n\n Parameters\n ----------\n surf : vtkPolyData or BSPolyData\n Input surface.\n labeling : str or 1D ndarray\n Array of labels used to perform the downsampling. If str, it must be an\n array in the PointData attributes of `surf`.\n name : str, optional\n Name of the downsampled parcellation appended to the PointData of the\n new surface. Default is 'parcel'.\n check_connected : bool, optional\n Whether to check if the points in each parcel are connected.\n Downsampling may produce inconsistent results if some parcels have more\n than one connected component. Default is True.\n\n Returns\n -------\n res : BSPolyData\n Downsampled surface.\n\n "
if isinstance(labeling, str):
labeling = surf.get_array(labeling, at='p')
labeling_small = np.unique(labeling)
nlabs = labeling_small.size
labeling_con = relabel_consecutive(labeling)
adj = get_immediate_adjacency(surf)
adj_neigh = adj.multiply(labeling_con).tocsr()
adj_small = np.zeros((nlabs, nlabs), dtype=bool)
for i in range(nlabs):
arow = adj_neigh[(labeling_con == i)]
for j in range((i + 1), nlabs):
adj_small[(j, i)] = adj_small[(i, j)] = np.any((arow.data == j))
points = np.empty((nlabs, 3))
cells = []
for i in range(nlabs):
m = (labeling_con == i)
if (check_connected and (csg.connected_components(adj[m][:, m])[0] > 1)):
warnings.warn(('Parcel %d is not fully connected. Downsampling may produce inconsistent results.' % labeling_small[i]))
neigh = np.unique(adj_neigh[m].data)
neigh = neigh[(neigh != i)]
if (neigh.size < 2):
continue
edges = np.array(list(combinations(neigh, 2)))
edges = edges[adj_small[edges[:, 0], edges[:, 1]]]
c = np.hstack([np.full(edges.shape[0], i)[:, None], edges])
cells.append(c)
p = surf.Points[m]
d = cdist(p, p.mean(0, keepdims=True))[:, 0]
points[i] = p[np.argmin(d)]
cells = np.unique(np.sort(np.vstack(cells), axis=1), axis=0)
surf_small = build_polydata(points, cells=cells)
surf_small.append_array(labeling_small, name=name, at='p')
return surf_small | Downsample surface according to the labeling.
Such that, each parcel centroid is used as a point in the new downsampled
surface. Connectivity is based on neighboring parcels.
Parameters
----------
surf : vtkPolyData or BSPolyData
Input surface.
labeling : str or 1D ndarray
Array of labels used to perform the downsampling. If str, it must be an
array in the PointData attributes of `surf`.
name : str, optional
Name of the downsampled parcellation appended to the PointData of the
new surface. Default is 'parcel'.
check_connected : bool, optional
Whether to check if the points in each parcel are connected.
Downsampling may produce inconsistent results if some parcels have more
than one connected component. Default is True.
Returns
-------
res : BSPolyData
Downsampled surface. | brainspace/mesh/mesh_operations.py | downsample_with_parcellation | anibalsolon/BrainSpace | 100 | python | @wrap_input(0)
def downsample_with_parcellation(surf, labeling, name='parcel', check_connected=True):
" Downsample surface according to the labeling.\n\n Such that, each parcel centroid is used as a point in the new donwsampled\n surface. Connectivity is based on neighboring parcels.\n\n Parameters\n ----------\n surf : vtkPolyData or BSPolyData\n Input surface.\n labeling : str or 1D ndarray\n Array of labels used to perform the downsampling. If str, it must be an\n array in the PointData attributes of `surf`.\n name : str, optional\n Name of the downsampled parcellation appended to the PointData of the\n new surface. Default is 'parcel'.\n check_connected : bool, optional\n Whether to check if the points in each parcel are connected.\n Downsampling may produce inconsistent results if some parcels have more\n than one connected component. Default is True.\n\n Returns\n -------\n res : BSPolyData\n Downsampled surface.\n\n "
if isinstance(labeling, str):
labeling = surf.get_array(labeling, at='p')
labeling_small = np.unique(labeling)
nlabs = labeling_small.size
labeling_con = relabel_consecutive(labeling)
adj = get_immediate_adjacency(surf)
adj_neigh = adj.multiply(labeling_con).tocsr()
adj_small = np.zeros((nlabs, nlabs), dtype=bool)
for i in range(nlabs):
arow = adj_neigh[(labeling_con == i)]
for j in range((i + 1), nlabs):
adj_small[(j, i)] = adj_small[(i, j)] = np.any((arow.data == j))
points = np.empty((nlabs, 3))
cells = []
for i in range(nlabs):
m = (labeling_con == i)
if (check_connected and (csg.connected_components(adj[m][:, m])[0] > 1)):
warnings.warn(('Parcel %d is not fully connected. Downsampling may produce inconsistent results.' % labeling_small[i]))
neigh = np.unique(adj_neigh[m].data)
neigh = neigh[(neigh != i)]
if (neigh.size < 2):
continue
edges = np.array(list(combinations(neigh, 2)))
edges = edges[adj_small[edges[:, 0], edges[:, 1]]]
c = np.hstack([np.full(edges.shape[0], i)[:, None], edges])
cells.append(c)
p = surf.Points[m]
d = cdist(p, p.mean(0, keepdims=True))[:, 0]
points[i] = p[np.argmin(d)]
cells = np.unique(np.sort(np.vstack(cells), axis=1), axis=0)
surf_small = build_polydata(points, cells=cells)
surf_small.append_array(labeling_small, name=name, at='p')
return surf_small | @wrap_input(0)
def downsample_with_parcellation(surf, labeling, name='parcel', check_connected=True):
" Downsample surface according to the labeling.\n\n Such that, each parcel centroid is used as a point in the new donwsampled\n surface. Connectivity is based on neighboring parcels.\n\n Parameters\n ----------\n surf : vtkPolyData or BSPolyData\n Input surface.\n labeling : str or 1D ndarray\n Array of labels used to perform the downsampling. If str, it must be an\n array in the PointData attributes of `surf`.\n name : str, optional\n Name of the downsampled parcellation appended to the PointData of the\n new surface. Default is 'parcel'.\n check_connected : bool, optional\n Whether to check if the points in each parcel are connected.\n Downsampling may produce inconsistent results if some parcels have more\n than one connected component. Default is True.\n\n Returns\n -------\n res : BSPolyData\n Downsampled surface.\n\n "
if isinstance(labeling, str):
labeling = surf.get_array(labeling, at='p')
labeling_small = np.unique(labeling)
nlabs = labeling_small.size
labeling_con = relabel_consecutive(labeling)
adj = get_immediate_adjacency(surf)
adj_neigh = adj.multiply(labeling_con).tocsr()
adj_small = np.zeros((nlabs, nlabs), dtype=bool)
for i in range(nlabs):
arow = adj_neigh[(labeling_con == i)]
for j in range((i + 1), nlabs):
adj_small[(j, i)] = adj_small[(i, j)] = np.any((arow.data == j))
points = np.empty((nlabs, 3))
cells = []
for i in range(nlabs):
m = (labeling_con == i)
if (check_connected and (csg.connected_components(adj[m][:, m])[0] > 1)):
warnings.warn(('Parcel %d is not fully connected. Downsampling may produce inconsistent results.' % labeling_small[i]))
neigh = np.unique(adj_neigh[m].data)
neigh = neigh[(neigh != i)]
if (neigh.size < 2):
continue
edges = np.array(list(combinations(neigh, 2)))
edges = edges[adj_small[edges[:, 0], edges[:, 1]]]
c = np.hstack([np.full(edges.shape[0], i)[:, None], edges])
cells.append(c)
p = surf.Points[m]
d = cdist(p, p.mean(0, keepdims=True))[:, 0]
points[i] = p[np.argmin(d)]
cells = np.unique(np.sort(np.vstack(cells), axis=1), axis=0)
surf_small = build_polydata(points, cells=cells)
surf_small.append_array(labeling_small, name=name, at='p')
return surf_small<|docstring|>Downsample surface according to the labeling.
Such that, each parcel centroid is used as a point in the new downsampled
surface. Connectivity is based on neighboring parcels.
Parameters
----------
surf : vtkPolyData or BSPolyData
Input surface.
labeling : str or 1D ndarray
Array of labels used to perform the downsampling. If str, it must be an
array in the PointData attributes of `surf`.
name : str, optional
Name of the downsampled parcellation appended to the PointData of the
new surface. Default is 'parcel'.
check_connected : bool, optional
Whether to check if the points in each parcel are connected.
Downsampling may produce inconsistent results if some parcels have more
than one connected component. Default is True.
Returns
-------
res : BSPolyData
Downsampled surface.<|endoftext|> |
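Within each parcel, the function above keeps the mesh point closest to the parcel's mean position as the downsampled vertex. A small self-contained sketch of that centroid-nearest selection (the 2-D toy points are invented; `np.linalg.norm` stands in for the `cdist` call in the body):

```python
import numpy as np

# Hypothetical parcel of four 2-D points.
p = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [1.0, 0.1]])

# Distance of every point to the parcel mean, as in the cdist(...) line.
d = np.linalg.norm(p - p.mean(0, keepdims=True), axis=1)
rep = p[np.argmin(d)]  # representative vertex for this parcel
```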
1108281b6d405b7acfc807b371fa2b9c76afeb8a7eb7c841eb177f9b3ddc9346 | def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, amsgrad=False, use_locking=False, name='RAdam'):
'Construct a new Rectified Adam optimizer.\n Args:\n learning_rate: A Tensor or a floating point value. The learning rate.\n beta1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta2: A float value or a constant float tensor. The exponential decay\n rate for the 2nd moment estimates.\n epsilon: A small constant for numerical stability. This epsilon is\n "epsilon hat" in the Kingma and Ba paper (in the formula just before\n Section 2.1), not the epsilon in Algorithm 1 of the paper.\n amsgrad: boolean. Whether to apply AMSGrad variant of this algorithm from\n the paper "On the Convergence of Adam and beyond".\n use_locking: If `True` use locks for update operations.\n name: Optional name for the operations created when applying gradients.\n Defaults to "Adam". @compatibility(eager) When eager execution is\n enabled, `learning_rate`, `beta1`, `beta2`, and `epsilon` can each be\n a callable that takes no arguments and returns the actual value to use.\n This can be useful for changing these values across different\n invocations of optimizer functions. @end_compatibility\n '
super(RAdam, self).__init__(use_locking, name)
self._lr = learning_rate
self._beta1 = beta1
self._beta2 = beta2
self._epsilon = epsilon
self._amsgrad = amsgrad | Construct a new Rectified Adam optimizer.
Args:
learning_rate: A Tensor or a floating point value. The learning rate.
beta1: A float value or a constant float tensor. The exponential decay
rate for the 1st moment estimates.
beta2: A float value or a constant float tensor. The exponential decay
rate for the 2nd moment estimates.
epsilon: A small constant for numerical stability. This epsilon is
"epsilon hat" in the Kingma and Ba paper (in the formula just before
Section 2.1), not the epsilon in Algorithm 1 of the paper.
amsgrad: boolean. Whether to apply AMSGrad variant of this algorithm from
the paper "On the Convergence of Adam and beyond".
use_locking: If `True` use locks for update operations.
name: Optional name for the operations created when applying gradients.
Defaults to "Adam". @compatibility(eager) When eager execution is
enabled, `learning_rate`, `beta1`, `beta2`, and `epsilon` can each be
a callable that takes no arguments and returns the actual value to use.
This can be useful for changing these values across different
invocations of optimizer functions. @end_compatibility | radam.py | __init__ | klicperajo/tensorflow_v1_rectified_adam | 0 | python | def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, amsgrad=False, use_locking=False, name='RAdam'):
'Construct a new Rectified Adam optimizer.\n Args:\n learning_rate: A Tensor or a floating point value. The learning rate.\n beta1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta2: A float value or a constant float tensor. The exponential decay\n rate for the 2nd moment estimates.\n epsilon: A small constant for numerical stability. This epsilon is\n "epsilon hat" in the Kingma and Ba paper (in the formula just before\n Section 2.1), not the epsilon in Algorithm 1 of the paper.\n amsgrad: boolean. Whether to apply AMSGrad variant of this algorithm from\n the paper "On the Convergence of Adam and beyond".\n use_locking: If `True` use locks for update operations.\n name: Optional name for the operations created when applying gradients.\n Defaults to "Adam". @compatibility(eager) When eager execution is\n enabled, `learning_rate`, `beta1`, `beta2`, and `epsilon` can each be\n a callable that takes no arguments and returns the actual value to use.\n This can be useful for changing these values across different\n invocations of optimizer functions. @end_compatibility\n '
super(RAdam, self).__init__(use_locking, name)
self._lr = learning_rate
self._beta1 = beta1
self._beta2 = beta2
self._epsilon = epsilon
self._amsgrad = amsgrad | def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, amsgrad=False, use_locking=False, name='RAdam'):
'Construct a new Rectified Adam optimizer.\n Args:\n learning_rate: A Tensor or a floating point value. The learning rate.\n beta1: A float value or a constant float tensor. The exponential decay\n rate for the 1st moment estimates.\n beta2: A float value or a constant float tensor. The exponential decay\n rate for the 2nd moment estimates.\n epsilon: A small constant for numerical stability. This epsilon is\n "epsilon hat" in the Kingma and Ba paper (in the formula just before\n Section 2.1), not the epsilon in Algorithm 1 of the paper.\n amsgrad: boolean. Whether to apply AMSGrad variant of this algorithm from\n the paper "On the Convergence of Adam and beyond".\n use_locking: If `True` use locks for update operations.\n name: Optional name for the operations created when applying gradients.\n Defaults to "Adam". @compatibility(eager) When eager execution is\n enabled, `learning_rate`, `beta1`, `beta2`, and `epsilon` can each be\n a callable that takes no arguments and returns the actual value to use.\n This can be useful for changing these values across different\n invocations of optimizer functions. @end_compatibility\n '
super(RAdam, self).__init__(use_locking, name)
self._lr = learning_rate
self._beta1 = beta1
self._beta2 = beta2
self._epsilon = epsilon
self._amsgrad = amsgrad<|docstring|>Construct a new Rectified Adam optimizer.
Args:
learning_rate: A Tensor or a floating point value. The learning rate.
beta1: A float value or a constant float tensor. The exponential decay
rate for the 1st moment estimates.
beta2: A float value or a constant float tensor. The exponential decay
rate for the 2nd moment estimates.
epsilon: A small constant for numerical stability. This epsilon is
"epsilon hat" in the Kingma and Ba paper (in the formula just before
Section 2.1), not the epsilon in Algorithm 1 of the paper.
amsgrad: boolean. Whether to apply AMSGrad variant of this algorithm from
the paper "On the Convergence of Adam and beyond".
use_locking: If `True` use locks for update operations.
name: Optional name for the operations created when applying gradients.
Defaults to "Adam". @compatibility(eager) When eager execution is
enabled, `learning_rate`, `beta1`, `beta2`, and `epsilon` can each be
a callable that takes no arguments and returns the actual value to use.
This can be useful for changing these values across different
invocations of optimizer functions. @end_compatibility<|endoftext|> |
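The row above declares Rectified Adam's hyper-parameters but its update math is not shown. As background, a sketch of the variance-rectification term from the RAdam paper (a hypothetical helper, not code from this repository):

```python
import math

def radam_rectifier(step, beta2=0.999):
    # rho_inf: maximum length of the approximated simple moving average.
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * step * beta2 ** step / (1.0 - beta2 ** step)
    if rho_t <= 4.0:
        return None  # variance intractable in early steps: skip the adaptive update
    # Rectification factor applied to the adaptive learning rate.
    return math.sqrt(((rho_t - 4.0) * (rho_t - 2.0) * rho_inf)
                     / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
```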
92a2c2c4e59e586b829638a15e0ca5e55b8155aace41654cfd0c56add66424fd | def __init__(self, service_url: str, update_interval: float, context_features: Sequence[str], *, http_client_args: Dict[(str, Any)]=None):
'\n Args:\n service_url: The HTTP url to the Heksher server.\n update_interval: The interval to wait between any two regular update calls, in seconds.\n context_features: The context features to expect in the Heksher server.\n http_client_args: Forwarded as kwargs to httpx.AsyncClient constructor.\n '
super().__init__(context_features)
http_client_args = (http_client_args or {})
self._service_url = service_url
self._update_interval = update_interval
self._http_client = AsyncClient(base_url=service_url, **http_client_args)
self._undeclared: Queue[Setting] = Queue()
self._declaration_task: Optional[Task[NoReturn]] = None
self._update_task: Optional[Task[NoReturn]] = None
self._last_cache_time: Optional[datetime] = None
self.modification_lock = Lock()
'\n A lock that is acquired whenever setting values are updated. To ensure that no modifications are made to\n settings, acquire this lock.\n '
self._update_event = Event()
'This event marks that an update occurred'
self._manual_update = Event()
'This event is waited on by the update loop (with a timeout), set it to instantly begin an update' | Args:
service_url: The HTTP url to the Heksher server.
update_interval: The interval to wait between any two regular update calls, in seconds.
context_features: The context features to expect in the Heksher server.
http_client_args: Forwarded as kwargs to httpx.AsyncClient constructor. | heksher/clients/async_client.py | __init__ | biocatchltd/heksher-py | 2 | python | def __init__(self, service_url: str, update_interval: float, context_features: Sequence[str], *, http_client_args: Dict[(str, Any)]=None):
'\n Args:\n service_url: The HTTP url to the Heksher server.\n update_interval: The interval to wait between any two regular update calls, in seconds.\n context_features: The context features to expect in the Heksher server.\n http_client_args: Forwarded as kwargs to httpx.AsyncClient constructor.\n '
super().__init__(context_features)
http_client_args = (http_client_args or {})
self._service_url = service_url
self._update_interval = update_interval
self._http_client = AsyncClient(base_url=service_url, **http_client_args)
self._undeclared: Queue[Setting] = Queue()
self._declaration_task: Optional[Task[NoReturn]] = None
self._update_task: Optional[Task[NoReturn]] = None
self._last_cache_time: Optional[datetime] = None
self.modification_lock = Lock()
'\n A lock that is acquired whenever setting values are updated. To ensure that no modifications are made to\n settings, acquire this lock.\n '
self._update_event = Event()
'This event marks that an update occurred'
self._manual_update = Event()
'This event is waited on by the update loop (with a timeout), set it to instantly begin an update' | def __init__(self, service_url: str, update_interval: float, context_features: Sequence[str], *, http_client_args: Dict[(str, Any)]=None):
'\n Args:\n service_url: The HTTP url to the Heksher server.\n update_interval: The interval to wait between any two regular update calls, in seconds.\n context_features: The context features to expect in the Heksher server.\n http_client_args: Forwarded as kwargs to httpx.AsyncClient constructor.\n '
super().__init__(context_features)
http_client_args = (http_client_args or {})
self._service_url = service_url
self._update_interval = update_interval
self._http_client = AsyncClient(base_url=service_url, **http_client_args)
self._undeclared: Queue[Setting] = Queue()
self._declaration_task: Optional[Task[NoReturn]] = None
self._update_task: Optional[Task[NoReturn]] = None
self._last_cache_time: Optional[datetime] = None
self.modification_lock = Lock()
'\n A lock that is acquired whenever setting values are updated. To ensure that no modifications are made to\n settings, acquire this lock.\n '
self._update_event = Event()
'This event marks that an update occurred'
self._manual_update = Event()
'This event is waited on by the update loop (with a timeout), set it to instantly begin an update'<|docstring|>Args:
service_url: The HTTP url to the Heksher server.
update_interval: The interval to wait between any two regular update calls, in seconds.
context_features: The context features to expect in the Heksher server.
http_client_args: Forwarded as kwargs to httpx.AsyncClient constructor.<|endoftext|> |
a3af8c8687a717ce9e75176ab7f6067cfc42fa795449e9282daaad6cbfa242b4 | async def _declaration_loop(self) -> NoReturn:
'\n The method for the task that continuously declares new settings.\n '
async def declare_setting(setting):
declaration_data = {'name': setting.name, 'configurable_features': list(setting.configurable_features), 'type': setting.type.heksher_string(), 'metadata': setting.metadata}
if (setting.default_value is not NO_DEFAULT):
declaration_data['default_value'] = setting.default_value
response = (await self._http_client.put('api/v1/settings/declare', content=orjson.dumps(declaration_data), headers=content_header))
self._handle_declaration_response(setting, response)
while True:
setting = (await self._undeclared.get())
try:
(await declare_setting(setting))
except CancelledError:
raise
except Exception:
logger.exception('setting declaration failed', extra={'setting': setting.name})
finally:
self._undeclared.task_done() | The method for the task that continuously declares new settings. | heksher/clients/async_client.py | _declaration_loop | biocatchltd/heksher-py | 2 | python | async def _declaration_loop(self) -> NoReturn:
'\n \n '
async def declare_setting(setting):
declaration_data = {'name': setting.name, 'configurable_features': list(setting.configurable_features), 'type': setting.type.heksher_string(), 'metadata': setting.metadata}
if (setting.default_value is not NO_DEFAULT):
declaration_data['default_value'] = setting.default_value
response = (await self._http_client.put('api/v1/settings/declare', content=orjson.dumps(declaration_data), headers=content_header))
self._handle_declaration_response(setting, response)
while True:
setting = (await self._undeclared.get())
try:
(await declare_setting(setting))
except CancelledError:
raise
except Exception:
logger.exception('setting declaration failed', extra={'setting': setting.name})
finally:
self._undeclared.task_done() | async def _declaration_loop(self) -> NoReturn:
'\n \n '
async def declare_setting(setting):
declaration_data = {'name': setting.name, 'configurable_features': list(setting.configurable_features), 'type': setting.type.heksher_string(), 'metadata': setting.metadata}
if (setting.default_value is not NO_DEFAULT):
declaration_data['default_value'] = setting.default_value
response = (await self._http_client.put('api/v1/settings/declare', content=orjson.dumps(declaration_data), headers=content_header))
self._handle_declaration_response(setting, response)
while True:
setting = (await self._undeclared.get())
try:
(await declare_setting(setting))
except CancelledError:
raise
except Exception:
logger.exception('setting declaration failed', extra={'setting': setting.name})
finally:
self._undeclared.task_done()<|docstring|>The method for the task that continuously declares new settings.<|endoftext|> |
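The declaration loop above is a standard asyncio consumer: pull from a queue forever, and call `task_done()` in a `finally` block so `Queue.join()` (used by `reload`) completes even when a declaration fails. A minimal runnable sketch of that shape (the setting names are invented):

```python
import asyncio

async def declarer(queue, declared):
    # Consume forever; task_done() in finally, as in _declaration_loop,
    # so a failed item still unblocks queue.join().
    while True:
        name = await queue.get()
        try:
            declared.append(name)
        finally:
            queue.task_done()

async def main():
    queue, declared = asyncio.Queue(), []
    task = asyncio.create_task(declarer(queue, declared))
    for name in ("cache_ttl", "retry_count"):  # hypothetical settings
        queue.put_nowait(name)
    await queue.join()  # returns once every queued item was processed
    task.cancel()
    return declared

processed = asyncio.run(main())
```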
c02f6996ed45f524bc6c2b862db07c8641d1d74191406d51683f321b2d6cfe75 | async def _update_loop(self) -> NoReturn:
'\n The method for the task that continuously updates declared settings.\n '
async def update():
logger.debug('heksher reload started')
data = {'setting_names': list(self._tracked_settings.keys()), 'context_features_options': self._context_feature_options(), 'include_metadata': False}
if self._last_cache_time:
data['cache_time'] = self._last_cache_time.isoformat()
new_cache_time = datetime.utcnow()
response = (await self._http_client.post('/api/v1/rules/query', content=orjson.dumps(data), headers=content_header))
response.raise_for_status()
updated_settings = response.json()['rules']
async with self.modification_lock:
self._update_settings_from_query(updated_settings)
self._last_cache_time = new_cache_time
logger.info('heksher reload done', extra={'updated_settings': list(updated_settings.keys())})
while True:
try:
(await update())
except CancelledError:
raise
except Exception:
logger.exception('error during heksher update')
finally:
self._update_event.set()
try:
self._manual_update.clear()
(await wait_for(self._manual_update.wait(), self._update_interval))
except TimeoutError:
pass | The method for the task that continuously updates declared settings. | heksher/clients/async_client.py | _update_loop | biocatchltd/heksher-py | 2 | python | async def _update_loop(self) -> NoReturn:
'\n \n '
async def update():
logger.debug('heksher reload started')
data = {'setting_names': list(self._tracked_settings.keys()), 'context_features_options': self._context_feature_options(), 'include_metadata': False}
if self._last_cache_time:
data['cache_time'] = self._last_cache_time.isoformat()
new_cache_time = datetime.utcnow()
response = (await self._http_client.post('/api/v1/rules/query', content=orjson.dumps(data), headers=content_header))
response.raise_for_status()
updated_settings = response.json()['rules']
async with self.modification_lock:
self._update_settings_from_query(updated_settings)
self._last_cache_time = new_cache_time
logger.info('heksher reload done', extra={'updated_settings': list(updated_settings.keys())})
while True:
try:
(await update())
except CancelledError:
raise
except Exception:
logger.exception('error during heksher update')
finally:
self._update_event.set()
try:
self._manual_update.clear()
(await wait_for(self._manual_update.wait(), self._update_interval))
except TimeoutError:
pass | async def _update_loop(self) -> NoReturn:
'\n \n '
async def update():
logger.debug('heksher reload started')
data = {'setting_names': list(self._tracked_settings.keys()), 'context_features_options': self._context_feature_options(), 'include_metadata': False}
if self._last_cache_time:
data['cache_time'] = self._last_cache_time.isoformat()
new_cache_time = datetime.utcnow()
response = (await self._http_client.post('/api/v1/rules/query', content=orjson.dumps(data), headers=content_header))
response.raise_for_status()
updated_settings = response.json()['rules']
async with self.modification_lock:
self._update_settings_from_query(updated_settings)
self._last_cache_time = new_cache_time
logger.info('heksher reload done', extra={'updated_settings': list(updated_settings.keys())})
while True:
try:
(await update())
except CancelledError:
raise
except Exception:
logger.exception('error during heksher update')
finally:
self._update_event.set()
try:
self._manual_update.clear()
(await wait_for(self._manual_update.wait(), self._update_interval))
except TimeoutError:
pass<|docstring|>The method for the task that continuously updates declared settings.<|endoftext|> |
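The tail of the update loop above sleeps for the update interval but wakes immediately when `_manual_update` is set. That idiom, `wait_for(event.wait(), timeout)`, can be sketched on its own (interval values here are arbitrary; the real loop also clears the event before waiting):

```python
import asyncio

async def wait_or_wake(event, interval):
    # Wait up to `interval` seconds, but return early if `event` is set.
    try:
        await asyncio.wait_for(event.wait(), interval)
        return "woken early"
    except asyncio.TimeoutError:
        return "timed out"

async def main():
    event = asyncio.Event()
    first = await wait_or_wake(event, 0.01)              # nobody sets it
    asyncio.get_running_loop().call_later(0.001, event.set)
    second = await wait_or_wake(event, 5.0)              # returns after ~1 ms
    return first, second

outcomes = asyncio.run(main())
```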
f831d01f7351573049850604dd5f6fac4c6aa4455261cfcd12fdf1cb8edbb868 | async def reload(self):
'\n Block until all the tracked settings are up to date\n '
(await self._undeclared.join())
self._update_event.clear()
self._manual_update.set()
(await self._update_event.wait()) | Block until all the tracked settings are up to date | heksher/clients/async_client.py | reload | biocatchltd/heksher-py | 2 | python | async def reload(self):
'\n \n '
(await self._undeclared.join())
self._update_event.clear()
self._manual_update.set()
(await self._update_event.wait()) | async def reload(self):
'\n \n '
(await self._undeclared.join())
self._update_event.clear()
self._manual_update.set()
(await self._update_event.wait())<|docstring|>Block until all the tracked settings are up to date<|endoftext|> |
e585175b80554f0dc94ed70d03cbff69f5cc91b45bbe37dc95832f1dbc7062d6 | async def ping(self) -> None:
'\n Check the health of the heksher server\n Raises:\n httpx.HTTPError, if an error occurs\n '
response = (await self._http_client.get('/api/health'))
response.raise_for_status() | Check the health of the heksher server
Raises:
httpx.HTTPError, if an error occurs | heksher/clients/async_client.py | ping | biocatchltd/heksher-py | 2 | python | async def ping(self) -> None:
'\n Check the health of the heksher server\n Raises:\n httpx.HTTPError, if an error occurs\n '
response = (await self._http_client.get('/api/health'))
response.raise_for_status() | async def ping(self) -> None:
'\n Check the health of the heksher server\n Raises:\n httpx.HTTPError, if an error occurs\n '
response = (await self._http_client.get('/api/health'))
response.raise_for_status()<|docstring|>Check the health of the heksher server
Raises:
httpx.HTTPError, if an error occurs<|endoftext|> |
ce9e148d8dd48c296c5e5650f3a0a2bb4b4348d322a66619e992aa2e92cf38e9 | async def get_settings(self) -> Dict:
'\n List all the settings in the service\n '
response = (await self._http_client.get('/api/v1/settings', params=orjson.dumps({'include_additional_data': True})))
response.raise_for_status()
settings = SettingsOutput.parse_obj(response.json()).to_settings_data()
return settings | List all the settings in the service | heksher/clients/async_client.py | get_settings | biocatchltd/heksher-py | 2 | python | async def get_settings(self) -> Dict:
'\n \n '
response = (await self._http_client.get('/api/v1/settings', params=orjson.dumps({'include_additional_data': True})))
response.raise_for_status()
settings = SettingsOutput.parse_obj(response.json()).to_settings_data()
return settings | async def get_settings(self) -> Dict:
'\n \n '
response = (await self._http_client.get('/api/v1/settings', params=orjson.dumps({'include_additional_data': True})))
response.raise_for_status()
settings = SettingsOutput.parse_obj(response.json()).to_settings_data()
return settings<|docstring|>List all the settings in the service<|endoftext|> |
91416fe4d6fb9d62cc8cf77fae8fb0702d2bb686d8b9e8ef4d0b00329c88f2f1 | def make_hyper_patch_conv2d_block(in_nc, out_nc, kernel_size=3, stride=1, padding=None, dilation=1, groups=1, padding_mode='reflect', norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU')):
" Defines a Hyper patch-wise convolution block with a normalization layer, an activation layer, and an optional\n dropout layer.\n\n Args:\n in_nc (int): Input number of channels\n out_nc (int): Output number of channels\n kernel_size (int): Convolution kernel size\n stride (int): Convolution stride\n padding (int, optional): The amount of padding for the height and width dimensions\n dilation (int or tuple, optional): Spacing between kernel elements. Default: 1\n groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1\n padding_mode (str, optional): ``'zeros'``, ``'reflect'``, ``'replicate'`` or ``'circular'``. Default: ``'zeros'``\n norm_cfg (dict): Type of feature normalization layer\n act_cfg (dict): Type of activation layer\n "
padding = ((kernel_size // 2) if (padding is None) else padding)
if (padding == 0):
layers = [HyperPatchNoPadding(in_nc, out_nc, kernel_size, stride, dilation, groups)]
else:
layers = [HyperPatchConv2d(in_nc, out_nc, kernel_size, stride, padding, dilation, groups, padding_mode)]
if (norm_cfg is not None):
layers.append(build_norm_layer(norm_cfg, out_nc)[1])
if (act_cfg is not None):
layers.append(build_activation_layer(act_cfg))
return MetaSequential(*layers) | Defines a Hyper patch-wise convolution block with a normalization layer, an activation layer, and an optional
dropout layer.
Args:
in_nc (int): Input number of channels
out_nc (int): Output number of channels
kernel_size (int): Convolution kernel size
stride (int): Convolution stride
padding (int, optional): The amount of padding for the height and width dimensions
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
padding_mode (str, optional): ``'zeros'``, ``'reflect'``, ``'replicate'`` or ``'circular'``. Default: ``'zeros'``
norm_cfg (dict): Type of feature normalization layer
act_cfg (dict): Type of activation layer | mmseg/models/decode_heads/hyperseg_head.py | make_hyper_patch_conv2d_block | yunchu/mmsegmentation | 3 | python | def make_hyper_patch_conv2d_block(in_nc, out_nc, kernel_size=3, stride=1, padding=None, dilation=1, groups=1, padding_mode='reflect', norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU')):
" Defines a Hyper patch-wise convolution block with a normalization layer, an activation layer, and an optional\n dropout layer.\n\n Args:\n in_nc (int): Input number of channels\n out_nc (int): Output number of channels\n kernel_size (int): Convolution kernel size\n stride (int): Convolution stride\n padding (int, optional): The amount of padding for the height and width dimensions\n dilation (int or tuple, optional): Spacing between kernel elements. Default: 1\n groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1\n padding_mode (str, optional): ``'zeros'``, ``'reflect'``, ``'replicate'`` or ``'circular'``. Default: ``'zeros'``\n norm_cfg (dict): Type of feature normalization layer\n act_cfg (dict): Type of activation layer\n "
padding = ((kernel_size // 2) if (padding is None) else padding)
if (padding == 0):
layers = [HyperPatchNoPadding(in_nc, out_nc, kernel_size, stride, dilation, groups)]
else:
layers = [HyperPatchConv2d(in_nc, out_nc, kernel_size, stride, padding, dilation, groups, padding_mode)]
if (norm_cfg is not None):
layers.append(build_norm_layer(norm_cfg, out_nc)[1])
if (act_cfg is not None):
layers.append(build_activation_layer(act_cfg))
return MetaSequential(*layers) | def make_hyper_patch_conv2d_block(in_nc, out_nc, kernel_size=3, stride=1, padding=None, dilation=1, groups=1, padding_mode='reflect', norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU')):
" Defines a Hyper patch-wise convolution block with a normalization layer, an activation layer, and an optional\n dropout layer.\n\n Args:\n in_nc (int): Input number of channels\n out_nc (int): Output number of channels\n kernel_size (int): Convolution kernel size\n stride (int): Convolution stride\n padding (int, optional): The amount of padding for the height and width dimensions\n dilation (int or tuple, optional): Spacing between kernel elements. Default: 1\n groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1\n padding_mode (str, optional): ``'zeros'``, ``'reflect'``, ``'replicate'`` or ``'circular'``. Default: ``'zeros'``\n norm_cfg (dict): Type of feature normalization layer\n act_cfg (dict): Type of activation layer\n "
padding = ((kernel_size // 2) if (padding is None) else padding)
if (padding == 0):
layers = [HyperPatchNoPadding(in_nc, out_nc, kernel_size, stride, dilation, groups)]
else:
layers = [HyperPatchConv2d(in_nc, out_nc, kernel_size, stride, padding, dilation, groups, padding_mode)]
if (norm_cfg is not None):
layers.append(build_norm_layer(norm_cfg, out_nc)[1])
if (act_cfg is not None):
layers.append(build_activation_layer(act_cfg))
return MetaSequential(*layers)<|docstring|>Defines a Hyper patch-wise convolution block with a normalization layer, an activation layer, and an optional
dropout layer.
Args:
in_nc (int): Input number of channels
out_nc (int): Output number of channels
kernel_size (int): Convolution kernel size
stride (int): Convolution stride
padding (int, optional): The amount of padding for the height and width dimensions
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
padding_mode (str, optional): ``'zeros'``, ``'reflect'``, ``'replicate'`` or ``'circular'``. Default: ``'zeros'``
norm_cfg (dict): Type of feature normalization layer
act_cfg (dict): Type of activation layer<|endoftext|> |
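A quick way to sanity-check the `padding = kernel_size // 2` default used by `make_hyper_patch_conv2d_block` above is the standard convolution output-size formula. The sketch below is illustrative only (it is not part of the mmsegmentation source): for stride 1 and an odd kernel, that default preserves spatial size.

```python
def conv_out_size(in_size, kernel_size, stride=1, padding=0, dilation=1):
    """Output length of a convolution along one spatial dimension."""
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (in_size + 2 * padding - effective_kernel) // stride + 1

# padding = kernel_size // 2 keeps the size for odd kernels at stride 1
assert conv_out_size(32, kernel_size=3, padding=3 // 2) == 32
assert conv_out_size(32, kernel_size=5, padding=5 // 2) == 32
# and with stride 2 it halves the size
assert conv_out_size(32, kernel_size=3, stride=2, padding=1) == 16
```

For even kernel sizes the same default yields an off-by-one change in size, which is one reason the block lets callers override `padding` explicitly.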
b7bcc3e1633e96710d3ff2d259d2be0bd623414cbcbf08ade756ebe5487e8cf6 | def divide_feature(in_feature, out_features, min_unit=8):
' Divides in_feature relative to each of the provided out_features.\n\n The division of the input feature will be in multiples of "min_unit".\n The algorithm makes sure that equal output features will get the same portion of the input feature.\n The smallest out feature will receive all the round down overflow (usually the final fc)\n\n Args:\n in_feature: the input feature to divide\n out_features: the relative sizes of the output features\n min_unit: each division of the input feature will be divisible by this number.\n in_feature must be divisible by this number as well\n\n Returns:\n np.array: array of integers of the divided input feature in the size of out_features.\n '
assert ((in_feature % min_unit) == 0), f'in_feature ({in_feature}) must be divisible by min_unit ({min_unit})'
units = (in_feature // min_unit)
indices = np.argsort(out_features)
out_features_sorted = np.array(out_features)[indices]
out_feat_groups = [(k, indices[list(g)]) for (k, g) in groupby(range(len(indices)), (lambda i: out_features_sorted[i]))]
out_feat_groups.sort(key=(lambda x: (x[0] * len(x[1]))), reverse=True)
units_feat_ratio = (float(units) / sum(out_features))
out_group_units = [len(out_feat_group[1]) for out_feat_group in out_feat_groups]
remaining_units = (units - sum(out_group_units))
for (i, out_feat_group) in enumerate(out_feat_groups):
if (i < (len(out_feat_groups) - 1)):
n = len(out_feat_group[1])
curr_out_feat_size = (out_feat_group[0] * n)
curr_units = max((curr_out_feat_size * units_feat_ratio), n)
curr_units = (((curr_units // n) * n) - n)
curr_units = min(curr_units, remaining_units)
out_group_units[i] += curr_units
remaining_units -= curr_units
if (remaining_units == 0):
break
else:
out_group_units[(- 1)] += remaining_units
divided_in_features = np.zeros(len(out_features), dtype=int)
for (i, out_feat_group) in enumerate(out_feat_groups):
for j in range(len(out_feat_group[1])):
divided_in_features[out_feat_group[1][j]] = ((out_group_units[i] // len(out_feat_group[1])) * min_unit)
return divided_in_features | Divides in_feature relative to each of the provided out_features.
The division of the input feature will be in multiples of "min_unit".
The algorithm makes sure that equal output features will get the same portion of the input feature.
The smallest out feature will receive all the round down overflow (usually the final fc)
Args:
in_feature: the input feature to divide
out_features: the relative sizes of the output features
min_unit: each division of the input feature will be divisible by this number.
in_feature must be divisible by this number as well
Returns:
np.array: array of integers of the divided input feature in the size of out_features. | mmseg/models/decode_heads/hyperseg_head.py | divide_feature | yunchu/mmsegmentation | 3 | python | def divide_feature(in_feature, out_features, min_unit=8):
' Divides in_feature relative to each of the provided out_features.\n\n The division of the input feature will be in multiples of "min_unit".\n The algorithm makes sure that equal output features will get the same portion of the input feature.\n The smallest out feature will receive all the round down overflow (usually the final fc)\n\n Args:\n in_feature: the input feature to divide\n out_features: the relative sizes of the output features\n min_unit: each division of the input feature will be divisible by this number.\n in_feature must be divisible by this number as well\n\n Returns:\n np.array: array of integers of the divided input feature in the size of out_features.\n '
assert ((in_feature % min_unit) == 0), f'in_feature ({in_feature}) must be divisible by min_unit ({min_unit})'
units = (in_feature // min_unit)
indices = np.argsort(out_features)
out_features_sorted = np.array(out_features)[indices]
out_feat_groups = [(k, indices[list(g)]) for (k, g) in groupby(range(len(indices)), (lambda i: out_features_sorted[i]))]
out_feat_groups.sort(key=(lambda x: (x[0] * len(x[1]))), reverse=True)
units_feat_ratio = (float(units) / sum(out_features))
out_group_units = [len(out_feat_group[1]) for out_feat_group in out_feat_groups]
remaining_units = (units - sum(out_group_units))
for (i, out_feat_group) in enumerate(out_feat_groups):
if (i < (len(out_feat_groups) - 1)):
n = len(out_feat_group[1])
curr_out_feat_size = (out_feat_group[0] * n)
curr_units = max((curr_out_feat_size * units_feat_ratio), n)
curr_units = (((curr_units // n) * n) - n)
curr_units = min(curr_units, remaining_units)
out_group_units[i] += curr_units
remaining_units -= curr_units
if (remaining_units == 0):
break
else:
out_group_units[(- 1)] += remaining_units
divided_in_features = np.zeros(len(out_features), dtype=int)
for (i, out_feat_group) in enumerate(out_feat_groups):
for j in range(len(out_feat_group[1])):
divided_in_features[out_feat_group[1][j]] = ((out_group_units[i] // len(out_feat_group[1])) * min_unit)
return divided_in_features | def divide_feature(in_feature, out_features, min_unit=8):
' Divides in_feature relative to each of the provided out_features.\n\n The division of the input feature will be in multiples of "min_unit".\n The algorithm makes sure that equal output features will get the same portion of the input feature.\n The smallest out feature will receive all the round down overflow (usually the final fc)\n\n Args:\n in_feature: the input feature to divide\n out_features: the relative sizes of the output features\n min_unit: each division of the input feature will be divisible by this number.\n in_feature must be divisible by this number as well\n\n Returns:\n np.array: array of integers of the divided input feature in the size of out_features.\n '
assert ((in_feature % min_unit) == 0), f'in_feature ({in_feature}) must be divisible by min_unit ({min_unit})'
units = (in_feature // min_unit)
indices = np.argsort(out_features)
out_features_sorted = np.array(out_features)[indices]
out_feat_groups = [(k, indices[list(g)]) for (k, g) in groupby(range(len(indices)), (lambda i: out_features_sorted[i]))]
out_feat_groups.sort(key=(lambda x: (x[0] * len(x[1]))), reverse=True)
units_feat_ratio = (float(units) / sum(out_features))
out_group_units = [len(out_feat_group[1]) for out_feat_group in out_feat_groups]
remaining_units = (units - sum(out_group_units))
for (i, out_feat_group) in enumerate(out_feat_groups):
if (i < (len(out_feat_groups) - 1)):
n = len(out_feat_group[1])
curr_out_feat_size = (out_feat_group[0] * n)
curr_units = max((curr_out_feat_size * units_feat_ratio), n)
curr_units = (((curr_units // n) * n) - n)
curr_units = min(curr_units, remaining_units)
out_group_units[i] += curr_units
remaining_units -= curr_units
if (remaining_units == 0):
break
else:
out_group_units[(- 1)] += remaining_units
divided_in_features = np.zeros(len(out_features), dtype=int)
for (i, out_feat_group) in enumerate(out_feat_groups):
for j in range(len(out_feat_group[1])):
divided_in_features[out_feat_group[1][j]] = ((out_group_units[i] // len(out_feat_group[1])) * min_unit)
return divided_in_features<|docstring|>Divides in_feature relative to each of the provided out_features.
The division of the input feature will be in multiples of "min_unit".
The algorithm makes sure that equal output features will get the same portion of the input feature.
The smallest out feature will receive all the round down overflow (usually the final fc)
Args:
in_feature: the input feature to divide
out_features: the relative sizes of the output features
min_unit: each division of the input feature will be divisible by this number.
in_feature must be divisible by this number as well
Returns:
np.array: array of integers of the divided input feature in the size of out_features.<|endoftext|> |
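Since `divide_feature` depends only on numpy and `itertools.groupby`, its documented guarantees are easy to verify directly. The block below restates the function above (same logic, lightly reformatted) and checks that equal out_features receive equal shares, every share is a multiple of `min_unit`, and the shares sum to `in_feature`. It assumes numpy is installed.

```python
from itertools import groupby
import numpy as np

def divide_feature(in_feature, out_features, min_unit=8):
    """Split in_feature across out_features in multiples of min_unit."""
    assert in_feature % min_unit == 0
    units = in_feature // min_unit
    indices = np.argsort(out_features)
    out_features_sorted = np.array(out_features)[indices]
    # Group equal output features so they receive identical shares
    out_feat_groups = [
        (k, indices[list(g)])
        for k, g in groupby(range(len(indices)), lambda i: out_features_sorted[i])
    ]
    out_feat_groups.sort(key=lambda x: x[0] * len(x[1]), reverse=True)
    units_feat_ratio = float(units) / sum(out_features)
    out_group_units = [len(members) for _, members in out_feat_groups]
    remaining_units = units - sum(out_group_units)
    for i, (feat, members) in enumerate(out_feat_groups):
        if i < len(out_feat_groups) - 1:
            n = len(members)
            curr_units = max(feat * n * units_feat_ratio, n)
            curr_units = (curr_units // n) * n - n  # keep group divisible by n
            curr_units = min(curr_units, remaining_units)
            out_group_units[i] += curr_units
            remaining_units -= curr_units
            if remaining_units == 0:
                break
        else:
            # smallest group absorbs the round-down overflow
            out_group_units[-1] += remaining_units
    divided = np.zeros(len(out_features), dtype=int)
    for i, (_, members) in enumerate(out_feat_groups):
        for j in members:
            divided[j] = (out_group_units[i] // len(members)) * min_unit
    return divided

shares = divide_feature(64, [1, 1, 2])
assert shares.tolist() == [16, 16, 32]  # equal features get equal shares
assert shares.sum() == 64
```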
790c7ab8b6070abbf043af689df2c5db8b550b9bd16d316cef240a159e87c65d | def __focus():
"Sİmilasyona odaklanma\n Fare'yi similasyona getirir ve üzerine tıklar.\n Odaklanma olmadan tuş kombinasyonlarının gönderimi başarısız olur.\n "
mouse.position = POSITION
mouse.click(MButton.left)
time.sleep(FOCUS_WAITING_TIME) | Sİmilasyona odaklanma
Fare'yi similasyona getirir ve üzerine tıklar.
Odaklanma olmadan tuş kombinasyonlarının gönderimi başarısız olur. | src/simulator.py | __focus | yedhrab/OTOBIL-Simulator | 0 | python | def __focus():
"Sİmilasyona odaklanma\n Fare'yi similasyona getirir ve üzerine tıklar.\n Odaklanma olmadan tuş kombinasyonlarının gönderimi başarısız olur.\n "
mouse.position = POSITION
mouse.click(MButton.left)
time.sleep(FOCUS_WAITING_TIME) | def __focus():
"Sİmilasyona odaklanma\n Fare'yi similasyona getirir ve üzerine tıklar.\n Odaklanma olmadan tuş kombinasyonlarının gönderimi başarısız olur.\n "
mouse.position = POSITION
mouse.click(MButton.left)
time.sleep(FOCUS_WAITING_TIME)<|docstring|>Sİmilasyona odaklanma
Fare'yi similasyona getirir ve üzerine tıklar.
Odaklanma olmadan tuş kombinasyonlarının gönderimi başarısız olur.<|endoftext|> |
194f81471abb431621d2744ff6fe462482ef971fc77a08e45192e6b00a6eded8 | def turn(ratio: float, save_speed=True):
'Aracın dönemsini sağlar\n Aracın dönüşünü, isteğe bağlı olarak hız kaybını engelleyerek sağlar\n\n Arguments:\n ratio {float} -- Dönüş değeri, `-1` sola 90 derece dönüş, `+1` sağa 90 derece dönüş anlamına gelir\n\n Keyword Arguments:\n save_speed {bool} -- Hızı koru (default: {True})\n '
key = ('d' if (ratio > 0) else 'a')
if save_speed:
__hold_key(key, (TURN_TIME * abs(ratio)))
else:
keyboard.release('w')
__hold_key(key, (TURN_TIME * abs(ratio)))
keyboard.press('w') | Aracın dönemsini sağlar
Aracın dönüşünü, isteğe bağlı olarak hız kaybını engelleyerek sağlar
Arguments:
ratio {float} -- Dönüş değeri, `-1` sola 90 derece dönüş, `+1` sağa 90 derece dönüş anlamına gelir
Keyword Arguments:
save_speed {bool} -- Hızı koru (default: {True}) | src/simulator.py | turn | yedhrab/OTOBIL-Simulator | 0 | python | def turn(ratio: float, save_speed=True):
'Aracın dönemsini sağlar\n Aracın dönüşünü, isteğe bağlı olarak hız kaybını engelleyerek sağlar\n\n Arguments:\n ratio {float} -- Dönüş değeri, `-1` sola 90 derece dönüş, `+1` sağa 90 derece dönüş anlamına gelir\n\n Keyword Arguments:\n save_speed {bool} -- Hızı koru (default: {True})\n '
key = ('d' if (ratio > 0) else 'a')
if save_speed:
__hold_key(key, (TURN_TIME * abs(ratio)))
else:
keyboard.release('w')
__hold_key(key, (TURN_TIME * abs(ratio)))
keyboard.press('w') | def turn(ratio: float, save_speed=True):
'Aracın dönemsini sağlar\n Aracın dönüşünü, isteğe bağlı olarak hız kaybını engelleyerek sağlar\n\n Arguments:\n ratio {float} -- Dönüş değeri, `-1` sola 90 derece dönüş, `+1` sağa 90 derece dönüş anlamına gelir\n\n Keyword Arguments:\n save_speed {bool} -- Hızı koru (default: {True})\n '
key = ('d' if (ratio > 0) else 'a')
if save_speed:
__hold_key(key, (TURN_TIME * abs(ratio)))
else:
keyboard.release('w')
__hold_key(key, (TURN_TIME * abs(ratio)))
keyboard.press('w')<|docstring|>Aracın dönemsini sağlar
Aracın dönüşünü, isteğe bağlı olarak hız kaybını engelleyerek sağlar
Arguments:
ratio {float} -- Dönüş değeri, `-1` sola 90 derece dönüş, `+1` sağa 90 derece dönüş anlamına gelir
Keyword Arguments:
save_speed {bool} -- Hızı koru (default: {True})<|endoftext|> |
8e507e43fcdf5b66864eeefbaa09dd52c3d24eadc2d2f02e3135d5739c25abfe | def set_speed_limit(limit: bool):
'20 Hız limitini aktif eder\n\n Arguments:\n limit {bool} -- `True` veya `False`\n '
if limit:
keyboard.press('k')
else:
keyboard.release('k') | 20 Hız limitini aktif eder
Arguments:
limit {bool} -- `True` veya `False` | src/simulator.py | set_speed_limit | yedhrab/OTOBIL-Simulator | 0 | python | def set_speed_limit(limit: bool):
'20 Hız limitini aktif eder\n\n Arguments:\n limit {bool} -- `True` veya `False`\n '
if limit:
keyboard.press('k')
else:
keyboard.release('k') | def set_speed_limit(limit: bool):
'20 Hız limitini aktif eder\n\n Arguments:\n limit {bool} -- `True` veya `False`\n '
if limit:
keyboard.press('k')
else:
keyboard.release('k')<|docstring|>20 Hız limitini aktif eder
Arguments:
limit {bool} -- `True` veya `False`<|endoftext|> |
89c228647ba271ee1d32757b93ca5e8f9344c39718c99cccd3405f512d8d90a1 | def move():
"Similasyondaki aracın ilerlemesi\n Similasyonda manuel olarak 'w' tuşuna basma işlemini sağlar.\n\n Keyword Arguments:\n stable {bool} -- Sabit hızda gitme (default: {False})\n "
__hold_key('w', 0) | Similasyondaki aracın ilerlemesi
Similasyonda manuel olarak 'w' tuşuna basma işlemini sağlar.
Keyword Arguments:
stable {bool} -- Sabit hızda gitme (default: {False}) | src/simulator.py | move | yedhrab/OTOBIL-Simulator | 0 | python | def move():
"Similasyondaki aracın ilerlemesi\n Similasyonda manuel olarak 'w' tuşuna basma işlemini sağlar.\n\n Keyword Arguments:\n stable {bool} -- Sabit hızda gitme (default: {False})\n "
__hold_key('w', 0) | def move():
"Similasyondaki aracın ilerlemesi\n Similasyonda manuel olarak 'w' tuşuna basma işlemini sağlar.\n\n Keyword Arguments:\n stable {bool} -- Sabit hızda gitme (default: {False})\n "
__hold_key('w', 0)<|docstring|>Similasyondaki aracın ilerlemesi
Similasyonda manuel olarak 'w' tuşuna basma işlemini sağlar.
Keyword Arguments:
stable {bool} -- Sabit hızda gitme (default: {False})<|endoftext|> |
0b87592ec0ec1164631b54c9092f38b54911491661280c78871596a27ac243f9 | def stop():
'\n Aracı durdurup 5 saniye bekler\n '
keyboard.release('w')
keyboard.press(Key.space)
time.sleep(5)
keyboard.release(Key.space) | Aracı durdurup 5 saniye bekler | src/simulator.py | stop | yedhrab/OTOBIL-Simulator | 0 | python | def stop():
'\n \n '
keyboard.release('w')
keyboard.press(Key.space)
time.sleep(5)
keyboard.release(Key.space) | def stop():
'\n \n '
keyboard.release('w')
keyboard.press(Key.space)
time.sleep(5)
keyboard.release(Key.space)<|docstring|>Aracı durdurup 5 saniye bekler<|endoftext|> |
80bff4ee3fca17955882dac0aeb89afc1710c4139db58fa48d7b917ba4a531b1 | def initiate():
'Similasyonu başlatma\n Arabayı tam gazla hızlanacak şekilde similasyonu başlatır.\n '
__focus()
move() | Similasyonu başlatma
Arabayı tam gazla hızlanacak şekilde similasyonu başlatır. | src/simulator.py | initiate | yedhrab/OTOBIL-Simulator | 0 | python | def initiate():
'Similasyonu başlatma\n Arabayı tam gazla hızlanacak şekilde similasyonu başlatır.\n '
__focus()
move() | def initiate():
'Similasyonu başlatma\n Arabayı tam gazla hızlanacak şekilde similasyonu başlatır.\n '
__focus()
move()<|docstring|>Similasyonu başlatma
Arabayı tam gazla hızlanacak şekilde similasyonu başlatır.<|endoftext|> |
3a3a3c9d2bce66f2b81b030a30312a02e7b749c2957da01ba6a83d60ce662921 | def slow_down(ratio=1.0):
'Arabayı yavaşlatma\n Durak gibi alanlar görüldüğünde arabayı belli bir oranda yavaşlatma\n\n Keyword Arguments:\n ratio {float} -- Hız düşürme oranı (default: {1.})\n\n Examples:\n Oran `1` ise araba durur. `0.5` ise hız yarıya indirilir\n '
keyboard.release('w')
__hold_key(Key.space, (BREAK_TIME * ratio)) | Arabayı yavaşlatma
Durak gibi alanlar görüldüğünde arabayı belli bir oranda yavaşlatma
Keyword Arguments:
ratio {float} -- Hız düşürme oranı (default: {1.})
Examples:
Oran `1` ise araba durur. `0.5` ise hız yarıya indirilir | src/simulator.py | slow_down | yedhrab/OTOBIL-Simulator | 0 | python | def slow_down(ratio=1.0):
'Arabayı yavaşlatma\n Durak gibi alanlar görüldüğünde arabayı belli bir oranda yavaşlatma\n\n Keyword Arguments:\n ratio {float} -- Hız düşürme oranı (default: {1.})\n\n Examples:\n Oran `1` ise araba durur. `0.5` ise hız yarıya indirilir\n '
keyboard.release('w')
__hold_key(Key.space, (BREAK_TIME * ratio)) | def slow_down(ratio=1.0):
'Arabayı yavaşlatma\n Durak gibi alanlar görüldüğünde arabayı belli bir oranda yavaşlatma\n\n Keyword Arguments:\n ratio {float} -- Hız düşürme oranı (default: {1.})\n\n Examples:\n Oran `1` ise araba durur. `0.5` ise hız yarıya indirilir\n '
keyboard.release('w')
__hold_key(Key.space, (BREAK_TIME * ratio))<|docstring|>Arabayı yavaşlatma
Durak gibi alanlar görüldüğünde arabayı belli bir oranda yavaşlatma
Keyword Arguments:
ratio {float} -- Hız düşürme oranı (default: {1.})
Examples:
Oran `1` ise araba durur. `0.5` ise hız yarıya indirilir<|endoftext|> |
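The simulator functions above (`turn`, `move`, `slow_down`) all call a private `__hold_key(key, seconds)` helper that is not part of this excerpt. A plausible reconstruction (press, sleep, release) is sketched below; the name and signature are assumptions inferred from the call sites, and the keyboard object is injected so the sketch can be exercised without pynput or a display.

```python
import time

def hold_key(keyboard, key, seconds):
    """Press `key`, hold it for `seconds`, then release it.

    Hypothetical reconstruction of the __hold_key helper used by
    turn()/move()/slow_down(); `keyboard` is any object exposing
    press()/release(), e.g. pynput's keyboard Controller.
    """
    keyboard.press(key)
    if seconds > 0:
        time.sleep(seconds)
    keyboard.release(key)

class RecordingKeyboard:
    """Test double that records the call sequence instead of typing."""
    def __init__(self):
        self.events = []
    def press(self, key):
        self.events.append(('press', key))
    def release(self, key):
        self.events.append(('release', key))

kb = RecordingKeyboard()
hold_key(kb, 'w', 0)
assert kb.events == [('press', 'w'), ('release', 'w')]
```

With this shape, `move()` above amounts to `hold_key(keyboard, 'w', 0)`, i.e. pressing 'w' without releasing it for any measurable time.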
a99dc301065de11f7e163eac7db87894e901453efa7e24492352a02ac235c7f5 | def test_method(other_test):
'Test metodlarını test eder\n Test metodlarını geçen süreyi ve adını yazarak test eder\n\n Arguments:\n other_test {function} -- Test metodu\n '
first_time = time.time()
other_test()
print(f"'{other_test.__name__}' metodunda geçen süre: {(time.time() - first_time)}") | Test metodlarını test eder
Test metodlarını geçen süreyi ve adını yazarak test eder
Arguments:
other_test {function} -- Test metodu | src/simulator.py | test_method | yedhrab/OTOBIL-Simulator | 0 | python | def test_method(other_test):
'Test metodlarını test eder\n Test metodlarını geçen süreyi ve adını yazarak test eder\n\n Arguments:\n other_test {function} -- Test metodu\n '
first_time = time.time()
other_test()
print(f"'{other_test.__name__}' metodunda geçen süre: {(time.time() - first_time)}") | def test_method(other_test):
'Test metodlarını test eder\n Test metodlarını geçen süreyi ve adını yazarak test eder\n\n Arguments:\n other_test {function} -- Test metodu\n '
first_time = time.time()
other_test()
print(f"'{other_test.__name__}' metodunda geçen süre: {(time.time() - first_time)}")<|docstring|>Test metodlarını test eder
Test metodlarını geçen süreyi ve adını yazarak test eder
Arguments:
other_test {function} -- Test metodu<|endoftext|> |
dff57aed4e658ef53745d70d510ab137c8f42ea2f6fcbedc58d8141e1cb8e0b0 | def first_part():
'İlk parçayı test etme\n Başlangıçtan, ilk Girilmez levhasındaki dönüşe kadar olan kısmı ele alır.\n '
initiate()
time.sleep(4.9)
slow_down(1)
move()
time.sleep(2.9)
slow_down(1)
move()
time.sleep(2.35)
turn((- 0.755)) | İlk parçayı test etme
Başlangıçtan, ilk Girilmez levhasındaki dönüşe kadar olan kısmı ele alır. | src/simulator.py | first_part | yedhrab/OTOBIL-Simulator | 0 | python | def first_part():
'İlk parçayı test etme\n Başlangıçtan, ilk Girilmez levhasındaki dönüşe kadar olan kısmı ele alır.\n '
initiate()
time.sleep(4.9)
slow_down(1)
move()
time.sleep(2.9)
slow_down(1)
move()
time.sleep(2.35)
turn((- 0.755)) | def first_part():
'İlk parçayı test etme\n Başlangıçtan, ilk Girilmez levhasındaki dönüşe kadar olan kısmı ele alır.\n '
initiate()
time.sleep(4.9)
slow_down(1)
move()
time.sleep(2.9)
slow_down(1)
move()
time.sleep(2.35)
turn((- 0.755))<|docstring|>İlk parçayı test etme
Başlangıçtan, ilk Girilmez levhasındaki dönüşe kadar olan kısmı ele alır.<|endoftext|> |
87f8283327721aee2adecf81afb6b78ccbda8963f0b1d7115d31f52a12c4cfc2 | def build_device_capabilities(device: 'MusicCastDevice') -> List[Capability]:
"\n Function to build all Capabilities of a given device.\n The ID of the capabilities will be set to '{feature.name.lower()}_{key}'\n @param device: The MusicCastDevice to generate the capabilities for\n @return: the list of capabilities of the device\n "
result = []
for feature in [f for f in DeviceFeature if (f in device.features)]:
feature_entry = _device_capabilities.get(feature)
if (feature_entry is not None):
if isinstance(feature_entry, dict):
for (key, capability) in feature_entry.items():
capability_id = f'{feature.name}_{key}'
result.append(capability(capability_id, device))
else:
result.append(feature_entry(feature.name, device))
return result | Function to build all Capabilities of a given device.
The ID of the capabilities will be set to '{feature.name.lower()}_{key}'
@param device: The MusicCastDevice to generate the capabilities for
@return: the list of capabilities of the device | aiomusiccast/capability_registry.py | build_device_capabilities | vigonotion/aiomusiccast | 3 | python | def build_device_capabilities(device: 'MusicCastDevice') -> List[Capability]:
"\n Function to build all Capabilities of a given device.\n The ID of the capabilities will be set to '{feature.name.lower()}_{key}'\n @param device: The MusicCastDevice to generate the capabilities for\n @return: the list of capabilities of the device\n "
result = []
for feature in [f for f in DeviceFeature if (f in device.features)]:
feature_entry = _device_capabilities.get(feature)
if (feature_entry is not None):
if isinstance(feature_entry, dict):
for (key, capability) in feature_entry.items():
capability_id = f'{feature.name}_{key}'
result.append(capability(capability_id, device))
else:
result.append(feature_entry(feature.name, device))
return result | def build_device_capabilities(device: 'MusicCastDevice') -> List[Capability]:
"\n Function to build all Capabilities of a given device.\n The ID of the capabilities will be set to '{feature.name.lower()}_{key}'\n @param device: The MusicCastDevice to generate the capabilities for\n @return: the list of capabilities of the device\n "
result = []
for feature in [f for f in DeviceFeature if (f in device.features)]:
feature_entry = _device_capabilities.get(feature)
if (feature_entry is not None):
if isinstance(feature_entry, dict):
for (key, capability) in feature_entry.items():
capability_id = f'{feature.name}_{key}'
result.append(capability(capability_id, device))
else:
result.append(feature_entry(feature.name, device))
return result<|docstring|>Function to build all Capabilities of a given device.
The ID of the capabilities will be set to '{feature.name.lower()}_{key}'
@param device: The MusicCastDevice to generate the capabilities for
@return: the list of capabilities of the device<|endoftext|> |
543b37006278b1ba9821b1c9fefafcec78631a8d15316f73fa6013272e1e56aa | def build_zone_capabilities(device: 'MusicCastDevice', zone_id) -> List[Capability]:
"\n Function to build all Capabilities of a given zone of a device.\n The ID of the capabilities will be set to 'zone_{feature.name.lower()}_{key}'\n @param device: The MusicCastDevice to generate the capabilities for\n @param zone_id: The zone to generate the capabilities for\n @return: The list of capabilities of the given zone\n "
result = []
for feature in [f for f in ZoneFeature if (f in device.data.zones[zone_id].features)]:
feature_entry = _zone_capabilities.get(feature)
if (feature_entry is not None):
if isinstance(feature_entry, dict):
for (key, capability) in feature_entry.items():
capability_id = f'zone_{feature.name}_{key}'
result.append(capability(capability_id, device, zone_id))
else:
result.append(feature_entry(f'zone_{feature.name}', device, zone_id))
return result | Function to build all Capabilities of a given zone of a device.
The ID of the capabilities will be set to 'zone_{feature.name.lower()}_{key}'
@param device: The MusicCastDevice to generate the capabilities for
@param zone_id: The zone to generate the capabilities for
@return: The list of capabilities of the given zone | aiomusiccast/capability_registry.py | build_zone_capabilities | vigonotion/aiomusiccast | 3 | python | def build_zone_capabilities(device: 'MusicCastDevice', zone_id) -> List[Capability]:
"\n Function to build all Capabilities of a given zone of a device.\n The ID of the capabilities will be set to 'zone_{feature.name.lower()}_{key}'\n @param device: The MusicCastDevice to generate the capabilities for\n @param zone_id: The zone to generate the capabilities for\n @return: The list of capabilities of the given zone\n "
result = []
for feature in [f for f in ZoneFeature if (f in device.data.zones[zone_id].features)]:
feature_entry = _zone_capabilities.get(feature)
if (feature_entry is not None):
if isinstance(feature_entry, dict):
for (key, capability) in feature_entry.items():
capability_id = f'zone_{feature.name}_{key}'
result.append(capability(capability_id, device, zone_id))
else:
result.append(feature_entry(f'zone_{feature.name}', device, zone_id))
return result | def build_zone_capabilities(device: 'MusicCastDevice', zone_id) -> List[Capability]:
"\n Function to build all Capabilities of a given zone of a device.\n The ID of the capabilities will be set to 'zone_{feature.name.lower()}_{key}'\n @param device: The MusicCastDevice to generate the capabilities for\n @param zone_id: The zone to generate the capabilities for\n @return: The list of capabilities of the given zone\n "
result = []
for feature in [f for f in ZoneFeature if (f in device.data.zones[zone_id].features)]:
feature_entry = _zone_capabilities.get(feature)
if (feature_entry is not None):
if isinstance(feature_entry, dict):
for (key, capability) in feature_entry.items():
capability_id = f'zone_{feature.name}_{key}'
result.append(capability(capability_id, device, zone_id))
else:
result.append(feature_entry(f'zone_{feature.name}', device, zone_id))
return result<|docstring|>Function to build all Capabilities of a given zone of a device.
The ID of the capabilities will be set to 'zone_{feature.name.lower()}_{key}'
@param device: The MusicCastDevice to generate the capabilities for
@param zone_id: The zone to generate the capabilities for
@return: The list of capabilities of the given zone<|endoftext|> |
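Both builder functions above share one dispatch pattern: a registry maps each feature either to a single capability class or to a dict of keyed classes, and the key, if any, is folded into the capability ID. The stand-ins below are hypothetical (`DeviceFeature` and `Capability` here are not aiomusiccast's real types); they only illustrate how IDs like `ALARM_enabled` and `DIMMER` fall out of the registry shape.

```python
from enum import Flag, auto

class DeviceFeature(Flag):          # stand-in for aiomusiccast's feature flags
    ALARM = auto()
    DIMMER = auto()

class Capability:                   # stand-in capability type
    def __init__(self, capability_id, device):
        self.id = capability_id
        self.device = device

# Registry: feature -> capability class, or feature -> {key: class}
_device_capabilities = {
    DeviceFeature.ALARM: {'enabled': Capability, 'volume': Capability},
    DeviceFeature.DIMMER: Capability,
}

def build_device_capabilities(device, features):
    result = []
    for feature in (f for f in DeviceFeature if f in features):
        entry = _device_capabilities.get(entry_key := feature)
        if entry is None:
            continue
        if isinstance(entry, dict):
            for key, cap in entry.items():
                result.append(cap(f'{entry_key.name}_{key}', device))
        else:
            result.append(entry(entry_key.name, device))
    return result

caps = build_device_capabilities('dev', DeviceFeature.ALARM | DeviceFeature.DIMMER)
assert [c.id for c in caps] == ['ALARM_enabled', 'ALARM_volume', 'DIMMER']
```

The zone variant works the same way, only prefixing IDs with `zone_` and passing the zone ID through to each capability constructor.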
d97ecbdbbe35fdd05d9f1a3852119fd1de00a88b862a7d3cd86ddb6a2fff2223 | def select_startpts_BFGS(list_sampled_points, point_to_start_from, num_multistart, problem):
'\n create starting points for BFGS, first select points from previously sampled points,\n but not more than half of the starting points\n :return: numpy array with starting points for BFGS\n '
if (len(list_sampled_points) > 0):
indices_chosen = np.random.choice(len(list_sampled_points), int(min(len(list_sampled_points), ((num_multistart / 2.0) - 1.0))), replace=False)
start_pts = np.array(list_sampled_points)[indices_chosen]
start_pts = np.vstack((point_to_start_from, start_pts))
else:
start_pts = [point_to_start_from]
start_pts = np.vstack((start_pts, problem.obj_func_min.get_moe_domain().generate_uniform_random_points_in_domain((num_multistart - len(start_pts)))))
return start_pts | create starting points for BFGS, first select points from previously sampled points,
but not more than half of the starting points
:return: numpy array with starting points for BFGS | multifidelity_KG/misokg_utils.py | select_startpts_BFGS | JammyL/NIPS2017 | 19 | python | def select_startpts_BFGS(list_sampled_points, point_to_start_from, num_multistart, problem):
'\n create starting points for BFGS, first select points from previously sampled points,\n but not more than half of the starting points\n :return: numpy array with starting points for BFGS\n '
if (len(list_sampled_points) > 0):
indices_chosen = np.random.choice(len(list_sampled_points), int(min(len(list_sampled_points), ((num_multistart / 2.0) - 1.0))), replace=False)
start_pts = np.array(list_sampled_points)[indices_chosen]
start_pts = np.vstack((point_to_start_from, start_pts))
else:
start_pts = [point_to_start_from]
start_pts = np.vstack((start_pts, problem.obj_func_min.get_moe_domain().generate_uniform_random_points_in_domain((num_multistart - len(start_pts)))))
return start_pts | def select_startpts_BFGS(list_sampled_points, point_to_start_from, num_multistart, problem):
'\n create starting points for BFGS, first select points from previously sampled points,\n but not more than half of the starting points\n :return: numpy array with starting points for BFGS\n '
if (len(list_sampled_points) > 0):
indices_chosen = np.random.choice(len(list_sampled_points), int(min(len(list_sampled_points), ((num_multistart / 2.0) - 1.0))), replace=False)
start_pts = np.array(list_sampled_points)[indices_chosen]
start_pts = np.vstack((point_to_start_from, start_pts))
else:
start_pts = [point_to_start_from]
start_pts = np.vstack((start_pts, problem.obj_func_min.get_moe_domain().generate_uniform_random_points_in_domain((num_multistart - len(start_pts)))))
return start_pts<|docstring|>create starting points for BFGS, first select points from previously sampled points,
but not more than half of the starting points
:return: numpy array with starting points for BFGS<|endoftext|> |
abf41a511426939667521bcc859c17e38b3a6d415bf28152fca624e8b3490f67 | def prepare_context(self, image_obj, context):
"\n Pre-processes the image's dockerfile.\n Leaves the context with a dictionary of dockerfile lines by directive.\n e.g.\n context.data['dockerfile']['RUN'] = ['RUN apt-get update', 'RUN blah']\n context.data['dockerfile']['VOLUME'] = ['VOLUME /tmp', 'VOLUMN /var/log']\n\n :return: updated context\n "
return context | Pre-processes the image's dockerfile.
Leaves the context with a dictionary of dockerfile lines by directive.
e.g.
context.data['dockerfile']['RUN'] = ['RUN apt-get update', 'RUN blah']
context.data['dockerfile']['VOLUME'] = ['VOLUME /tmp', 'VOLUMN /var/log']
:return: updated context | anchore_engine/services/policy_engine/engine/policy/gates/image_metadata.py | prepare_context | Talanor/anchore-engine | 0 | python | def prepare_context(self, image_obj, context):
"\n Pre-processes the image's dockerfile.\n Leaves the context with a dictionary of dockerfile lines by directive.\n e.g.\n context.data['dockerfile']['RUN'] = ['RUN apt-get update', 'RUN blah']\n context.data['dockerfile']['VOLUME'] = ['VOLUME /tmp', 'VOLUMN /var/log']\n\n :return: updated context\n "
return context | def prepare_context(self, image_obj, context):
"\n Pre-processes the image's dockerfile.\n Leaves the context with a dictionary of dockerfile lines by directive.\n e.g.\n context.data['dockerfile']['RUN'] = ['RUN apt-get update', 'RUN blah']\n context.data['dockerfile']['VOLUME'] = ['VOLUME /tmp', 'VOLUMN /var/log']\n\n :return: updated context\n "
return context<|docstring|>Pre-processes the image's dockerfile.
Leaves the context with a dictionary of dockerfile lines by directive.
e.g.
context.data['dockerfile']['RUN'] = ['RUN apt-get update', 'RUN blah']
context.data['dockerfile']['VOLUME'] = ['VOLUME /tmp', 'VOLUMN /var/log']
:return: updated context<|endoftext|> |
a6d199835f96c8dad009282824d3e2d266345b14f233b58fbf6e49431a841a1b | def bytes_to_long(bs):
' convert 8 bytes bs to unsigned long (bigendian) '
return bytes_to_int(bs) | convert 8 bytes bs to unsigned long (bigendian) | dubbo/utils.py | bytes_to_long | feiyuw/dubbo-py | 14 | python | def bytes_to_long(bs):
' '
return bytes_to_int(bs) | def bytes_to_long(bs):
' '
return bytes_to_int(bs)<|docstring|>convert 8 bytes bs to unsigned long (bigendian)<|endoftext|> |
1db88ce332fe99f7881edfb2b0dbc6d96962da04898395a7f2698e979434dc4e | def long_to_bytes(num):
' convert long to 8 bytes (bigendian) '
return struct.pack('>Q', num) | convert long to 8 bytes (bigendian) | dubbo/utils.py | long_to_bytes | feiyuw/dubbo-py | 14 | python | def long_to_bytes(num):
' '
return struct.pack('>Q', num) | def long_to_bytes(num):
' '
return struct.pack('>Q', num)<|docstring|>convert long to 8 bytes (bigendian)<|endoftext|> |
770f0f8d3b5e1977d4e0c6134aec34f4f310a84721714dae1a2d8be5a20553d5 | def byte(num):
' convert num to one byte int'
return int_to_bytes(int(bin(num)[(- 8):], 2)) | convert num to one byte int | dubbo/utils.py | byte | feiyuw/dubbo-py | 14 | python | def byte(num):
' '
return int_to_bytes(int(bin(num)[(- 8):], 2)) | def byte(num):
' '
return int_to_bytes(int(bin(num)[(- 8):], 2))<|docstring|>convert num to one byte int<|endoftext|> |
2e4afd3dedc1fdcf8af93732f8c03e733df38021ad500c4449c2d558f69bdf62 | def bytes_to_int(bs, signed=False):
' convert 4 bytes bs to unsigned/signed long integer (bigendian) '
if signed:
return int.from_bytes(bs, 'big', signed=True)
return reduce((lambda x, y: ((x * 256) + y)), bs) | convert 4 bytes bs to unsigned/signed long integer (bigendian) | dubbo/utils.py | bytes_to_int | feiyuw/dubbo-py | 14 | python | def bytes_to_int(bs, signed=False):
' '
if signed:
return int.from_bytes(bs, 'big', signed=True)
return reduce((lambda x, y: ((x * 256) + y)), bs) | def bytes_to_int(bs, signed=False):
' '
if signed:
return int.from_bytes(bs, 'big', signed=True)
return reduce((lambda x, y: ((x * 256) + y)), bs)<|docstring|>convert 4 bytes bs to unsigned/signed long integer (bigendian)<|endoftext|> |
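The `reduce` fold in the unsigned branch above is Horner's rule in base 256 and agrees with `int.from_bytes(bs, 'big')`; a quick check of that equivalence:

```python
from functools import reduce

def bytes_to_int(bs):
    # Horner's rule in base 256: each step shifts the accumulator one byte left.
    # Like the record above, this fold assumes a non-empty input.
    return reduce(lambda x, y: x * 256 + y, bs)

data = b'\x01\x02\x03\x04'
print(bytes_to_int(data))  # -> 16909060 (0x01020304)
assert bytes_to_int(data) == int.from_bytes(data, 'big')
```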
eb33ed66868c880d16bbfdf673386c41f6abb380a4e9d7101fafba0eb3a44068 | def int_to_bytes(num, length=None, signed=False):
' convert integer to bytes (bigendian) '
if (length == 4):
return struct.pack(((signed and '>i') or '>I'), num)
int_bytes = None
if (num == 0):
int_bytes = num.to_bytes(1, 'big')
else:
int_bytes = num.to_bytes(((num.bit_length() + 7) // 8), 'big', signed=signed)
if length:
return ((b'\x00' * (length - len(int_bytes))) + int_bytes)
return int_bytes | convert integer to bytes (bigendian) | dubbo/utils.py | int_to_bytes | feiyuw/dubbo-py | 14 | python | def int_to_bytes(num, length=None, signed=False):
' '
if (length == 4):
return struct.pack(((signed and '>i') or '>I'), num)
int_bytes = None
if (num == 0):
int_bytes = num.to_bytes(1, 'big')
else:
int_bytes = num.to_bytes(((num.bit_length() + 7) // 8), 'big', signed=signed)
if length:
return ((b'\x00' * (length - len(int_bytes))) + int_bytes)
return int_bytes | def int_to_bytes(num, length=None, signed=False):
' '
if (length == 4):
return struct.pack(((signed and '>i') or '>I'), num)
int_bytes = None
if (num == 0):
int_bytes = num.to_bytes(1, 'big')
else:
int_bytes = num.to_bytes(((num.bit_length() + 7) // 8), 'big', signed=signed)
if length:
return ((b'\x00' * (length - len(int_bytes))) + int_bytes)
return int_bytes<|docstring|>convert integer to bytes (bigendian)<|endoftext|> |
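The `length` branch above left-pads the minimal big-endian encoding with zero bytes. A simplified unsigned-only sketch of that behaviour (dropping the 4-byte `struct` fast path and the `signed` flag for brevity):

```python
def int_to_bytes(num, length=None):
    # Minimal big-endian bytes, optionally left-padded with b'\x00' to `length`.
    # max(1, ...) keeps num == 0 from producing an empty byte string.
    raw = num.to_bytes(max(1, (num.bit_length() + 7) // 8), 'big')
    return b'\x00' * (length - len(raw)) + raw if length else raw

print(int_to_bytes(0x1234).hex())            # -> '1234'
print(int_to_bytes(0x1234, length=4).hex())  # -> '00001234'
```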
98438eb5adc7d06efe36761e450860d80eefbfadd3998249113a968319fca919 | def double_to_bytes(num):
' convert double to 8 bytes (bigendian) '
return struct.pack('>d', num) | convert double to 8 bytes (bigendian) | dubbo/utils.py | double_to_bytes | feiyuw/dubbo-py | 14 | python | def double_to_bytes(num):
' '
return struct.pack('>d', num) | def double_to_bytes(num):
' '
return struct.pack('>d', num)<|docstring|>convert double to 8 bytes (bigendian)<|endoftext|> |
3660215da192211c12da374eb2b34cdea3bdf6791319326be227f3fb274cea42 | def bytes_to_double(bs):
' convert 8 bytes bs to double (bigendian) '
return struct.unpack('>d', bs)[0] | convert 8 bytes bs to double (bigendian) | dubbo/utils.py | bytes_to_double | feiyuw/dubbo-py | 14 | python | def bytes_to_double(bs):
' '
return struct.unpack('>d', bs)[0] | def bytes_to_double(bs):
' '
return struct.unpack('>d', bs)[0]<|docstring|>convert 8 bytes bs to double (bigendian)<|endoftext|> |
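`double_to_bytes` and `bytes_to_double` round-trip an IEEE 754 double through its big-endian wire form; a self-contained check:

```python
import struct

def double_to_bytes(num):
    # 8-byte big-endian IEEE 754 encoding.
    return struct.pack('>d', num)

def bytes_to_double(bs):
    return struct.unpack('>d', bs)[0]

print(double_to_bytes(1.5).hex())  # -> '3ff8000000000000'
assert bytes_to_double(double_to_bytes(3.14)) == 3.14  # exact: same bits back
```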
cd4cb7a1eac9c75ca77bc76b6a6fe8a940804e3a3d9811cc6b52320454d53cc5 | def __init__(self, placeholder_text: str=None):
'\n Args:\n placeholder_text (str): Placeholder text.. Defaults to None.\n '
self._placeholder_text = (placeholder_text if placeholder_text else self.__placeholder_text)
self._placeholder_label = QLabel(self._placeholder_text, self)
self.setup_placeholder_label() | Args:
placeholder_text (str): Placeholder text.. Defaults to None. | qiskit_metal/_gui/widgets/bases/QWidget_PlaceholderText.py | __init__ | camponogaraviera/qiskit-metal | 167 | python | def __init__(self, placeholder_text: str=None):
'\n Args:\n placeholder_text (str): Placeholder text.. Defaults to None.\n '
self._placeholder_text = (placeholder_text if placeholder_text else self.__placeholder_text)
self._placeholder_label = QLabel(self._placeholder_text, self)
self.setup_placeholder_label() | def __init__(self, placeholder_text: str=None):
'\n Args:\n placeholder_text (str): Placeholder text.. Defaults to None.\n '
self._placeholder_text = (placeholder_text if placeholder_text else self.__placeholder_text)
self._placeholder_label = QLabel(self._placeholder_text, self)
self.setup_placeholder_label()<|docstring|>Args:
placeholder_text (str): Placeholder text.. Defaults to None.<|endoftext|> |
622d961491b1123f85145ff012a641b2dc09cefe4be08ee55b010d046efbf203 | def setup_placeholder_label(self):
'QComponents will be displayed here when you create them.'
self.update_placeholder_text()
if (not self.layout()):
layout = QVBoxLayout()
self.setLayout(layout)
self.layout().addWidget(self._placeholder_label) | QComponents will be displayed here when you create them. | qiskit_metal/_gui/widgets/bases/QWidget_PlaceholderText.py | setup_placeholder_label | camponogaraviera/qiskit-metal | 167 | python | def setup_placeholder_label(self):
self.update_placeholder_text()
if (not self.layout()):
layout = QVBoxLayout()
self.setLayout(layout)
self.layout().addWidget(self._placeholder_label) | def setup_placeholder_label(self):
self.update_placeholder_text()
if (not self.layout()):
layout = QVBoxLayout()
self.setLayout(layout)
self.layout().addWidget(self._placeholder_label)<|docstring|>QComponents will be displayed here when you create them.<|endoftext|> |
d994265beed398126d65f3f57ae255a984d6b73ab4c58ab61447742f3fce24dd | def update_placeholder_text(self, text=None):
'Update the placeholder text to the given string.\n\n Args:\n text (str): New placeholder text.. Defaults to None.\n '
if text:
self._placeholder_text = text
label = self._placeholder_label
label.setText(self._placeholder_text)
label.setWordWrap(True)
label.setAlignment((Qt.AlignHCenter | Qt.AlignVCenter))
label.setAutoFillBackground(False)
label.setAttribute(Qt.WA_TranslucentBackground)
palette = self.palette()
if hasattr(palette, 'PlaceholderText'):
placeholder_color = palette.PlaceholderText
else:
placeholder_color = palette.WindowText
color = palette.color(placeholder_color)
palette.setColor(palette.Text, color)
palette.setColor(palette.Text, color)
label.setPalette(palette) | Update the placeholder text to the given string.
Args:
text (str): New placeholder text.. Defaults to None. | qiskit_metal/_gui/widgets/bases/QWidget_PlaceholderText.py | update_placeholder_text | camponogaraviera/qiskit-metal | 167 | python | def update_placeholder_text(self, text=None):
'Update the placeholder text to the given string.\n\n Args:\n text (str): New placeholder text.. Defaults to None.\n '
if text:
self._placeholder_text = text
label = self._placeholder_label
label.setText(self._placeholder_text)
label.setWordWrap(True)
label.setAlignment((Qt.AlignHCenter | Qt.AlignVCenter))
label.setAutoFillBackground(False)
label.setAttribute(Qt.WA_TranslucentBackground)
palette = self.palette()
if hasattr(palette, 'PlaceholderText'):
placeholder_color = palette.PlaceholderText
else:
placeholder_color = palette.WindowText
color = palette.color(placeholder_color)
palette.setColor(palette.Text, color)
palette.setColor(palette.Text, color)
label.setPalette(palette) | def update_placeholder_text(self, text=None):
'Update the placeholder text to the given string.\n\n Args:\n text (str): New placeholder text.. Defaults to None.\n '
if text:
self._placeholder_text = text
label = self._placeholder_label
label.setText(self._placeholder_text)
label.setWordWrap(True)
label.setAlignment((Qt.AlignHCenter | Qt.AlignVCenter))
label.setAutoFillBackground(False)
label.setAttribute(Qt.WA_TranslucentBackground)
palette = self.palette()
if hasattr(palette, 'PlaceholderText'):
placeholder_color = palette.PlaceholderText
else:
placeholder_color = palette.WindowText
color = palette.color(placeholder_color)
palette.setColor(palette.Text, color)
palette.setColor(palette.Text, color)
label.setPalette(palette)<|docstring|>Update the placeholder text to the given string.
Args:
text (str): New placeholder text.. Defaults to None.<|endoftext|> |
1bd923ad44a7b7b90fe48a071ab3ab16d2d60590eacf1332c6a092c77e08339a | def show_placeholder_text(self):
'Show the placeholder text.'
self._placeholder_label.show() | Show the placeholder text. | qiskit_metal/_gui/widgets/bases/QWidget_PlaceholderText.py | show_placeholder_text | camponogaraviera/qiskit-metal | 167 | python | def show_placeholder_text(self):
self._placeholder_label.show() | def show_placeholder_text(self):
self._placeholder_label.show()<|docstring|>Show the placeholder text.<|endoftext|> |
04cc597f3086219da0e28740040b1a8ceb18002080c7ed78cd53795e9482b366 | def hide_placeholder_text(self):
'Hide the placeholder text.'
self._placeholder_label.hide() | Hide the placeholder text. | qiskit_metal/_gui/widgets/bases/QWidget_PlaceholderText.py | hide_placeholder_text | camponogaraviera/qiskit-metal | 167 | python | def hide_placeholder_text(self):
self._placeholder_label.hide() | def hide_placeholder_text(self):
self._placeholder_label.hide()<|docstring|>Hide the placeholder text.<|endoftext|> |
37335e6a132373d6fb375f4796c1ed99f4a7102e0abdbe3a70d5af244f83d868 | @tf.function
def f1_loss(y, y_hat):
'Compute the macro soft F1-score as a cost (average 1 - soft-F1 across all labels).'
y_hat = tf.cast(y_hat, tf.float32)
y = tf.cast(y, tf.float32)
tp = tf.reduce_sum((y_hat * y), axis=0)
fp = tf.reduce_sum((y_hat * (1 - y)), axis=0)
fn = tf.reduce_sum(((1 - y_hat) * y), axis=0)
soft_f1 = ((2 * tp) / ((((2 * tp) + fn) + fp) + e))
cost = (1 - soft_f1)
macro_cost = tf.reduce_mean(cost)
return macro_cost | Compute the macro soft F1-score as a cost (average 1 - soft-F1 across all labels). | maupassant/tensorflow_helper/losses_helper.py | f1_loss | Jwuthri/TextToolKit | 2 | python | @tf.function
def f1_loss(y, y_hat):
y_hat = tf.cast(y_hat, tf.float32)
y = tf.cast(y, tf.float32)
tp = tf.reduce_sum((y_hat * y), axis=0)
fp = tf.reduce_sum((y_hat * (1 - y)), axis=0)
fn = tf.reduce_sum(((1 - y_hat) * y), axis=0)
soft_f1 = ((2 * tp) / ((((2 * tp) + fn) + fp) + e))
cost = (1 - soft_f1)
macro_cost = tf.reduce_mean(cost)
return macro_cost | @tf.function
def f1_loss(y, y_hat):
y_hat = tf.cast(y_hat, tf.float32)
y = tf.cast(y, tf.float32)
tp = tf.reduce_sum((y_hat * y), axis=0)
fp = tf.reduce_sum((y_hat * (1 - y)), axis=0)
fn = tf.reduce_sum(((1 - y_hat) * y), axis=0)
soft_f1 = ((2 * tp) / ((((2 * tp) + fn) + fp) + e))
cost = (1 - soft_f1)
macro_cost = tf.reduce_mean(cost)
return macro_cost<|docstring|>Compute the macro soft F1-score as a cost (average 1 - soft-F1 across all labels).<|endoftext|> |
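The same cost can be worked out without TensorFlow. A plain-Python sketch for a single label column, with the smoothing constant written explicitly (the `e` referenced in the record above is assumed to be a small epsilon such as `1e-7`):

```python
E = 1e-7  # assumed value of the `e` constant used in the record above

def soft_f1_cost(y, y_hat):
    # y, y_hat: floats in [0, 1] for one label across a batch.
    tp = sum(p * t for p, t in zip(y_hat, y))
    fp = sum(p * (1 - t) for p, t in zip(y_hat, y))
    fn = sum((1 - p) * t for p, t in zip(y_hat, y))
    soft_f1 = 2 * tp / (2 * tp + fn + fp + E)
    return 1 - soft_f1  # cost: ~0 when predictions match targets exactly

print(round(soft_f1_cost([1, 0, 1], [1.0, 0.0, 1.0]), 6))  # -> 0.0
```

Because the counts are soft (probabilities, not thresholded labels), the cost is differentiable, which is the point of using it as a training loss rather than the usual hard F1 metric.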
f481cf8d0d0d352399fd24085c265fdad60f0bdb855857f39150434aa26685c9 | @tf.function
def iou_loss(y, y_hat):
'The IoU metric, or Jaccard Index, is similar to the Dice metric and is calculated as the ratio between the\n overlap of the positive instances between two sets, and their mutual combined values:\n https://arxiv.org/abs/1911.08287'
y_hat = K.flatten(y_hat)
y = K.flatten(y)
intersection = K.sum(K.dot(y, y_hat))
total = (K.sum(y) + K.sum(y_hat))
union = (total - intersection)
iou = ((intersection + e) / (union + e))
return (1 - iou) | The IoU metric, or Jaccard Index, is similar to the Dice metric and is calculated as the ratio between the
overlap of the positive instances between two sets, and their mutual combined values:
https://arxiv.org/abs/1911.08287 | maupassant/tensorflow_helper/losses_helper.py | iou_loss | Jwuthri/TextToolKit | 2 | python | @tf.function
def iou_loss(y, y_hat):
'The IoU metric, or Jaccard Index, is similar to the Dice metric and is calculated as the ratio between the\n overlap of the positive instances between two sets, and their mutual combined values:\n https://arxiv.org/abs/1911.08287'
y_hat = K.flatten(y_hat)
y = K.flatten(y)
intersection = K.sum(K.dot(y, y_hat))
total = (K.sum(y) + K.sum(y_hat))
union = (total - intersection)
iou = ((intersection + e) / (union + e))
return (1 - iou) | @tf.function
def iou_loss(y, y_hat):
'The IoU metric, or Jaccard Index, is similar to the Dice metric and is calculated as the ratio between the\n overlap of the positive instances between two sets, and their mutual combined values:\n https://arxiv.org/abs/1911.08287'
y_hat = K.flatten(y_hat)
y = K.flatten(y)
intersection = K.sum(K.dot(y, y_hat))
total = (K.sum(y) + K.sum(y_hat))
union = (total - intersection)
iou = ((intersection + e) / (union + e))
return (1 - iou)<|docstring|>The IoU metric, or Jaccard Index, is similar to the Dice metric and is calculated as the ratio between the
overlap of the positive instances between two sets, and their mutual combined values:
https://arxiv.org/abs/1911.08287<|endoftext|> |
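Stripped of the Keras tensor ops, the soft Jaccard loss above reduces to a few sums over flat probability vectors. A minimal sketch, again assuming `e` is a small epsilon like `1e-7`:

```python
E = 1e-7  # assumed smoothing constant

def iou_loss(y, y_hat):
    # Soft Jaccard loss: 1 - |intersection| / |union| on flat probabilities.
    intersection = sum(t * p for t, p in zip(y, y_hat))
    union = sum(y) + sum(y_hat) - intersection
    return 1 - (intersection + E) / (union + E)

print(iou_loss([1, 0, 1], [1.0, 0.0, 1.0]))  # perfect overlap -> 0.0
```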
06ab48841910dfef546d2cfd90abf5449e19b7a9676f220be32567e688e9e9fa | @tf.function
def focal_loss(y, y_hat, alpha=0.8, gamma=2):
'Focal Loss was introduced by Lin et al of Facebook AI Research in 2017 as a means of combatting extremely\n imbalanced datasets where positive cases were relatively rare:\n https://arxiv.org/abs/1708.02002'
y_hat = K.flatten(y_hat)
y = K.flatten(y)
bce = K.binary_crossentropy(y, y_hat)
bce_exp = K.exp((- bce))
focal = K.mean(((alpha * K.pow((1 - bce_exp), gamma)) * bce))
return focal | Focal Loss was introduced by Lin et al of Facebook AI Research in 2017 as a means of combatting extremely
imbalanced datasets where positive cases were relatively rare:
https://arxiv.org/abs/1708.02002 | maupassant/tensorflow_helper/losses_helper.py | focal_loss | Jwuthri/TextToolKit | 2 | python | @tf.function
def focal_loss(y, y_hat, alpha=0.8, gamma=2):
'Focal Loss was introduced by Lin et al of Facebook AI Research in 2017 as a means of combatting extremely\n imbalanced datasets where positive cases were relatively rare:\n https://arxiv.org/abs/1708.02002'
y_hat = K.flatten(y_hat)
y = K.flatten(y)
bce = K.binary_crossentropy(y, y_hat)
bce_exp = K.exp((- bce))
focal = K.mean(((alpha * K.pow((1 - bce_exp), gamma)) * bce))
return focal | @tf.function
def focal_loss(y, y_hat, alpha=0.8, gamma=2):
'Focal Loss was introduced by Lin et al of Facebook AI Research in 2017 as a means of combatting extremely\n imbalanced datasets where positive cases were relatively rare:\n https://arxiv.org/abs/1708.02002'
y_hat = K.flatten(y_hat)
y = K.flatten(y)
bce = K.binary_crossentropy(y, y_hat)
bce_exp = K.exp((- bce))
focal = K.mean(((alpha * K.pow((1 - bce_exp), gamma)) * bce))
return focal<|docstring|>Focal Loss was introduced by Lin et al of Facebook AI Research in 2017 as a means of combatting extremely
imbalanced datasets where positive cases were relatively rare:
https://arxiv.org/abs/1708.02002<|endoftext|> |
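The `exp(-bce)` term above recovers the probability assigned to the true class, so the `(1 - bce_exp) ** gamma` factor down-weights well-classified examples. A dependency-free sketch of the same computation (with the clipping Keras applies inside `binary_crossentropy` made explicit as an assumption):

```python
import math

E = 1e-7  # assumed clip epsilon, mirroring Keras backend behaviour

def focal_loss(y, y_hat, alpha=0.8, gamma=2):
    # Mean binary focal loss over flat targets/probabilities.
    total = 0.0
    for t, p in zip(y, y_hat):
        p = min(max(p, E), 1 - E)                       # clip away log(0)
        bce = -(t * math.log(p) + (1 - t) * math.log(1 - p))
        total += alpha * (1 - math.exp(-bce)) ** gamma * bce
    return total / len(y)

# A confidently wrong prediction is penalised far more than a confident right one.
print(focal_loss([1], [0.9]) < focal_loss([1], [0.1]))  # -> True
```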
47dc5dc111b289357482875482718f653468ef695d7e01302a34d3d8f13f1b6d | @tf.function
def focal_loss_fixed(y_true, y_pred, gamma=2, alpha=0.4):
'Focal loss for multi-classification\n FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)\n Notice: y_pred is probability after softmax\n gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper\n d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)\n Focal Loss for Dense Object Detection\n https://arxiv.org/abs/1708.02002\n '
y_true = tf.convert_to_tensor(y_true, tf.float32)
y_pred = tf.convert_to_tensor(y_pred, tf.float32)
model_out = tf.add(y_pred, e)
ce = tf.multiply(y_true, (- tf.math.log(model_out)))
weight = tf.multiply(y_true, tf.pow(tf.subtract(1.0, model_out), gamma))
fl = tf.multiply(alpha, tf.multiply(weight, ce))
reduced_fl = tf.reduce_max(fl, axis=1)
return tf.reduce_mean(reduced_fl) | Focal loss for multi-classification
FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)
Notice: y_pred is probability after softmax
gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper
d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)
Focal Loss for Dense Object Detection
https://arxiv.org/abs/1708.02002 | maupassant/tensorflow_helper/losses_helper.py | focal_loss_fixed | Jwuthri/TextToolKit | 2 | python | @tf.function
def focal_loss_fixed(y_true, y_pred, gamma=2, alpha=0.4):
'Focal loss for multi-classification\n FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)\n Notice: y_pred is probability after softmax\n gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper\n d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)\n Focal Loss for Dense Object Detection\n https://arxiv.org/abs/1708.02002\n '
y_true = tf.convert_to_tensor(y_true, tf.float32)
y_pred = tf.convert_to_tensor(y_pred, tf.float32)
model_out = tf.add(y_pred, e)
ce = tf.multiply(y_true, (- tf.math.log(model_out)))
weight = tf.multiply(y_true, tf.pow(tf.subtract(1.0, model_out), gamma))
fl = tf.multiply(alpha, tf.multiply(weight, ce))
reduced_fl = tf.reduce_max(fl, axis=1)
return tf.reduce_mean(reduced_fl) | @tf.function
def focal_loss_fixed(y_true, y_pred, gamma=2, alpha=0.4):
'Focal loss for multi-classification\n FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)\n Notice: y_pred is probability after softmax\n gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper\n d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)\n Focal Loss for Dense Object Detection\n https://arxiv.org/abs/1708.02002\n '
y_true = tf.convert_to_tensor(y_true, tf.float32)
y_pred = tf.convert_to_tensor(y_pred, tf.float32)
model_out = tf.add(y_pred, e)
ce = tf.multiply(y_true, (- tf.math.log(model_out)))
weight = tf.multiply(y_true, tf.pow(tf.subtract(1.0, model_out), gamma))
fl = tf.multiply(alpha, tf.multiply(weight, ce))
reduced_fl = tf.reduce_max(fl, axis=1)
return tf.reduce_mean(reduced_fl)<|docstring|>Focal loss for multi-classification
FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)
Notice: y_pred is probability after softmax
gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper
d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)
Focal Loss for Dense Object Detection
https://arxiv.org/abs/1708.02002<|endoftext|> |
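Because `y_true` is one-hot, multiplying by it twice (once in `ce`, once in `weight`) changes nothing, and the `reduce_max` over each row simply picks out the true class's focal term. A plain-Python re-derivation under those assumptions, with `e` again taken to be `1e-7`:

```python
import math

E = 1e-7  # assumed value of the `e` constant in the record above

def focal_loss_fixed(y_true, y_pred, gamma=2, alpha=0.4):
    # y_true: one-hot rows; y_pred: softmax probability rows.
    per_sample = []
    for t_row, p_row in zip(y_true, y_pred):
        # max over the row selects the true class's term (others are zeroed by t).
        per_sample.append(max(
            alpha * t * (1 - (p + E)) ** gamma * (-math.log(p + E))
            for t, p in zip(t_row, p_row)
        ))
    return sum(per_sample) / len(per_sample)

print(focal_loss_fixed([[1, 0]], [[0.9, 0.1]])
      < focal_loss_fixed([[1, 0]], [[0.1, 0.9]]))  # -> True
```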
2b7314629d33569559a9cd93c3e90027ef7b64645489dee794052767caa56fe1 | @tf.function
def combo_loss(y, y_hat, alpha=0.5, ce_ratio=0.5, smooth=1):
'Combo loss is a combination of Dice Loss and a modified Cross-Entropy function that, like Tversky loss, has\n additional constants which penalise either false positives or false negatives more respectively:\n https://arxiv.org/abs/1805.02798'
y = K.flatten(y)
y_hat = K.flatten(y_hat)
intersection = K.sum((y * y_hat))
dice = (((2.0 * intersection) + smooth) / ((K.sum(y) + K.sum(y_hat)) + smooth))
y_hat = K.clip(y_hat, e, (1.0 - e))
out = (- (alpha * ((y * K.log(y_hat)) + (((1 - alpha) * (1.0 - y)) * K.log((1.0 - y_hat))))))
weighted_ce = K.mean(out, axis=(- 1))
combo = ((ce_ratio * weighted_ce) - ((1 - ce_ratio) * dice))
return combo | Combo loss is a combination of Dice Loss and a modified Cross-Entropy function that, like Tversky loss, has
additional constants which penalise either false positives or false negatives more respectively:
https://arxiv.org/abs/1805.02798 | maupassant/tensorflow_helper/losses_helper.py | combo_loss | Jwuthri/TextToolKit | 2 | python | @tf.function
def combo_loss(y, y_hat, alpha=0.5, ce_ratio=0.5, smooth=1):
'Combo loss is a combination of Dice Loss and a modified Cross-Entropy function that, like Tversky loss, has\n additional constants which penalise either false positives or false negatives more respectively:\n https://arxiv.org/abs/1805.02798'
y = K.flatten(y)
y_hat = K.flatten(y_hat)
intersection = K.sum((y * y_hat))
dice = (((2.0 * intersection) + smooth) / ((K.sum(y) + K.sum(y_hat)) + smooth))
y_hat = K.clip(y_hat, e, (1.0 - e))
out = (- (alpha * ((y * K.log(y_hat)) + (((1 - alpha) * (1.0 - y)) * K.log((1.0 - y_hat))))))
weighted_ce = K.mean(out, axis=(- 1))
combo = ((ce_ratio * weighted_ce) - ((1 - ce_ratio) * dice))
return combo | @tf.function
def combo_loss(y, y_hat, alpha=0.5, ce_ratio=0.5, smooth=1):
'Combo loss is a combination of Dice Loss and a modified Cross-Entropy function that, like Tversky loss, has\n additional constants which penalise either false positives or false negatives more respectively:\n https://arxiv.org/abs/1805.02798'
y = K.flatten(y)
y_hat = K.flatten(y_hat)
intersection = K.sum((y * y_hat))
dice = (((2.0 * intersection) + smooth) / ((K.sum(y) + K.sum(y_hat)) + smooth))
y_hat = K.clip(y_hat, e, (1.0 - e))
out = (- (alpha * ((y * K.log(y_hat)) + (((1 - alpha) * (1.0 - y)) * K.log((1.0 - y_hat))))))
weighted_ce = K.mean(out, axis=(- 1))
combo = ((ce_ratio * weighted_ce) - ((1 - ce_ratio) * dice))
return combo<|docstring|>Combo loss is a combination of Dice Loss and a modified Cross-Entropy function that, like Tversky loss, has
additional constants which penalise either false positives or false negatives more respectively:
https://arxiv.org/abs/1805.02798<|endoftext|> |
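Combo loss mixes a (negated) Dice score with an alpha-weighted cross-entropy, so good predictions drive it negative. A stdlib-only sketch of the same arithmetic, with the clip epsilon assumed to be `1e-7`:

```python
import math

E = 1e-7  # assumed clip epsilon

def combo_loss(y, y_hat, alpha=0.5, ce_ratio=0.5, smooth=1):
    # Dice term rewards overlap; the alpha-weighted CE term penalises errors.
    inter = sum(t * p for t, p in zip(y, y_hat))
    dice = (2 * inter + smooth) / (sum(y) + sum(y_hat) + smooth)
    ce_terms = []
    for t, p in zip(y, y_hat):
        p = min(max(p, E), 1 - E)
        ce_terms.append(-(alpha * (t * math.log(p)
                                   + (1 - alpha) * (1 - t) * math.log(1 - p))))
    weighted_ce = sum(ce_terms) / len(ce_terms)
    return ce_ratio * weighted_ce - (1 - ce_ratio) * dice

print(combo_loss([1, 0], [1.0, 0.0]) < 0)  # perfect prediction -> about -0.5
```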
acee355d5eeda00564488e280c133aaf7689ed8cdbf5241f3537afa2c4870178 | def get_source_packages(self):
' Returns dictionary mapping source package names to Package() objects. '
package_dict = {}
for (source_repo_name, source_repo) in self.source_repo_package_xmls.items():
for pkg_name in source_repo:
package_dict[pkg_name] = Package(pkg_name, source_repo_name)
return package_dict | Returns dictionary mapping source package names to Package() objects. | src/rosdistro/distribution_cache.py | get_source_packages | kunaltyagi/rosdistro | 6 | python | def get_source_packages(self):
' '
package_dict = {}
for (source_repo_name, source_repo) in self.source_repo_package_xmls.items():
for pkg_name in source_repo:
package_dict[pkg_name] = Package(pkg_name, source_repo_name)
return package_dict | def get_source_packages(self):
' '
package_dict = {}
for (source_repo_name, source_repo) in self.source_repo_package_xmls.items():
for pkg_name in source_repo:
package_dict[pkg_name] = Package(pkg_name, source_repo_name)
return package_dict<|docstring|>Returns dictionary mapping source package names to Package() objects.<|endoftext|> |
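The method above inverts the nested `{repo: {pkg: xml}}` mapping into a flat package index. A stand-alone sketch with a stand-in `Package` type (the real `Package` class lives elsewhere in `rosdistro`):

```python
from collections import namedtuple

# Stand-in for rosdistro's Package class, for illustration only.
Package = namedtuple('Package', ['name', 'repository_name'])

def get_source_packages(source_repo_package_xmls):
    # Flatten {repo_name: {pkg_name: xml}} into {pkg_name: Package(pkg, repo)}.
    return {
        pkg_name: Package(pkg_name, repo_name)
        for repo_name, repo in source_repo_package_xmls.items()
        for pkg_name in repo
    }

pkgs = get_source_packages({'repo_a': {'pkg1': '<xml/>', 'pkg2': '<xml/>'}})
print(pkgs['pkg1'].repository_name)  # -> 'repo_a'
```

Note that a package name appearing in two source repos would silently keep only the last one, mirroring the dict-assignment behaviour of the original loop.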
7636549ae9bf01a18ff6bbaed8e1fdbfb19a666cb08c30bae64a477c9ff79173 | def __init__(self, bias: int=None, daylight_bias: int=None, daylight_date: MapiCalendarTimeZoneRuleDto=None, standard_bias: int=None, standard_date: MapiCalendarTimeZoneRuleDto=None, time_zone_flags: List[str]=None, year: int=None):
"\n Represents the mapi calendar time zone rule. \n :param bias: Time zone's offset in minutes from UTC. \n :type bias: int\n :param daylight_bias: Offset in minutes from lBias during daylight saving time. \n :type daylight_bias: int\n :param daylight_date: Date and local time that indicate when to begin using the DaylightBias. \n :type daylight_date: MapiCalendarTimeZoneRuleDto\n :param standard_bias: Offset in minutes from lBias during standard time. \n :type standard_bias: int\n :param standard_date: Date and local time that indicate when to begin using the StandardBias. \n :type standard_date: MapiCalendarTimeZoneRuleDto\n :param time_zone_flags: Individual bit flags that specify information about this TimeZoneRule. \n :type time_zone_flags: List[str]\n :param year: Year in which this rule is scheduled to take effect. \n :type year: int\n "
self._bias = None
self._daylight_bias = None
self._daylight_date = None
self._standard_bias = None
self._standard_date = None
self._time_zone_flags = None
self._year = None
if (bias is not None):
self.bias = bias
if (daylight_bias is not None):
self.daylight_bias = daylight_bias
if (daylight_date is not None):
self.daylight_date = daylight_date
if (standard_bias is not None):
self.standard_bias = standard_bias
if (standard_date is not None):
self.standard_date = standard_date
if (time_zone_flags is not None):
self.time_zone_flags = time_zone_flags
if (year is not None):
self.year = year | Represents the mapi calendar time zone rule.
:param bias: Time zone's offset in minutes from UTC.
:type bias: int
:param daylight_bias: Offset in minutes from lBias during daylight saving time.
:type daylight_bias: int
:param daylight_date: Date and local time that indicate when to begin using the DaylightBias.
:type daylight_date: MapiCalendarTimeZoneRuleDto
:param standard_bias: Offset in minutes from lBias during standard time.
:type standard_bias: int
:param standard_date: Date and local time that indicate when to begin using the StandardBias.
:type standard_date: MapiCalendarTimeZoneRuleDto
:param time_zone_flags: Individual bit flags that specify information about this TimeZoneRule.
:type time_zone_flags: List[str]
:param year: Year in which this rule is scheduled to take effect.
:type year: int | sdk/AsposeEmailCloudSdk/models/mapi_calendar_time_zone_info_dto.py | __init__ | aspose-email-cloud/aspose-email-cloud-python | 1 | python | def __init__(self, bias: int=None, daylight_bias: int=None, daylight_date: MapiCalendarTimeZoneRuleDto=None, standard_bias: int=None, standard_date: MapiCalendarTimeZoneRuleDto=None, time_zone_flags: List[str]=None, year: int=None):
"\n Represents the mapi calendar time zone rule. \n :param bias: Time zone's offset in minutes from UTC. \n :type bias: int\n :param daylight_bias: Offset in minutes from lBias during daylight saving time. \n :type daylight_bias: int\n :param daylight_date: Date and local time that indicate when to begin using the DaylightBias. \n :type daylight_date: MapiCalendarTimeZoneRuleDto\n :param standard_bias: Offset in minutes from lBias during standard time. \n :type standard_bias: int\n :param standard_date: Date and local time that indicate when to begin using the StandardBias. \n :type standard_date: MapiCalendarTimeZoneRuleDto\n :param time_zone_flags: Individual bit flags that specify information about this TimeZoneRule. \n :type time_zone_flags: List[str]\n :param year: Year in which this rule is scheduled to take effect. \n :type year: int\n "
self._bias = None
self._daylight_bias = None
self._daylight_date = None
self._standard_bias = None
self._standard_date = None
self._time_zone_flags = None
self._year = None
if (bias is not None):
self.bias = bias
if (daylight_bias is not None):
self.daylight_bias = daylight_bias
if (daylight_date is not None):
self.daylight_date = daylight_date
if (standard_bias is not None):
self.standard_bias = standard_bias
if (standard_date is not None):
self.standard_date = standard_date
if (time_zone_flags is not None):
self.time_zone_flags = time_zone_flags
if (year is not None):
self.year = year | def __init__(self, bias: int=None, daylight_bias: int=None, daylight_date: MapiCalendarTimeZoneRuleDto=None, standard_bias: int=None, standard_date: MapiCalendarTimeZoneRuleDto=None, time_zone_flags: List[str]=None, year: int=None):
"\n Represents the mapi calendar time zone rule. \n :param bias: Time zone's offset in minutes from UTC. \n :type bias: int\n :param daylight_bias: Offset in minutes from lBias during daylight saving time. \n :type daylight_bias: int\n :param daylight_date: Date and local time that indicate when to begin using the DaylightBias. \n :type daylight_date: MapiCalendarTimeZoneRuleDto\n :param standard_bias: Offset in minutes from lBias during standard time. \n :type standard_bias: int\n :param standard_date: Date and local time that indicate when to begin using the StandardBias. \n :type standard_date: MapiCalendarTimeZoneRuleDto\n :param time_zone_flags: Individual bit flags that specify information about this TimeZoneRule. \n :type time_zone_flags: List[str]\n :param year: Year in which this rule is scheduled to take effect. \n :type year: int\n "
self._bias = None
self._daylight_bias = None
self._daylight_date = None
self._standard_bias = None
self._standard_date = None
self._time_zone_flags = None
self._year = None
if (bias is not None):
self.bias = bias
if (daylight_bias is not None):
self.daylight_bias = daylight_bias
if (daylight_date is not None):
self.daylight_date = daylight_date
if (standard_bias is not None):
self.standard_bias = standard_bias
if (standard_date is not None):
self.standard_date = standard_date
if (time_zone_flags is not None):
self.time_zone_flags = time_zone_flags
if (year is not None):
self.year = year<|docstring|>Represents the mapi calendar time zone rule.
:param bias: Time zone's offset in minutes from UTC.
:type bias: int
:param daylight_bias: Offset in minutes from lBias during daylight saving time.
:type daylight_bias: int
:param daylight_date: Date and local time that indicate when to begin using the DaylightBias.
:type daylight_date: MapiCalendarTimeZoneRuleDto
:param standard_bias: Offset in minutes from lBias during standard time.
:type standard_bias: int
:param standard_date: Date and local time that indicate when to begin using the StandardBias.
:type standard_date: MapiCalendarTimeZoneRuleDto
:param time_zone_flags: Individual bit flags that specify information about this TimeZoneRule.
:type time_zone_flags: List[str]
:param year: Year in which this rule is scheduled to take effect.
:type year: int<|endoftext|> |
fd4c3ec1ae4f7b28c44f41431b3e02fec6dbc727390f0ec6cb06c5c7d020f7a9 | @property
def bias(self) -> int:
"\n Time zone's offset in minutes from UTC. \n\n :return: The bias of this MapiCalendarTimeZoneInfoDto.\n :rtype: int\n "
return self._bias | Time zone's offset in minutes from UTC.
:return: The bias of this MapiCalendarTimeZoneInfoDto.
:rtype: int | sdk/AsposeEmailCloudSdk/models/mapi_calendar_time_zone_info_dto.py | bias | aspose-email-cloud/aspose-email-cloud-python | 1 | python | @property
def bias(self) -> int:
"\n Time zone's offset in minutes from UTC. \n\n :return: The bias of this MapiCalendarTimeZoneInfoDto.\n :rtype: int\n "
return self._bias | @property
def bias(self) -> int:
"\n Time zone's offset in minutes from UTC. \n\n :return: The bias of this MapiCalendarTimeZoneInfoDto.\n :rtype: int\n "
return self._bias<|docstring|>Time zone's offset in minutes from UTC.
:return: The bias of this MapiCalendarTimeZoneInfoDto.
:rtype: int<|endoftext|> |
14417fa76921317d475e50d2618bd360d8d76d8ae1102f5f24fa3e3673dedc80 | @bias.setter
def bias(self, bias: int):
"\n Time zone's offset in minutes from UTC. \n\n :param bias: The bias of this MapiCalendarTimeZoneInfoDto.\n :type: int\n "
if (bias is None):
raise ValueError('Invalid value for `bias`, must not be `None`')
self._bias = bias | Time zone's offset in minutes from UTC.
:param bias: The bias of this MapiCalendarTimeZoneInfoDto.
:type: int | sdk/AsposeEmailCloudSdk/models/mapi_calendar_time_zone_info_dto.py | bias | aspose-email-cloud/aspose-email-cloud-python | 1 | python | @bias.setter
def bias(self, bias: int):
"\n Time zone's offset in minutes from UTC. \n\n :param bias: The bias of this MapiCalendarTimeZoneInfoDto.\n :type: int\n "
if (bias is None):
raise ValueError('Invalid value for `bias`, must not be `None`')
self._bias = bias | @bias.setter
def bias(self, bias: int):
"\n Time zone's offset in minutes from UTC. \n\n :param bias: The bias of this MapiCalendarTimeZoneInfoDto.\n :type: int\n "
if (bias is None):
raise ValueError('Invalid value for `bias`, must not be `None`')
self._bias = bias<|docstring|>Time zone's offset in minutes from UTC.
:param bias: The bias of this MapiCalendarTimeZoneInfoDto.
:type: int<|endoftext|> |
861e9519fae7d7fef86b6284dbf810a1b9007838fc7dc82ad07cdfa9d76f7a1a | @property
def daylight_bias(self) -> int:
'\n Offset in minutes from lBias during daylight saving time. \n\n :return: The daylight_bias of this MapiCalendarTimeZoneInfoDto.\n :rtype: int\n '
return self._daylight_bias | Offset in minutes from lBias during daylight saving time.
:return: The daylight_bias of this MapiCalendarTimeZoneInfoDto.
:rtype: int | sdk/AsposeEmailCloudSdk/models/mapi_calendar_time_zone_info_dto.py | daylight_bias | aspose-email-cloud/aspose-email-cloud-python | 1 | python | @property
def daylight_bias(self) -> int:
'\n Offset in minutes from lBias during daylight saving time. \n\n :return: The daylight_bias of this MapiCalendarTimeZoneInfoDto.\n :rtype: int\n '
return self._daylight_bias | @property
def daylight_bias(self) -> int:
'\n Offset in minutes from lBias during daylight saving time. \n\n :return: The daylight_bias of this MapiCalendarTimeZoneInfoDto.\n :rtype: int\n '
return self._daylight_bias<|docstring|>Offset in minutes from lBias during daylight saving time.
:return: The daylight_bias of this MapiCalendarTimeZoneInfoDto.
:rtype: int<|endoftext|> |
@daylight_bias.setter
def daylight_bias(self, daylight_bias: int):
    """
    Offset in minutes from lBias during daylight saving time.

    :param daylight_bias: The daylight_bias of this MapiCalendarTimeZoneInfoDto.
    :type: int
    """
    if daylight_bias is None:
        raise ValueError('Invalid value for `daylight_bias`, must not be `None`')
    self._daylight_bias = daylight_bias
@property
def daylight_date(self) -> MapiCalendarTimeZoneRuleDto:
    """
    Date and local time that indicate when to begin using the DaylightBias.

    :return: The daylight_date of this MapiCalendarTimeZoneInfoDto.
    :rtype: MapiCalendarTimeZoneRuleDto
    """
    return self._daylight_date
@daylight_date.setter
def daylight_date(self, daylight_date: MapiCalendarTimeZoneRuleDto):
    """
    Date and local time that indicate when to begin using the DaylightBias.

    :param daylight_date: The daylight_date of this MapiCalendarTimeZoneInfoDto.
    :type: MapiCalendarTimeZoneRuleDto
    """
    self._daylight_date = daylight_date