body_hash | body | docstring | path | name | repository_name | repository_stars | lang | body_without_docstring | unified
---|---|---|---|---|---|---|---|---|---
fc91cc189e0d6a7e42b417a788e5d1678b431e36778bb22535a6b6b45a3a62be | def __init__(self, ScenarioParameters, minBandwidth=0.01, maxHops=sys.maxsize, TimeStep=2, solverClockLimit=1e+75, solverDetTicksLimit=1e+75, solverParameters={}):
"Inputs:\n - ScenarioParameters, a description of the PUFFER scenario parameters in JSON format.\n See sample_input and ScenarioGenerator.py for examples.\n - minBandwidth, also used to cluster the network in groups if mode\n 'Bandwidth' or 'BandwidthHops' is used. Default = 0.01\n - maxHops, used to cluster the network in groups if mode\n 'Hops' or 'BandwidthHops' is used. Default = sys.maxint\n - TimeStep: the time step of the MILP solver\n - solverClockLimit (default=1e75), the maximum time allotted to the solver\n (in seconds). Note that this is NOT a deterministic time limit.\n - solverDetTicksLimit (default=1e75), the maximum number of ticks allotted\n to the solver. Tics are an internal deterministic measure of solver\n progress: the mapping from ticks to time is machine- and\n load-specific. This limit should be used to ensure that multiple\n instances of the problem terminate simultaneously.\n - solverParameters, a dictionary of parameters for the MIP solver.\n Default: empty.\n\n "
self.ScenarioParameters = json.loads(ScenarioParameters)
self.minBandwidth = minBandwidth
self.maxHops = maxHops
self.TimeStep = TimeStep
self.solverClockLimit = solverClockLimit
self.solverDetTicksLimit = solverDetTicksLimit
self.solverParameters = solverParameters
self.BaseStationName = 'base_station'
self.isPartitioned = False
self.isSetUp = False
self.isScheduled = False | Inputs:
- ScenarioParameters, a description of the PUFFER scenario parameters in JSON format.
See sample_input and ScenarioGenerator.py for examples.
- minBandwidth, also used to cluster the network in groups if mode
'Bandwidth' or 'BandwidthHops' is used. Default = 0.01
- maxHops, used to cluster the network in groups if mode
'Hops' or 'BandwidthHops' is used. Default = sys.maxsize
- TimeStep: the time step of the MILP solver
- solverClockLimit (default=1e75), the maximum time allotted to the solver
(in seconds). Note that this is NOT a deterministic time limit.
- solverDetTicksLimit (default=1e75), the maximum number of ticks allotted
to the solver. Ticks are an internal deterministic measure of solver
progress: the mapping from ticks to time is machine- and
load-specific. This limit should be used to ensure that multiple
instances of the problem terminate simultaneously.
- solverParameters, a dictionary of parameters for the MIP solver.
Default: empty. | schedulers/mosaic_schedulers/common/examples/PUFFER/MILPProblemGenerator.py | __init__ | nasa/MOSAIC | 18 | python | def __init__(self, ScenarioParameters, minBandwidth=0.01, maxHops=sys.maxsize, TimeStep=2, solverClockLimit=1e+75, solverDetTicksLimit=1e+75, solverParameters={}):
"Inputs:\n - ScenarioParameters, a description of the PUFFER scenario parameters in JSON format.\n See sample_input and ScenarioGenerator.py for examples.\n - minBandwidth, also used to cluster the network in groups if mode\n 'Bandwidth' or 'BandwidthHops' is used. Default = 0.01\n - maxHops, used to cluster the network in groups if mode\n 'Hops' or 'BandwidthHops' is used. Default = sys.maxint\n - TimeStep: the time step of the MILP solver\n - solverClockLimit (default=1e75), the maximum time allotted to the solver\n (in seconds). Note that this is NOT a deterministic time limit.\n - solverDetTicksLimit (default=1e75), the maximum number of ticks allotted\n to the solver. Tics are an internal deterministic measure of solver\n progress: the mapping from ticks to time is machine- and\n load-specific. This limit should be used to ensure that multiple\n instances of the problem terminate simultaneously.\n - solverParameters, a dictionary of parameters for the MIP solver.\n Default: empty.\n\n "
self.ScenarioParameters = json.loads(ScenarioParameters)
self.minBandwidth = minBandwidth
self.maxHops = maxHops
self.TimeStep = TimeStep
self.solverClockLimit = solverClockLimit
self.solverDetTicksLimit = solverDetTicksLimit
self.solverParameters = solverParameters
self.BaseStationName = 'base_station'
self.isPartitioned = False
self.isSetUp = False
self.isScheduled = False | def __init__(self, ScenarioParameters, minBandwidth=0.01, maxHops=sys.maxsize, TimeStep=2, solverClockLimit=1e+75, solverDetTicksLimit=1e+75, solverParameters={}):
"Inputs:\n - ScenarioParameters, a description of the PUFFER scenario parameters in JSON format.\n See sample_input and ScenarioGenerator.py for examples.\n - minBandwidth, also used to cluster the network in groups if mode\n 'Bandwidth' or 'BandwidthHops' is used. Default = 0.01\n - maxHops, used to cluster the network in groups if mode\n 'Hops' or 'BandwidthHops' is used. Default = sys.maxint\n - TimeStep: the time step of the MILP solver\n - solverClockLimit (default=1e75), the maximum time allotted to the solver\n (in seconds). Note that this is NOT a deterministic time limit.\n - solverDetTicksLimit (default=1e75), the maximum number of ticks allotted\n to the solver. Tics are an internal deterministic measure of solver\n progress: the mapping from ticks to time is machine- and\n load-specific. This limit should be used to ensure that multiple\n instances of the problem terminate simultaneously.\n - solverParameters, a dictionary of parameters for the MIP solver.\n Default: empty.\n\n "
self.ScenarioParameters = json.loads(ScenarioParameters)
self.minBandwidth = minBandwidth
self.maxHops = maxHops
self.TimeStep = TimeStep
self.solverClockLimit = solverClockLimit
self.solverDetTicksLimit = solverDetTicksLimit
self.solverParameters = solverParameters
self.BaseStationName = 'base_station'
self.isPartitioned = False
self.isSetUp = False
self.isScheduled = False<|docstring|>Inputs:
- ScenarioParameters, a description of the PUFFER scenario parameters in JSON format.
See sample_input and ScenarioGenerator.py for examples.
- minBandwidth, also used to cluster the network in groups if mode
'Bandwidth' or 'BandwidthHops' is used. Default = 0.01
- maxHops, used to cluster the network in groups if mode
'Hops' or 'BandwidthHops' is used. Default = sys.maxsize
- TimeStep: the time step of the MILP solver
- solverClockLimit (default=1e75), the maximum time allotted to the solver
(in seconds). Note that this is NOT a deterministic time limit.
- solverDetTicksLimit (default=1e75), the maximum number of ticks allotted
to the solver. Ticks are an internal deterministic measure of solver
progress: the mapping from ticks to time is machine- and
load-specific. This limit should be used to ensure that multiple
instances of the problem terminate simultaneously.
- solverParameters, a dictionary of parameters for the MIP solver.
Default: empty.<|endoftext|> |
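The constructor above only parses the scenario JSON and stores the clustering and solver limits; a minimal sketch of how it might be instantiated follows. The scenario fields shown and the class name `MILPProblemGenerator` (inferred from the module name) are assumptions for illustration, not confirmed by this row.

```python
import json
import sys

# Hypothetical, abbreviated scenario; the real schema is documented in
# sample_input and ScenarioGenerator.py, which are not reproduced here.
scenario = json.dumps({
    "Agents": ["puffer1", "puffer2", "base_station"],
    "CommunicationNetwork": [
        {"origin": "puffer1", "destination": "puffer2", "bandwidth": 0.5},
    ],
    # ... Time, Tasks, AgentCapabilities, AgentStates, Agent fields omitted
})

# Assumed class name (the module is MILPProblemGenerator.py); adjust to
# wherever the class actually lives in the package.
problem = MILPProblemGenerator(
    ScenarioParameters=scenario,
    minBandwidth=0.01,        # edges below this bandwidth are dropped when clustering
    maxHops=sys.maxsize,      # effectively no hop limit
    TimeStep=2,               # MILP discretization step (seconds)
    solverClockLimit=60,      # wall-clock cap; not deterministic
    solverDetTicksLimit=1e7,  # deterministic tick cap, for synchronized termination
)
```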
bb8e5e87b7c90d90fcc7b86ef6a7923591fe406a4b8f94207a6b40d21c1eb8dd | def partition(self, mode='Bandwidth'):
'\n Partitions the graph according to the latency or bandwidth between PUFFERs.\n Input: mode, the mode used to partition the network.\n Mode can be:\n - Bandwidth: removes all edges with bandwidth lower than\n self.minBandwidth and returns the resulting connected components.\n - Hops: returns clusters where all agents are within self.maxHops of\n each other.\n - BandwidthHops: filter by bandwidth and then cluster by hops\n '
Agents = self.ScenarioParameters['Agents']
G = nx.DiGraph()
G.add_nodes_from(Agents)
Links_list = []
if (mode == 'Bandwidth'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
if (link['bandwidth'] >= self.minBandwidth):
Links_list.append((node0, node1, link))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=len(G.nodes()))
elif (mode == 'Hops'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
_val = link.copy()
_val['length'] = 1
Links_list.append((node0, node1, _val))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=self.maxHops, distance_label='length')
elif (mode == 'BandwidthHops'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
if (link['bandwidth'] >= self.minBandwidth):
Links_list.append((node0, node1, link))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=self.maxHops)
else:
raise NotImplementedError
NetworkPartitioning.label_connected_components(G, self.ConnectedComponents, plot=False)
self.G = G
self.isPartitioned = True
return self.ConnectedComponents | Partitions the graph according to the latency or bandwidth between PUFFERs.
Input: mode, the mode used to partition the network.
Mode can be:
- Bandwidth: removes all edges with bandwidth lower than
self.minBandwidth and returns the resulting connected components.
- Hops: returns clusters where all agents are within self.maxHops of
each other.
- BandwidthHops: filter by bandwidth and then cluster by hops | schedulers/mosaic_schedulers/common/examples/PUFFER/MILPProblemGenerator.py | partition | nasa/MOSAIC | 18 | python | def partition(self, mode='Bandwidth'):
'\n Partitions the graph according to the latency or bandwidth between PUFFERs.\n Input: mode, the mode used to partition the network.\n Mode can be:\n - Bandwidth: removes all edges with bandwidth lower than\n self.minBandwidth and returns the resulting connected components.\n - Hops: returns clusters where all agents are within self.maxHops of\n each other.\n - BandwidthHops: filter by bandwidth and then cluster by hops\n '
Agents = self.ScenarioParameters['Agents']
G = nx.DiGraph()
G.add_nodes_from(Agents)
Links_list = []
if (mode == 'Bandwidth'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
if (link['bandwidth'] >= self.minBandwidth):
Links_list.append((node0, node1, link))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=len(G.nodes()))
elif (mode == 'Hops'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
_val = link.copy()
_val['length'] = 1
Links_list.append((node0, node1, _val))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=self.maxHops, distance_label='length')
elif (mode == 'BandwidthHops'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
if (link['bandwidth'] >= self.minBandwidth):
Links_list.append((node0, node1, link))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=self.maxHops)
else:
raise NotImplementedError
NetworkPartitioning.label_connected_components(G, self.ConnectedComponents, plot=False)
self.G = G
self.isPartitioned = True
return self.ConnectedComponents | def partition(self, mode='Bandwidth'):
'\n Partitions the graph according to the latency or bandwidth between PUFFERs.\n Input: mode, the mode used to partition the network.\n Mode can be:\n - Bandwidth: removes all edges with bandwidth lower than\n self.minBandwidth and returns the resulting connected components.\n - Hops: returns clusters where all agents are within self.maxHops of\n each other.\n - BandwidthHops: filter by bandwidth and then cluster by hops\n '
Agents = self.ScenarioParameters['Agents']
G = nx.DiGraph()
G.add_nodes_from(Agents)
Links_list = []
if (mode == 'Bandwidth'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
if (link['bandwidth'] >= self.minBandwidth):
Links_list.append((node0, node1, link))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=len(G.nodes()))
elif (mode == 'Hops'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
_val = link.copy()
_val['length'] = 1
Links_list.append((node0, node1, _val))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=self.maxHops, distance_label='length')
elif (mode == 'BandwidthHops'):
for link in self.ScenarioParameters['CommunicationNetwork']:
node0 = link['origin']
node1 = link['destination']
if (link['bandwidth'] >= self.minBandwidth):
Links_list.append((node0, node1, link))
G.add_edges_from(Links_list)
self.ConnectedComponents = NetworkPartitioning.partition_by_latency(G, max_diameter=self.maxHops)
else:
raise NotImplementedError
NetworkPartitioning.label_connected_components(G, self.ConnectedComponents, plot=False)
self.G = G
self.isPartitioned = True
return self.ConnectedComponents<|docstring|>Partitions the graph according to the latency or bandwidth between PUFFERs.
Input: mode, the mode used to partition the network.
Mode can be:
- Bandwidth: removes all edges with bandwidth lower than
self.minBandwidth and returns the resulting connected components.
- Hops: returns clusters where all agents are within self.maxHops of
each other.
- BandwidthHops: filter by bandwidth and then cluster by hops<|endoftext|> |
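The 'Bandwidth' mode boils down to dropping links below the threshold and grouping whatever stays connected. Below is a minimal stand-alone sketch of that idea, using plain networkx weakly connected components in place of NetworkPartitioning.partition_by_latency, whose implementation is not shown in this row.

```python
import networkx as nx

# Toy communication network: keep only links at or above min_bandwidth,
# then group agents into weakly connected components.
links = [
    ("puffer1", "puffer2", {"bandwidth": 0.5}),
    ("puffer2", "base_station", {"bandwidth": 0.2}),
    ("puffer3", "puffer4", {"bandwidth": 0.005}),  # below threshold, dropped
]
min_bandwidth = 0.01

G = nx.DiGraph()
G.add_nodes_from(["puffer1", "puffer2", "puffer3", "puffer4", "base_station"])
G.add_edges_from((u, v, d) for u, v, d in links if d["bandwidth"] >= min_bandwidth)

components = [sorted(c) for c in nx.weakly_connected_components(G)]
print(components)  # e.g. [['base_station', 'puffer1', 'puffer2'], ['puffer3'], ['puffer4']]
```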
568a7507d9c6104014825fd3e28bbdc64fd641040f8d0efb9e7579c51f3fba68 | def buildScheduler(self):
' Creates the problem instance to be solved by the scheduler as a JSON file'
if (self.isPartitioned is False):
self.partition()
ScenarioParameters = self.ScenarioParameters
BaseStationName = self.BaseStationName
agent = ScenarioParameters['Agent']
AgentNames = [agent]
for component in self.ConnectedComponents:
if (agent in component):
AgentNames = component
break
self.AgentNames = AgentNames
self.agent = agent
PufferNames = list(AgentNames)
if (BaseStationName in PufferNames):
PufferNames.remove(BaseStationName)
Thor_s = ScenarioParameters['Time']['TimeHorizon']
TimeStep = self.TimeStep
Thor = int((float(Thor_s) / float(TimeStep)))
InfeasibleTime = (2 * Thor_s)
CommWindowsBandwidth = {}
for t in range(Thor):
CommWindowsBandwidth[t] = {}
for Agent1 in AgentNames:
CommWindowsBandwidth[t][Agent1] = {}
for Agent2 in AgentNames:
CommWindowsBandwidth[t][Agent1][Agent2] = 0
for link in ScenarioParameters['CommunicationNetwork']:
Agent1 = link['origin']
Agent2 = link['destination']
for t in range(Thor):
if ((Agent1 in AgentNames) and (Agent2 in AgentNames)):
CommWindowsBandwidth[t][Agent1][Agent2] = link['bandwidth']
achievable_science_in_horizon = 3
pufferScienceAvailability = {}
for puffer in PufferNames:
in_science_zone = ScenarioParameters['AgentStates'][puffer]['in_science_zone']
if ('samples' in ScenarioParameters['AgentStates'][puffer].keys()):
puffer_samples = ScenarioParameters['AgentStates'][puffer]['samples']
else:
puffer_samples = achievable_science_in_horizon
pufferScienceAvailability[puffer] = (puffer_samples * in_science_zone)
PufferTaskReward_housekeeping = {'short_range_image': ScenarioParameters['Tasks']['TaskReward']['short_range_image'], 'vo_localization': ScenarioParameters['Tasks']['TaskReward']['vo_localization'], 'plan_path': ScenarioParameters['Tasks']['TaskReward']['plan_path'], 'send_drive_cmd': ScenarioParameters['Tasks']['TaskReward']['send_drive_cmd']}
PufferTaskReward_science = {'take_sample': ScenarioParameters['Tasks']['TaskReward']['take_sample'], 'analyze_sample': ScenarioParameters['Tasks']['TaskReward']['analyze_sample'], 'store_sample': ScenarioParameters['Tasks']['TaskReward']['store_sample']}
PufferOptionalTask_housekeeping = {'short_range_image': False, 'vo_localization': False, 'plan_path': False, 'send_drive_cmd': False}
PufferOptionalTask_science = {'take_sample': True, 'analyze_sample': True, 'store_sample': True}
PufferProductSizes_housekeeping = {'short_range_image': ScenarioParameters['Tasks']['ProductsSize']['short_range_image'], 'vo_localization': ScenarioParameters['Tasks']['ProductsSize']['vo_localization'], 'plan_path': ScenarioParameters['Tasks']['ProductsSize']['plan_path'], 'send_drive_cmd': ScenarioParameters['Tasks']['ProductsSize']['send_drive_cmd']}
PufferProductSizes_science = {'take_sample': ScenarioParameters['Tasks']['ProductsSize']['take_sample'], 'analyze_sample': ScenarioParameters['Tasks']['ProductsSize']['analyze_sample'], 'store_sample': ScenarioParameters['Tasks']['ProductsSize']['store_sample']}
pufferOwnHousekeepingTasks = ['short_range_image', 'vo_localization', 'plan_path', 'send_drive_cmd']
pufferOwnScienceTasks = ['take_sample', 'analyze_sample', 'store_sample']
RelocatableTasks = ['vo_localization', 'plan_path', 'analyze_sample', 'store_sample']
PufferDependencyList_housekeeping = {'short_range_image': [[]], 'vo_localization': [['short_range_image']], 'plan_path': [['vo_localization']], 'send_drive_cmd': [['plan_path']]}
PufferDependencyList_science = {'take_sample': [[]], 'analyze_sample': [['take_sample']], 'store_sample': [['analyze_sample']]}
PufferIncompatibleTasks_housekeeping = []
PufferIncompatibleTasks_science = []
AllOptionalTasks = {}
AllTaskReward = {}
AllProductSizes = {}
AllDependencyList = {}
AllIncompatibleTasks = []
for pufferName in PufferNames:
Temp_PufferOptionalTask = {('{}:'.format(pufferName) + k): v for (k, v) in PufferOptionalTask_housekeeping.items()}
Temp_PufferTaskReward = {('{}:'.format(pufferName) + k): v for (k, v) in PufferTaskReward_housekeeping.items()}
Temp_PufferProductSizes = {('{}:'.format(pufferName) + k): v for (k, v) in PufferProductSizes_housekeeping.items()}
for scienceNo in range(pufferScienceAvailability[pufferName]):
Temp_PufferOptionalTask.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferOptionalTask_science.items()})
Temp_PufferTaskReward.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferTaskReward_science.items()})
Temp_PufferProductSizes.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferProductSizes_science.items()})
for (k, v) in PufferDependencyList_housekeeping.items():
AllDeps = []
for ConjDependency in v:
AllDeps.append([('{}:'.format(pufferName) + DisjDependency) for DisjDependency in ConjDependency])
AllDependencyList.update({('{}:'.format(pufferName) + k): AllDeps})
for scienceNo in range(pufferScienceAvailability[pufferName]):
for (k, v) in PufferDependencyList_science.items():
AllDeps = []
for ConjDependency in v:
AllDeps.append([('{}:{}'.format(pufferName, scienceNo) + DisjDependency) for DisjDependency in ConjDependency])
AllDependencyList.update({('{}:{}'.format(pufferName, scienceNo) + k): AllDeps})
for BadPairing in PufferIncompatibleTasks_housekeeping:
Temp_BadPairing = [('{}:'.format(pufferName) + Task) for Task in BadPairing]
AllIncompatibleTasks.append(Temp_BadPairing)
for scienceNo in range(pufferScienceAvailability[pufferName]):
for BadPairing in PufferIncompatibleTasks_science:
Temp_BadPairing = [('{}:{}'.format(pufferName, scienceNo) + Task) for Task in BadPairing]
AllIncompatibleTasks.append(Temp_BadPairing)
AllOptionalTasks.update(Temp_PufferOptionalTask)
AllTaskReward.update(Temp_PufferTaskReward)
AllProductSizes.update(Temp_PufferProductSizes)
AllComputationTime = {}
AllComputationLoad = {}
AllInitialInformation = {}
for pufferName in PufferNames:
for task in pufferOwnHousekeepingTasks:
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][pufferName].get(task, InfeasibleTime)
AllComputationTime.update({('{}:'.format(pufferName) + task): {pufferName: tasktime}})
AllComputationLoad.update({('{}:'.format(pufferName) + task): {pufferName: 1.0}})
AllInitialInformation.update({('{}:'.format(pufferName) + task): {pufferName: False}})
if (BaseStationName in AgentNames):
for task in pufferOwnHousekeepingTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][BaseStationName].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:'.format(pufferName) + task)][BaseStationName] = tasktime
AllComputationLoad[('{}:'.format(pufferName) + task)][BaseStationName] = 1.0
AllInitialInformation[('{}:'.format(pufferName) + task)][BaseStationName] = False
for otherPuffer in PufferNames:
if (otherPuffer != pufferName):
for task in pufferOwnHousekeepingTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][otherPuffer].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:'.format(pufferName) + task)][otherPuffer] = tasktime
AllComputationLoad[('{}:'.format(pufferName) + task)][otherPuffer] = 1.0
AllInitialInformation[('{}:'.format(pufferName) + task)][otherPuffer] = False
for scienceNo in range(pufferScienceAvailability[pufferName]):
for task in pufferOwnScienceTasks:
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][pufferName].get(task, InfeasibleTime)
AllComputationTime.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: tasktime}})
AllComputationLoad.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: 1.0}})
AllInitialInformation.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: False}})
if (BaseStationName in AgentNames):
for task in pufferOwnScienceTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][BaseStationName].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = tasktime
AllComputationLoad[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = 1.0
AllInitialInformation[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = False
for otherPuffer in PufferNames:
if (otherPuffer != pufferName):
for task in pufferOwnScienceTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][otherPuffer].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = tasktime
AllComputationLoad[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = 1.0
AllInitialInformation[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = False
for (task, val) in AllComputationTime.items():
for (agent, tasktime) in val.items():
AllComputationTime[task][agent] = int(np.ceil((float(tasktime) / float(TimeStep))))
AllTaskColors = {}
PufferTaskList_housekeeping = PufferOptionalTask_housekeeping.keys()
PufferTaskList_science = PufferOptionalTask_science.keys()
PufferNo = 0
N_puffers = len(PufferNames)
for pufferName in PufferNames:
TempPufferHSVColor = np.array([(float(PufferNo) / float(N_puffers)), 0.0, 0.85])
PufferTaskCounter = 0
NumPufferTasks = len((list(PufferTaskList_housekeeping) + list(PufferTaskList_science)))
for PufferTask in PufferTaskList_housekeeping:
TempTaskHSVColor = np.array([0.0, (0.2 + (0.8 * (1.0 - (float(PufferTaskCounter) / float((NumPufferTasks - 1)))))), 0.0])
TaskRGBColor = np.array(colorsys.hsv_to_rgb((TempPufferHSVColor[0] + TempTaskHSVColor[0]), (TempPufferHSVColor[1] + TempTaskHSVColor[1]), (TempPufferHSVColor[2] + TempTaskHSVColor[2])))
AllTaskColors[('{}:'.format(pufferName) + PufferTask)] = TaskRGBColor
PufferTaskCounter += 1
for PufferTask in PufferTaskList_science:
TempTaskHSVColor = np.array([0.0, (0.2 + (0.8 * (1.0 - (float(PufferTaskCounter) / float((NumPufferTasks - 1)))))), 0.0])
TaskRGBColor = np.array(colorsys.hsv_to_rgb((TempPufferHSVColor[0] + TempTaskHSVColor[0]), (TempPufferHSVColor[1] + TempTaskHSVColor[1]), (TempPufferHSVColor[2] + TempTaskHSVColor[2])))
for scienceNo in range(pufferScienceAvailability[pufferName]):
AllTaskColors[('{}:{}'.format(pufferName, scienceNo) + PufferTask)] = TaskRGBColor
PufferTaskCounter += 1
PufferNo += 1
self.AllTaskColors = AllTaskColors
Tasks = MOSAICSolver.MILPTasks(OptionalTasks=AllOptionalTasks, TaskReward=AllTaskReward, ProductsSize=AllProductSizes, DependencyList=AllDependencyList, IncompatibleTasks=AllIncompatibleTasks)
AgentCapabilities = MOSAICSolver.MILPAgentCapabilities(ComputationTime=AllComputationTime, ComputationLoad=AllComputationLoad, InitialInformation=AllInitialInformation)
self.Tasks = Tasks
self.AgentCapabilities = AgentCapabilities
self.Thor = Thor
self.isSetUp = True | Creates the problem instance to be solved by the scheduler as a JSON file | schedulers/mosaic_schedulers/common/examples/PUFFER/MILPProblemGenerator.py | buildScheduler | nasa/MOSAIC | 18 | python | def buildScheduler(self):
' '
if (self.isPartitioned is False):
self.partition()
ScenarioParameters = self.ScenarioParameters
BaseStationName = self.BaseStationName
agent = ScenarioParameters['Agent']
AgentNames = [agent]
for component in self.ConnectedComponents:
if (agent in component):
AgentNames = component
break
self.AgentNames = AgentNames
self.agent = agent
PufferNames = list(AgentNames)
if (BaseStationName in PufferNames):
PufferNames.remove(BaseStationName)
Thor_s = ScenarioParameters['Time']['TimeHorizon']
TimeStep = self.TimeStep
Thor = int((float(Thor_s) / float(TimeStep)))
InfeasibleTime = (2 * Thor_s)
CommWindowsBandwidth = {}
for t in range(Thor):
CommWindowsBandwidth[t] = {}
for Agent1 in AgentNames:
CommWindowsBandwidth[t][Agent1] = {}
for Agent2 in AgentNames:
CommWindowsBandwidth[t][Agent1][Agent2] = 0
for link in ScenarioParameters['CommunicationNetwork']:
Agent1 = link['origin']
Agent2 = link['destination']
for t in range(Thor):
if ((Agent1 in AgentNames) and (Agent2 in AgentNames)):
CommWindowsBandwidth[t][Agent1][Agent2] = link['bandwidth']
achievable_science_in_horizon = 3
pufferScienceAvailability = {}
for puffer in PufferNames:
in_science_zone = ScenarioParameters['AgentStates'][puffer]['in_science_zone']
if ('samples' in ScenarioParameters['AgentStates'][puffer].keys()):
puffer_samples = ScenarioParameters['AgentStates'][puffer]['samples']
else:
puffer_samples = achievable_science_in_horizon
pufferScienceAvailability[puffer] = (puffer_samples * in_science_zone)
PufferTaskReward_housekeeping = {'short_range_image': ScenarioParameters['Tasks']['TaskReward']['short_range_image'], 'vo_localization': ScenarioParameters['Tasks']['TaskReward']['vo_localization'], 'plan_path': ScenarioParameters['Tasks']['TaskReward']['plan_path'], 'send_drive_cmd': ScenarioParameters['Tasks']['TaskReward']['send_drive_cmd']}
PufferTaskReward_science = {'take_sample': ScenarioParameters['Tasks']['TaskReward']['take_sample'], 'analyze_sample': ScenarioParameters['Tasks']['TaskReward']['analyze_sample'], 'store_sample': ScenarioParameters['Tasks']['TaskReward']['store_sample']}
PufferOptionalTask_housekeeping = {'short_range_image': False, 'vo_localization': False, 'plan_path': False, 'send_drive_cmd': False}
PufferOptionalTask_science = {'take_sample': True, 'analyze_sample': True, 'store_sample': True}
PufferProductSizes_housekeeping = {'short_range_image': ScenarioParameters['Tasks']['ProductsSize']['short_range_image'], 'vo_localization': ScenarioParameters['Tasks']['ProductsSize']['vo_localization'], 'plan_path': ScenarioParameters['Tasks']['ProductsSize']['plan_path'], 'send_drive_cmd': ScenarioParameters['Tasks']['ProductsSize']['send_drive_cmd']}
PufferProductSizes_science = {'take_sample': ScenarioParameters['Tasks']['ProductsSize']['take_sample'], 'analyze_sample': ScenarioParameters['Tasks']['ProductsSize']['analyze_sample'], 'store_sample': ScenarioParameters['Tasks']['ProductsSize']['store_sample']}
pufferOwnHousekeepingTasks = ['short_range_image', 'vo_localization', 'plan_path', 'send_drive_cmd']
pufferOwnScienceTasks = ['take_sample', 'analyze_sample', 'store_sample']
RelocatableTasks = ['vo_localization', 'plan_path', 'analyze_sample', 'store_sample']
PufferDependencyList_housekeeping = {'short_range_image': [[]], 'vo_localization': [['short_range_image']], 'plan_path': [['vo_localization']], 'send_drive_cmd': [['plan_path']]}
PufferDependencyList_science = {'take_sample': [[]], 'analyze_sample': [['take_sample']], 'store_sample': [['analyze_sample']]}
PufferIncompatibleTasks_housekeeping = []
PufferIncompatibleTasks_science = []
AllOptionalTasks = {}
AllTaskReward = {}
AllProductSizes = {}
AllDependencyList = {}
AllIncompatibleTasks = []
for pufferName in PufferNames:
Temp_PufferOptionalTask = {('{}:'.format(pufferName) + k): v for (k, v) in PufferOptionalTask_housekeeping.items()}
Temp_PufferTaskReward = {('{}:'.format(pufferName) + k): v for (k, v) in PufferTaskReward_housekeeping.items()}
Temp_PufferProductSizes = {('{}:'.format(pufferName) + k): v for (k, v) in PufferProductSizes_housekeeping.items()}
for scienceNo in range(pufferScienceAvailability[pufferName]):
Temp_PufferOptionalTask.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferOptionalTask_science.items()})
Temp_PufferTaskReward.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferTaskReward_science.items()})
Temp_PufferProductSizes.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferProductSizes_science.items()})
for (k, v) in PufferDependencyList_housekeeping.items():
AllDeps = []
for ConjDependency in v:
AllDeps.append([('{}:'.format(pufferName) + DisjDependency) for DisjDependency in ConjDependency])
AllDependencyList.update({('{}:'.format(pufferName) + k): AllDeps})
for scienceNo in range(pufferScienceAvailability[pufferName]):
for (k, v) in PufferDependencyList_science.items():
AllDeps = []
for ConjDependency in v:
AllDeps.append([('{}:{}'.format(pufferName, scienceNo) + DisjDependency) for DisjDependency in ConjDependency])
AllDependencyList.update({('{}:{}'.format(pufferName, scienceNo) + k): AllDeps})
for BadPairing in PufferIncompatibleTasks_housekeeping:
Temp_BadPairing = [('{}:'.format(pufferName) + Task) for Task in BadPairing]
AllIncompatibleTasks.append(Temp_BadPairing)
for scienceNo in range(pufferScienceAvailability[pufferName]):
for BadPairing in PufferIncompatibleTasks_science:
Temp_BadPairing = [('{}:{}'.format(pufferName, scienceNo) + Task) for Task in BadPairing]
AllIncompatibleTasks.append(Temp_BadPairing)
AllOptionalTasks.update(Temp_PufferOptionalTask)
AllTaskReward.update(Temp_PufferTaskReward)
AllProductSizes.update(Temp_PufferProductSizes)
AllComputationTime = {}
AllComputationLoad = {}
AllInitialInformation = {}
for pufferName in PufferNames:
for task in pufferOwnHousekeepingTasks:
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][pufferName].get(task, InfeasibleTime)
AllComputationTime.update({('{}:'.format(pufferName) + task): {pufferName: tasktime}})
AllComputationLoad.update({('{}:'.format(pufferName) + task): {pufferName: 1.0}})
AllInitialInformation.update({('{}:'.format(pufferName) + task): {pufferName: False}})
if (BaseStationName in AgentNames):
for task in pufferOwnHousekeepingTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][BaseStationName].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:'.format(pufferName) + task)][BaseStationName] = tasktime
AllComputationLoad[('{}:'.format(pufferName) + task)][BaseStationName] = 1.0
AllInitialInformation[('{}:'.format(pufferName) + task)][BaseStationName] = False
for otherPuffer in PufferNames:
if (otherPuffer != pufferName):
for task in pufferOwnHousekeepingTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][otherPuffer].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:'.format(pufferName) + task)][otherPuffer] = tasktime
AllComputationLoad[('{}:'.format(pufferName) + task)][otherPuffer] = 1.0
AllInitialInformation[('{}:'.format(pufferName) + task)][otherPuffer] = False
for scienceNo in range(pufferScienceAvailability[pufferName]):
for task in pufferOwnScienceTasks:
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][pufferName].get(task, InfeasibleTime)
AllComputationTime.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: tasktime}})
AllComputationLoad.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: 1.0}})
AllInitialInformation.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: False}})
if (BaseStationName in AgentNames):
for task in pufferOwnScienceTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][BaseStationName].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = tasktime
AllComputationLoad[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = 1.0
AllInitialInformation[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = False
for otherPuffer in PufferNames:
if (otherPuffer != pufferName):
for task in pufferOwnScienceTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][otherPuffer].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = tasktime
AllComputationLoad[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = 1.0
AllInitialInformation[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = False
for (task, val) in AllComputationTime.items():
for (agent, tasktime) in val.items():
AllComputationTime[task][agent] = int(np.ceil((float(tasktime) / float(TimeStep))))
AllTaskColors = {}
PufferTaskList_housekeeping = PufferOptionalTask_housekeeping.keys()
PufferTaskList_science = PufferOptionalTask_science.keys()
PufferNo = 0
N_puffers = len(PufferNames)
for pufferName in PufferNames:
TempPufferHSVColor = np.array([(float(PufferNo) / float(N_puffers)), 0.0, 0.85])
PufferTaskCounter = 0
NumPufferTasks = len((list(PufferTaskList_housekeeping) + list(PufferTaskList_science)))
for PufferTask in PufferTaskList_housekeeping:
TempTaskHSVColor = np.array([0.0, (0.2 + (0.8 * (1.0 - (float(PufferTaskCounter) / float((NumPufferTasks - 1)))))), 0.0])
TaskRGBColor = np.array(colorsys.hsv_to_rgb((TempPufferHSVColor[0] + TempTaskHSVColor[0]), (TempPufferHSVColor[1] + TempTaskHSVColor[1]), (TempPufferHSVColor[2] + TempTaskHSVColor[2])))
AllTaskColors[('{}:'.format(pufferName) + PufferTask)] = TaskRGBColor
PufferTaskCounter += 1
for PufferTask in PufferTaskList_science:
TempTaskHSVColor = np.array([0.0, (0.2 + (0.8 * (1.0 - (float(PufferTaskCounter) / float((NumPufferTasks - 1)))))), 0.0])
TaskRGBColor = np.array(colorsys.hsv_to_rgb((TempPufferHSVColor[0] + TempTaskHSVColor[0]), (TempPufferHSVColor[1] + TempTaskHSVColor[1]), (TempPufferHSVColor[2] + TempTaskHSVColor[2])))
for scienceNo in range(pufferScienceAvailability[pufferName]):
AllTaskColors[('{}:{}'.format(pufferName, scienceNo) + PufferTask)] = TaskRGBColor
PufferTaskCounter += 1
PufferNo += 1
self.AllTaskColors = AllTaskColors
Tasks = MOSAICSolver.MILPTasks(OptionalTasks=AllOptionalTasks, TaskReward=AllTaskReward, ProductsSize=AllProductSizes, DependencyList=AllDependencyList, IncompatibleTasks=AllIncompatibleTasks)
AgentCapabilities = MOSAICSolver.MILPAgentCapabilities(ComputationTime=AllComputationTime, ComputationLoad=AllComputationLoad, InitialInformation=AllInitialInformation)
self.Tasks = Tasks
self.AgentCapabilities = AgentCapabilities
self.Thor = Thor
self.isSetUp = True | def buildScheduler(self):
' '
if (self.isPartitioned is False):
self.partition()
ScenarioParameters = self.ScenarioParameters
BaseStationName = self.BaseStationName
agent = ScenarioParameters['Agent']
AgentNames = [agent]
for component in self.ConnectedComponents:
if (agent in component):
AgentNames = component
break
self.AgentNames = AgentNames
self.agent = agent
PufferNames = list(AgentNames)
if (BaseStationName in PufferNames):
PufferNames.remove(BaseStationName)
Thor_s = ScenarioParameters['Time']['TimeHorizon']
TimeStep = self.TimeStep
Thor = int((float(Thor_s) / float(TimeStep)))
InfeasibleTime = (2 * Thor_s)
CommWindowsBandwidth = {}
for t in range(Thor):
CommWindowsBandwidth[t] = {}
for Agent1 in AgentNames:
CommWindowsBandwidth[t][Agent1] = {}
for Agent2 in AgentNames:
CommWindowsBandwidth[t][Agent1][Agent2] = 0
for link in ScenarioParameters['CommunicationNetwork']:
Agent1 = link['origin']
Agent2 = link['destination']
for t in range(Thor):
if ((Agent1 in AgentNames) and (Agent2 in AgentNames)):
CommWindowsBandwidth[t][Agent1][Agent2] = link['bandwidth']
achievable_science_in_horizon = 3
pufferScienceAvailability = {}
for puffer in PufferNames:
in_science_zone = ScenarioParameters['AgentStates'][puffer]['in_science_zone']
if ('samples' in ScenarioParameters['AgentStates'][puffer].keys()):
puffer_samples = ScenarioParameters['AgentStates'][puffer]['samples']
else:
puffer_samples = achievable_science_in_horizon
pufferScienceAvailability[puffer] = (puffer_samples * in_science_zone)
PufferTaskReward_housekeeping = {'short_range_image': ScenarioParameters['Tasks']['TaskReward']['short_range_image'], 'vo_localization': ScenarioParameters['Tasks']['TaskReward']['vo_localization'], 'plan_path': ScenarioParameters['Tasks']['TaskReward']['plan_path'], 'send_drive_cmd': ScenarioParameters['Tasks']['TaskReward']['send_drive_cmd']}
PufferTaskReward_science = {'take_sample': ScenarioParameters['Tasks']['TaskReward']['take_sample'], 'analyze_sample': ScenarioParameters['Tasks']['TaskReward']['analyze_sample'], 'store_sample': ScenarioParameters['Tasks']['TaskReward']['store_sample']}
PufferOptionalTask_housekeeping = {'short_range_image': False, 'vo_localization': False, 'plan_path': False, 'send_drive_cmd': False}
PufferOptionalTask_science = {'take_sample': True, 'analyze_sample': True, 'store_sample': True}
PufferProductSizes_housekeeping = {'short_range_image': ScenarioParameters['Tasks']['ProductsSize']['short_range_image'], 'vo_localization': ScenarioParameters['Tasks']['ProductsSize']['vo_localization'], 'plan_path': ScenarioParameters['Tasks']['ProductsSize']['plan_path'], 'send_drive_cmd': ScenarioParameters['Tasks']['ProductsSize']['send_drive_cmd']}
PufferProductSizes_science = {'take_sample': ScenarioParameters['Tasks']['ProductsSize']['take_sample'], 'analyze_sample': ScenarioParameters['Tasks']['ProductsSize']['analyze_sample'], 'store_sample': ScenarioParameters['Tasks']['ProductsSize']['store_sample']}
pufferOwnHousekeepingTasks = ['short_range_image', 'vo_localization', 'plan_path', 'send_drive_cmd']
pufferOwnScienceTasks = ['take_sample', 'analyze_sample', 'store_sample']
RelocatableTasks = ['vo_localization', 'plan_path', 'analyze_sample', 'store_sample']
PufferDependencyList_housekeeping = {'short_range_image': [[]], 'vo_localization': [['short_range_image']], 'plan_path': [['vo_localization']], 'send_drive_cmd': [['plan_path']]}
PufferDependencyList_science = {'take_sample': [[]], 'analyze_sample': [['take_sample']], 'store_sample': [['analyze_sample']]}
PufferIncompatibleTasks_housekeeping = []
PufferIncompatibleTasks_science = []
AllOptionalTasks = {}
AllTaskReward = {}
AllProductSizes = {}
AllDependencyList = {}
AllIncompatibleTasks = []
for pufferName in PufferNames:
Temp_PufferOptionalTask = {('{}:'.format(pufferName) + k): v for (k, v) in PufferOptionalTask_housekeeping.items()}
Temp_PufferTaskReward = {('{}:'.format(pufferName) + k): v for (k, v) in PufferTaskReward_housekeeping.items()}
Temp_PufferProductSizes = {('{}:'.format(pufferName) + k): v for (k, v) in PufferProductSizes_housekeeping.items()}
for scienceNo in range(pufferScienceAvailability[pufferName]):
Temp_PufferOptionalTask.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferOptionalTask_science.items()})
Temp_PufferTaskReward.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferTaskReward_science.items()})
Temp_PufferProductSizes.update({('{}:{}'.format(pufferName, scienceNo) + k): v for (k, v) in PufferProductSizes_science.items()})
for (k, v) in PufferDependencyList_housekeeping.items():
AllDeps = []
for ConjDependency in v:
AllDeps.append([('{}:'.format(pufferName) + DisjDependency) for DisjDependency in ConjDependency])
AllDependencyList.update({('{}:'.format(pufferName) + k): AllDeps})
for scienceNo in range(pufferScienceAvailability[pufferName]):
for (k, v) in PufferDependencyList_science.items():
AllDeps = []
for ConjDependency in v:
AllDeps.append([('{}:{}'.format(pufferName, scienceNo) + DisjDependency) for DisjDependency in ConjDependency])
AllDependencyList.update({('{}:{}'.format(pufferName, scienceNo) + k): AllDeps})
for BadPairing in PufferIncompatibleTasks_housekeeping:
Temp_BadPairing = [('{}:'.format(pufferName) + Task) for Task in BadPairing]
AllIncompatibleTasks.append(Temp_BadPairing)
for scienceNo in range(pufferScienceAvailability[pufferName]):
for BadPairing in PufferIncompatibleTasks_science:
Temp_BadPairing = [('{}:{}'.format(pufferName, scienceNo) + Task) for Task in BadPairing]
AllIncompatibleTasks.append(Temp_BadPairing)
AllOptionalTasks.update(Temp_PufferOptionalTask)
AllTaskReward.update(Temp_PufferTaskReward)
AllProductSizes.update(Temp_PufferProductSizes)
AllComputationTime = {}
AllComputationLoad = {}
AllInitialInformation = {}
for pufferName in PufferNames:
for task in pufferOwnHousekeepingTasks:
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][pufferName].get(task, InfeasibleTime)
AllComputationTime.update({('{}:'.format(pufferName) + task): {pufferName: tasktime}})
AllComputationLoad.update({('{}:'.format(pufferName) + task): {pufferName: 1.0}})
AllInitialInformation.update({('{}:'.format(pufferName) + task): {pufferName: False}})
if (BaseStationName in AgentNames):
for task in pufferOwnHousekeepingTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][BaseStationName].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:'.format(pufferName) + task)][BaseStationName] = tasktime
AllComputationLoad[('{}:'.format(pufferName) + task)][BaseStationName] = 1.0
AllInitialInformation[('{}:'.format(pufferName) + task)][BaseStationName] = False
for otherPuffer in PufferNames:
if (otherPuffer != pufferName):
for task in pufferOwnHousekeepingTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][otherPuffer].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:'.format(pufferName) + task)][otherPuffer] = tasktime
AllComputationLoad[('{}:'.format(pufferName) + task)][otherPuffer] = 1.0
AllInitialInformation[('{}:'.format(pufferName) + task)][otherPuffer] = False
for scienceNo in range(pufferScienceAvailability[pufferName]):
for task in pufferOwnScienceTasks:
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][pufferName].get(task, InfeasibleTime)
AllComputationTime.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: tasktime}})
AllComputationLoad.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: 1.0}})
AllInitialInformation.update({('{}:{}'.format(pufferName, scienceNo) + task): {pufferName: False}})
if (BaseStationName in AgentNames):
for task in pufferOwnScienceTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][BaseStationName].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = tasktime
AllComputationLoad[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = 1.0
AllInitialInformation[('{}:{}'.format(pufferName, scienceNo) + task)][BaseStationName] = False
for otherPuffer in PufferNames:
if (otherPuffer != pufferName):
for task in pufferOwnScienceTasks:
if (task in RelocatableTasks):
tasktime = ScenarioParameters['AgentCapabilities']['ComputationTime'][otherPuffer].get(task, InfeasibleTime)
else:
tasktime = InfeasibleTime
AllComputationTime[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = tasktime
AllComputationLoad[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = 1.0
AllInitialInformation[('{}:{}'.format(pufferName, scienceNo) + task)][otherPuffer] = False
for (task, val) in AllComputationTime.items():
for (agent, tasktime) in val.items():
AllComputationTime[task][agent] = int(np.ceil((float(tasktime) / float(TimeStep))))
AllTaskColors = {}
PufferTaskList_housekeeping = PufferOptionalTask_housekeeping.keys()
PufferTaskList_science = PufferOptionalTask_science.keys()
PufferNo = 0
N_puffers = len(PufferNames)
for pufferName in PufferNames:
TempPufferHSVColor = np.array([(float(PufferNo) / float(N_puffers)), 0.0, 0.85])
PufferTaskCounter = 0
NumPufferTasks = len((list(PufferTaskList_housekeeping) + list(PufferTaskList_science)))
for PufferTask in PufferTaskList_housekeeping:
TempTaskHSVColor = np.array([0.0, (0.2 + (0.8 * (1.0 - (float(PufferTaskCounter) / float((NumPufferTasks - 1)))))), 0.0])
TaskRGBColor = np.array(colorsys.hsv_to_rgb((TempPufferHSVColor[0] + TempTaskHSVColor[0]), (TempPufferHSVColor[1] + TempTaskHSVColor[1]), (TempPufferHSVColor[2] + TempTaskHSVColor[2])))
AllTaskColors[('{}:'.format(pufferName) + PufferTask)] = TaskRGBColor
PufferTaskCounter += 1
for PufferTask in PufferTaskList_science:
TempTaskHSVColor = np.array([0.0, (0.2 + (0.8 * (1.0 - (float(PufferTaskCounter) / float((NumPufferTasks - 1)))))), 0.0])
TaskRGBColor = np.array(colorsys.hsv_to_rgb((TempPufferHSVColor[0] + TempTaskHSVColor[0]), (TempPufferHSVColor[1] + TempTaskHSVColor[1]), (TempPufferHSVColor[2] + TempTaskHSVColor[2])))
for scienceNo in range(pufferScienceAvailability[pufferName]):
AllTaskColors[('{}:{}'.format(pufferName, scienceNo) + PufferTask)] = TaskRGBColor
PufferTaskCounter += 1
PufferNo += 1
self.AllTaskColors = AllTaskColors
Tasks = MOSAICSolver.MILPTasks(OptionalTasks=AllOptionalTasks, TaskReward=AllTaskReward, ProductsSize=AllProductSizes, DependencyList=AllDependencyList, IncompatibleTasks=AllIncompatibleTasks)
AgentCapabilities = MOSAICSolver.MILPAgentCapabilities(ComputationTime=AllComputationTime, ComputationLoad=AllComputationLoad, InitialInformation=AllInitialInformation)
self.Tasks = Tasks
self.AgentCapabilities = AgentCapabilities
self.Thor = Thor
self.isSetUp = True<|docstring|>Creates the problem instance to be solved by the scheduler as a JSON file<|endoftext|> |
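Two small numeric conventions carry most of the weight in buildScheduler: the horizon is discretized as Thor = TimeHorizon / TimeStep, every task duration is rounded up to whole time steps, and InfeasibleTime = 2 * TimeHorizon serves as a sentinel for tasks an agent cannot run. The snippet below simply replays that arithmetic with made-up durations.

```python
import numpy as np

Thor_s, TimeStep = 60, 2
Thor = int(float(Thor_s) / float(TimeStep))   # 30 discrete steps in the horizon
InfeasibleTime = 2 * Thor_s                   # sentinel duration that cannot fit

# Durations in seconds (illustrative values) -> integer steps, rounded up so a
# task that fits in continuous time still fits after discretization.
durations_s = {"short_range_image": 3.5, "plan_path": 4.0, "take_sample": 7.1}
durations_steps = {task: int(np.ceil(t / TimeStep)) for task, t in durations_s.items()}
print(Thor, durations_steps)  # 30 {'short_range_image': 2, 'plan_path': 2, 'take_sample': 4}
```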
ecaa708018c61063a5891a42c2cec6def92c09ff92395a07cda7fbf344cde12e | def _apply_prediction(G, func, ebunch=None):
'Applies the given function to each edge in the specified iterable\n of edges.\n\n `G` is an instance of :class:`networkx.Graph`.\n\n `ebunch` is an iterable of pairs of nodes. If not specified, all\n non-edges in the graph `G` will be used.\n\n '
if (ebunch is None):
ebunch = nx.non_edges(G)
return sorted([(u, v, func(G, u, v)) for (u, v) in ebunch], key=(lambda t: t[2]), reverse=True) | Applies the given function to each edge in the specified iterable
of edges.
`G` is an instance of :class:`networkx.Graph`.
`ebunch` is an iterable of pairs of nodes. If not specified, all
non-edges in the graph `G` will be used. | networkx/algorithms/link_prediction.py | _apply_prediction | MingshanJia/networkx | 10 | python | def _apply_prediction(G, func, ebunch=None):
'Applies the given function to each edge in the specified iterable\n of edges.\n\n `G` is an instance of :class:`networkx.Graph`.\n\n `ebunch` is an iterable of pairs of nodes. If not specified, all\n non-edges in the graph `G` will be used.\n\n '
if (ebunch is None):
ebunch = nx.non_edges(G)
return sorted([(u, v, func(G, u, v)) for (u, v) in ebunch], key=(lambda t: t[2]), reverse=True) | def _apply_prediction(G, func, ebunch=None):
'Applies the given function to each edge in the specified iterable\n of edges.\n\n `G` is an instance of :class:`networkx.Graph`.\n\n `ebunch` is an iterable of pairs of nodes. If not specified, all\n non-edges in the graph `G` will be used.\n\n '
if (ebunch is None):
ebunch = nx.non_edges(G)
return sorted([(u, v, func(G, u, v)) for (u, v) in ebunch], key=(lambda t: t[2]), reverse=True)<|docstring|>Applies the given function to each edge in the specified iterable
of edges.
`G` is an instance of :class:`networkx.Graph`.
`ebunch` is an iterable of pairs of nodes. If not specified, all
non-edges in the graph `G` will be used.<|endoftext|> |
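To see what the helper returns, the sketch below scores every non-edge of a small path graph with a trivial scoring function; it assumes the `_apply_prediction` helper from the row above is in scope.

```python
import networkx as nx

G = nx.path_graph(4)  # edges 0-1, 1-2, 2-3

def common_neighbor_count(G, u, v):
    # Trivial stand-in scoring function: number of shared neighbors.
    return len(list(nx.common_neighbors(G, u, v)))

ranked = _apply_prediction(G, common_neighbor_count)  # assumes the helper above is importable
print(ranked)  # e.g. [(0, 2, 1), (1, 3, 1), (0, 3, 0)] -- sorted by score, highest first
```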
85fd6632a1a63d27ea1c5011e52cbc3058ab79d93632f5dc7e9caabe1eab6aef | @not_implemented_for('multigraph')
def resource_allocation_index(G, ebunch=None):
'Compute the resource allocation index of all node pairs in ebunch.\n\n Resource allocation index of `u` and `v` is defined as\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{1}{|\\Gamma(w)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Resource allocation index will be computed for each pair of\n nodes given in the iterable. The pairs must be given as\n 2-tuples (u, v) where u and v are nodes in the graph. If ebunch\n is None then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their resource allocation index.\n\n References\n ----------\n .. [1] T. Zhou, L. Lu, Y.-C. Zhang.\n Predicting missing links via local information.\n Eur. Phys. J. B 71 (2009) 623.\n https://arxiv.org/pdf/0901.0553.pdf\n '
def predict(G, u, v):
if G.is_directed():
return sum(((1 / G.degree(w)) for w in nx.directed_common_neighbors(G, u, v)))
else:
return sum(((1 / G.degree(w)) for w in nx.common_neighbors(G, u, v)))
return _apply_prediction(G, predict, ebunch) | Compute the resource allocation index of all node pairs in ebunch.
Resource allocation index of `u` and `v` is defined as
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{|\Gamma(w)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Resource allocation index will be computed for each pair of
nodes given in the iterable. The pairs must be given as
2-tuples (u, v) where u and v are nodes in the graph. If ebunch
is None then all non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their resource allocation index.
References
----------
.. [1] T. Zhou, L. Lu, Y.-C. Zhang.
Predicting missing links via local information.
Eur. Phys. J. B 71 (2009) 623.
https://arxiv.org/pdf/0901.0553.pdf | networkx/algorithms/link_prediction.py | resource_allocation_index | MingshanJia/networkx | 10 | python | @not_implemented_for('multigraph')
def resource_allocation_index(G, ebunch=None):
'Compute the resource allocation index of all node pairs in ebunch.\n\n Resource allocation index of `u` and `v` is defined as\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{1}{|\\Gamma(w)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Resource allocation index will be computed for each pair of\n nodes given in the iterable. The pairs must be given as\n 2-tuples (u, v) where u and v are nodes in the graph. If ebunch\n is None then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their resource allocation index.\n\n References\n ----------\n .. [1] T. Zhou, L. Lu, Y.-C. Zhang.\n Predicting missing links via local information.\n Eur. Phys. J. B 71 (2009) 623.\n https://arxiv.org/pdf/0901.0553.pdf\n '
def predict(G, u, v):
if G.is_directed():
return sum(((1 / G.degree(w)) for w in nx.directed_common_neighbors(G, u, v)))
else:
return sum(((1 / G.degree(w)) for w in nx.common_neighbors(G, u, v)))
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('multigraph')
def resource_allocation_index(G, ebunch=None):
'Compute the resource allocation index of all node pairs in ebunch.\n\n Resource allocation index of `u` and `v` is defined as\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{1}{|\\Gamma(w)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Resource allocation index will be computed for each pair of\n nodes given in the iterable. The pairs must be given as\n 2-tuples (u, v) where u and v are nodes in the graph. If ebunch\n is None then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their resource allocation index.\n\n References\n ----------\n .. [1] T. Zhou, L. Lu, Y.-C. Zhang.\n Predicting missing links via local information.\n Eur. Phys. J. B 71 (2009) 623.\n https://arxiv.org/pdf/0901.0553.pdf\n '
def predict(G, u, v):
if G.is_directed():
return sum(((1 / G.degree(w)) for w in nx.directed_common_neighbors(G, u, v)))
else:
return sum(((1 / G.degree(w)) for w in nx.common_neighbors(G, u, v)))
return _apply_prediction(G, predict, ebunch)<|docstring|>Compute the resource allocation index of all node pairs in ebunch.
Resource allocation index of `u` and `v` is defined as
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{|\Gamma(w)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Resource allocation index will be computed for each pair of
nodes given in the iterable. The pairs must be given as
2-tuples (u, v) where u and v are nodes in the graph. If ebunch
is None then all non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their resource allocation index.
References
----------
.. [1] T. Zhou, L. Lu, Y.-C. Zhang.
Predicting missing links via local information.
Eur. Phys. J. B 71 (2009) 623.
https://arxiv.org/pdf/0901.0553.pdf<|endoftext|> |
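A quick worked example of the index using the public networkx entry point: for a candidate pair (u, v), the score is the sum of 1/deg(w) over the common neighbors w of u and v.

```python
import networkx as nx

# Diamond graph (K4 minus the 0-3 edge): nodes 1 and 2 both have degree 3.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])

for u, v, p in nx.resource_allocation_index(G, [(0, 3)]):
    # Common neighbors of 0 and 3 are {1, 2}, so p = 1/3 + 1/3 ≈ 0.667.
    print(f"({u}, {v}) -> {p:.3f}")
```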
7cdeecfee710f0c5c0a8554e449f5d666478b17adcfcdebffdfd85155b2b0967 | @not_implemented_for('multigraph')
def jaccard_coefficient(G, ebunch=None):
'Compute the Jaccard coefficient of all node pairs in ebunch.\n\n Jaccard coefficient of nodes `u` and `v` is defined as\n\n .. math::\n\n \\frac{|\\Gamma(u) \\cap \\Gamma(v)|}{|\\Gamma(u) \\cup \\Gamma(v)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Jaccard coefficient will be computed for each pair of nodes\n given in the iterable. The pairs must be given as 2-tuples\n (u, v) where u and v are nodes in the graph. If ebunch is None\n then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their Jaccard coefficient.\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n '
def predict(G, u, v):
if G.is_directed():
union_size = len((set(G._succ[u]) | set(G._pred[v])))
if (union_size == 0):
return 0
return (len(list(nx.directed_common_neighbors(G, u, v))) / union_size)
else:
union_size = len((set(G[u]) | set(G[v])))
if (union_size == 0):
return 0
return (len(list(nx.common_neighbors(G, u, v))) / union_size)
return _apply_prediction(G, predict, ebunch) | Compute the Jaccard coefficient of all node pairs in ebunch.
Jaccard coefficient of nodes `u` and `v` is defined as
.. math::
\frac{|\Gamma(u) \cap \Gamma(v)|}{|\Gamma(u) \cup \Gamma(v)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Jaccard coefficient will be computed for each pair of nodes
given in the iterable. The pairs must be given as 2-tuples
(u, v) where u and v are nodes in the graph. If ebunch is None
then all non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their Jaccard coefficient.
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf | networkx/algorithms/link_prediction.py | jaccard_coefficient | MingshanJia/networkx | 10 | python | @not_implemented_for('multigraph')
def jaccard_coefficient(G, ebunch=None):
'Compute the Jaccard coefficient of all node pairs in ebunch.\n\n Jaccard coefficient of nodes `u` and `v` is defined as\n\n .. math::\n\n \\frac{|\\Gamma(u) \\cap \\Gamma(v)|}{|\\Gamma(u) \\cup \\Gamma(v)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Jaccard coefficient will be computed for each pair of nodes\n given in the iterable. The pairs must be given as 2-tuples\n (u, v) where u and v are nodes in the graph. If ebunch is None\n then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their Jaccard coefficient.\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n '
def predict(G, u, v):
if G.is_directed():
union_size = len((set(G._succ[u]) | set(G._pred[v])))
if (union_size == 0):
return 0
return (len(list(nx.directed_common_neighbors(G, u, v))) / union_size)
else:
union_size = len((set(G[u]) | set(G[v])))
if (union_size == 0):
return 0
return (len(list(nx.common_neighbors(G, u, v))) / union_size)
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('multigraph')
def jaccard_coefficient(G, ebunch=None):
'Compute the Jaccard coefficient of all node pairs in ebunch.\n\n Jaccard coefficient of nodes `u` and `v` is defined as\n\n .. math::\n\n \\frac{|\\Gamma(u) \\cap \\Gamma(v)|}{|\\Gamma(u) \\cup \\Gamma(v)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Jaccard coefficient will be computed for each pair of nodes\n given in the iterable. The pairs must be given as 2-tuples\n (u, v) where u and v are nodes in the graph. If ebunch is None\n then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their Jaccard coefficient.\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n '
def predict(G, u, v):
if G.is_directed():
union_size = len((set(G._succ[u]) | set(G._pred[v])))
if (union_size == 0):
return 0
return (len(list(nx.directed_common_neighbors(G, u, v))) / union_size)
else:
union_size = len((set(G[u]) | set(G[v])))
if (union_size == 0):
return 0
return (len(list(nx.common_neighbors(G, u, v))) / union_size)
return _apply_prediction(G, predict, ebunch)<|docstring|>Compute the Jaccard coefficient of all node pairs in ebunch.
Jaccard coefficient of nodes `u` and `v` is defined as
.. math::
\frac{|\Gamma(u) \cap \Gamma(v)|}{|\Gamma(u) \cup \Gamma(v)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Jaccard coefficient will be computed for each pair of nodes
given in the iterable. The pairs must be given as 2-tuples
(u, v) where u and v are nodes in the graph. If ebunch is None
then all non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their Jaccard coefficient.
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf<|endoftext|> |
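A usage sketch (illustrative addition, not from the source record): in nx.complete_graph(5) the pair (0, 1) has 3 common neighbors and a neighborhood union of size 5, so the coefficient is 3/5 = 0.6.
>>> import networkx as nx
>>> G = nx.complete_graph(5)
>>> preds = nx.jaccard_coefficient(G, [(0, 1), (2, 3)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 1) -> 0.60000000
(2, 3) -> 0.60000000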
5cbf3ffc338010bc36d709bcecd497780f00eff2f218157a42352db27a97989e | @not_implemented_for('multigraph')
def adamic_adar_index(G, ebunch=None):
'Compute the Adamic-Adar index of all node pairs in ebunch.\n\n Adamic-Adar index of `u` and `v` is defined as\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{1}{\\log |\\Gamma(w)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n This index leads to zero-division for nodes only connected via self-loops.\n It is intended to be used when no self-loops are present.\n\n Parameters\n ----------\n G : graph\n NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Adamic-Adar index will be computed for each pair of nodes given\n in the iterable. The pairs must be given as 2-tuples (u, v)\n where u and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their Adamic-Adar index.\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n '
def predict(G, u, v):
if G.is_directed():
return sum(((1 / log(G.degree(w))) for w in nx.directed_common_neighbors(G, u, v)))
else:
return sum(((1 / log(G.degree(w))) for w in nx.common_neighbors(G, u, v)))
return _apply_prediction(G, predict, ebunch) | Compute the Adamic-Adar index of all node pairs in ebunch.
Adamic-Adar index of `u` and `v` is defined as
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{\log |\Gamma(w)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
This index leads to zero-division for nodes only connected via self-loops.
It is intended to be used when no self-loops are present.
Parameters
----------
G : graph
NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Adamic-Adar index will be computed for each pair of nodes given
in the iterable. The pairs must be given as 2-tuples (u, v)
where u and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their Adamic-Adar index.
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf | networkx/algorithms/link_prediction.py | adamic_adar_index | MingshanJia/networkx | 10 | python | @not_implemented_for('multigraph')
def adamic_adar_index(G, ebunch=None):
'Compute the Adamic-Adar index of all node pairs in ebunch.\n\n Adamic-Adar index of `u` and `v` is defined as\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{1}{\\log |\\Gamma(w)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n This index leads to zero-division for nodes only connected via self-loops.\n It is intended to be used when no self-loops are present.\n\n Parameters\n ----------\n G : graph\n NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Adamic-Adar index will be computed for each pair of nodes given\n in the iterable. The pairs must be given as 2-tuples (u, v)\n where u and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their Adamic-Adar index.\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n '
def predict(G, u, v):
if G.is_directed():
return sum(((1 / log(G.degree(w))) for w in nx.directed_common_neighbors(G, u, v)))
else:
return sum(((1 / log(G.degree(w))) for w in nx.common_neighbors(G, u, v)))
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('multigraph')
def adamic_adar_index(G, ebunch=None):
'Compute the Adamic-Adar index of all node pairs in ebunch.\n\n Adamic-Adar index of `u` and `v` is defined as\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{1}{\\log |\\Gamma(w)|}\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n This index leads to zero-division for nodes only connected via self-loops.\n It is intended to be used when no self-loops are present.\n\n Parameters\n ----------\n G : graph\n NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Adamic-Adar index will be computed for each pair of nodes given\n in the iterable. The pairs must be given as 2-tuples (u, v)\n where u and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their Adamic-Adar index.\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n '
def predict(G, u, v):
if G.is_directed():
return sum(((1 / log(G.degree(w))) for w in nx.directed_common_neighbors(G, u, v)))
else:
return sum(((1 / log(G.degree(w))) for w in nx.common_neighbors(G, u, v)))
return _apply_prediction(G, predict, ebunch)<|docstring|>Compute the Adamic-Adar index of all node pairs in ebunch.
Adamic-Adar index of `u` and `v` is defined as
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{\log |\Gamma(w)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
This index leads to zero-division for nodes only connected via self-loops.
It is intended to be used when no self-loops are present.
Parameters
----------
G : graph
NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Adamic-Adar index will be computed for each pair of nodes given
in the iterable. The pairs must be given as 2-tuples (u, v)
where u and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their Adamic-Adar index.
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf<|endoftext|> |
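A usage sketch (illustrative addition, not from the source record): in nx.complete_graph(5) each of the 3 common neighbors of (0, 1) has degree 4, so the index is 3 / log(4), about 2.164 with the natural logarithm used above.
>>> import networkx as nx
>>> G = nx.complete_graph(5)
>>> preds = nx.adamic_adar_index(G, [(0, 1), (2, 3)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 1) -> 2.16404256
(2, 3) -> 2.16404256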
aadb93424500f36691a3c5b74813da3ef2b29c5dc89a79715b6f9231ab66075a | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def preferential_attachment(G, ebunch=None):
"Compute the preferential attachment score of all node pairs in ebunch.\n\n Preferential attachment score of `u` and `v` is defined as\n\n .. math::\n\n |\\Gamma(u)| |\\Gamma(v)|\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Preferential attachment score will be computed for each pair of\n nodes given in the iterable. The pairs must be given as\n 2-tuples (u, v) where u and v are nodes in the graph. If ebunch\n is None then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their preferential attachment score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.complete_graph(5)\n >>> preds = nx.preferential_attachment(G, [(0, 1), (2, 3)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p}')\n (0, 1) -> 16\n (2, 3) -> 16\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n "
def predict(u, v):
return (G.degree(u) * G.degree(v))
return _apply_prediction(G, predict, ebunch) | Compute the preferential attachment score of all node pairs in ebunch.
Preferential attachment score of `u` and `v` is defined as
.. math::
|\Gamma(u)| |\Gamma(v)|
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Preferential attachment score will be computed for each pair of
nodes given in the iterable. The pairs must be given as
2-tuples (u, v) where u and v are nodes in the graph. If ebunch
is None then all non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their preferential attachment score.
Examples
--------
>>> import networkx as nx
>>> G = nx.complete_graph(5)
>>> preds = nx.preferential_attachment(G, [(0, 1), (2, 3)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p}')
(0, 1) -> 16
(2, 3) -> 16
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf | networkx/algorithms/link_prediction.py | preferential_attachment | MingshanJia/networkx | 10 | python | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def preferential_attachment(G, ebunch=None):
"Compute the preferential attachment score of all node pairs in ebunch.\n\n Preferential attachment score of `u` and `v` is defined as\n\n .. math::\n\n |\\Gamma(u)| |\\Gamma(v)|\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Preferential attachment score will be computed for each pair of\n nodes given in the iterable. The pairs must be given as\n 2-tuples (u, v) where u and v are nodes in the graph. If ebunch\n is None then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their preferential attachment score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.complete_graph(5)\n >>> preds = nx.preferential_attachment(G, [(0, 1), (2, 3)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p}')\n (0, 1) -> 16\n (2, 3) -> 16\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n "
def predict(u, v):
return (G.degree(u) * G.degree(v))
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def preferential_attachment(G, ebunch=None):
"Compute the preferential attachment score of all node pairs in ebunch.\n\n Preferential attachment score of `u` and `v` is defined as\n\n .. math::\n\n |\\Gamma(u)| |\\Gamma(v)|\n\n where $\\Gamma(u)$ denotes the set of neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n Preferential attachment score will be computed for each pair of\n nodes given in the iterable. The pairs must be given as\n 2-tuples (u, v) where u and v are nodes in the graph. If ebunch\n is None then all non-existent edges in the graph will be used.\n Default value: None.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their preferential attachment score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.complete_graph(5)\n >>> preds = nx.preferential_attachment(G, [(0, 1), (2, 3)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p}')\n (0, 1) -> 16\n (2, 3) -> 16\n\n References\n ----------\n .. [1] D. Liben-Nowell, J. Kleinberg.\n The Link Prediction Problem for Social Networks (2004).\n http://www.cs.cornell.edu/home/kleinber/link-pred.pdf\n "
def predict(u, v):
return (G.degree(u) * G.degree(v))
return _apply_prediction(G, predict, ebunch)<|docstring|>Compute the preferential attachment score of all node pairs in ebunch.
Preferential attachment score of `u` and `v` is defined as
.. math::
|\Gamma(u)| |\Gamma(v)|
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Preferential attachment score will be computed for each pair of
nodes given in the iterable. The pairs must be given as
2-tuples (u, v) where u and v are nodes in the graph. If ebunch
is None then all non-existent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their preferential attachment score.
Examples
--------
>>> import networkx as nx
>>> G = nx.complete_graph(5)
>>> preds = nx.preferential_attachment(G, [(0, 1), (2, 3)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p}')
(0, 1) -> 16
(2, 3) -> 16
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf<|endoftext|> |
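Worked check for the doctest above (added for clarity): every node of nx.complete_graph(5) has degree 4, so each listed pair scores 4 * 4 = 16.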
b4b3ce8634cf221d9f94068e5ebcb629f566d57b7ae956e778f70c1c42f99b65 | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def cn_soundarajan_hopcroft(G, ebunch=None, community='community'):
"Count the number of common neighbors of all node pairs in ebunch\n using community information.\n\n For two nodes $u$ and $v$, this function computes the number of\n common neighbors and bonus one for each common neighbor belonging to\n the same community as $u$ and $v$. Mathematically,\n\n .. math::\n\n |\\Gamma(u) \\cap \\Gamma(v)| + \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} f(w)\n\n where $f(w)$ equals 1 if $w$ belongs to the same community as $u$\n and $v$ or 0 otherwise and $\\Gamma(u)$ denotes the set of\n neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The score will be computed for each pair of nodes given in the\n iterable. The pairs must be given as 2-tuples (u, v) where u\n and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.path_graph(3)\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 0\n >>> G.nodes[2]['community'] = 0\n >>> preds = nx.cn_soundarajan_hopcroft(G, [(0, 2)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p}')\n (0, 2) -> 2\n\n References\n ----------\n .. [1] Sucheta Soundarajan and John Hopcroft.\n Using community information to improve the precision of link\n prediction methods.\n In Proceedings of the 21st international conference companion on\n World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.\n http://doi.acm.org/10.1145/2187980.2188150\n "
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
cnbors = list(nx.common_neighbors(G, u, v))
neighbors = (sum(((_community(G, w, community) == Cu) for w in cnbors)) if (Cu == Cv) else 0)
return (len(cnbors) + neighbors)
return _apply_prediction(G, predict, ebunch) | Count the number of common neighbors of all node pairs in ebunch
using community information.
For two nodes $u$ and $v$, this function computes the number of
common neighbors and bonus one for each common neighbor belonging to
the same community as $u$ and $v$. Mathematically,
.. math::
|\Gamma(u) \cap \Gamma(v)| + \sum_{w \in \Gamma(u) \cap \Gamma(v)} f(w)
where $f(w)$ equals 1 if $w$ belongs to the same community as $u$
and $v$ or 0 otherwise and $\Gamma(u)$ denotes the set of
neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
The score will be computed for each pair of nodes given in the
iterable. The pairs must be given as 2-tuples (u, v) where u
and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
community : string, optional (default = 'community')
Nodes attribute name containing the community information.
G[u][community] identifies which community u belongs to. Each
node belongs to at most one community. Default value: 'community'.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their score.
Examples
--------
>>> import networkx as nx
>>> G = nx.path_graph(3)
>>> G.nodes[0]['community'] = 0
>>> G.nodes[1]['community'] = 0
>>> G.nodes[2]['community'] = 0
>>> preds = nx.cn_soundarajan_hopcroft(G, [(0, 2)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p}')
(0, 2) -> 2
References
----------
.. [1] Sucheta Soundarajan and John Hopcroft.
Using community information to improve the precision of link
prediction methods.
In Proceedings of the 21st international conference companion on
World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.
http://doi.acm.org/10.1145/2187980.2188150 | networkx/algorithms/link_prediction.py | cn_soundarajan_hopcroft | MingshanJia/networkx | 10 | python | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def cn_soundarajan_hopcroft(G, ebunch=None, community='community'):
"Count the number of common neighbors of all node pairs in ebunch\n using community information.\n\n For two nodes $u$ and $v$, this function computes the number of\n common neighbors and bonus one for each common neighbor belonging to\n the same community as $u$ and $v$. Mathematically,\n\n .. math::\n\n |\\Gamma(u) \\cap \\Gamma(v)| + \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} f(w)\n\n where $f(w)$ equals 1 if $w$ belongs to the same community as $u$\n and $v$ or 0 otherwise and $\\Gamma(u)$ denotes the set of\n neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The score will be computed for each pair of nodes given in the\n iterable. The pairs must be given as 2-tuples (u, v) where u\n and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.path_graph(3)\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 0\n >>> G.nodes[2]['community'] = 0\n >>> preds = nx.cn_soundarajan_hopcroft(G, [(0, 2)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p}')\n (0, 2) -> 2\n\n References\n ----------\n .. [1] Sucheta Soundarajan and John Hopcroft.\n Using community information to improve the precision of link\n prediction methods.\n In Proceedings of the 21st international conference companion on\n World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.\n http://doi.acm.org/10.1145/2187980.2188150\n "
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
cnbors = list(nx.common_neighbors(G, u, v))
neighbors = (sum(((_community(G, w, community) == Cu) for w in cnbors)) if (Cu == Cv) else 0)
return (len(cnbors) + neighbors)
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def cn_soundarajan_hopcroft(G, ebunch=None, community='community'):
"Count the number of common neighbors of all node pairs in ebunch\n using community information.\n\n For two nodes $u$ and $v$, this function computes the number of\n common neighbors and bonus one for each common neighbor belonging to\n the same community as $u$ and $v$. Mathematically,\n\n .. math::\n\n |\\Gamma(u) \\cap \\Gamma(v)| + \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} f(w)\n\n where $f(w)$ equals 1 if $w$ belongs to the same community as $u$\n and $v$ or 0 otherwise and $\\Gamma(u)$ denotes the set of\n neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The score will be computed for each pair of nodes given in the\n iterable. The pairs must be given as 2-tuples (u, v) where u\n and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.path_graph(3)\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 0\n >>> G.nodes[2]['community'] = 0\n >>> preds = nx.cn_soundarajan_hopcroft(G, [(0, 2)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p}')\n (0, 2) -> 2\n\n References\n ----------\n .. [1] Sucheta Soundarajan and John Hopcroft.\n Using community information to improve the precision of link\n prediction methods.\n In Proceedings of the 21st international conference companion on\n World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.\n http://doi.acm.org/10.1145/2187980.2188150\n "
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
cnbors = list(nx.common_neighbors(G, u, v))
neighbors = (sum(((_community(G, w, community) == Cu) for w in cnbors)) if (Cu == Cv) else 0)
return (len(cnbors) + neighbors)
return _apply_prediction(G, predict, ebunch)<|docstring|>Count the number of common neighbors of all node pairs in ebunch
using community information.
For two nodes $u$ and $v$, this function computes the number of
common neighbors and bonus one for each common neighbor belonging to
the same community as $u$ and $v$. Mathematically,
.. math::
|\Gamma(u) \cap \Gamma(v)| + \sum_{w \in \Gamma(u) \cap \Gamma(v)} f(w)
where $f(w)$ equals 1 if $w$ belongs to the same community as $u$
and $v$ or 0 otherwise and $\Gamma(u)$ denotes the set of
neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
The score will be computed for each pair of nodes given in the
iterable. The pairs must be given as 2-tuples (u, v) where u
and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
community : string, optional (default = 'community')
Nodes attribute name containing the community information.
G[u][community] identifies which community u belongs to. Each
node belongs to at most one community. Default value: 'community'.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their score.
Examples
--------
>>> import networkx as nx
>>> G = nx.path_graph(3)
>>> G.nodes[0]['community'] = 0
>>> G.nodes[1]['community'] = 0
>>> G.nodes[2]['community'] = 0
>>> preds = nx.cn_soundarajan_hopcroft(G, [(0, 2)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p}')
(0, 2) -> 2
References
----------
.. [1] Sucheta Soundarajan and John Hopcroft.
Using community information to improve the precision of link
prediction methods.
In Proceedings of the 21st international conference companion on
World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.
http://doi.acm.org/10.1145/2187980.2188150<|endoftext|> |
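Worked check for the doctest above (added for clarity): in the path graph 0-1-2 the only common neighbor of 0 and 2 is node 1, and all three nodes carry community 0, so the score is |{1}| + 1 = 2.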
c15aba16ba16812423b01adb1f75cfea2068f8b5d0814f3ed9e2c368d2cd1aa1 | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def ra_index_soundarajan_hopcroft(G, ebunch=None, community='community'):
"Compute the resource allocation index of all node pairs in\n ebunch using community information.\n\n For two nodes $u$ and $v$, this function computes the resource\n allocation index considering only common neighbors belonging to the\n same community as $u$ and $v$. Mathematically,\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{f(w)}{|\\Gamma(w)|}\n\n where $f(w)$ equals 1 if $w$ belongs to the same community as $u$\n and $v$ or 0 otherwise and $\\Gamma(u)$ denotes the set of\n neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The score will be computed for each pair of nodes given in the\n iterable. The pairs must be given as 2-tuples (u, v) where u\n and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.Graph()\n >>> G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 0\n >>> G.nodes[2]['community'] = 1\n >>> G.nodes[3]['community'] = 0\n >>> preds = nx.ra_index_soundarajan_hopcroft(G, [(0, 3)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 3) -> 0.50000000\n\n References\n ----------\n .. [1] Sucheta Soundarajan and John Hopcroft.\n Using community information to improve the precision of link\n prediction methods.\n In Proceedings of the 21st international conference companion on\n World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.\n http://doi.acm.org/10.1145/2187980.2188150\n "
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
if (Cu != Cv):
return 0
cnbors = nx.common_neighbors(G, u, v)
return sum(((1 / G.degree(w)) for w in cnbors if (_community(G, w, community) == Cu)))
return _apply_prediction(G, predict, ebunch) | Compute the resource allocation index of all node pairs in
ebunch using community information.
For two nodes $u$ and $v$, this function computes the resource
allocation index considering only common neighbors belonging to the
same community as $u$ and $v$. Mathematically,
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{f(w)}{|\Gamma(w)|}
where $f(w)$ equals 1 if $w$ belongs to the same community as $u$
and $v$ or 0 otherwise and $\Gamma(u)$ denotes the set of
neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
The score will be computed for each pair of nodes given in the
iterable. The pairs must be given as 2-tuples (u, v) where u
and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
community : string, optional (default = 'community')
Nodes attribute name containing the community information.
G[u][community] identifies which community u belongs to. Each
node belongs to at most one community. Default value: 'community'.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their score.
Examples
--------
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])
>>> G.nodes[0]['community'] = 0
>>> G.nodes[1]['community'] = 0
>>> G.nodes[2]['community'] = 1
>>> G.nodes[3]['community'] = 0
>>> preds = nx.ra_index_soundarajan_hopcroft(G, [(0, 3)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 3) -> 0.50000000
References
----------
.. [1] Sucheta Soundarajan and John Hopcroft.
Using community information to improve the precision of link
prediction methods.
In Proceedings of the 21st international conference companion on
World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.
http://doi.acm.org/10.1145/2187980.2188150 | networkx/algorithms/link_prediction.py | ra_index_soundarajan_hopcroft | MingshanJia/networkx | 10 | python | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def ra_index_soundarajan_hopcroft(G, ebunch=None, community='community'):
"Compute the resource allocation index of all node pairs in\n ebunch using community information.\n\n For two nodes $u$ and $v$, this function computes the resource\n allocation index considering only common neighbors belonging to the\n same community as $u$ and $v$. Mathematically,\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{f(w)}{|\\Gamma(w)|}\n\n where $f(w)$ equals 1 if $w$ belongs to the same community as $u$\n and $v$ or 0 otherwise and $\\Gamma(u)$ denotes the set of\n neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The score will be computed for each pair of nodes given in the\n iterable. The pairs must be given as 2-tuples (u, v) where u\n and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.Graph()\n >>> G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 0\n >>> G.nodes[2]['community'] = 1\n >>> G.nodes[3]['community'] = 0\n >>> preds = nx.ra_index_soundarajan_hopcroft(G, [(0, 3)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 3) -> 0.50000000\n\n References\n ----------\n .. [1] Sucheta Soundarajan and John Hopcroft.\n Using community information to improve the precision of link\n prediction methods.\n In Proceedings of the 21st international conference companion on\n World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.\n http://doi.acm.org/10.1145/2187980.2188150\n "
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
if (Cu != Cv):
return 0
cnbors = nx.common_neighbors(G, u, v)
return sum(((1 / G.degree(w)) for w in cnbors if (_community(G, w, community) == Cu)))
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def ra_index_soundarajan_hopcroft(G, ebunch=None, community='community'):
"Compute the resource allocation index of all node pairs in\n ebunch using community information.\n\n For two nodes $u$ and $v$, this function computes the resource\n allocation index considering only common neighbors belonging to the\n same community as $u$ and $v$. Mathematically,\n\n .. math::\n\n \\sum_{w \\in \\Gamma(u) \\cap \\Gamma(v)} \\frac{f(w)}{|\\Gamma(w)|}\n\n where $f(w)$ equals 1 if $w$ belongs to the same community as $u$\n and $v$ or 0 otherwise and $\\Gamma(u)$ denotes the set of\n neighbors of $u$.\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The score will be computed for each pair of nodes given in the\n iterable. The pairs must be given as 2-tuples (u, v) where u\n and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their score.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.Graph()\n >>> G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 0\n >>> G.nodes[2]['community'] = 1\n >>> G.nodes[3]['community'] = 0\n >>> preds = nx.ra_index_soundarajan_hopcroft(G, [(0, 3)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 3) -> 0.50000000\n\n References\n ----------\n .. [1] Sucheta Soundarajan and John Hopcroft.\n Using community information to improve the precision of link\n prediction methods.\n In Proceedings of the 21st international conference companion on\n World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.\n http://doi.acm.org/10.1145/2187980.2188150\n "
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
if (Cu != Cv):
return 0
cnbors = nx.common_neighbors(G, u, v)
return sum(((1 / G.degree(w)) for w in cnbors if (_community(G, w, community) == Cu)))
return _apply_prediction(G, predict, ebunch)<|docstring|>Compute the resource allocation index of all node pairs in
ebunch using community information.
For two nodes $u$ and $v$, this function computes the resource
allocation index considering only common neighbors belonging to the
same community as $u$ and $v$. Mathematically,
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{f(w)}{|\Gamma(w)|}
where $f(w)$ equals 1 if $w$ belongs to the same community as $u$
and $v$ or 0 otherwise and $\Gamma(u)$ denotes the set of
neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
The score will be computed for each pair of nodes given in the
iterable. The pairs must be given as 2-tuples (u, v) where u
and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
community : string, optional (default = 'community')
Nodes attribute name containing the community information.
G[u][community] identifies which community u belongs to. Each
node belongs to at most one community. Default value: 'community'.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their score.
Examples
--------
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3)])
>>> G.nodes[0]['community'] = 0
>>> G.nodes[1]['community'] = 0
>>> G.nodes[2]['community'] = 1
>>> G.nodes[3]['community'] = 0
>>> preds = nx.ra_index_soundarajan_hopcroft(G, [(0, 3)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 3) -> 0.50000000
References
----------
.. [1] Sucheta Soundarajan and John Hopcroft.
Using community information to improve the precision of link
prediction methods.
In Proceedings of the 21st international conference companion on
World Wide Web (WWW '12 Companion). ACM, New York, NY, USA, 607-608.
http://doi.acm.org/10.1145/2187980.2188150<|endoftext|> |
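Worked check for the doctest above (added for clarity): the common neighbors of 0 and 3 are nodes 1 and 2; only node 1 shares community 0 with both endpoints, and its degree is 2, so the score is 1/2 = 0.5.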
1920145d2565f44ae34c4a0673935a4b2f4c90dfbc0598e7ba3c9484a9f97b31 | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def within_inter_cluster(G, ebunch=None, delta=0.001, community='community'):
"Compute the ratio of within- and inter-cluster common neighbors\n of all node pairs in ebunch.\n\n For two nodes `u` and `v`, if a common neighbor `w` belongs to the\n same community as them, `w` is considered as within-cluster common\n neighbor of `u` and `v`. Otherwise, it is considered as\n inter-cluster common neighbor of `u` and `v`. The ratio between the\n size of the set of within- and inter-cluster common neighbors is\n defined as the WIC measure. [1]_\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The WIC measure will be computed for each pair of nodes given in\n the iterable. The pairs must be given as 2-tuples (u, v) where\n u and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n delta : float, optional (default = 0.001)\n Value to prevent division by zero in case there is no\n inter-cluster common neighbor between two nodes. See [1]_ for\n details. Default value: 0.001.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their WIC measure.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.Graph()\n >>> G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)])\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 1\n >>> G.nodes[2]['community'] = 0\n >>> G.nodes[3]['community'] = 0\n >>> G.nodes[4]['community'] = 0\n >>> preds = nx.within_inter_cluster(G, [(0, 4)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 4) -> 1.99800200\n >>> preds = nx.within_inter_cluster(G, [(0, 4)], delta=0.5)\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 4) -> 1.33333333\n\n References\n ----------\n .. [1] Jorge Carlos Valverde-Rebaza and Alneu de Andrade Lopes.\n Link prediction in complex networks based on cluster information.\n In Proceedings of the 21st Brazilian conference on Advances in\n Artificial Intelligence (SBIA'12)\n https://doi.org/10.1007/978-3-642-34459-6_10\n "
if (delta <= 0):
raise nx.NetworkXAlgorithmError('Delta must be greater than zero')
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
if (Cu != Cv):
return 0
cnbors = set(nx.common_neighbors(G, u, v))
within = {w for w in cnbors if (_community(G, w, community) == Cu)}
inter = (cnbors - within)
return (len(within) / (len(inter) + delta))
return _apply_prediction(G, predict, ebunch) | Compute the ratio of within- and inter-cluster common neighbors
of all node pairs in ebunch.
For two nodes `u` and `v`, if a common neighbor `w` belongs to the
same community as them, `w` is considered as within-cluster common
neighbor of `u` and `v`. Otherwise, it is considered as
inter-cluster common neighbor of `u` and `v`. The ratio between the
size of the set of within- and inter-cluster common neighbors is
defined as the WIC measure. [1]_
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
The WIC measure will be computed for each pair of nodes given in
the iterable. The pairs must be given as 2-tuples (u, v) where
u and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
delta : float, optional (default = 0.001)
Value to prevent division by zero in case there is no
inter-cluster common neighbor between two nodes. See [1]_ for
details. Default value: 0.001.
community : string, optional (default = 'community')
Nodes attribute name containing the community information.
G[u][community] identifies which community u belongs to. Each
node belongs to at most one community. Default value: 'community'.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their WIC measure.
Examples
--------
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)])
>>> G.nodes[0]['community'] = 0
>>> G.nodes[1]['community'] = 1
>>> G.nodes[2]['community'] = 0
>>> G.nodes[3]['community'] = 0
>>> G.nodes[4]['community'] = 0
>>> preds = nx.within_inter_cluster(G, [(0, 4)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 4) -> 1.99800200
>>> preds = nx.within_inter_cluster(G, [(0, 4)], delta=0.5)
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 4) -> 1.33333333
References
----------
.. [1] Jorge Carlos Valverde-Rebaza and Alneu de Andrade Lopes.
Link prediction in complex networks based on cluster information.
In Proceedings of the 21st Brazilian conference on Advances in
Artificial Intelligence (SBIA'12)
https://doi.org/10.1007/978-3-642-34459-6_10 | networkx/algorithms/link_prediction.py | within_inter_cluster | MingshanJia/networkx | 10 | python | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def within_inter_cluster(G, ebunch=None, delta=0.001, community='community'):
"Compute the ratio of within- and inter-cluster common neighbors\n of all node pairs in ebunch.\n\n For two nodes `u` and `v`, if a common neighbor `w` belongs to the\n same community as them, `w` is considered as within-cluster common\n neighbor of `u` and `v`. Otherwise, it is considered as\n inter-cluster common neighbor of `u` and `v`. The ratio between the\n size of the set of within- and inter-cluster common neighbors is\n defined as the WIC measure. [1]_\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The WIC measure will be computed for each pair of nodes given in\n the iterable. The pairs must be given as 2-tuples (u, v) where\n u and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n delta : float, optional (default = 0.001)\n Value to prevent division by zero in case there is no\n inter-cluster common neighbor between two nodes. See [1]_ for\n details. Default value: 0.001.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their WIC measure.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.Graph()\n >>> G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)])\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 1\n >>> G.nodes[2]['community'] = 0\n >>> G.nodes[3]['community'] = 0\n >>> G.nodes[4]['community'] = 0\n >>> preds = nx.within_inter_cluster(G, [(0, 4)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 4) -> 1.99800200\n >>> preds = nx.within_inter_cluster(G, [(0, 4)], delta=0.5)\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 4) -> 1.33333333\n\n References\n ----------\n .. [1] Jorge Carlos Valverde-Rebaza and Alneu de Andrade Lopes.\n Link prediction in complex networks based on cluster information.\n In Proceedings of the 21st Brazilian conference on Advances in\n Artificial Intelligence (SBIA'12)\n https://doi.org/10.1007/978-3-642-34459-6_10\n "
if (delta <= 0):
raise nx.NetworkXAlgorithmError('Delta must be greater than zero')
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
if (Cu != Cv):
return 0
cnbors = set(nx.common_neighbors(G, u, v))
within = {w for w in cnbors if (_community(G, w, community) == Cu)}
inter = (cnbors - within)
return (len(within) / (len(inter) + delta))
return _apply_prediction(G, predict, ebunch) | @not_implemented_for('directed')
@not_implemented_for('multigraph')
def within_inter_cluster(G, ebunch=None, delta=0.001, community='community'):
"Compute the ratio of within- and inter-cluster common neighbors\n of all node pairs in ebunch.\n\n For two nodes `u` and `v`, if a common neighbor `w` belongs to the\n same community as them, `w` is considered as within-cluster common\n neighbor of `u` and `v`. Otherwise, it is considered as\n inter-cluster common neighbor of `u` and `v`. The ratio between the\n size of the set of within- and inter-cluster common neighbors is\n defined as the WIC measure. [1]_\n\n Parameters\n ----------\n G : graph\n A NetworkX undirected graph.\n\n ebunch : iterable of node pairs, optional (default = None)\n The WIC measure will be computed for each pair of nodes given in\n the iterable. The pairs must be given as 2-tuples (u, v) where\n u and v are nodes in the graph. If ebunch is None then all\n non-existent edges in the graph will be used.\n Default value: None.\n\n delta : float, optional (default = 0.001)\n Value to prevent division by zero in case there is no\n inter-cluster common neighbor between two nodes. See [1]_ for\n details. Default value: 0.001.\n\n community : string, optional (default = 'community')\n Nodes attribute name containing the community information.\n G[u][community] identifies which community u belongs to. Each\n node belongs to at most one community. Default value: 'community'.\n\n Returns\n -------\n piter : iterator\n An iterator of 3-tuples in the form (u, v, p) where (u, v) is a\n pair of nodes and p is their WIC measure.\n\n Examples\n --------\n >>> import networkx as nx\n >>> G = nx.Graph()\n >>> G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)])\n >>> G.nodes[0]['community'] = 0\n >>> G.nodes[1]['community'] = 1\n >>> G.nodes[2]['community'] = 0\n >>> G.nodes[3]['community'] = 0\n >>> G.nodes[4]['community'] = 0\n >>> preds = nx.within_inter_cluster(G, [(0, 4)])\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 4) -> 1.99800200\n >>> preds = nx.within_inter_cluster(G, [(0, 4)], delta=0.5)\n >>> for u, v, p in preds:\n ... print(f'({u}, {v}) -> {p:.8f}')\n (0, 4) -> 1.33333333\n\n References\n ----------\n .. [1] Jorge Carlos Valverde-Rebaza and Alneu de Andrade Lopes.\n Link prediction in complex networks based on cluster information.\n In Proceedings of the 21st Brazilian conference on Advances in\n Artificial Intelligence (SBIA'12)\n https://doi.org/10.1007/978-3-642-34459-6_10\n "
if (delta <= 0):
raise nx.NetworkXAlgorithmError('Delta must be greater than zero')
def predict(u, v):
Cu = _community(G, u, community)
Cv = _community(G, v, community)
if (Cu != Cv):
return 0
cnbors = set(nx.common_neighbors(G, u, v))
within = {w for w in cnbors if (_community(G, w, community) == Cu)}
inter = (cnbors - within)
return (len(within) / (len(inter) + delta))
return _apply_prediction(G, predict, ebunch)<|docstring|>Compute the ratio of within- and inter-cluster common neighbors
of all node pairs in ebunch.
For two nodes `u` and `v`, if a common neighbor `w` belongs to the
same community as them, `w` is considered as within-cluster common
neighbor of `u` and `v`. Otherwise, it is considered as
inter-cluster common neighbor of `u` and `v`. The ratio between the
size of the set of within- and inter-cluster common neighbors is
defined as the WIC measure. [1]_
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
The WIC measure will be computed for each pair of nodes given in
the iterable. The pairs must be given as 2-tuples (u, v) where
u and v are nodes in the graph. If ebunch is None then all
non-existent edges in the graph will be used.
Default value: None.
delta : float, optional (default = 0.001)
Value to prevent division by zero in case there is no
inter-cluster common neighbor between two nodes. See [1]_ for
details. Default value: 0.001.
community : string, optional (default = 'community')
Nodes attribute name containing the community information.
G[u][community] identifies which community u belongs to. Each
node belongs to at most one community. Default value: 'community'.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their WIC measure.
Examples
--------
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)])
>>> G.nodes[0]['community'] = 0
>>> G.nodes[1]['community'] = 1
>>> G.nodes[2]['community'] = 0
>>> G.nodes[3]['community'] = 0
>>> G.nodes[4]['community'] = 0
>>> preds = nx.within_inter_cluster(G, [(0, 4)])
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 4) -> 1.99800200
>>> preds = nx.within_inter_cluster(G, [(0, 4)], delta=0.5)
>>> for u, v, p in preds:
... print(f'({u}, {v}) -> {p:.8f}')
(0, 4) -> 1.33333333
References
----------
.. [1] Jorge Carlos Valverde-Rebaza and Alneu de Andrade Lopes.
Link prediction in complex networks based on cluster information.
In Proceedings of the 21st Brazilian conference on Advances in
Artificial Intelligence (SBIA'12)
https://doi.org/10.1007/978-3-642-34459-6_10<|endoftext|> |
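Worked check for the doctests above (added for clarity): the common neighbors of 0 and 4 are {1, 2, 3}; nodes 2 and 3 are within-cluster (community 0) while node 1 is inter-cluster, so the score is 2 / (1 + 0.001), about 1.998, and with delta=0.5 it becomes 2 / 1.5, about 1.333.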
9d46ef0d90016bcc1dea25b31782d23848010a69eb1f66505bdbbd6b739d653e | def _community(G, u, community):
'Get the community of the given node.'
node_u = G.nodes[u]
try:
return node_u[community]
except KeyError:
raise nx.NetworkXAlgorithmError('No community information') | Get the community of the given node. | networkx/algorithms/link_prediction.py | _community | MingshanJia/networkx | 10 | python | def _community(G, u, community):
node_u = G.nodes[u]
try:
return node_u[community]
except KeyError:
raise nx.NetworkXAlgorithmError('No community information') | def _community(G, u, community):
node_u = G.nodes[u]
try:
return node_u[community]
except KeyError:
raise nx.NetworkXAlgorithmError('No community information')<|docstring|>Get the community of the given node.<|endoftext|> |
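A minimal sketch of this private helper, assuming it is called from inside the module (it is not part of the public nx namespace); the values are illustrative only. A node that lacks the attribute raises NetworkXAlgorithmError, as the except clause above shows.
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_node(0, community=3)
>>> _community(G, 0, 'community')
3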
8d2a8537bd1c96c993e71463028c93fb8f9e8b3a380841ced95b87584405013e | def chorale_to_inputs(chorale, voice_ids, index2notes, note2indexes):
'\n :param chorale: music21 chorale\n :param voice_ids:\n :param index2notes:\n :param note2indexes:\n :return: (num_voices, time) matrix of indexes\n '
length = int((chorale.duration.quarterLength * SUBDIVISION))
inputs = []
instrument.partitionByInstrument(chorale)
inputs.append(part_to_inputs(chorale.parts[1], length, index2notes[0], note2indexes[0]))
inputs.append(roots_to_input(chorale, length, index2notes, note2indexes))
inputs.append(colors_to_input(chorale, length, index2notes, note2indexes))
output = np.array(inputs)
assert (len(output.shape) == 2)
return output | :param chorale: music21 chorale
:param voice_ids:
:param index2notes:
:param note2indexes:
:return: (num_voices, time) matrix of indexes | MusicChordExtraction/functions.py | chorale_to_inputs | GuiMarion/DeepJazz | 1 | python | def chorale_to_inputs(chorale, voice_ids, index2notes, note2indexes):
'\n :param chorale: music21 chorale\n :param voice_ids:\n :param index2notes:\n :param note2indexes:\n :return: (num_voices, time) matrix of indexes\n '
length = int((chorale.duration.quarterLength * SUBDIVISION))
inputs = []
instrument.partitionByInstrument(chorale)
inputs.append(part_to_inputs(chorale.parts[1], length, index2notes[0], note2indexes[0]))
inputs.append(roots_to_input(chorale, length, index2notes, note2indexes))
inputs.append(colors_to_input(chorale, length, index2notes, note2indexes))
output = np.array(inputs)
assert (len(output.shape) == 2)
return output | def chorale_to_inputs(chorale, voice_ids, index2notes, note2indexes):
'\n :param chorale: music21 chorale\n :param voice_ids:\n :param index2notes:\n :param note2indexes:\n :return: (num_voices, time) matrix of indexes\n '
length = int((chorale.duration.quarterLength * SUBDIVISION))
inputs = []
instrument.partitionByInstrument(chorale)
inputs.append(part_to_inputs(chorale.parts[1], length, index2notes[0], note2indexes[0]))
inputs.append(roots_to_input(chorale, length, index2notes, note2indexes))
inputs.append(colors_to_input(chorale, length, index2notes, note2indexes))
output = np.array(inputs)
assert (len(output.shape) == 2)
return output<|docstring|>:param chorale: music21 chorale
:param voice_ids:
:param index2notes:
:param note2indexes:
:return: (num_voices, time) matrix of indexes<|endoftext|> |
e224a4031c612ae265af93558d07fa50b63e3ed993a7ed72a94f48fe218b5856 | def _min_max_midi_pitch(note_strings):
'\n\n :param note_strings:\n :return:\n '
all_notes = list(map((lambda note_string: standard_note(note_string)), note_strings))
min_pitch = min(list(map((lambda n: (n.pitch.midi if n.isNote else 128)), all_notes)))
max_pitch = max(list(map((lambda n: (n.pitch.midi if n.isNote else 0)), all_notes)))
return (min_pitch, max_pitch) | :param note_strings:
:return: | MusicChordExtraction/functions.py | _min_max_midi_pitch | GuiMarion/DeepJazz | 1 | python | def _min_max_midi_pitch(note_strings):
'\n\n :param note_strings:\n :return:\n '
all_notes = list(map((lambda note_string: standard_note(note_string)), note_strings))
min_pitch = min(list(map((lambda n: (n.pitch.midi if n.isNote else 128)), all_notes)))
max_pitch = max(list(map((lambda n: (n.pitch.midi if n.isNote else 0)), all_notes)))
return (min_pitch, max_pitch) | def _min_max_midi_pitch(note_strings):
'\n\n :param note_strings:\n :return:\n '
all_notes = list(map((lambda note_string: standard_note(note_string)), note_strings))
min_pitch = min(list(map((lambda n: (n.pitch.midi if n.isNote else 128)), all_notes)))
max_pitch = max(list(map((lambda n: (n.pitch.midi if n.isNote else 0)), all_notes)))
return (min_pitch, max_pitch)<|docstring|>:param note_strings:
:return:<|endoftext|> |
305a75e30a1729c92a910e955bcdb2e3896a72f7669c71a0bbf08302e7aabb9b | def create_index_dicts(chorale_list, voice_ids=voice_ids_default):
'\n Returns two lists (index2notes, note2indexes) of size num_voices containing dictionaries\n :param chorale_list:\n :param voice_ids:\n :param min_pitches:\n :param max_pitches:\n :return:\n '
notes = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A-', 'B-', 'C-', 'D-', 'E-', 'F-', 'G-', 'A#', 'B#', 'C#', 'D#', 'E#', 'F#', 'G#']
voice_ranges = []
MelodyRange = []
MelodyRange.append('rest')
MelodyRange.append(SLUR_SYMBOL)
MelodyRange.append(START_SYMBOL)
MelodyRange.append(END_SYMBOL)
r = getRange(chorale_list)
for i in range(int(r[0][(- 1)]), (int(r[1][(- 1)]) + 1)):
for elem in notes:
MelodyRange.append((elem + str(i)))
voice_ranges.append(MelodyRange)
chordsRootRange = []
chordsRootRange0 = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A-', 'B-', 'C-', 'D-', 'E-', 'F-', 'G_', 'A#', 'B#', 'C#', 'D#', 'E#', 'F#', 'G#']
chordsRootRange.append('rest')
chordsRootRange.append(SLUR_SYMBOL)
chordsRootRange.append(START_SYMBOL)
chordsRootRange.append(END_SYMBOL)
for elem in chordsRootRange0:
chordsRootRange.append(elem)
voice_ranges.append(chordsRootRange)
chordsColorRange = []
chordsColorRange0 = ['maj', 'min', 'min#5', 'dim', '+', 'maj7', 'min(maj7)', 'min7', '7', '7sus4', '7b5', '7+', 'dim7', 'm7b5', '9', 'm9', 'min6', '6', 'maj9', '7b9', '7b5b9', '9', 'sus49', '#59', '7#5b9', '#5#9', '7#9', '713', '7b5#9', 'min11', '11', '7alt', '69', 'min69', '9#11', '7#11', '7sus', '7sus43', '13']
chordsColorRange.append('rest')
chordsColorRange.append(SLUR_SYMBOL)
chordsColorRange.append(START_SYMBOL)
chordsColorRange.append(END_SYMBOL)
for elem in chordsColorRange0:
chordsColorRange.append(elem)
voice_ranges.append(chordsColorRange)
index2notes = []
note2indexes = []
for (voice_index, _) in enumerate(voice_ids):
l = list(voice_ranges[voice_index])
index2note = {}
note2index = {}
for (k, n) in enumerate(l):
index2note.update({k: n})
note2index.update({n: k})
index2notes.append(index2note)
note2indexes.append(note2index)
return (index2notes, note2indexes) | Returns two lists (index2notes, note2indexes) of size num_voices containing dictionaries
:param chorale_list:
:param voice_ids:
:param min_pitches:
:param max_pitches:
:return: | MusicChordExtraction/functions.py | create_index_dicts | GuiMarion/DeepJazz | 1 | python | def create_index_dicts(chorale_list, voice_ids=voice_ids_default):
'\n Returns two lists (index2notes, note2indexes) of size num_voices containing dictionaries\n :param chorale_list:\n :param voice_ids:\n :param min_pitches:\n :param max_pitches:\n :return:\n '
notes = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A-', 'B-', 'C-', 'D-', 'E-', 'F-', 'G-', 'A#', 'B#', 'C#', 'D#', 'E#', 'F#', 'G#']
voice_ranges = []
MelodyRange = []
MelodyRange.append('rest')
MelodyRange.append(SLUR_SYMBOL)
MelodyRange.append(START_SYMBOL)
MelodyRange.append(END_SYMBOL)
r = getRange(chorale_list)
for i in range(int(r[0][(- 1)]), (int(r[1][(- 1)]) + 1)):
for elem in notes:
MelodyRange.append((elem + str(i)))
voice_ranges.append(MelodyRange)
chordsRootRange = []
chordsRootRange0 = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A-', 'B-', 'C-', 'D-', 'E-', 'F-', 'G_', 'A#', 'B#', 'C#', 'D#', 'E#', 'F#', 'G#']
chordsRootRange.append('rest')
chordsRootRange.append(SLUR_SYMBOL)
chordsRootRange.append(START_SYMBOL)
chordsRootRange.append(END_SYMBOL)
for elem in chordsRootRange0:
chordsRootRange.append(elem)
voice_ranges.append(chordsRootRange)
chordsColorRange = []
chordsColorRange0 = ['maj', 'min', 'min#5', 'dim', '+', 'maj7', 'min(maj7)', 'min7', '7', '7sus4', '7b5', '7+', 'dim7', 'm7b5', '9', 'm9', 'min6', '6', 'maj9', '7b9', '7b5b9', '9', 'sus49', '#59', '7#5b9', '#5#9', '7#9', '713', '7b5#9', 'min11', '11', '7alt', '69', 'min69', '9#11', '7#11', '7sus', '7sus43', '13']
chordsColorRange.append('rest')
chordsColorRange.append(SLUR_SYMBOL)
chordsColorRange.append(START_SYMBOL)
chordsColorRange.append(END_SYMBOL)
for elem in chordsColorRange0:
chordsColorRange.append(elem)
voice_ranges.append(chordsColorRange)
index2notes = []
note2indexes = []
for (voice_index, _) in enumerate(voice_ids):
l = list(voice_ranges[voice_index])
index2note = {}
note2index = {}
for (k, n) in enumerate(l):
index2note.update({k: n})
note2index.update({n: k})
index2notes.append(index2note)
note2indexes.append(note2index)
return (index2notes, note2indexes) | def create_index_dicts(chorale_list, voice_ids=voice_ids_default):
'\n Returns two lists (index2notes, note2indexes) of size num_voices containing dictionaries\n :param chorale_list:\n :param voice_ids:\n :param min_pitches:\n :param max_pitches:\n :return:\n '
notes = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A-', 'B-', 'C-', 'D-', 'E-', 'F-', 'G-', 'A#', 'B#', 'C#', 'D#', 'E#', 'F#', 'G#']
voice_ranges = []
MelodyRange = []
MelodyRange.append('rest')
MelodyRange.append(SLUR_SYMBOL)
MelodyRange.append(START_SYMBOL)
MelodyRange.append(END_SYMBOL)
r = getRange(chorale_list)
for i in range(int(r[0][(- 1)]), (int(r[1][(- 1)]) + 1)):
for elem in notes:
MelodyRange.append((elem + str(i)))
voice_ranges.append(MelodyRange)
chordsRootRange = []
chordsRootRange0 = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A-', 'B-', 'C-', 'D-', 'E-', 'F-', 'G_', 'A#', 'B#', 'C#', 'D#', 'E#', 'F#', 'G#']
chordsRootRange.append('rest')
chordsRootRange.append(SLUR_SYMBOL)
chordsRootRange.append(START_SYMBOL)
chordsRootRange.append(END_SYMBOL)
for elem in chordsRootRange0:
chordsRootRange.append(elem)
voice_ranges.append(chordsRootRange)
chordsColorRange = []
chordsColorRange0 = ['maj', 'min', 'min#5', 'dim', '+', 'maj7', 'min(maj7)', 'min7', '7', '7sus4', '7b5', '7+', 'dim7', 'm7b5', '9', 'm9', 'min6', '6', 'maj9', '7b9', '7b5b9', '9', 'sus49', '#59', '7#5b9', '#5#9', '7#9', '713', '7b5#9', 'min11', '11', '7alt', '69', 'min69', '9#11', '7#11', '7sus', '7sus43', '13']
chordsColorRange.append('rest')
chordsColorRange.append(SLUR_SYMBOL)
chordsColorRange.append(START_SYMBOL)
chordsColorRange.append(END_SYMBOL)
for elem in chordsColorRange0:
chordsColorRange.append(elem)
voice_ranges.append(chordsColorRange)
index2notes = []
note2indexes = []
for (voice_index, _) in enumerate(voice_ids):
l = list(voice_ranges[voice_index])
index2note = {}
note2index = {}
for (k, n) in enumerate(l):
index2note.update({k: n})
note2index.update({n: k})
index2notes.append(index2note)
note2indexes.append(note2index)
return (index2notes, note2indexes)<|docstring|>Returns two lists (index2notes, note2indexes) of size num_voices containing dictionaries
:param chorale_list:
:param voice_ids:
:param min_pitches:
:param max_pitches:
:return:<|endoftext|> |
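A minimal sketch of the paired index2note / note2index lookups built in the record above, using an invented toy symbol list; the names and values here are assumptions for illustration only, not data taken from the repository.

symbols = ['rest', '__', 'START', 'END', 'C4', 'C#4', 'D4']  # made-up voice range

index2note = {}
note2index = {}
for index, symbol in enumerate(symbols):
    index2note[index] = symbol   # index -> symbol, used when decoding predictions
    note2index[symbol] = index   # symbol -> index, used when encoding scores

assert index2note[4] == 'C4'
assert note2index['C4'] == 4
assert len(index2note) == len(note2index) == len(symbols)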
4b208bb168c50537cabc11d0a4ea19d2b226ec9e400266a8ceb24ad45d729bf0 | def compute_min_max_pitches(file_list, voices=[0]):
'\n Removes wrong chorales\n :param file_list:\n :type voices: list containing voices ids\n :returns: two lists min_p, max_p containing min and max pitches for each voice\n '
(min_p, max_p) = (([128] * len(voices)), ([0] * len(voices)))
to_remove = []
for file_name in file_list:
choral = converter.parse(file_name)
for (k, voice_id) in enumerate(voices):
try:
c = choral.parts[voice_id]
l = list(map((lambda n: n.pitch.midi), c.flat.notes))
min_p[k] = min(min_p[k], min(l))
max_p[k] = max(max_p[k], max(l))
except AttributeError:
to_remove.append(file_name)
for file_name in set(to_remove):
file_list.remove(file_name)
return (np.array(min_p), np.array(max_p)) | Removes wrong chorales
:param file_list:
:type voices: list containing voices ids
:returns: two lists min_p, max_p containing min and max pitches for each voice | MusicChordExtraction/functions.py | compute_min_max_pitches | GuiMarion/DeepJazz | 1 | python | def compute_min_max_pitches(file_list, voices=[0]):
'\n Removes wrong chorales\n :param file_list:\n :type voices: list containing voices ids\n :returns: two lists min_p, max_p containing min and max pitches for each voice\n '
(min_p, max_p) = (([128] * len(voices)), ([0] * len(voices)))
to_remove = []
for file_name in file_list:
choral = converter.parse(file_name)
for (k, voice_id) in enumerate(voices):
try:
c = choral.parts[voice_id]
l = list(map((lambda n: n.pitch.midi), c.flat.notes))
min_p[k] = min(min_p[k], min(l))
max_p[k] = max(max_p[k], max(l))
except AttributeError:
to_remove.append(file_name)
for file_name in set(to_remove):
file_list.remove(file_name)
return (np.array(min_p), np.array(max_p)) | def compute_min_max_pitches(file_list, voices=[0]):
'\n Removes wrong chorales\n :param file_list:\n :type voices: list containing voices ids\n :returns: two lists min_p, max_p containing min and max pitches for each voice\n '
(min_p, max_p) = (([128] * len(voices)), ([0] * len(voices)))
to_remove = []
for file_name in file_list:
choral = converter.parse(file_name)
for (k, voice_id) in enumerate(voices):
try:
c = choral.parts[voice_id]
l = list(map((lambda n: n.pitch.midi), c.flat.notes))
min_p[k] = min(min_p[k], min(l))
max_p[k] = max(max_p[k], max(l))
except AttributeError:
to_remove.append(file_name)
for file_name in set(to_remove):
file_list.remove(file_name)
return (np.array(min_p), np.array(max_p))<|docstring|>Removes wrong chorales
:param file_list:
:type voices: list containing voices ids
:returns: two lists min_p, max_p containing min and max pitches for each voice<|endoftext|> |
621665a04ee8e3a0164c77b49553562bd32d89234089d269fcc0e9d3aeaad4a6 | def filter_file_list(file_list, num_voices=3):
'\n Only retain num_voices voices chorales\n '
l = []
for (k, file_name) in enumerate(file_list):
c = converter.parse(file_name)
print(k, file_name, ' ', len(c.parts))
if (len(c.parts) == num_voices):
l.append(file_name)
return l | Only retain num_voices voices chorales | MusicChordExtraction/functions.py | filter_file_list | GuiMarion/DeepJazz | 1 | python | def filter_file_list(file_list, num_voices=3):
'\n \n '
l = []
for (k, file_name) in enumerate(file_list):
c = converter.parse(file_name)
print(k, file_name, ' ', len(c.parts))
if (len(c.parts) == num_voices):
l.append(file_name)
return l | def filter_file_list(file_list, num_voices=3):
'\n \n '
l = []
for (k, file_name) in enumerate(file_list):
c = converter.parse(file_name)
print(k, file_name, ' ', len(c.parts))
if (len(c.parts) == num_voices):
l.append(file_name)
return l<|docstring|>Only retain num_voices voices chorales<|endoftext|> |
594ee048ae4ef9f1dd2181efb162d0539d5b80444b35a68084572ce559140582 | def part_to_inputs(part, length, index2note, note2index):
'\n Can modify note2index and index2note!\n :param part:\n :param note2index:\n :param index2note:\n :return:\n '
list_notes = part.flat.notes
list_note_strings = [n.nameWithOctave for n in list_notes]
for note_name in list_note_strings:
if (note_name not in index2note.values()):
print('___________')
print('Illegally adding entries to indexes, should never happen,\ncheck create_index_dicts function. It should be missing this note : ')
print(note_name)
print('___________')
print()
new_index = len(index2note)
index2note.update({new_index: note_name})
note2index.update({note_name: new_index})
j = 0
i = 0
t = np.zeros((length, 2))
is_articulated = True
list_notes_and_rests = part.flat.notesAndRests
num_notes = len(list_notes_and_rests)
while (i < length):
if (j < (num_notes - 1)):
if (list_notes_and_rests[(j + 1)].offset > (i / SUBDIVISION)):
t[(i, :)] = [note2index[standard_name(list_notes_and_rests[j])], is_articulated]
i += 1
is_articulated = False
else:
j += 1
is_articulated = True
else:
t[(i, :)] = [note2index[standard_name(list_notes_and_rests[j])], is_articulated]
i += 1
is_articulated = False
return list(map((lambda pa: (pa[0] if pa[1] else note2index[SLUR_SYMBOL])), t)) | Can modify note2index and index2note!
:param part:
:param note2index:
:param index2note:
:return: | MusicChordExtraction/functions.py | part_to_inputs | GuiMarion/DeepJazz | 1 | python | def part_to_inputs(part, length, index2note, note2index):
'\n Can modify note2index and index2note!\n :param part:\n :param note2index:\n :param index2note:\n :return:\n '
list_notes = part.flat.notes
list_note_strings = [n.nameWithOctave for n in list_notes]
for note_name in list_note_strings:
if (note_name not in index2note.values()):
print('___________')
print('Illegally adding entries to indexes, should never happen,\ncheck create_index_dicts function. It should be missing this note : ')
print(note_name)
print('___________')
print()
new_index = len(index2note)
index2note.update({new_index: note_name})
note2index.update({note_name: new_index})
j = 0
i = 0
t = np.zeros((length, 2))
is_articulated = True
list_notes_and_rests = part.flat.notesAndRests
num_notes = len(list_notes_and_rests)
while (i < length):
if (j < (num_notes - 1)):
if (list_notes_and_rests[(j + 1)].offset > (i / SUBDIVISION)):
t[(i, :)] = [note2index[standard_name(list_notes_and_rests[j])], is_articulated]
i += 1
is_articulated = False
else:
j += 1
is_articulated = True
else:
t[(i, :)] = [note2index[standard_name(list_notes_and_rests[j])], is_articulated]
i += 1
is_articulated = False
return list(map((lambda pa: (pa[0] if pa[1] else note2index[SLUR_SYMBOL])), t)) | def part_to_inputs(part, length, index2note, note2index):
'\n Can modify note2index and index2note!\n :param part:\n :param note2index:\n :param index2note:\n :return:\n '
list_notes = part.flat.notes
list_note_strings = [n.nameWithOctave for n in list_notes]
for note_name in list_note_strings:
if (note_name not in index2note.values()):
print('___________')
print('Illegally adding entries to indexes, should never happen,\ncheck create_index_dicts function. It should be missing this note : ')
print(note_name)
print('___________')
print()
new_index = len(index2note)
index2note.update({new_index: note_name})
note2index.update({note_name: new_index})
j = 0
i = 0
t = np.zeros((length, 2))
is_articulated = True
list_notes_and_rests = part.flat.notesAndRests
num_notes = len(list_notes_and_rests)
while (i < length):
if (j < (num_notes - 1)):
if (list_notes_and_rests[(j + 1)].offset > (i / SUBDIVISION)):
t[(i, :)] = [note2index[standard_name(list_notes_and_rests[j])], is_articulated]
i += 1
is_articulated = False
else:
j += 1
is_articulated = True
else:
t[(i, :)] = [note2index[standard_name(list_notes_and_rests[j])], is_articulated]
i += 1
is_articulated = False
return list(map((lambda pa: (pa[0] if pa[1] else note2index[SLUR_SYMBOL])), t))<|docstring|>Can modify note2index and index2note!
:param part:
:param note2index:
:param index2note:
:return:<|endoftext|> |
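A simplified, self-contained sketch of the time-grid encoding idea in part_to_inputs: every grid step either articulates a new symbol or repeats the previous one as a slur. The event list, SUBDIVISION value and '__' slur marker below are assumptions made for the example, not values read from the repository.

SUBDIVISION = 4        # assumed: four grid steps per quarter note
SLUR_SYMBOL = '__'     # assumed placeholder for "previous note still sounding"

# (symbol, start_beat, duration_beats) events for a made-up two-beat part
events = [('C4', 0.0, 1.0), ('rest', 1.0, 0.5), ('E4', 1.5, 0.5)]
length = int(2.0 * SUBDIVISION)

grid = []
for step in range(length):
    beat = step / SUBDIVISION
    for symbol, start, duration in events:
        if start <= beat < start + duration:
            # articulate on the first step of the event, slur afterwards
            grid.append(symbol if beat == start else SLUR_SYMBOL)
            break

print(grid)
# ['C4', '__', '__', '__', 'rest', '__', 'E4', '__']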
5b231e95fd274d3f5022481fc32cabe06a86d1313d5317a5e68a021ae8c7ee70 | def log(msg, *args, dialog=False, error=False, **kwargs):
'\n Generate a message to the console and optionally as either a message or\n error dialog. The message will be formatted and dedented before being\n displayed, and will be prefixed with its origin.\n '
msg = textwrap.dedent(msg.format(*args, **kwargs)).strip()
if error:
print('my_package error:')
return sublime.error_message(msg)
for line in msg.splitlines():
print('my_package: {msg}'.format(msg=line))
if dialog:
sublime.message_dialog(msg) | Generate a message to the console and optionally as either a message or
error dialog. The message will be formatted and dedented before being
displayed, and will be prefixed with its origin. | package_bootstrap/core/bootstrapper.py | log | ginanjarn/SublimeScraps | 61 | python | def log(msg, *args, dialog=False, error=False, **kwargs):
'\n Generate a message to the console and optionally as either a message or\n error dialog. The message will be formatted and dedented before being\n displayed, and will be prefixed with its origin.\n '
msg = textwrap.dedent(msg.format(*args, **kwargs)).strip()
if error:
print('my_package error:')
return sublime.error_message(msg)
for line in msg.splitlines():
print('my_package: {msg}'.format(msg=line))
if dialog:
sublime.message_dialog(msg) | def log(msg, *args, dialog=False, error=False, **kwargs):
'\n Generate a message to the console and optionally as either a message or\n error dialog. The message will be formatted and dedented before being\n displayed, and will be prefixed with its origin.\n '
msg = textwrap.dedent(msg.format(*args, **kwargs)).strip()
if error:
print('my_package error:')
return sublime.error_message(msg)
for line in msg.splitlines():
print('my_package: {msg}'.format(msg=line))
if dialog:
sublime.message_dialog(msg)<|docstring|>Generate a message to the console and optionally as either a message or
error dialog. The message will be formatted and dedented before being
displayed, and will be prefixed with its origin.<|endoftext|> |
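The console path of log() is plain textwrap shaping; the standalone approximation below (no sublime module, prefix simply copied from the record above) only illustrates the format-dedent-prefix behaviour, not the dialog handling.

import textwrap

def console_log(msg, *args, **kwargs):
    # Same shaping as log(): format, dedent, strip, then prefix every line.
    msg = textwrap.dedent(msg.format(*args, **kwargs)).strip()
    for line in msg.splitlines():
        print('my_package: {msg}'.format(msg=line))

console_log('''
    Update finished for {name}.
    Restart to complete the update.
''', name='demo')
# my_package: Update finished for demo.
# my_package: Restart to complete the update.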
de7699ba37717f5014af3ebefe3197f2a996ec5d09c171b21412d346eeee02bc | def enable_package(self, reenable_resources):
'\n Enables the system bootstrap package (if it exists) by ensuring that\n it is not in the list of ignored packages and then restoring any\n resources that were unloaded back to the views that were using them.\n '
ignored_packages = self.settings.get('ignored_packages', [])
if (bootstrap_pkg in ignored_packages):
ignored_packages.remove(bootstrap_pkg)
self.settings.set('ignored_packages', ignored_packages)
if reenable_resources:
sublime.set_timeout_async((lambda : self.enable_resources())) | Enables the system bootstrap package (if it exists) by ensuring that
it is not in the list of ignored packages and then restoring any
resources that were unloaded back to the views that were using them. | package_bootstrap/core/bootstrapper.py | enable_package | ginanjarn/SublimeScraps | 61 | python | def enable_package(self, reenable_resources):
'\n Enables the system bootstrap package (if it exists) by ensuring that\n it is not in the list of ignored packages and then restoring any\n resources that were unloaded back to the views that were using them.\n '
ignored_packages = self.settings.get('ignored_packages', [])
if (bootstrap_pkg in ignored_packages):
ignored_packages.remove(bootstrap_pkg)
self.settings.set('ignored_packages', ignored_packages)
if reenable_resources:
sublime.set_timeout_async((lambda : self.enable_resources())) | def enable_package(self, reenable_resources):
'\n Enables the system bootstrap package (if it exists) by ensuring that\n it is not in the list of ignored packages and then restoring any\n resources that were unloaded back to the views that were using them.\n '
ignored_packages = self.settings.get('ignored_packages', [])
if (bootstrap_pkg in ignored_packages):
ignored_packages.remove(bootstrap_pkg)
self.settings.set('ignored_packages', ignored_packages)
if reenable_resources:
sublime.set_timeout_async((lambda : self.enable_resources()))<|docstring|>Enables the system bootstrap package (if it exists) by ensuring that
it is not in the list of ignored packages and then restoring any
resources that were unloaded back to the views that were using them.<|endoftext|> |
dbc944e039b05f147d38b785c3af9292b9c77a17605bd55664eeeebde0737a52 | def disable_package(self):
'\n Disables the system bootstrap package (if it exists) by ensuring that\n none of the resources that it provides are currently in use and then\n adding it to the list of ignored packages so that Sublime will unload\n it.\n '
self.disable_resources()
ignored_packages = self.settings.get('ignored_packages', [])
if (bootstrap_pkg not in ignored_packages):
ignored_packages.append(bootstrap_pkg)
self.settings.set('ignored_packages', ignored_packages) | Disables the system bootstrap package (if it exists) by ensuring that
none of the resources that it provides are currently in use and then
adding it to the list of ignored packages so that Sublime will unload
it. | package_bootstrap/core/bootstrapper.py | disable_package | ginanjarn/SublimeScraps | 61 | python | def disable_package(self):
'\n Disables the system bootstrap package (if it exists) by ensuring that\n none of the resources that it provides are currently in use and then\n adding it to the list of ignored packages so that Sublime will unload\n it.\n '
self.disable_resources()
ignored_packages = self.settings.get('ignored_packages', [])
if (bootstrap_pkg not in ignored_packages):
ignored_packages.append(bootstrap_pkg)
self.settings.set('ignored_packages', ignored_packages) | def disable_package(self):
'\n Disables the system bootstrap package (if it exists) by ensuring that\n none of the resources that it provides are currently in use and then\n adding it to the list of ignored packages so that Sublime will unload\n it.\n '
self.disable_resources()
ignored_packages = self.settings.get('ignored_packages', [])
if (bootstrap_pkg not in ignored_packages):
ignored_packages.append(bootstrap_pkg)
self.settings.set('ignored_packages', ignored_packages)<|docstring|>Disables the system bootstrap package (if it exists) by ensuring that
none of the resources that it provides are currently in use and then
adding it to the list of ignored packages so that Sublime will unload
it.<|endoftext|> |
fe89e9a757e3b0613c390e91093410ae00433f29fe30ba3e2e77df7bd1065bf6 | def enable_resources(self):
'\n Enables all resources being provided by the system bootstrap package by\n restoring the state that was saved when the resources were disabled.\n '
for window in sublime.windows():
for view in window.views():
s = view.settings()
old_syntax = s.get('_mp_boot_syntax', None)
if (old_syntax is not None):
s.set('syntax', old_syntax)
s.erase('_mp_boot_syntax') | Enables all resources being provided by the system bootstrap package by
restoring the state that was saved when the resources were disabled. | package_bootstrap/core/bootstrapper.py | enable_resources | ginanjarn/SublimeScraps | 61 | python | def enable_resources(self):
'\n Enables all resources being provided by the system bootstrap package by\n restoring the state that was saved when the resources were disabled.\n '
for window in sublime.windows():
for view in window.views():
s = view.settings()
old_syntax = s.get('_mp_boot_syntax', None)
if (old_syntax is not None):
s.set('syntax', old_syntax)
s.erase('_mp_boot_syntax') | def enable_resources(self):
'\n Enables all resources being provided by the system bootstrap package by\n restoring the state that was saved when the resources were disabled.\n '
for window in sublime.windows():
for view in window.views():
s = view.settings()
old_syntax = s.get('_mp_boot_syntax', None)
if (old_syntax is not None):
s.set('syntax', old_syntax)
s.erase('_mp_boot_syntax')<|docstring|>Enables all resources being provided by the system bootstrap package by
restoring the state that was saved when the resources were disabled.<|endoftext|> |
13cf899443517f8bc98e3628153df8bc78442f20a2c10f745ca757fb73a6b77d | def disable_resources(self):
'\n Disables all resources being provided by the system bootstrap package\n by saving the state of items that are using them and then reverting\n them to temporary defaults.\n '
prefix = 'Packages/{pkg}/'.format(pkg=bootstrap_pkg)
for window in sublime.windows():
for view in window.views():
s = view.settings()
syntax = s.get('syntax')
if syntax.startswith(prefix):
s.set('_mp_boot_syntax', syntax)
s.set('syntax', 'Packages/Text/Plain text.tmLanguage') | Disables all resources being provided by the system bootstrap package
by saving the state of items that are using them and then reverting
them to temporary defaults. | package_bootstrap/core/bootstrapper.py | disable_resources | ginanjarn/SublimeScraps | 61 | python | def disable_resources(self):
'\n Disables all resources being provided by the system bootstrap package\n by saving the state of items that are using them and then reverting\n them to temporary defaults.\n '
prefix = 'Packages/{pkg}/'.format(pkg=bootstrap_pkg)
for window in sublime.windows():
for view in window.views():
s = view.settings()
syntax = s.get('syntax')
if syntax.startswith(prefix):
s.set('_mp_boot_syntax', syntax)
s.set('syntax', 'Packages/Text/Plain text.tmLanguage') | def disable_resources(self):
'\n Disables all resources being provided by the system bootstrap package\n by saving the state of items that are using them and then reverting\n them to temporary defaults.\n '
prefix = 'Packages/{pkg}/'.format(pkg=bootstrap_pkg)
for window in sublime.windows():
for view in window.views():
s = view.settings()
syntax = s.get('syntax')
if syntax.startswith(prefix):
s.set('_mp_boot_syntax', syntax)
s.set('syntax', 'Packages/Text/Plain text.tmLanguage')<|docstring|>Disables all resources being provided by the system bootstrap package
by saving the state of items that are using them and then reverting
them to temporary defaults.<|endoftext|> |
44bca1c4440f7aa4157e3f675860262a7b5e327861fd1a20d82008c9b0da5f00 | def create_boot_loader(self, stub_loader_name):
'\n Given the name of a file containing a stub system bootstrap loader,\n return the body of a loader that contains the version number of the\n core dependency.\n '
try:
from package_bootstrap import version as ver_info
with codecs.open(stub_loader_name, 'r', 'utf-8') as file:
content = file.read()
return re.sub('^__core_version_tuple\\s+=\\s+\\(.*\\)$', '__core_version_tuple = {version}'.format(version=str(ver_info())), content, count=1, flags=re.MULTILINE)
except:
log('Bootstrap error: Unable to create bootloader')
raise | Given the name of a file containing a stub system bootstrap loader,
return the body of a loader that contains the version number of the
core dependency. | package_bootstrap/core/bootstrapper.py | create_boot_loader | ginanjarn/SublimeScraps | 61 | python | def create_boot_loader(self, stub_loader_name):
'\n Given the name of a file containing a stub system bootstrap loader,\n return the body of a loader that contains the version number of the\n core dependency.\n '
try:
from package_bootstrap import version as ver_info
with codecs.open(stub_loader_name, 'r', 'utf-8') as file:
content = file.read()
return re.sub('^__core_version_tuple\\s+=\\s+\\(.*\\)$', '__core_version_tuple = {version}'.format(version=str(ver_info())), content, count=1, flags=re.MULTILINE)
except:
log('Bootstrap error: Unable to create bootloader')
raise | def create_boot_loader(self, stub_loader_name):
'\n Given the name of a file containing a stub system bootstrap loader,\n return the body of a loader that contains the version number of the\n core dependency.\n '
try:
from package_bootstrap import version as ver_info
with codecs.open(stub_loader_name, 'r', 'utf-8') as file:
content = file.read()
return re.sub('^__core_version_tuple\\s+=\\s+\\(.*\\)$', '__core_version_tuple = {version}'.format(version=str(ver_info())), content, count=1, flags=re.MULTILINE)
except:
log('Bootstrap error: Unable to create bootloader')
raise<|docstring|>Given the name of a file containing a stub system bootstrap loader,
return the body of a loader that contains the version number of the
core dependency.<|endoftext|> |
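The version-stamping in create_boot_loader is a single anchored re.sub over the stub text; below is a self-contained approximation in which the stub content and version tuple are invented for illustration.

import re

# Made-up stub loader text; the real stub ships with the package resources.
stub = (
    "__core_version_tuple = (0, 0, 0)\n"
    "def plugin_loaded():\n"
    "    pass\n"
)
version = (1, 2, 3)

# Replace only the version assignment line, as in the record above.
stamped = re.sub(
    r'^__core_version_tuple\s+=\s+\(.*\)$',
    '__core_version_tuple = {version}'.format(version=str(version)),
    stub,
    count=1,
    flags=re.MULTILINE,
)
print(stamped.splitlines()[0])   # __core_version_tuple = (1, 2, 3)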
46426d421200a42a9668cd80db1f74968044796cb5c21c6778892b6f009d05b4 | def create_bootstrap_package(self, package, res_path):
'\n Perform the task of actually creating the system bootstrap package from\n files in the given resource folder into the provided package.\n '
try:
success = True
boot_file = '{file}.py'.format(file=bootloader)
with ZipFile(package, 'w') as zFile:
for (path, dirs, files) in os.walk(res_path):
rPath = (relpath(path, res_path) if (path != res_path) else '')
for file in files:
real_file = join(res_path, path, file)
archive_file = join(rPath, file)
if archive_file.endswith('.sublime-ignored'):
archive_file = archive_file[:(- len('.sublime-ignored'))]
if (archive_file == boot_file):
content = self.create_boot_loader(real_file)
zFile.writestr(archive_file, content)
else:
zFile.write(real_file, archive_file)
except Exception as err:
success = False
log('Bootstrap error: {reason}', reason=str(err))
if os.path.exists(package):
os.remove(package)
return success | Perform the task of actually creating the system bootstrap package from
files in the given resource folder into the provided package. | package_bootstrap/core/bootstrapper.py | create_bootstrap_package | ginanjarn/SublimeScraps | 61 | python | def create_bootstrap_package(self, package, res_path):
'\n Perform the task of actually creating the system bootstrap package from\n files in the given resource folder into the provided package.\n '
try:
success = True
boot_file = '{file}.py'.format(file=bootloader)
with ZipFile(package, 'w') as zFile:
for (path, dirs, files) in os.walk(res_path):
rPath = (relpath(path, res_path) if (path != res_path) else '')
for file in files:
real_file = join(res_path, path, file)
archive_file = join(rPath, file)
if archive_file.endswith('.sublime-ignored'):
archive_file = archive_file[:(- len('.sublime-ignored'))]
if (archive_file == boot_file):
content = self.create_boot_loader(real_file)
zFile.writestr(archive_file, content)
else:
zFile.write(real_file, archive_file)
except Exception as err:
success = False
log('Bootstrap error: {reason}', reason=str(err))
if os.path.exists(package):
os.remove(package)
return success | def create_bootstrap_package(self, package, res_path):
'\n Perform the task of actually creating the system bootstrap package from\n files in the given resource folder into the provided package.\n '
try:
success = True
boot_file = '{file}.py'.format(file=bootloader)
with ZipFile(package, 'w') as zFile:
for (path, dirs, files) in os.walk(res_path):
rPath = (relpath(path, res_path) if (path != res_path) else '')
for file in files:
real_file = join(res_path, path, file)
archive_file = join(rPath, file)
if archive_file.endswith('.sublime-ignored'):
archive_file = archive_file[:(- len('.sublime-ignored'))]
if (archive_file == boot_file):
content = self.create_boot_loader(real_file)
zFile.writestr(archive_file, content)
else:
zFile.write(real_file, archive_file)
except Exception as err:
success = False
log('Bootstrap error: {reason}', reason=str(err))
if os.path.exists(package):
os.remove(package)
return success<|docstring|>Perform the task of actually creating the system bootstrap package from
files in the given resource folder into the provided package.<|endoftext|> |
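The packaging loop above is ordinary zipfile writing plus a rename rule for *.sublime-ignored files; the fragment below sketches just that rule on in-memory data, with file names invented for the example.

import io
import zipfile

# Pretend resource files (archive path -> content); the names are made up.
resources = {
    'bootloader.py.sublime-ignored': b'# stub loader\n',
    'syntax/demo.sublime-syntax': b'name: Demo\n',
}

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w') as z:
    for archive_file, data in resources.items():
        # Strip the marker suffix so the file gets its real name inside the package.
        if archive_file.endswith('.sublime-ignored'):
            archive_file = archive_file[:-len('.sublime-ignored')]
        z.writestr(archive_file, data)

with zipfile.ZipFile(buffer) as z:
    print(z.namelist())   # ['bootloader.py', 'syntax/demo.sublime-syntax']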
09f0593fceef9dc75b57119b3dd3eeb36836ef0f4f3c03f9f0d29a5542bbd437 | def run(self):
'\n Creates or updates the system bootstrap package by packaging up the\n contents of the resource directory.\n '
self.disable_package()
res_path = normpath(join(dirname(__file__), '..', bootstrap_pkg))
package = join(sublime.installed_packages_path(), (bootstrap_pkg + '.sublime-package'))
prefix = os.path.commonprefix([res_path, package])
log('Bootstrapping {path} to {pkg}', path=res_path[len(prefix):], pkg=package[len(prefix):])
pkg_existed = os.path.isfile(package)
success = self.create_bootstrap_package(package, res_path)
self.enable_package(success)
if (not success):
return log('\n An error was encountered while updating my_package.\n\n Please check the console to see what went wrong.\n my_package will not be available until the problem\n is resolved.\n ', error=True)
if pkg_existed:
log('\n my_package has been updated!\n\n In order to complete the update, restart Sublime\n Text.\n ', dialog=True)
else:
log('\n my_package has been installed!\n ', dialog=True) | Creates or updates the system bootstrap package by packaging up the
contents of the resource directory. | package_bootstrap/core/bootstrapper.py | run | ginanjarn/SublimeScraps | 61 | python | def run(self):
'\n Creates or updates the system bootstrap package by packaging up the\n contents of the resource directory.\n '
self.disable_package()
res_path = normpath(join(dirname(__file__), '..', bootstrap_pkg))
package = join(sublime.installed_packages_path(), (bootstrap_pkg + '.sublime-package'))
prefix = os.path.commonprefix([res_path, package])
log('Bootstrapping {path} to {pkg}', path=res_path[len(prefix):], pkg=package[len(prefix):])
pkg_existed = os.path.isfile(package)
success = self.create_bootstrap_package(package, res_path)
self.enable_package(success)
if (not success):
return log('\n An error was encountered while updating my_package.\n\n Please check the console to see what went wrong.\n my_package will not be available until the problem\n is resolved.\n ', error=True)
if pkg_existed:
log('\n my_package has been updated!\n\n In order to complete the update, restart Sublime\n Text.\n ', dialog=True)
else:
log('\n my_package has been installed!\n ', dialog=True) | def run(self):
'\n Creates or updates the system bootstrap package by packaging up the\n contents of the resource directory.\n '
self.disable_package()
res_path = normpath(join(dirname(__file__), '..', bootstrap_pkg))
package = join(sublime.installed_packages_path(), (bootstrap_pkg + '.sublime-package'))
prefix = os.path.commonprefix([res_path, package])
log('Bootstrapping {path} to {pkg}', path=res_path[len(prefix):], pkg=package[len(prefix):])
pkg_existed = os.path.isfile(package)
success = self.create_bootstrap_package(package, res_path)
self.enable_package(success)
if (not success):
return log('\n An error was encountered while updating my_package.\n\n Please check the console to see what went wrong.\n my_package will not be available until the problem\n is resolved.\n ', error=True)
if pkg_existed:
log('\n my_package has been updated!\n\n In order to complete the update, restart Sublime\n Text.\n ', dialog=True)
else:
log('\n my_package has been installed!\n ', dialog=True)<|docstring|>Creates or updates the system bootstrap package by packaging up the
contents of the resource directory.<|endoftext|> |
079abe75279801dd0f14f29fa39206e1e31ffd135890d2e8fbfa3958412bbbc6 | def nice_print(message, colour=None, indent=0, debug=False):
'\n Just prints things in colour with consistent indentation\n '
if (debug and (not args.debug)):
return
if (colour == None):
if ('OK' in message):
colour = 'green'
elif ('ERROR' in message):
colour = 'red'
print(colored('{0}{1}'.format(((' ' * indent) * 2), message), colour)) | Just prints things in colour with consistent indentation | scripts/streamcleaner.py | nice_print | bitsbb01/m3u8_creator | 31 | python | def nice_print(message, colour=None, indent=0, debug=False):
'\n \n '
if (debug and (not args.debug)):
return
if (colour == None):
if ('OK' in message):
colour = 'green'
elif ('ERROR' in message):
colour = 'red'
print(colored('{0}{1}'.format(((' ' * indent) * 2), message), colour)) | def nice_print(message, colour=None, indent=0, debug=False):
'\n \n '
if (debug and (not args.debug)):
return
if (colour == None):
if ('OK' in message):
colour = 'green'
elif ('ERROR' in message):
colour = 'red'
print(colored('{0}{1}'.format(((' ' * indent) * 2), message), colour))<|docstring|>Just prints things in colour with consistent indentation<|endoftext|> |
b6de3cc46c8d4b1148f3c0afb1ed0326964d913f68d1eebe7a4057da055d0df3 | def verify_video_link(url, timeout, indent=1):
'\n Verifies a video stream link works\n '
nice_print('Loading video: {0}'.format(url), indent=indent, debug=True)
indent = (indent + 1)
parsed_url = urlparse(url)
if (not parsed_url.path.endswith('.ts')):
nice_print('ERROR unsupported video file: {0}'.format(url))
return False
try:
r = requests.head(url, timeout=(timeout, timeout))
except Exception as e:
nice_print('ERROR loading video URL: {0}'.format(str(e)[:100]), indent=indent, debug=True)
return False
else:
video_stream = (('Content-Type' in r.headers) and (('video' in r.headers['Content-Type']) or ('octet-stream' in r.headers['Content-Type'])))
if (r.status_code != 200):
nice_print('ERROR {0} video URL'.format(r.status_code, indent=indent, debug=True))
return False
elif video_stream:
nice_print('OK loading video data', indent=indent, debug=True)
return True
else:
nice_print('ERROR unknown URL: {0}'.format(url, indent=indent, debug=True))
return False | Verifies a video stream link works | scripts/streamcleaner.py | verify_video_link | bitsbb01/m3u8_creator | 31 | python | def verify_video_link(url, timeout, indent=1):
'\n \n '
nice_print('Loading video: {0}'.format(url), indent=indent, debug=True)
indent = (indent + 1)
parsed_url = urlparse(url)
if (not parsed_url.path.endswith('.ts')):
nice_print('ERROR unsupported video file: {0}'.format(url))
return False
try:
r = requests.head(url, timeout=(timeout, timeout))
except Exception as e:
nice_print('ERROR loading video URL: {0}'.format(str(e)[:100]), indent=indent, debug=True)
return False
else:
video_stream = (('Content-Type' in r.headers) and (('video' in r.headers['Content-Type']) or ('octet-stream' in r.headers['Content-Type'])))
if (r.status_code != 200):
nice_print('ERROR {0} video URL'.format(r.status_code, indent=indent, debug=True))
return False
elif video_stream:
nice_print('OK loading video data', indent=indent, debug=True)
return True
else:
nice_print('ERROR unknown URL: {0}'.format(url, indent=indent, debug=True))
return False | def verify_video_link(url, timeout, indent=1):
'\n \n '
nice_print('Loading video: {0}'.format(url), indent=indent, debug=True)
indent = (indent + 1)
parsed_url = urlparse(url)
if (not parsed_url.path.endswith('.ts')):
nice_print('ERROR unsupported video file: {0}'.format(url))
return False
try:
r = requests.head(url, timeout=(timeout, timeout))
except Exception as e:
nice_print('ERROR loading video URL: {0}'.format(str(e)[:100]), indent=indent, debug=True)
return False
else:
video_stream = (('Content-Type' in r.headers) and (('video' in r.headers['Content-Type']) or ('octet-stream' in r.headers['Content-Type'])))
if (r.status_code != 200):
nice_print('ERROR {0} video URL'.format(r.status_code, indent=indent, debug=True))
return False
elif video_stream:
nice_print('OK loading video data', indent=indent, debug=True)
return True
else:
nice_print('ERROR unknown URL: {0}'.format(url, indent=indent, debug=True))
return False<|docstring|>Verifies a video stream link works<|endoftext|> |
16d4576746583f4e5ff09c3d762525dd9442da8284a0e6b03a232905364977b8 | def verify_playlist_item(item, timeout):
'\n Tests playlist url for valid m3u8 data\n '
nice_title = item['metadata'].split(',')[(- 1)]
nice_print('{0} | {1}'.format(nice_title, item['url']), colour='yellow')
indent = 1
if item['url'].endswith('.ts'):
if verify_video_link(item['url'], timeout, indent):
nice_print('OK video data', indent=indent)
return True
else:
nice_print('ERROR video data', indent=indent)
return False
elif item['url'].endswith('.m3u8'):
if verify_playlist_link(item['url'], timeout, indent):
nice_print('OK playlist data', indent=indent)
return True
else:
nice_print('ERROR playlist data', indent=indent)
return False
else:
try:
r = requests.head(item['url'], timeout=(timeout, timeout))
except Exception as e:
nice_print('ERROR loading URL: {0}'.format(str(e)[:100]), indent=indent, debug=True)
return False
else:
video_stream = (('Content-Type' in r.headers) and (('video' in r.headers['Content-Type']) or ('octet-stream' in r.headers['Content-Type'])))
playlist_link = (('Content-Type' in r.headers) and ('x-mpegurl' in r.headers['Content-Type']))
if (r.status_code != 200):
nice_print('ERROR {0} loading URL: {1}'.format(r.status_code, item['url']), indent=indent, debug=True)
return False
elif video_stream:
nice_print('OK loading video data', indent=indent, debug=True)
return True
elif playlist_link:
return verify_playlist_link(item['url'], timeout, (indent + 1))
else:
nice_print('ERROR unknown URL: {0}'.format(item['url']), indent=indent, debug=True)
return False | Tests playlist url for valid m3u8 data | scripts/streamcleaner.py | verify_playlist_item | bitsbb01/m3u8_creator | 31 | python | def verify_playlist_item(item, timeout):
'\n \n '
nice_title = item['metadata'].split(',')[(- 1)]
nice_print('{0} | {1}'.format(nice_title, item['url']), colour='yellow')
indent = 1
if item['url'].endswith('.ts'):
if verify_video_link(item['url'], timeout, indent):
nice_print('OK video data', indent=indent)
return True
else:
nice_print('ERROR video data', indent=indent)
return False
elif item['url'].endswith('.m3u8'):
if verify_playlist_link(item['url'], timeout, indent):
nice_print('OK playlist data', indent=indent)
return True
else:
nice_print('ERROR playlist data', indent=indent)
return False
else:
try:
r = requests.head(item['url'], timeout=(timeout, timeout))
except Exception as e:
nice_print('ERROR loading URL: {0}'.format(str(e)[:100]), indent=indent, debug=True)
return False
else:
video_stream = (('Content-Type' in r.headers) and (('video' in r.headers['Content-Type']) or ('octet-stream' in r.headers['Content-Type'])))
playlist_link = (('Content-Type' in r.headers) and ('x-mpegurl' in r.headers['Content-Type']))
if (r.status_code != 200):
nice_print('ERROR {0} loading URL: {1}'.format(r.status_code, item['url']), indent=indent, debug=True)
return False
elif video_stream:
nice_print('OK loading video data', indent=indent, debug=True)
return True
elif playlist_link:
return verify_playlist_link(item['url'], timeout, (indent + 1))
else:
nice_print('ERROR unknown URL: {0}'.format(item['url']), indent=indent, debug=True)
return False | def verify_playlist_item(item, timeout):
'\n \n '
nice_title = item['metadata'].split(',')[(- 1)]
nice_print('{0} | {1}'.format(nice_title, item['url']), colour='yellow')
indent = 1
if item['url'].endswith('.ts'):
if verify_video_link(item['url'], timeout, indent):
nice_print('OK video data', indent=indent)
return True
else:
nice_print('ERROR video data', indent=indent)
return False
elif item['url'].endswith('.m3u8'):
if verify_playlist_link(item['url'], timeout, indent):
nice_print('OK playlist data', indent=indent)
return True
else:
nice_print('ERROR playlist data', indent=indent)
return False
else:
try:
r = requests.head(item['url'], timeout=(timeout, timeout))
except Exception as e:
nice_print('ERROR loading URL: {0}'.format(str(e)[:100]), indent=indent, debug=True)
return False
else:
video_stream = (('Content-Type' in r.headers) and (('video' in r.headers['Content-Type']) or ('octet-stream' in r.headers['Content-Type'])))
playlist_link = (('Content-Type' in r.headers) and ('x-mpegurl' in r.headers['Content-Type']))
if (r.status_code != 200):
nice_print('ERROR {0} loading URL: {1}'.format(r.status_code, item['url']), indent=indent, debug=True)
return False
elif video_stream:
nice_print('OK loading video data', indent=indent, debug=True)
return True
elif playlist_link:
return verify_playlist_link(item['url'], timeout, (indent + 1))
else:
nice_print('ERROR unknown URL: {0}'.format(item['url']), indent=indent, debug=True)
return False<|docstring|>Tests playlist url for valid m3u8 data<|endoftext|> |
77150a027915290731b0ba2c86bf15f54e8fdf32b3f5eb69f1f3bb70e977a6d9 | def filter_streams(m3u_files, timeout, blacklist_file):
'\n Returns filtered streams from a m3u file as a list\n '
blacklist = []
if blacklist_file:
try:
with open(blacklist_file) as f:
content = f.readlines()
blacklist = [x.strip() for x in content]
except Exception as e:
print('blacklist file issue: {0} {1}'.format(type(e), e))
sys.exit(1)
else:
print('Successfully loaded {0} blacklisted urls.'.format(len(blacklist)))
playlist_items = []
num_blacklisted = 0
for m3u_file in m3u_files:
try:
with open(m3u_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
except IsADirectoryError:
continue
if ((content[0] != '#EXTM3U') and (content[0].encode('ascii', 'ignore').decode('utf-8').strip() != '#EXTM3U')):
raise Exception('Invalid file, no EXTM3U header in "{0}"'.format(m3u_file))
url_indexes = [i for (i, s) in enumerate(content) if s.startswith('http')]
if (len(url_indexes) < 1):
raise Exception('Invalid file, no URLs')
for u in url_indexes:
if (content[u] in blacklist):
num_blacklisted += 1
else:
detail = {'metadata': content[(u - 1)], 'url': content[u]}
playlist_items.append(detail)
if num_blacklisted:
print('Input list already reduced by {0} items, because those urls are on the blacklist.'.format(num_blacklisted))
print('Input list now has {0} entries, patience please. Timeout for each test is {1} seconds.'.format(len(playlist_items), timeout))
filtered_playlist_items = [item for item in playlist_items if verify_playlist_item(item, timeout)]
print('{0} items filtered out of {1} in total'.format((len(playlist_items) - len(filtered_playlist_items)), len(playlist_items)))
return filtered_playlist_items | Returns filtered streams from a m3u file as a list | scripts/streamcleaner.py | filter_streams | bitsbb01/m3u8_creator | 31 | python | def filter_streams(m3u_files, timeout, blacklist_file):
'\n \n '
blacklist = []
if blacklist_file:
try:
with open(blacklist_file) as f:
content = f.readlines()
blacklist = [x.strip() for x in content]
except Exception as e:
print('blacklist file issue: {0} {1}'.format(type(e), e))
sys.exit(1)
else:
print('Successfully loaded {0} blacklisted urls.'.format(len(blacklist)))
playlist_items = []
num_blacklisted = 0
for m3u_file in m3u_files:
try:
with open(m3u_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
except IsADirectoryError:
continue
if ((content[0] != '#EXTM3U') and (content[0].encode('ascii', 'ignore').decode('utf-8').strip() != '#EXTM3U')):
raise Exception('Invalid file, no EXTM3U header in "{0}"'.format(m3u_file))
url_indexes = [i for (i, s) in enumerate(content) if s.startswith('http')]
if (len(url_indexes) < 1):
raise Exception('Invalid file, no URLs')
for u in url_indexes:
if (content[u] in blacklist):
num_blacklisted += 1
else:
detail = {'metadata': content[(u - 1)], 'url': content[u]}
playlist_items.append(detail)
if num_blacklisted:
print('Input list already reduced by {0} items, because those urls are on the blacklist.'.format(num_blacklisted))
print('Input list now has {0} entries, patience please. Timeout for each test is {1} seconds.'.format(len(playlist_items), timeout))
filtered_playlist_items = [item for item in playlist_items if verify_playlist_item(item, timeout)]
print('{0} items filtered out of {1} in total'.format((len(playlist_items) - len(filtered_playlist_items)), len(playlist_items)))
return filtered_playlist_items | def filter_streams(m3u_files, timeout, blacklist_file):
'\n \n '
blacklist = []
if blacklist_file:
try:
with open(blacklist_file) as f:
content = f.readlines()
blacklist = [x.strip() for x in content]
except Exception as e:
print('blacklist file issue: {0} {1}'.format(type(e), e))
sys.exit(1)
else:
print('Successfully loaded {0} blacklisted urls.'.format(len(blacklist)))
playlist_items = []
num_blacklisted = 0
for m3u_file in m3u_files:
try:
with open(m3u_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
except IsADirectoryError:
continue
if ((content[0] != '#EXTM3U') and (content[0].encode('ascii', 'ignore').decode('utf-8').strip() != '#EXTM3U')):
raise Exception('Invalid file, no EXTM3U header in "{0}"'.format(m3u_file))
url_indexes = [i for (i, s) in enumerate(content) if s.startswith('http')]
if (len(url_indexes) < 1):
raise Exception('Invalid file, no URLs')
for u in url_indexes:
if (content[u] in blacklist):
num_blacklisted += 1
else:
detail = {'metadata': content[(u - 1)], 'url': content[u]}
playlist_items.append(detail)
if num_blacklisted:
print('Input list already reduced by {0} items, because those urls are on the blacklist.'.format(num_blacklisted))
print('Input list now has {0} entries, patience please. Timeout for each test is {1} seconds.'.format(len(playlist_items), timeout))
filtered_playlist_items = [item for item in playlist_items if verify_playlist_item(item, timeout)]
print('{0} items filtered out of {1} in total'.format((len(playlist_items) - len(filtered_playlist_items)), len(playlist_items)))
return filtered_playlist_items<|docstring|>Returns filtered streams from a m3u file as a list<|endoftext|> |
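The parsing step in filter_streams pairs each URL line with the #EXTINF line directly above it; a tiny standalone version is shown below, the playlist text being invented for the example.

# Minimal stand-in for the metadata/url pairing done in filter_streams.
sample_m3u = """#EXTM3U
#EXTINF:-1,Example Channel One
http://example.com/stream1.m3u8
#EXTINF:-1,Example Channel Two
http://example.com/stream2.ts
"""

content = [line.strip() for line in sample_m3u.splitlines()]
assert content[0] == '#EXTM3U'

playlist_items = []
for i, line in enumerate(content):
    if line.startswith('http'):
        playlist_items.append({'metadata': content[i - 1], 'url': line})

for item in playlist_items:
    print(item['metadata'].split(',')[-1], '->', item['url'])
# Example Channel One -> http://example.com/stream1.m3u8
# Example Channel Two -> http://example.com/stream2.ts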
01efaef72e2cbca41947636a0293c31139abc6f02194c88d595f192412a54699 | def init_gst(options=None):
'Initializes the GStreamer library'
Gst.init(options) | Initializes the GStreamer library | src/animasnd/video.py | init_gst | N-z0/commonz | 0 | python | def init_gst(options=None):
Gst.init(options) | def init_gst(options=None):
Gst.init(options)<|docstring|>Initializes the GStreamer library<|endoftext|> |
77f94d716b2650833c133d007a64b4716d59abf0e3f36de27928c4c0ace84d05 | def get_frame(video_file, moment, output_image_file):
' save as image a frame from video'
caps = Gst.Caps('image/png')
pipeline = Gst.ElementFactory.make('playbin', 'new_playbin')
pipeline.set_property('uri', ('file://' + video_file))
pipeline.set_state(Gst.State.PLAYING)
time.sleep(0.5)
seek_time = (moment * Gst.SECOND)
pipeline.seek(1.0, Gst.Format.TIME, (Gst.SeekFlags.FLUSH | Gst.SeekFlags.ACCURATE), Gst.SeekType.SET, seek_time, Gst.SeekType.NONE, (- 1))
time.sleep(1)
buffer = pipeline.emit('convert-sample', caps)
buff = buffer.get_buffer()
(result, map) = buff.map(Gst.MapFlags.READ)
if result:
data = map.data
pipeline.set_state(Gst.State.NULL)
with open(output_image_file, 'wb') as snapshot:
snapshot.write(data)
return True
else:
return False | save as image a frame from video | src/animasnd/video.py | get_frame | N-z0/commonz | 0 | python | def get_frame(video_file, moment, output_image_file):
' '
caps = Gst.Caps('image/png')
pipeline = Gst.ElementFactory.make('playbin', 'new_playbin')
pipeline.set_property('uri', ('file://' + video_file))
pipeline.set_state(Gst.State.PLAYING)
time.sleep(0.5)
seek_time = (moment * Gst.SECOND)
pipeline.seek(1.0, Gst.Format.TIME, (Gst.SeekFlags.FLUSH | Gst.SeekFlags.ACCURATE), Gst.SeekType.SET, seek_time, Gst.SeekType.NONE, (- 1))
time.sleep(1)
buffer = pipeline.emit('convert-sample', caps)
buff = buffer.get_buffer()
(result, map) = buff.map(Gst.MapFlags.READ)
if result:
data = map.data
pipeline.set_state(Gst.State.NULL)
with open(output_image_file, 'wb') as snapshot:
snapshot.write(data)
return True
else:
return False | def get_frame(video_file, moment, output_image_file):
' '
caps = Gst.Caps('image/png')
pipeline = Gst.ElementFactory.make('playbin', 'new_playbin')
pipeline.set_property('uri', ('file://' + video_file))
pipeline.set_state(Gst.State.PLAYING)
time.sleep(0.5)
seek_time = (moment * Gst.SECOND)
pipeline.seek(1.0, Gst.Format.TIME, (Gst.SeekFlags.FLUSH | Gst.SeekFlags.ACCURATE), Gst.SeekType.SET, seek_time, Gst.SeekType.NONE, (- 1))
time.sleep(1)
buffer = pipeline.emit('convert-sample', caps)
buff = buffer.get_buffer()
(result, map) = buff.map(Gst.MapFlags.READ)
if result:
data = map.data
pipeline.set_state(Gst.State.NULL)
with open(output_image_file, 'wb') as snapshot:
snapshot.write(data)
return True
else:
return False<|docstring|>save as image a frame from video<|endoftext|> |
be8f3f845f8a51c80c721052b9aa42b59ab47c39c2bc8d592422ed168569652b | def __setitem__(self, key: str, value: str) -> None:
'\n Set the header `key` to `value`, removing any duplicate entries.\n Retains insertion order.\n '
set_key = key.lower().encode('latin-1')
set_value = value.encode('latin-1')
found_indexes = []
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == set_key):
found_indexes.append(idx)
for idx in reversed(found_indexes[1:]):
del self._list[idx]
if found_indexes:
idx = found_indexes[0]
self._list[idx] = (set_key, set_value)
else:
self._list.append((set_key, set_value)) | Set the header `key` to `value`, removing any duplicate entries.
Retains insertion order. | starlette/datastructures.py | __setitem__ | Dith3r/starlette | 0 | python | def __setitem__(self, key: str, value: str) -> None:
'\n Set the header `key` to `value`, removing any duplicate entries.\n Retains insertion order.\n '
set_key = key.lower().encode('latin-1')
set_value = value.encode('latin-1')
found_indexes = []
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == set_key):
found_indexes.append(idx)
for idx in reversed(found_indexes[1:]):
del self._list[idx]
if found_indexes:
idx = found_indexes[0]
self._list[idx] = (set_key, set_value)
else:
self._list.append((set_key, set_value)) | def __setitem__(self, key: str, value: str) -> None:
'\n Set the header `key` to `value`, removing any duplicate entries.\n Retains insertion order.\n '
set_key = key.lower().encode('latin-1')
set_value = value.encode('latin-1')
found_indexes = []
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == set_key):
found_indexes.append(idx)
for idx in reversed(found_indexes[1:]):
del self._list[idx]
if found_indexes:
idx = found_indexes[0]
self._list[idx] = (set_key, set_value)
else:
self._list.append((set_key, set_value))<|docstring|>Set the header `key` to `value`, removing any duplicate entries.
Retains insertion order.<|endoftext|> |
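The de-duplicating assignment above can be exercised on a plain list of (key, value) byte pairs; the helper below is a free-function re-statement written for illustration, not the starlette API itself.

def set_header(raw_list, key, value):
    # Keep the first matching slot, drop later duplicates, preserve order.
    set_key = key.lower().encode('latin-1')
    set_value = value.encode('latin-1')
    found = [i for i, (k, _) in enumerate(raw_list) if k == set_key]
    for i in reversed(found[1:]):
        del raw_list[i]
    if found:
        raw_list[found[0]] = (set_key, set_value)
    else:
        raw_list.append((set_key, set_value))

raw = [(b'accept', b'text/html'), (b'x-token', b'a'), (b'x-token', b'b')]
set_header(raw, 'X-Token', 'c')
print(raw)   # [(b'accept', b'text/html'), (b'x-token', b'c')]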
83020ed252b310f2a670b953c3e97443e5a440d8bb0ce92273351cfb4ff845cf | def __delitem__(self, key: str) -> None:
'\n Remove the header `key`.\n '
del_key = key.lower().encode('latin-1')
pop_indexes = []
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == del_key):
pop_indexes.append(idx)
for idx in reversed(pop_indexes):
del self._list[idx] | Remove the header `key`. | starlette/datastructures.py | __delitem__ | Dith3r/starlette | 0 | python | def __delitem__(self, key: str) -> None:
'\n \n '
del_key = key.lower().encode('latin-1')
pop_indexes = []
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == del_key):
pop_indexes.append(idx)
for idx in reversed(pop_indexes):
del self._list[idx] | def __delitem__(self, key: str) -> None:
'\n \n '
del_key = key.lower().encode('latin-1')
pop_indexes = []
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == del_key):
pop_indexes.append(idx)
for idx in reversed(pop_indexes):
del self._list[idx]<|docstring|>Remove the header `key`.<|endoftext|> |
b406cfd97ffcdddcebe8eeb359119117d4033c0bf1f92151e2124970d9a2b175 | def setdefault(self, key: str, value: str) -> str:
'\n If the header `key` does not exist, then set it to `value`.\n Returns the header value.\n '
set_key = key.lower().encode('latin-1')
set_value = value.encode('latin-1')
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == set_key):
return item_value.decode('latin-1')
self._list.append((set_key, set_value))
return value | If the header `key` does not exist, then set it to `value`.
Returns the header value. | starlette/datastructures.py | setdefault | Dith3r/starlette | 0 | python | def setdefault(self, key: str, value: str) -> str:
'\n If the header `key` does not exist, then set it to `value`.\n Returns the header value.\n '
set_key = key.lower().encode('latin-1')
set_value = value.encode('latin-1')
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == set_key):
return item_value.decode('latin-1')
self._list.append((set_key, set_value))
return value | def setdefault(self, key: str, value: str) -> str:
'\n If the header `key` does not exist, then set it to `value`.\n Returns the header value.\n '
set_key = key.lower().encode('latin-1')
set_value = value.encode('latin-1')
for (idx, (item_key, item_value)) in enumerate(self._list):
if (item_key == set_key):
return item_value.decode('latin-1')
self._list.append((set_key, set_value))
return value<|docstring|>If the header `key` does not exist, then set it to `value`.
Returns the header value.<|endoftext|> |
9b48feee3eb64c5a954dc0d659bc4a22b216b1e3c461c79c5a7660593093d364 | def append(self, key: str, value: str) -> None:
'\n Append a header, preserving any duplicate entries.\n '
append_key = key.lower().encode('latin-1')
append_value = value.encode('latin-1')
self._list.append((append_key, append_value)) | Append a header, preserving any duplicate entries. | starlette/datastructures.py | append | Dith3r/starlette | 0 | python | def append(self, key: str, value: str) -> None:
'\n \n '
append_key = key.lower().encode('latin-1')
append_value = value.encode('latin-1')
self._list.append((append_key, append_value)) | def append(self, key: str, value: str) -> None:
'\n \n '
append_key = key.lower().encode('latin-1')
append_value = value.encode('latin-1')
self._list.append((append_key, append_value))<|docstring|>Append a header, preserving any duplicate entries.<|endoftext|> |
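Taken together, the header operations documented above compose as in this small usage sketch (assumes the starlette package is installed; the header names and values are illustrative):
    from starlette.datastructures import MutableHeaders

    headers = MutableHeaders()
    headers["content-type"] = "text/html"             # __setitem__ replaces any duplicate entries
    headers.append("set-cookie", "a=1")               # append preserves duplicates
    headers.append("set-cookie", "b=2")
    headers.setdefault("content-type", "text/plain")  # no-op: key already present, returns "text/html"
    del headers["set-cookie"]                         # __delitem__ removes every matching entry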
b0a4fd2f4e6783676ab26bf10fb40373d0232317cfe41c32cb8b3c647102e702 | def _fix2comp(num):
"\n Convert from two's complement to negative.\n "
assert (0 <= num < (2 ** 32))
if (num & (2 ** 31)):
return (num - (2 ** 32))
else:
return num | Convert from two's complement to negative. | lib/matplotlib/dviread.py | _fix2comp | mkcor/matplotlib | 35 | python | def _fix2comp(num):
"\n \n "
assert (0 <= num < (2 ** 32))
if (num & (2 ** 31)):
return (num - (2 ** 32))
else:
return num | def _fix2comp(num):
"\n \n "
assert (0 <= num < (2 ** 32))
if (num & (2 ** 31)):
return (num - (2 ** 32))
else:
return num<|docstring|>Convert from two's complement to negative.<|endoftext|> |
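A short worked example of the conversion documented above (assumes the _fix2comp helper is in scope):
    assert _fix2comp(1) == 1                    # high bit clear: value unchanged
    assert _fix2comp((2 ** 32) - 1) == -1       # all 32 bits set: -1
    assert _fix2comp(2 ** 31) == -(2 ** 31)     # sign bit alone: most negative value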
e497cf3035d33c63533114663d1a5bd06294fdd183576659b7df55b168010695 | def _mul2012(num1, num2):
'\n Multiply two numbers in 20.12 fixed point format.\n '
return ((num1 * num2) >> 20) | Multiply two numbers in 20.12 fixed point format. | lib/matplotlib/dviread.py | _mul2012 | mkcor/matplotlib | 35 | python | def _mul2012(num1, num2):
'\n \n '
return ((num1 * num2) >> 20) | def _mul2012(num1, num2):
'\n \n '
return ((num1 * num2) >> 20)<|docstring|>Multiply two numbers in 20.12 fixed point format.<|endoftext|> |
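A worked example for the fixed-point multiply documented above; the >> 20 shift treats 2**20 as one unit, so 1.5 * 2.0 comes out as 3.0 (assumes _mul2012 is in scope):
    one_point_five = 3 << 19                          # 1.5 * 2**20
    two = 2 << 20                                     # 2.0 * 2**20
    assert _mul2012(one_point_five, two) == 3 << 20   # 3.0 * 2**20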
b0bf4590b5c38057fd630af62f42b78f75e75b4d73cfe2d37341cfccf7b24eac | def find_tex_file(filename, format=None):
"\n Call :program:`kpsewhich` to find a file in the texmf tree. If\n *format* is not None, it is used as the value for the\n :option:`--format` option.\n\n Apparently most existing TeX distributions on Unix-like systems\n use kpathsea. I hear MikTeX (a popular distribution on Windows)\n doesn't use kpathsea, so what do we do? (TODO)\n\n .. seealso::\n\n `Kpathsea documentation <http://www.tug.org/kpathsea/>`_\n The library that :program:`kpsewhich` is part of.\n "
cmd = ['kpsewhich']
if (format is not None):
cmd += [('--format=' + format)]
cmd += [filename]
matplotlib.verbose.report(('find_tex_file(%s): %s' % (filename, cmd)), 'debug')
pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result = pipe.communicate()[0].rstrip()
matplotlib.verbose.report(('find_tex_file result: %s' % result), 'debug')
return result.decode('ascii') | Call :program:`kpsewhich` to find a file in the texmf tree. If
*format* is not None, it is used as the value for the
:option:`--format` option.
Apparently most existing TeX distributions on Unix-like systems
use kpathsea. I hear MikTeX (a popular distribution on Windows)
doesn't use kpathsea, so what do we do? (TODO)
.. seealso::
`Kpathsea documentation <http://www.tug.org/kpathsea/>`_
The library that :program:`kpsewhich` is part of. | lib/matplotlib/dviread.py | find_tex_file | mkcor/matplotlib | 35 | python | def find_tex_file(filename, format=None):
"\n Call :program:`kpsewhich` to find a file in the texmf tree. If\n *format* is not None, it is used as the value for the\n :option:`--format` option.\n\n Apparently most existing TeX distributions on Unix-like systems\n use kpathsea. I hear MikTeX (a popular distribution on Windows)\n doesn't use kpathsea, so what do we do? (TODO)\n\n .. seealso::\n\n `Kpathsea documentation <http://www.tug.org/kpathsea/>`_\n The library that :program:`kpsewhich` is part of.\n "
cmd = ['kpsewhich']
if (format is not None):
cmd += [('--format=' + format)]
cmd += [filename]
matplotlib.verbose.report(('find_tex_file(%s): %s' % (filename, cmd)), 'debug')
pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result = pipe.communicate()[0].rstrip()
matplotlib.verbose.report(('find_tex_file result: %s' % result), 'debug')
return result.decode('ascii') | def find_tex_file(filename, format=None):
"\n Call :program:`kpsewhich` to find a file in the texmf tree. If\n *format* is not None, it is used as the value for the\n :option:`--format` option.\n\n Apparently most existing TeX distributions on Unix-like systems\n use kpathsea. I hear MikTeX (a popular distribution on Windows)\n doesn't use kpathsea, so what do we do? (TODO)\n\n .. seealso::\n\n `Kpathsea documentation <http://www.tug.org/kpathsea/>`_\n The library that :program:`kpsewhich` is part of.\n "
cmd = ['kpsewhich']
if (format is not None):
cmd += [('--format=' + format)]
cmd += [filename]
matplotlib.verbose.report(('find_tex_file(%s): %s' % (filename, cmd)), 'debug')
pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result = pipe.communicate()[0].rstrip()
matplotlib.verbose.report(('find_tex_file result: %s' % result), 'debug')
return result.decode('ascii')<|docstring|>Call :program:`kpsewhich` to find a file in the texmf tree. If
*format* is not None, it is used as the value for the
:option:`--format` option.
Apparently most existing TeX distributions on Unix-like systems
use kpathsea. I hear MikTeX (a popular distribution on Windows)
doesn't use kpathsea, so what do we do? (TODO)
.. seealso::
`Kpathsea documentation <http://www.tug.org/kpathsea/>`_
The library that :program:`kpsewhich` is part of.<|endoftext|> |
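A possible call to the helper documented above (assumes a TeX installation that provides kpsewhich on the PATH; the file names and the returned path are only examples):
    from matplotlib.dviread import find_tex_file

    tfm_path = find_tex_file('cmr10.tfm')        # e.g. '/usr/share/texmf/fonts/tfm/public/cm/cmr10.tfm'
    missing = find_tex_file('no-such-file.tfm')  # usually an empty string when kpsewhich finds nothing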
2520f67979a788ea6b5512cfdb8d8900bc1139d7ea2b341338238472b5deac8c | def __init__(self, filename, dpi):
'\n Initialize the object. This takes the filename as input and\n opens the file; actually reading the file happens when\n iterating through the pages of the file.\n '
matplotlib.verbose.report(('Dvi: ' + filename), 'debug')
self.file = open(filename, 'rb')
self.dpi = dpi
self.fonts = {}
self.state = _dvistate.pre
self.baseline = self._get_baseline(filename) | Initialize the object. This takes the filename as input and
opens the file; actually reading the file happens when
iterating through the pages of the file. | lib/matplotlib/dviread.py | __init__ | mkcor/matplotlib | 35 | python | def __init__(self, filename, dpi):
'\n Initialize the object. This takes the filename as input and\n opens the file; actually reading the file happens when\n iterating through the pages of the file.\n '
matplotlib.verbose.report(('Dvi: ' + filename), 'debug')
self.file = open(filename, 'rb')
self.dpi = dpi
self.fonts = {}
self.state = _dvistate.pre
self.baseline = self._get_baseline(filename) | def __init__(self, filename, dpi):
'\n Initialize the object. This takes the filename as input and\n opens the file; actually reading the file happens when\n iterating through the pages of the file.\n '
matplotlib.verbose.report(('Dvi: ' + filename), 'debug')
self.file = open(filename, 'rb')
self.dpi = dpi
self.fonts = {}
self.state = _dvistate.pre
self.baseline = self._get_baseline(filename)<|docstring|>Initialize the object. This takes the filename as input and
opens the file; actually reading the file happens when
iterating through the pages of the file.<|endoftext|> |
d1a017fddeb319f0030a78fc1515a0f9287857a1a447f85f18f6e4116c822822 | def __iter__(self):
'\n Iterate through the pages of the file.\n\n Returns (text, boxes) pairs, where:\n text is a list of (x, y, fontnum, glyphnum, width) tuples\n boxes is a list of (x, y, height, width) tuples\n\n The coordinates are transformed into a standard Cartesian\n coordinate system at the dpi value given when initializing.\n The coordinates are floating point numbers, but otherwise\n precision is not lost and coordinate values are not clipped to\n integers.\n '
while True:
have_page = self._read()
if have_page:
(yield self._output())
else:
break | Iterate through the pages of the file.
Returns (text, boxes) pairs, where:
text is a list of (x, y, fontnum, glyphnum, width) tuples
boxes is a list of (x, y, height, width) tuples
The coordinates are transformed into a standard Cartesian
coordinate system at the dpi value given when initializing.
The coordinates are floating point numbers, but otherwise
precision is not lost and coordinate values are not clipped to
integers. | lib/matplotlib/dviread.py | __iter__ | mkcor/matplotlib | 35 | python | def __iter__(self):
'\n Iterate through the pages of the file.\n\n Returns (text, boxes) pairs, where:\n text is a list of (x, y, fontnum, glyphnum, width) tuples\n boxes is a list of (x, y, height, width) tuples\n\n The coordinates are transformed into a standard Cartesian\n coordinate system at the dpi value given when initializing.\n The coordinates are floating point numbers, but otherwise\n precision is not lost and coordinate values are not clipped to\n integers.\n '
while True:
have_page = self._read()
if have_page:
(yield self._output())
else:
break | def __iter__(self):
'\n Iterate through the pages of the file.\n\n Returns (text, boxes) pairs, where:\n text is a list of (x, y, fontnum, glyphnum, width) tuples\n boxes is a list of (x, y, height, width) tuples\n\n The coordinates are transformed into a standard Cartesian\n coordinate system at the dpi value given when initializing.\n The coordinates are floating point numbers, but otherwise\n precision is not lost and coordinate values are not clipped to\n integers.\n '
while True:
have_page = self._read()
if have_page:
(yield self._output())
else:
break<|docstring|>Iterate through the pages of the file.
Returns (text, boxes) pairs, where:
text is a list of (x, y, fontnum, glyphnum, width) tuples
boxes is a list of (x, y, height, width) tuples
The coordinates are transformed into a standard Cartesian
coordinate system at the dpi value given when initializing.
The coordinates are floating point numbers, but otherwise
precision is not lost and coordinate values are not clipped to
integers.<|endoftext|> |
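An illustrative way to drive the iteration documented above ('page.dvi' is a hypothetical file produced by running TeX; each yielded page carries the text and boxes described in the docstring):
    from matplotlib.dviread import Dvi

    dvi = Dvi('page.dvi', dpi=72)
    for page in dvi:                                  # one result per page
        for x, y, font, glyph, width in page.text:    # (x, y, fontnum, glyphnum, width)
            print(x, y, glyph, width)
    dvi.close()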
b5194b1fb90c2109adf4f7be3d494445c6206efae27c10679d9801e7c27abe5c | def close(self):
'\n Close the underlying file if it is open.\n '
if (not self.file.closed):
self.file.close() | Close the underlying file if it is open. | lib/matplotlib/dviread.py | close | mkcor/matplotlib | 35 | python | def close(self):
'\n \n '
if (not self.file.closed):
self.file.close() | def close(self):
'\n \n '
if (not self.file.closed):
self.file.close()<|docstring|>Close the underlying file if it is open.<|endoftext|> |
91841f29f0f80eca8097a549a87dd5cefe887ce64281b48278c6724b3f5cab83 | def _output(self):
'\n Output the text and boxes belonging to the most recent page.\n page = dvi._output()\n '
(minx, miny, maxx, maxy) = (np.inf, np.inf, (- np.inf), (- np.inf))
maxy_pure = (- np.inf)
for elt in (self.text + self.boxes):
if (len(elt) == 4):
(x, y, h, w) = elt
e = 0
else:
(x, y, font, g, w) = elt
(h, e) = font._height_depth_of(g)
minx = min(minx, x)
miny = min(miny, (y - h))
maxx = max(maxx, (x + w))
maxy = max(maxy, (y + e))
maxy_pure = max(maxy_pure, y)
if (self.dpi is None):
return mpl_cbook.Bunch(text=self.text, boxes=self.boxes, width=(maxx - minx), height=(maxy_pure - miny), descent=descent)
d = (self.dpi / (72.27 * (2 ** 16)))
if (self.baseline is None):
descent = ((maxy - maxy_pure) * d)
else:
descent = self.baseline
text = [(((x - minx) * d), (((maxy - y) * d) - descent), f, g, (w * d)) for (x, y, f, g, w) in self.text]
boxes = [(((x - minx) * d), (((maxy - y) * d) - descent), (h * d), (w * d)) for (x, y, h, w) in self.boxes]
return mpl_cbook.Bunch(text=text, boxes=boxes, width=((maxx - minx) * d), height=((maxy_pure - miny) * d), descent=descent) | Output the text and boxes belonging to the most recent page.
page = dvi._output() | lib/matplotlib/dviread.py | _output | mkcor/matplotlib | 35 | python | def _output(self):
'\n Output the text and boxes belonging to the most recent page.\n page = dvi._output()\n '
(minx, miny, maxx, maxy) = (np.inf, np.inf, (- np.inf), (- np.inf))
maxy_pure = (- np.inf)
for elt in (self.text + self.boxes):
if (len(elt) == 4):
(x, y, h, w) = elt
e = 0
else:
(x, y, font, g, w) = elt
(h, e) = font._height_depth_of(g)
minx = min(minx, x)
miny = min(miny, (y - h))
maxx = max(maxx, (x + w))
maxy = max(maxy, (y + e))
maxy_pure = max(maxy_pure, y)
if (self.dpi is None):
return mpl_cbook.Bunch(text=self.text, boxes=self.boxes, width=(maxx - minx), height=(maxy_pure - miny), descent=descent)
d = (self.dpi / (72.27 * (2 ** 16)))
if (self.baseline is None):
descent = ((maxy - maxy_pure) * d)
else:
descent = self.baseline
text = [(((x - minx) * d), (((maxy - y) * d) - descent), f, g, (w * d)) for (x, y, f, g, w) in self.text]
boxes = [(((x - minx) * d), (((maxy - y) * d) - descent), (h * d), (w * d)) for (x, y, h, w) in self.boxes]
return mpl_cbook.Bunch(text=text, boxes=boxes, width=((maxx - minx) * d), height=((maxy_pure - miny) * d), descent=descent) | def _output(self):
'\n Output the text and boxes belonging to the most recent page.\n page = dvi._output()\n '
(minx, miny, maxx, maxy) = (np.inf, np.inf, (- np.inf), (- np.inf))
maxy_pure = (- np.inf)
for elt in (self.text + self.boxes):
if (len(elt) == 4):
(x, y, h, w) = elt
e = 0
else:
(x, y, font, g, w) = elt
(h, e) = font._height_depth_of(g)
minx = min(minx, x)
miny = min(miny, (y - h))
maxx = max(maxx, (x + w))
maxy = max(maxy, (y + e))
maxy_pure = max(maxy_pure, y)
if (self.dpi is None):
return mpl_cbook.Bunch(text=self.text, boxes=self.boxes, width=(maxx - minx), height=(maxy_pure - miny), descent=descent)
d = (self.dpi / (72.27 * (2 ** 16)))
if (self.baseline is None):
descent = ((maxy - maxy_pure) * d)
else:
descent = self.baseline
text = [(((x - minx) * d), (((maxy - y) * d) - descent), f, g, (w * d)) for (x, y, f, g, w) in self.text]
boxes = [(((x - minx) * d), (((maxy - y) * d) - descent), (h * d), (w * d)) for (x, y, h, w) in self.boxes]
return mpl_cbook.Bunch(text=text, boxes=boxes, width=((maxx - minx) * d), height=((maxy_pure - miny) * d), descent=descent)<|docstring|>Output the text and boxes belonging to the most recent page.
page = dvi._output()<|endoftext|> |
2bf3d1b79e289d31da9200dca75fddeb7f43ee5aaf3b4a433a78c013430480d2 | def _read(self):
'\n Read one page from the file. Return True if successful,\n False if there were no more pages.\n '
while True:
byte = ord(self.file.read(1)[0])
self._dispatch(byte)
if (byte == 140):
return True
if (self.state == _dvistate.post_post):
self.close()
return False | Read one page from the file. Return True if successful,
False if there were no more pages. | lib/matplotlib/dviread.py | _read | mkcor/matplotlib | 35 | python | def _read(self):
'\n Read one page from the file. Return True if successful,\n False if there were no more pages.\n '
while True:
byte = ord(self.file.read(1)[0])
self._dispatch(byte)
if (byte == 140):
return True
if (self.state == _dvistate.post_post):
self.close()
return False | def _read(self):
'\n Read one page from the file. Return True if successful,\n False if there were no more pages.\n '
while True:
byte = ord(self.file.read(1)[0])
self._dispatch(byte)
if (byte == 140):
return True
if (self.state == _dvistate.post_post):
self.close()
return False<|docstring|>Read one page from the file. Return True if successful,
False if there were no more pages.<|endoftext|> |
83116a33374e1de0a5c29c311910920f77c37a707381fbcf81051a333d2df1f8 | def _arg(self, nbytes, signed=False):
'\n Read and return an integer argument *nbytes* long.\n Signedness is determined by the *signed* keyword.\n '
str = self.file.read(nbytes)
value = ord(str[0])
if (signed and (value >= 128)):
value = (value - 256)
for i in range(1, nbytes):
value = ((256 * value) + ord(str[i]))
return value | Read and return an integer argument *nbytes* long.
Signedness is determined by the *signed* keyword. | lib/matplotlib/dviread.py | _arg | mkcor/matplotlib | 35 | python | def _arg(self, nbytes, signed=False):
'\n Read and return an integer argument *nbytes* long.\n Signedness is determined by the *signed* keyword.\n '
str = self.file.read(nbytes)
value = ord(str[0])
if (signed and (value >= 128)):
value = (value - 256)
for i in range(1, nbytes):
value = ((256 * value) + ord(str[i]))
return value | def _arg(self, nbytes, signed=False):
'\n Read and return an integer argument *nbytes* long.\n Signedness is determined by the *signed* keyword.\n '
str = self.file.read(nbytes)
value = ord(str[0])
if (signed and (value >= 128)):
value = (value - 256)
for i in range(1, nbytes):
value = ((256 * value) + ord(str[i]))
return value<|docstring|>Read and return an integer argument *nbytes* long.
Signedness is determined by the *signed* keyword.<|endoftext|> |
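The read documented above is a big-endian integer decode; on current Python the same result can be checked against int.from_bytes, for example for the four-byte case:
    raw = b'\xff\xff\xff\xfe'
    assert int.from_bytes(raw, 'big', signed=True) == -2
    assert int.from_bytes(raw, 'big', signed=False) == (2 ** 32) - 2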
8425919e9c606eb048fa657a8e33b0ae95a50726209a897d58d10d871f5deba8 | def _dispatch(self, byte):
'\n Based on the opcode *byte*, read the correct kinds of\n arguments from the dvi file and call the method implementing\n that opcode with those arguments.\n '
if (0 <= byte <= 127):
self._set_char(byte)
elif (byte == 128):
self._set_char(self._arg(1))
elif (byte == 129):
self._set_char(self._arg(2))
elif (byte == 130):
self._set_char(self._arg(3))
elif (byte == 131):
self._set_char(self._arg(4, True))
elif (byte == 132):
self._set_rule(self._arg(4, True), self._arg(4, True))
elif (byte == 133):
self._put_char(self._arg(1))
elif (byte == 134):
self._put_char(self._arg(2))
elif (byte == 135):
self._put_char(self._arg(3))
elif (byte == 136):
self._put_char(self._arg(4, True))
elif (byte == 137):
self._put_rule(self._arg(4, True), self._arg(4, True))
elif (byte == 138):
self._nop()
elif (byte == 139):
self._bop(*[self._arg(4, True) for i in range(11)])
elif (byte == 140):
self._eop()
elif (byte == 141):
self._push()
elif (byte == 142):
self._pop()
elif (byte == 143):
self._right(self._arg(1, True))
elif (byte == 144):
self._right(self._arg(2, True))
elif (byte == 145):
self._right(self._arg(3, True))
elif (byte == 146):
self._right(self._arg(4, True))
elif (byte == 147):
self._right_w(None)
elif (byte == 148):
self._right_w(self._arg(1, True))
elif (byte == 149):
self._right_w(self._arg(2, True))
elif (byte == 150):
self._right_w(self._arg(3, True))
elif (byte == 151):
self._right_w(self._arg(4, True))
elif (byte == 152):
self._right_x(None)
elif (byte == 153):
self._right_x(self._arg(1, True))
elif (byte == 154):
self._right_x(self._arg(2, True))
elif (byte == 155):
self._right_x(self._arg(3, True))
elif (byte == 156):
self._right_x(self._arg(4, True))
elif (byte == 157):
self._down(self._arg(1, True))
elif (byte == 158):
self._down(self._arg(2, True))
elif (byte == 159):
self._down(self._arg(3, True))
elif (byte == 160):
self._down(self._arg(4, True))
elif (byte == 161):
self._down_y(None)
elif (byte == 162):
self._down_y(self._arg(1, True))
elif (byte == 163):
self._down_y(self._arg(2, True))
elif (byte == 164):
self._down_y(self._arg(3, True))
elif (byte == 165):
self._down_y(self._arg(4, True))
elif (byte == 166):
self._down_z(None)
elif (byte == 167):
self._down_z(self._arg(1, True))
elif (byte == 168):
self._down_z(self._arg(2, True))
elif (byte == 169):
self._down_z(self._arg(3, True))
elif (byte == 170):
self._down_z(self._arg(4, True))
elif (171 <= byte <= 234):
self._fnt_num((byte - 171))
elif (byte == 235):
self._fnt_num(self._arg(1))
elif (byte == 236):
self._fnt_num(self._arg(2))
elif (byte == 237):
self._fnt_num(self._arg(3))
elif (byte == 238):
self._fnt_num(self._arg(4, True))
elif (239 <= byte <= 242):
len = self._arg((byte - 238))
special = self.file.read(len)
self._xxx(special)
elif (243 <= byte <= 246):
k = self._arg((byte - 242), (byte == 246))
(c, s, d, a, l) = [self._arg(x) for x in (4, 4, 4, 1, 1)]
n = self.file.read((a + l))
self._fnt_def(k, c, s, d, a, l, n)
elif (byte == 247):
(i, num, den, mag, k) = [self._arg(x) for x in (1, 4, 4, 4, 1)]
x = self.file.read(k)
self._pre(i, num, den, mag, x)
elif (byte == 248):
self._post()
elif (byte == 249):
self._post_post()
else:
raise ValueError(('unknown command: byte %d' % byte)) | Based on the opcode *byte*, read the correct kinds of
arguments from the dvi file and call the method implementing
that opcode with those arguments. | lib/matplotlib/dviread.py | _dispatch | mkcor/matplotlib | 35 | python | def _dispatch(self, byte):
'\n Based on the opcode *byte*, read the correct kinds of\n arguments from the dvi file and call the method implementing\n that opcode with those arguments.\n '
if (0 <= byte <= 127):
self._set_char(byte)
elif (byte == 128):
self._set_char(self._arg(1))
elif (byte == 129):
self._set_char(self._arg(2))
elif (byte == 130):
self._set_char(self._arg(3))
elif (byte == 131):
self._set_char(self._arg(4, True))
elif (byte == 132):
self._set_rule(self._arg(4, True), self._arg(4, True))
elif (byte == 133):
self._put_char(self._arg(1))
elif (byte == 134):
self._put_char(self._arg(2))
elif (byte == 135):
self._put_char(self._arg(3))
elif (byte == 136):
self._put_char(self._arg(4, True))
elif (byte == 137):
self._put_rule(self._arg(4, True), self._arg(4, True))
elif (byte == 138):
self._nop()
elif (byte == 139):
self._bop(*[self._arg(4, True) for i in range(11)])
elif (byte == 140):
self._eop()
elif (byte == 141):
self._push()
elif (byte == 142):
self._pop()
elif (byte == 143):
self._right(self._arg(1, True))
elif (byte == 144):
self._right(self._arg(2, True))
elif (byte == 145):
self._right(self._arg(3, True))
elif (byte == 146):
self._right(self._arg(4, True))
elif (byte == 147):
self._right_w(None)
elif (byte == 148):
self._right_w(self._arg(1, True))
elif (byte == 149):
self._right_w(self._arg(2, True))
elif (byte == 150):
self._right_w(self._arg(3, True))
elif (byte == 151):
self._right_w(self._arg(4, True))
elif (byte == 152):
self._right_x(None)
elif (byte == 153):
self._right_x(self._arg(1, True))
elif (byte == 154):
self._right_x(self._arg(2, True))
elif (byte == 155):
self._right_x(self._arg(3, True))
elif (byte == 156):
self._right_x(self._arg(4, True))
elif (byte == 157):
self._down(self._arg(1, True))
elif (byte == 158):
self._down(self._arg(2, True))
elif (byte == 159):
self._down(self._arg(3, True))
elif (byte == 160):
self._down(self._arg(4, True))
elif (byte == 161):
self._down_y(None)
elif (byte == 162):
self._down_y(self._arg(1, True))
elif (byte == 163):
self._down_y(self._arg(2, True))
elif (byte == 164):
self._down_y(self._arg(3, True))
elif (byte == 165):
self._down_y(self._arg(4, True))
elif (byte == 166):
self._down_z(None)
elif (byte == 167):
self._down_z(self._arg(1, True))
elif (byte == 168):
self._down_z(self._arg(2, True))
elif (byte == 169):
self._down_z(self._arg(3, True))
elif (byte == 170):
self._down_z(self._arg(4, True))
elif (171 <= byte <= 234):
self._fnt_num((byte - 171))
elif (byte == 235):
self._fnt_num(self._arg(1))
elif (byte == 236):
self._fnt_num(self._arg(2))
elif (byte == 237):
self._fnt_num(self._arg(3))
elif (byte == 238):
self._fnt_num(self._arg(4, True))
elif (239 <= byte <= 242):
len = self._arg((byte - 238))
special = self.file.read(len)
self._xxx(special)
elif (243 <= byte <= 246):
k = self._arg((byte - 242), (byte == 246))
(c, s, d, a, l) = [self._arg(x) for x in (4, 4, 4, 1, 1)]
n = self.file.read((a + l))
self._fnt_def(k, c, s, d, a, l, n)
elif (byte == 247):
(i, num, den, mag, k) = [self._arg(x) for x in (1, 4, 4, 4, 1)]
x = self.file.read(k)
self._pre(i, num, den, mag, x)
elif (byte == 248):
self._post()
elif (byte == 249):
self._post_post()
else:
raise ValueError(('unknown command: byte %d' % byte)) | def _dispatch(self, byte):
'\n Based on the opcode *byte*, read the correct kinds of\n arguments from the dvi file and call the method implementing\n that opcode with those arguments.\n '
if (0 <= byte <= 127):
self._set_char(byte)
elif (byte == 128):
self._set_char(self._arg(1))
elif (byte == 129):
self._set_char(self._arg(2))
elif (byte == 130):
self._set_char(self._arg(3))
elif (byte == 131):
self._set_char(self._arg(4, True))
elif (byte == 132):
self._set_rule(self._arg(4, True), self._arg(4, True))
elif (byte == 133):
self._put_char(self._arg(1))
elif (byte == 134):
self._put_char(self._arg(2))
elif (byte == 135):
self._put_char(self._arg(3))
elif (byte == 136):
self._put_char(self._arg(4, True))
elif (byte == 137):
self._put_rule(self._arg(4, True), self._arg(4, True))
elif (byte == 138):
self._nop()
elif (byte == 139):
self._bop(*[self._arg(4, True) for i in range(11)])
elif (byte == 140):
self._eop()
elif (byte == 141):
self._push()
elif (byte == 142):
self._pop()
elif (byte == 143):
self._right(self._arg(1, True))
elif (byte == 144):
self._right(self._arg(2, True))
elif (byte == 145):
self._right(self._arg(3, True))
elif (byte == 146):
self._right(self._arg(4, True))
elif (byte == 147):
self._right_w(None)
elif (byte == 148):
self._right_w(self._arg(1, True))
elif (byte == 149):
self._right_w(self._arg(2, True))
elif (byte == 150):
self._right_w(self._arg(3, True))
elif (byte == 151):
self._right_w(self._arg(4, True))
elif (byte == 152):
self._right_x(None)
elif (byte == 153):
self._right_x(self._arg(1, True))
elif (byte == 154):
self._right_x(self._arg(2, True))
elif (byte == 155):
self._right_x(self._arg(3, True))
elif (byte == 156):
self._right_x(self._arg(4, True))
elif (byte == 157):
self._down(self._arg(1, True))
elif (byte == 158):
self._down(self._arg(2, True))
elif (byte == 159):
self._down(self._arg(3, True))
elif (byte == 160):
self._down(self._arg(4, True))
elif (byte == 161):
self._down_y(None)
elif (byte == 162):
self._down_y(self._arg(1, True))
elif (byte == 163):
self._down_y(self._arg(2, True))
elif (byte == 164):
self._down_y(self._arg(3, True))
elif (byte == 165):
self._down_y(self._arg(4, True))
elif (byte == 166):
self._down_z(None)
elif (byte == 167):
self._down_z(self._arg(1, True))
elif (byte == 168):
self._down_z(self._arg(2, True))
elif (byte == 169):
self._down_z(self._arg(3, True))
elif (byte == 170):
self._down_z(self._arg(4, True))
elif (171 <= byte <= 234):
self._fnt_num((byte - 171))
elif (byte == 235):
self._fnt_num(self._arg(1))
elif (byte == 236):
self._fnt_num(self._arg(2))
elif (byte == 237):
self._fnt_num(self._arg(3))
elif (byte == 238):
self._fnt_num(self._arg(4, True))
elif (239 <= byte <= 242):
len = self._arg((byte - 238))
special = self.file.read(len)
self._xxx(special)
elif (243 <= byte <= 246):
k = self._arg((byte - 242), (byte == 246))
(c, s, d, a, l) = [self._arg(x) for x in (4, 4, 4, 1, 1)]
n = self.file.read((a + l))
self._fnt_def(k, c, s, d, a, l, n)
elif (byte == 247):
(i, num, den, mag, k) = [self._arg(x) for x in (1, 4, 4, 4, 1)]
x = self.file.read(k)
self._pre(i, num, den, mag, x)
elif (byte == 248):
self._post()
elif (byte == 249):
self._post_post()
else:
raise ValueError(('unknown command: byte %d' % byte))<|docstring|>Based on the opcode *byte*, read the correct kinds of
arguments from the dvi file and call the method implementing
that opcode with those arguments.<|endoftext|> |
2326776fce973626e2f25f460fa2e154558a074455481f32d0bec3a186a14309 | def _width_of(self, char):
'\n Width of char in dvi units. For internal use by dviread.py.\n '
width = self._tfm.width.get(char, None)
if (width is not None):
return _mul2012(width, self._scale)
matplotlib.verbose.report(('No width for char %d in font %s' % (char, self.texname)), 'debug')
return 0 | Width of char in dvi units. For internal use by dviread.py. | lib/matplotlib/dviread.py | _width_of | mkcor/matplotlib | 35 | python | def _width_of(self, char):
'\n \n '
width = self._tfm.width.get(char, None)
if (width is not None):
return _mul2012(width, self._scale)
matplotlib.verbose.report(('No width for char %d in font %s' % (char, self.texname)), 'debug')
return 0 | def _width_of(self, char):
'\n \n '
width = self._tfm.width.get(char, None)
if (width is not None):
return _mul2012(width, self._scale)
matplotlib.verbose.report(('No width for char %d in font %s' % (char, self.texname)), 'debug')
return 0<|docstring|>Width of char in dvi units. For internal use by dviread.py.<|endoftext|> |
9b27edcdbe063c1ef16e59ebd5377788e7c8c680551b317613d9fee39a871b14 | def _height_depth_of(self, char):
'\n Height and depth of char in dvi units. For internal use by dviread.py.\n '
result = []
for (metric, name) in ((self._tfm.height, 'height'), (self._tfm.depth, 'depth')):
value = metric.get(char, None)
if (value is None):
matplotlib.verbose.report(('No %s for char %d in font %s' % (name, char, self.texname)), 'debug')
result.append(0)
else:
result.append(_mul2012(value, self._scale))
return result | Height and depth of char in dvi units. For internal use by dviread.py. | lib/matplotlib/dviread.py | _height_depth_of | mkcor/matplotlib | 35 | python | def _height_depth_of(self, char):
'\n \n '
result = []
for (metric, name) in ((self._tfm.height, 'height'), (self._tfm.depth, 'depth')):
value = metric.get(char, None)
if (value is None):
matplotlib.verbose.report(('No %s for char %d in font %s' % (name, char, self.texname)), 'debug')
result.append(0)
else:
result.append(_mul2012(value, self._scale))
return result | def _height_depth_of(self, char):
'\n \n '
result = []
for (metric, name) in ((self._tfm.height, 'height'), (self._tfm.depth, 'depth')):
value = metric.get(char, None)
if (value is None):
matplotlib.verbose.report(('No %s for char %d in font %s' % (name, char, self.texname)), 'debug')
result.append(0)
else:
result.append(_mul2012(value, self._scale))
return result<|docstring|>Height and depth of char in dvi units. For internal use by dviread.py.<|endoftext|> |
9ebae7ebb1f1098ca81aad20cc324127091f16745f18f300c9703c1674b5ae3e | def _parse(self, file):
'Parse each line into words.'
for line in file:
line = line.strip()
if ((line == '') or line.startswith('%')):
continue
(words, pos) = ([], 0)
while (pos < len(line)):
if (line[pos] == '"'):
pos += 1
end = line.index('"', pos)
words.append(line[pos:end])
pos = (end + 1)
else:
end = line.find(' ', (pos + 1))
if (end == (- 1)):
end = len(line)
words.append(line[pos:end])
pos = end
while ((pos < len(line)) and (line[pos] == ' ')):
pos += 1
self._register(words) | Parse each line into words. | lib/matplotlib/dviread.py | _parse | mkcor/matplotlib | 35 | python | def _parse(self, file):
for line in file:
line = line.strip()
if ((line == ) or line.startswith('%')):
continue
(words, pos) = ([], 0)
while (pos < len(line)):
if (line[pos] == '"'):
pos += 1
end = line.index('"', pos)
words.append(line[pos:end])
pos = (end + 1)
else:
end = line.find(' ', (pos + 1))
if (end == (- 1)):
end = len(line)
words.append(line[pos:end])
pos = end
while ((pos < len(line)) and (line[pos] == ' ')):
pos += 1
self._register(words) | def _parse(self, file):
for line in file:
line = line.strip()
if ((line == ) or line.startswith('%')):
continue
(words, pos) = ([], 0)
while (pos < len(line)):
if (line[pos] == '"'):
pos += 1
end = line.index('"', pos)
words.append(line[pos:end])
pos = (end + 1)
else:
end = line.find(' ', (pos + 1))
if (end == (- 1)):
end = len(line)
words.append(line[pos:end])
pos = end
while ((pos < len(line)) and (line[pos] == ' ')):
pos += 1
self._register(words)<|docstring|>Parse each line into words.<|endoftext|> |
93f03c169271ddf87e4e2944b1893139a1e74fa0543ce49cc61a6d0817e7b407 | def _register(self, words):
'Register a font described by "words".\n\n The format is, AFAIK: texname fontname [effects and filenames]\n Effects are PostScript snippets like ".177 SlantFont",\n filenames begin with one or two less-than signs. A filename\n ending in enc is an encoding file, other filenames are font\n files. This can be overridden with a left bracket: <[foobar\n indicates an encoding file named foobar.\n\n There is some difference between <foo.pfb and <<bar.pfb in\n subsetting, but I have no example of << in my TeX installation.\n '
(texname, psname) = words[:2]
(effects, encoding, filename) = ('', None, None)
for word in words[2:]:
if (not word.startswith('<')):
effects = word
else:
word = word.lstrip('<')
if (word.startswith('[') or word.endswith('.enc')):
if (encoding is not None):
matplotlib.verbose.report(('Multiple encodings for %s = %s' % (texname, psname)), 'debug')
if word.startswith('['):
encoding = word[1:]
else:
encoding = word
else:
assert (filename is None)
filename = word
eff = effects.split()
effects = {}
try:
effects['slant'] = float(eff[(eff.index('SlantFont') - 1)])
except ValueError:
pass
try:
effects['extend'] = float(eff[(eff.index('ExtendFont') - 1)])
except ValueError:
pass
self._font[texname] = mpl_cbook.Bunch(texname=texname, psname=psname, effects=effects, encoding=encoding, filename=filename) | Register a font described by "words".
The format is, AFAIK: texname fontname [effects and filenames]
Effects are PostScript snippets like ".177 SlantFont",
filenames begin with one or two less-than signs. A filename
ending in enc is an encoding file, other filenames are font
files. This can be overridden with a left bracket: <[foobar
indicates an encoding file named foobar.
There is some difference between <foo.pfb and <<bar.pfb in
subsetting, but I have no example of << in my TeX installation. | lib/matplotlib/dviread.py | _register | mkcor/matplotlib | 35 | python | def _register(self, words):
'Register a font described by "words".\n\n The format is, AFAIK: texname fontname [effects and filenames]\n Effects are PostScript snippets like ".177 SlantFont",\n filenames begin with one or two less-than signs. A filename\n ending in enc is an encoding file, other filenames are font\n files. This can be overridden with a left bracket: <[foobar\n indicates an encoding file named foobar.\n\n There is some difference between <foo.pfb and <<bar.pfb in\n subsetting, but I have no example of << in my TeX installation.\n '
(texname, psname) = words[:2]
(effects, encoding, filename) = (, None, None)
for word in words[2:]:
if (not word.startswith('<')):
effects = word
else:
word = word.lstrip('<')
if (word.startswith('[') or word.endswith('.enc')):
if (encoding is not None):
matplotlib.verbose.report(('Multiple encodings for %s = %s' % (texname, psname)), 'debug')
if word.startswith('['):
encoding = word[1:]
else:
encoding = word
else:
assert (filename is None)
filename = word
eff = effects.split()
effects = {}
try:
effects['slant'] = float(eff[(eff.index('SlantFont') - 1)])
except ValueError:
pass
try:
effects['extend'] = float(eff[(eff.index('ExtendFont') - 1)])
except ValueError:
pass
self._font[texname] = mpl_cbook.Bunch(texname=texname, psname=psname, effects=effects, encoding=encoding, filename=filename) | def _register(self, words):
'Register a font described by "words".\n\n The format is, AFAIK: texname fontname [effects and filenames]\n Effects are PostScript snippets like ".177 SlantFont",\n filenames begin with one or two less-than signs. A filename\n ending in enc is an encoding file, other filenames are font\n files. This can be overridden with a left bracket: <[foobar\n indicates an encoding file named foobar.\n\n There is some difference between <foo.pfb and <<bar.pfb in\n subsetting, but I have no example of << in my TeX installation.\n '
(texname, psname) = words[:2]
(effects, encoding, filename) = (, None, None)
for word in words[2:]:
if (not word.startswith('<')):
effects = word
else:
word = word.lstrip('<')
if (word.startswith('[') or word.endswith('.enc')):
if (encoding is not None):
matplotlib.verbose.report(('Multiple encodings for %s = %s' % (texname, psname)), 'debug')
if word.startswith('['):
encoding = word[1:]
else:
encoding = word
else:
assert (filename is None)
filename = word
eff = effects.split()
effects = {}
try:
effects['slant'] = float(eff[(eff.index('SlantFont') - 1)])
except ValueError:
pass
try:
effects['extend'] = float(eff[(eff.index('ExtendFont') - 1)])
except ValueError:
pass
self._font[texname] = mpl_cbook.Bunch(texname=texname, psname=psname, effects=effects, encoding=encoding, filename=filename)<|docstring|>Register a font described by "words".
The format is, AFAIK: texname fontname [effects and filenames]
Effects are PostScript snippets like ".177 SlantFont",
filenames begin with one or two less-than signs. A filename
ending in enc is an encoding file, other filenames are font
files. This can be overridden with a left bracket: <[foobar
indicates an encoding file named foobar.
There is some difference between <foo.pfb and <<bar.pfb in
subsetting, but I have no example of << in my TeX installation.<|endoftext|> |
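A hypothetical psfonts.map entry of the kind handled above, together with one way the parsed map could be queried (the 'pplr8a' entry and the 'pdftex.map' file name are assumptions, not guaranteed to exist in every TeX installation):
    # Map line:   pplr8a Palatino-Roman ".167 SlantFont" <8r.enc <pplr8a.pfb
    # Parsed to:  texname='pplr8a', psname='Palatino-Roman',
    #             effects={'slant': 0.167}, encoding='8r.enc', filename='pplr8a.pfb'
    from matplotlib.dviread import PsfontsMap, find_tex_file

    fontmap = PsfontsMap(find_tex_file('pdftex.map'))
    entry = fontmap['pplr8a']                   # Bunch with the fields listed above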
d16059775e5d31079217e4e875453213968d24f2047bbf0e9ccd0a63ffed25c0 | @pytest.fixture
def test_app():
'Sets up a test app.'
app.config['TESTING'] = True
app.config['WTF_CSRF_ENABLED'] = False
(yield app) | Sets up a test app. | tests/unit/conftest.py | test_app | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def test_app():
app.config['TESTING'] = True
app.config['WTF_CSRF_ENABLED'] = False
(yield app) | @pytest.fixture
def test_app():
app.config['TESTING'] = True
app.config['WTF_CSRF_ENABLED'] = False
(yield app)<|docstring|>Sets up a test app.<|endoftext|> |
1fe578228e29634a22a7b1ab42d219f59086c412523df072c44119614b1691a5 | @pytest.fixture
def client(test_app):
'Sets up a test client.'
with test_app.test_client() as client:
(yield client) | Sets up a test client. | tests/unit/conftest.py | client | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def client(test_app):
with test_app.test_client() as client:
(yield client) | @pytest.fixture
def client(test_app):
with test_app.test_client() as client:
(yield client)<|docstring|>Sets up a test client.<|endoftext|> |
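A possible test built on the fixtures documented above (the '/' route and the expected status code are assumptions made for illustration):
    def test_index_page_loads(client):
        response = client.get('/')
        assert response.status_code == 200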
e08caaef9720f16e68018c5e3297677a8f9352192f87bacce11027aec916ff1c | @pytest.fixture
def mocked_userform(mocker):
'Mocks the UserForm. For use in testing application routes.'
mocked_userform = mocker.patch('app.routes.UserForm')
mocked_userform.return_value.news_org.data = 'foo and a bar'
mocked_userform.return_value.name.data = 'foo bar'
mocked_userform.return_value.email.data = '[email protected]'
mocked_userform.return_value.validate_on_submit.return_value = True
(yield mocked_userform) | Mocks the UserForm. For use in testing application routes. | tests/unit/conftest.py | mocked_userform | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def mocked_userform(mocker):
mocked_userform = mocker.patch('app.routes.UserForm')
mocked_userform.return_value.news_org.data = 'foo and a bar'
mocked_userform.return_value.name.data = 'foo bar'
mocked_userform.return_value.email.data = '[email protected]'
mocked_userform.return_value.validate_on_submit.return_value = True
(yield mocked_userform) | @pytest.fixture
def mocked_userform(mocker):
mocked_userform = mocker.patch('app.routes.UserForm')
mocked_userform.return_value.news_org.data = 'foo and a bar'
mocked_userform.return_value.name.data = 'foo bar'
mocked_userform.return_value.email.data = '[email protected]'
mocked_userform.return_value.validate_on_submit.return_value = True
(yield mocked_userform)<|docstring|>Mocks the UserForm. For use in testing application routes.<|endoftext|> |
46adddc01f3dbddd767e40e8be3945e5e7ae604ec7228d0f6b11a8805d922184 | @pytest.fixture
def mocked_orgform(mocker):
'Mocks the OrgForm. For use in testing application routes.'
mocked_orgform = mocker.patch('app.routes.OrgForm')
mocked_orgform.return_value.financial_classification.data = 'foo'
mocked_orgform.return_value.coverage_scope.data = 'bar'
mocked_orgform.return_value.coverage_focus.data = 'baz'
mocked_orgform.return_value.platform.data = 'qux'
mocked_orgform.return_value.employee_range.data = 'quux'
mocked_orgform.return_value.budget.data = 'quuz'
mocked_corge_booleanfield = MagicMock(spec=BooleanField, data=True, label=MagicMock(text='corge'))
mocked_other_booleanfield = MagicMock(spec=BooleanField, data=True, label=MagicMock(text='Other'))
mocked_orgform.return_value.__iter__.return_value = [mocked_corge_booleanfield, mocked_other_booleanfield]
mocked_orgform.return_value.other_affiliation_name.data = 'garply'
mocked_orgform.return_value.validate_on_submit.return_value = True
(yield mocked_orgform) | Mocks the OrgForm. For use in testing application routes. | tests/unit/conftest.py | mocked_orgform | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def mocked_orgform(mocker):
mocked_orgform = mocker.patch('app.routes.OrgForm')
mocked_orgform.return_value.financial_classification.data = 'foo'
mocked_orgform.return_value.coverage_scope.data = 'bar'
mocked_orgform.return_value.coverage_focus.data = 'baz'
mocked_orgform.return_value.platform.data = 'qux'
mocked_orgform.return_value.employee_range.data = 'quux'
mocked_orgform.return_value.budget.data = 'quuz'
mocked_corge_booleanfield = MagicMock(spec=BooleanField, data=True, label=MagicMock(text='corge'))
mocked_other_booleanfield = MagicMock(spec=BooleanField, data=True, label=MagicMock(text='Other'))
mocked_orgform.return_value.__iter__.return_value = [mocked_corge_booleanfield, mocked_other_booleanfield]
mocked_orgform.return_value.other_affiliation_name.data = 'garply'
mocked_orgform.return_value.validate_on_submit.return_value = True
(yield mocked_orgform) | @pytest.fixture
def mocked_orgform(mocker):
mocked_orgform = mocker.patch('app.routes.OrgForm')
mocked_orgform.return_value.financial_classification.data = 'foo'
mocked_orgform.return_value.coverage_scope.data = 'bar'
mocked_orgform.return_value.coverage_focus.data = 'baz'
mocked_orgform.return_value.platform.data = 'qux'
mocked_orgform.return_value.employee_range.data = 'quux'
mocked_orgform.return_value.budget.data = 'quuz'
mocked_corge_booleanfield = MagicMock(spec=BooleanField, data=True, label=MagicMock(text='corge'))
mocked_other_booleanfield = MagicMock(spec=BooleanField, data=True, label=MagicMock(text='Other'))
mocked_orgform.return_value.__iter__.return_value = [mocked_corge_booleanfield, mocked_other_booleanfield]
mocked_orgform.return_value.other_affiliation_name.data = 'garply'
mocked_orgform.return_value.validate_on_submit.return_value = True
(yield mocked_orgform)<|docstring|>Mocks the OrgForm. For use in testing application routes.<|endoftext|> |
69e316c83eac9d8e56e47e997f396ab39f39bc9956d44ea9132e5bbc7d611e5c | @pytest.fixture
def fake_list_data():
'Provides a dictionary containing fake data for a MailChimp list.'
data = {'list_id': 'foo', 'list_name': 'bar', 'org_id': 1, 'key': 'foo-bar1', 'data_center': 'bar1', 'monthly_updates': False, 'store_aggregates': False, 'total_count': 'baz', 'open_rate': 'qux', 'creation_timestamp': 'quux', 'campaign_count': 'quuz'}
(yield data) | Provides a dictionary containing fake data for a MailChimp list. | tests/unit/conftest.py | fake_list_data | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def fake_list_data():
data = {'list_id': 'foo', 'list_name': 'bar', 'org_id': 1, 'key': 'foo-bar1', 'data_center': 'bar1', 'monthly_updates': False, 'store_aggregates': False, 'total_count': 'baz', 'open_rate': 'qux', 'creation_timestamp': 'quux', 'campaign_count': 'quuz'}
(yield data) | @pytest.fixture
def fake_list_data():
data = {'list_id': 'foo', 'list_name': 'bar', 'org_id': 1, 'key': 'foo-bar1', 'data_center': 'bar1', 'monthly_updates': False, 'store_aggregates': False, 'total_count': 'baz', 'open_rate': 'qux', 'creation_timestamp': 'quux', 'campaign_count': 'quuz'}
(yield data)<|docstring|>Provides a dictionary containing fake data for a MailChimp list.<|endoftext|> |
79d4fd19a64a45970317e935d1e1360ec1c05001c39e9247e2ab0c9ce62355ec | @pytest.fixture
def fake_calculation_results():
'Provides a dictionary containing fake calculation results for a\n MailChimp list.'
calculation_results = {'frequency': 0.1, 'subscribers': 2, 'open_rate': 0.5, 'hist_bin_counts': [0.1, 0.2, 0.3], 'subscribed_pct': 0.2, 'unsubscribed_pct': 0.2, 'cleaned_pct': 0.2, 'pending_pct': 0.1, 'high_open_rt_pct': 0.1, 'cur_yr_inactive_pct': 0.1}
(yield calculation_results) | Provides a dictionary containing fake calculation results for a
MailChimp list. | tests/unit/conftest.py | fake_calculation_results | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def fake_calculation_results():
'Provides a dictionary containing fake calculation results for a\n MailChimp list.'
calculation_results = {'frequency': 0.1, 'subscribers': 2, 'open_rate': 0.5, 'hist_bin_counts': [0.1, 0.2, 0.3], 'subscribed_pct': 0.2, 'unsubscribed_pct': 0.2, 'cleaned_pct': 0.2, 'pending_pct': 0.1, 'high_open_rt_pct': 0.1, 'cur_yr_inactive_pct': 0.1}
(yield calculation_results) | @pytest.fixture
def fake_calculation_results():
'Provides a dictionary containing fake calculation results for a\n MailChimp list.'
calculation_results = {'frequency': 0.1, 'subscribers': 2, 'open_rate': 0.5, 'hist_bin_counts': [0.1, 0.2, 0.3], 'subscribed_pct': 0.2, 'unsubscribed_pct': 0.2, 'cleaned_pct': 0.2, 'pending_pct': 0.1, 'high_open_rt_pct': 0.1, 'cur_yr_inactive_pct': 0.1}
(yield calculation_results)<|docstring|>Provides a dictionary containing fake calculation results for a
MailChimp list.<|endoftext|> |
2c262d3096de33e9db18af8411c3c93eb600028e1fbe819ccacea7f3fed946ec | @pytest.fixture
def fake_list_stats_query_result_as_df():
'Provides a Pandas DataFrame containing fake stats as could be extracted\n from the database.'
(yield pd.DataFrame({'subscribers': [3, 4, 6], 'subscribed_pct': [1, 1, 4], 'unsubscribed_pct': [1, 1, 1], 'cleaned_pct': [1, 1, 1], 'pending_pct': [1, 1, 1], 'open_rate': [0.5, 1, 1.5], 'high_open_rt_pct': [1, 1, 1], 'cur_yr_inactive_pct': [1, 1, 1]})) | Provides a Pandas DataFrame containing fake stats as could be extracted
from the database. | tests/unit/conftest.py | fake_list_stats_query_result_as_df | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def fake_list_stats_query_result_as_df():
'Provides a Pandas DataFrame containing fake stats as could be extracted\n from the database.'
(yield pd.DataFrame({'subscribers': [3, 4, 6], 'subscribed_pct': [1, 1, 4], 'unsubscribed_pct': [1, 1, 1], 'cleaned_pct': [1, 1, 1], 'pending_pct': [1, 1, 1], 'open_rate': [0.5, 1, 1.5], 'high_open_rt_pct': [1, 1, 1], 'cur_yr_inactive_pct': [1, 1, 1]})) | @pytest.fixture
def fake_list_stats_query_result_as_df():
'Provides a Pandas DataFrame containing fake stats as could be extracted\n from the database.'
(yield pd.DataFrame({'subscribers': [3, 4, 6], 'subscribed_pct': [1, 1, 4], 'unsubscribed_pct': [1, 1, 1], 'cleaned_pct': [1, 1, 1], 'pending_pct': [1, 1, 1], 'open_rate': [0.5, 1, 1.5], 'high_open_rt_pct': [1, 1, 1], 'cur_yr_inactive_pct': [1, 1, 1]}))<|docstring|>Provides a Pandas DataFrame containing fake stats as could be extracted
from the database.<|endoftext|> |
ea4319656347314d8f3582cc640d861aacca54a78fd248e11cb1a892bd415729 | @pytest.fixture
def fake_list_stats_query_result_means():
'Provides a dictionary containing the mean values for the\n fake_list_stats_query_result_as_df() fixture.'
(yield {'subscribers': [4], 'subscribed_pct': [2], 'unsubscribed_pct': [1], 'cleaned_pct': [1], 'pending_pct': [1], 'open_rate': [1], 'high_open_rt_pct': [1], 'cur_yr_inactive_pct': [1]}) | Provides a dictionary containing the mean values for the
fake_list_stats_query_result_as_df() fixture. | tests/unit/conftest.py | fake_list_stats_query_result_means | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def fake_list_stats_query_result_means():
'Provides a dictionary containing the mean values for the\n fake_list_stats_query_result_as_df() fixture.'
(yield {'subscribers': [4], 'subscribed_pct': [2], 'unsubscribed_pct': [1], 'cleaned_pct': [1], 'pending_pct': [1], 'open_rate': [1], 'high_open_rt_pct': [1], 'cur_yr_inactive_pct': [1]}) | @pytest.fixture
def fake_list_stats_query_result_means():
'Provides a dictionary containing the mean values for the\n fake_list_stats_query_result_as_df() fixture.'
(yield {'subscribers': [4], 'subscribed_pct': [2], 'unsubscribed_pct': [1], 'cleaned_pct': [1], 'pending_pct': [1], 'open_rate': [1], 'high_open_rt_pct': [1], 'cur_yr_inactive_pct': [1]})<|docstring|>Provides a dictionary containing the mean values for the
fake_list_stats_query_result_as_df() fixture.<|endoftext|> |
756e4394a1d9da9b022cd1fda46e037acbbcad6ee564f64c518439c35aa8e860 | @pytest.fixture
def mocked_mailchimp_list(mocker, fake_calculation_results):
'Mocks the MailChimp list class from app/lists.py and attaches fake calculation\n results to the mock attributes.'
mocked_mailchimp_list = mocker.patch('app.tasks.MailChimpList')
mocked_mailchimp_list.return_value = MagicMock(**fake_calculation_results)
(yield mocked_mailchimp_list) | Mocks the MailChimp list class from app/lists.py and attaches fake calculation
results to the mock attributes. | tests/unit/conftest.py | mocked_mailchimp_list | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def mocked_mailchimp_list(mocker, fake_calculation_results):
'Mocks the MailChimp list class from app/lists.py and attaches fake calculation\n results to the mock attributes.'
mocked_mailchimp_list = mocker.patch('app.tasks.MailChimpList')
mocked_mailchimp_list.return_value = MagicMock(**fake_calculation_results)
(yield mocked_mailchimp_list) | @pytest.fixture
def mocked_mailchimp_list(mocker, fake_calculation_results):
'Mocks the MailChimp list class from app/lists.py and attaches fake calculation\n results to the mock attributes.'
mocked_mailchimp_list = mocker.patch('app.tasks.MailChimpList')
mocked_mailchimp_list.return_value = MagicMock(**fake_calculation_results)
(yield mocked_mailchimp_list)<|docstring|>Mocks the MailChimp list class from app/lists.py and attaches fake calculation
results to the mock attributes.<|endoftext|> |
7d24c34c8f89a1735b2760cec9c1a0d0be5069f5930604e95989eb2e8859a3c6 | @pytest.fixture
def mailchimp_list():
'Creates a MailChimpList. Used for testing class/instance methods.'
(yield MailChimpList(1, 2, 'foo-bar1', 'bar1')) | Creates a MailChimpList. Used for testing class/instance methods. | tests/unit/conftest.py | mailchimp_list | williamhakim10/Benchmarks-Program | 18 | python | @pytest.fixture
def mailchimp_list():
(yield MailChimpList(1, 2, 'foo-bar1', 'bar1')) | @pytest.fixture
def mailchimp_list():
(yield MailChimpList(1, 2, 'foo-bar1', 'bar1'))<|docstring|>Creates a MailChimpList. Used for testing class/instance methods.<|endoftext|>
42270ece57896aef9affa107f91e4cdeb3445e77015fd5e55d284355b67c6876 | @abstractmethod
def move(self, player: int, row: int, col: int) -> (Set[Solution], Set[Solution]):
'Plays a move at the given row and column for the given player.\n\n Assumptions:\n 1. The internal state of the SolutionManager is not at a terminal state.\n\n Args:\n player (int): the player making the move.\n row (int): the row to play\n col (int): the column to play\n\n Returns:\n removed_solutions (Set[Solution]): the Solutions that were removed after the given move.\n added_solutions (Set[Solution]): the Solutions that were added after the given move.\n '
pass | Plays a move at the given row and column for the given player.
Assumptions:
1. The internal state of the SolutionManager is not at a terminal state.
Args:
player (int): the player making the move.
row (int): the row to play
col (int): the column to play
Returns:
removed_solutions (Set[Solution]): the Solutions that were removed after the given move.
added_solutions (Set[Solution]): the Solutions that were added after the given move. | connect_four/evaluation/incremental_victor/solution/solution_manager.py | move | rpachauri/connect4 | 0 | python | @abstractmethod
def move(self, player: int, row: int, col: int) -> (Set[Solution], Set[Solution]):
'Plays a move at the given row and column for the given player.\n\n Assumptions:\n 1. The internal state of the SolutionManager is not at a terminal state.\n\n Args:\n player (int): the player making the move.\n row (int): the row to play\n col (int): the column to play\n\n Returns:\n removed_solutions (Set[Solution]): the Solutions that were removed after the given move.\n added_solutions (Set[Solution]): the Solutions that were added after the given move.\n '
pass | @abstractmethod
def move(self, player: int, row: int, col: int) -> (Set[Solution], Set[Solution]):
'Plays a move at the given row and column for the given player.\n\n Assumptions:\n 1. The internal state of the SolutionManager is not at a terminal state.\n\n Args:\n player (int): the player making the move.\n row (int): the row to play\n col (int): the column to play\n\n Returns:\n removed_solutions (Set[Solution]): the Solutions that were removed after the given move.\n added_solutions (Set[Solution]): the Solutions that were added after the given move.\n '
pass<|docstring|>Plays a move at the given row and column for the given player.
Assumptions:
1. The internal state of the SolutionManager is not at a terminal state.
Args:
player (int): the player making the move.
row (int): the row to play
col (int): the column to play
Returns:
removed_solutions (Set[Solution]): the Solutions that were removed after the given move.
added_solutions (Set[Solution]): the Solutions that were added after the given move.<|endoftext|> |
3b4b67981c866b211d3fd99a6a046a8ac54da0e684b65be9472ae394c1c75d6d | @abstractmethod
def undo_move(self) -> (Set[Solution], Set[Solution]):
'Undoes the most recent move.\n\n Raises:\n (AssertionError): if the internal state of the ProblemManager is\n at the state given upon initialization.\n\n Returns:\n removed_solutions (Set[Solution]): the Solutions that were removed by undoing the most recent move.\n added_solutions (Set[Solution]): the Solutions that were added by undoing the most recent move.\n '
pass | Undoes the most recent move.
Raises:
(AssertionError): if the internal state of the ProblemManager is
at the state given upon initialization.
Returns:
removed_solutions (Set[Solution]): the Solutions that were removed by undoing the most recent move.
added_solutions (Set[Solution]): the Solutions that were added by undoing the most recent move. | connect_four/evaluation/incremental_victor/solution/solution_manager.py | undo_move | rpachauri/connect4 | 0 | python | @abstractmethod
def undo_move(self) -> (Set[Solution], Set[Solution]):
'Undoes the most recent move.\n\n Raises:\n (AssertionError): if the internal state of the ProblemManager is\n at the state given upon initialization.\n\n Returns:\n removed_solutions (Set[Solution]): the Solutions that were removed by undoing the most recent move.\n added_solutions (Set[Solution]): the Solutions that were added by undoing the most recent move.\n '
pass | @abstractmethod
def undo_move(self) -> (Set[Solution], Set[Solution]):
'Undoes the most recent move.\n\n Raises:\n (AssertionError): if the internal state of the ProblemManager is\n at the state given upon initialization.\n\n Returns:\n removed_solutions (Set[Solution]): the Solutions that were removed by undoing the most recent move.\n added_solutions (Set[Solution]): the Solutions that were added by undoing the most recent move.\n '
pass<|docstring|>Undoes the most recent move.
Raises:
(AssertionError): if the internal state of the ProblemManager is
at the state given upon initialization.
Returns:
removed_solutions (Set[Solution]): the Solutions that were removed by undoing the most recent move.
added_solutions (Set[Solution]): the Solutions that were added by undoing the most recent move.<|endoftext|> |
6d14ddc1977bbf539add7fb0db34fa60a428094975a11ebc1d9de22c5dbeb988 | @abstractmethod
def get_solutions(self) -> Set[Solution]:
'Returns all Solutions for the current game position.\n\n Returns:\n solutions (Set[Solution]): the set of all Solutions that can be used in the current state.\n '
pass | Returns all Solutions for the current game position.
Returns:
solutions (Set[Solution]): the set of all Solutions that can be used in the current state. | connect_four/evaluation/incremental_victor/solution/solution_manager.py | get_solutions | rpachauri/connect4 | 0 | python | @abstractmethod
def get_solutions(self) -> Set[Solution]:
'Returns all Solutions for the current game position.\n\n Returns:\n solutions (Set[Solution]): the set of all Solutions that can be used in the current state.\n '
pass | @abstractmethod
def get_solutions(self) -> Set[Solution]:
'Returns all Solutions for the current game position.\n\n Returns:\n solutions (Set[Solution]): the set of all Solutions that can be used in the current state.\n '
pass<|docstring|>Returns all Solutions for the current game position.
Returns:
solutions (Set[Solution]): the set of all Solutions that can be used in the current state.<|endoftext|> |
89e9359523a9b7d91c035fa666ef401ded7a7c5eb845ba7d3202c965182bd881 | @abstractmethod
def get_win_conditions(self) -> Set[Solution]:
'Returns all win conditions for the current game position.\n\n Returns:\n win_conditions (Set[Solution]): a subset of all Solutions in this state.\n\n Constraints on win_conditions:\n 1. No Solution in win_conditions may be combined with another Solution in win_conditions.\n '
pass | Returns all win conditions for the current game position.
Returns:
win_conditions (Set[Solution]): a subset of all Solutions in this state.
Constraints on win_conditions:
1. No Solution in win_conditions may be combined with another Solution in win_conditions. | connect_four/evaluation/incremental_victor/solution/solution_manager.py | get_win_conditions | rpachauri/connect4 | 0 | python | @abstractmethod
def get_win_conditions(self) -> Set[Solution]:
'Returns all win conditions for the current game position.\n\n Returns:\n win_conditions (Set[Solution]): a subset of all Solutions in this state.\n\n Constraints on win_conditions:\n 1. No Solution in win_conditions may be combined with another Solution in win_conditions.\n '
pass | @abstractmethod
def get_win_conditions(self) -> Set[Solution]:
'Returns all win conditions for the current game position.\n\n Returns:\n win_conditions (Set[Solution]): a subset of all Solutions in this state.\n\n Constraints on win_conditions:\n 1. No Solution in win_conditions may be combined with another Solution in win_conditions.\n '
pass<|docstring|>Returns all win conditions for the current game position.
Returns:
win_conditions (Set[Solution]): a subset of all Solutions in this state.
Constraints on win_conditions:
1. No Solution in win_conditions may be combined with another Solution in win_conditions.<|endoftext|> |
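The three abstract methods above define the manager's contract: undo_move reports which Solutions reappear and which disappear, while get_solutions and get_win_conditions expose the current sets. A minimal concrete sketch follows; the Solution stand-in, the move method, and the diff-stack bookkeeping are illustrative assumptions, not code from the connect4 repository.
from typing import List, Set, Tuple

Solution = str  # stand-in for the repository's Solution class

class ToySolutionManager:
    # Illustrative only: tracks Solutions and supports undo via a stack of (removed, added) diffs.
    def __init__(self, initial: Set[Solution]):
        self._solutions = set(initial)
        self._history: List[Tuple[Set[Solution], Set[Solution]]] = []

    def move(self, removed: Set[Solution], added: Set[Solution]) -> None:
        self._solutions = (self._solutions - removed) | added
        self._history.append((removed, added))

    def undo_move(self) -> Tuple[Set[Solution], Set[Solution]]:
        assert self._history, "already at the state given upon initialization"
        removed, added = self._history.pop()
        self._solutions = (self._solutions - added) | removed
        return added, removed  # (removed by undoing, added by undoing)

    def get_solutions(self) -> Set[Solution]:
        return set(self._solutions)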
c18356a98ae2b064fd0ca986502d8448ae0ec66f782f75e0a4b8a3ece9a5c01b | def apply_datastore_env_vars(project):
'Fetch `env_var` entities from the datastore and apply to the current env.\n\n Each `env_var` entity should have two string properties, `key` and `value`.\n '
from google.cloud.datastore import Client
for entity in Client(project).query(kind='env_var').fetch():
key = str(entity['key'])
value = str(entity['value'])
os.environ.setdefault(key, value) | Fetch `env_var` entities from the datastore and apply to the current env.
Each `env_var` entity should have two string properties, `key` and `value`. | django/liamnewmarch/google_cloud.py | apply_datastore_env_vars | liamnewmarch/liam.nwmr.ch | 0 | python | def apply_datastore_env_vars(project):
'Fetch `env_var` entities from the datastore and apply to the current env.\n\n Each `env_var` entity should have two string properties, `key` and `value`.\n '
from google.cloud.datastore import Client
for entity in Client(project).query(kind='env_var').fetch():
key = str(entity['key'])
value = str(entity['value'])
os.environ.setdefault(key, value) | def apply_datastore_env_vars(project):
'Fetch `env_var` entities from the datastore and apply to the current env.\n\n Each `env_var` entity should have two string properties, `key` and `value`.\n '
from google.cloud.datastore import Client
for entity in Client(project).query(kind='env_var').fetch():
key = str(entity['key'])
value = str(entity['value'])
os.environ.setdefault(key, value)<|docstring|>Fetch `env_var` entities from the datastore and apply to the current env.
Each `env_var` entity should have two string properties, `key` and `value`.<|endoftext|> |
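A sketch of calling this helper from a Django settings module; the import path is inferred from the file path above and the project ID is a placeholder, so treat both as assumptions.
import os
from liamnewmarch.google_cloud import apply_datastore_env_vars  # path inferred from django/liamnewmarch/google_cloud.py

if os.environ.get('GAE_ENV', '').startswith('standard'):  # only when running on App Engine
    apply_datastore_env_vars('my-gcp-project')             # placeholder project ID
    SECRET_KEY = os.environ['SECRET_KEY']                   # now populated from the Datastore env_var entities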
e60f1ebff7da92e9a620f5f3d7348df33ea6831a1d558b4d750a3009a86ecda7 | def setup_cloud_logging():
'Attaches Google Cloud Logging to the root logger.\n\n https://cloud.google.com/logging/docs/setup/python'
from google.cloud.logging import Client
logging = Client()
logging.get_default_handler()
logging.setup_logging() | Attaches Google Cloud Logging to the root logger.
https://cloud.google.com/logging/docs/setup/python | django/liamnewmarch/google_cloud.py | setup_cloud_logging | liamnewmarch/liam.nwmr.ch | 0 | python | def setup_cloud_logging():
'Attaches Google Cloud Logging to the root logger.\n\n https://cloud.google.com/logging/docs/setup/python'
from google.cloud.logging import Client
logging = Client()
logging.get_default_handler()
logging.setup_logging() | def setup_cloud_logging():
'Attaches Google Cloud Logging to the root logger.\n\n https://cloud.google.com/logging/docs/setup/python'
from google.cloud.logging import Client
logging = Client()
logging.get_default_handler()
logging.setup_logging()<|docstring|>Attaches Google Cloud Logging to the root logger.
https://cloud.google.com/logging/docs/setup/python<|endoftext|> |
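Once setup_logging() has attached the Cloud Logging handler to the root logger, ordinary logging calls are forwarded. A minimal usage sketch, assuming Google credentials are available in the environment and the same inferred import path:
import logging
from liamnewmarch.google_cloud import setup_cloud_logging  # path inferred from the record above

setup_cloud_logging()                               # attach the Cloud Logging handler once at startup
logging.getLogger(__name__).info('request served')  # this record is shipped to Cloud Logging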
d769800106315826bc64479f2de0e0bc165ff5719568d777625a166d373e78a1 | def encode_auth_token(self):
'\n Generates the Auth Token\n :return: string\n '
try:
payload = {'exp': (datetime.datetime.utcnow() + datetime.timedelta(days=0, seconds=5)), 'iat': datetime.datetime.utcnow(), 'id': self.id, 'first_name': self.first_name, 'last_name': self.last_name, 'email': self.email}
return jwt.encode(payload, current_app.config.get('SECRET_KEY'), algorithm='HS256')
except Exception as e:
return e | Generates the Auth Token
:return: string | web/app/models/auth_models.py | encode_auth_token | EMSL-Computing/CoreMS-Portal | 2 | python | def encode_auth_token(self):
'\n Generates the Auth Token\n :return: string\n '
try:
payload = {'exp': (datetime.datetime.utcnow() + datetime.timedelta(days=0, seconds=5)), 'iat': datetime.datetime.utcnow(), 'id': self.id, 'first_name': self.first_name, 'last_name': self.last_name, 'email': self.email}
return jwt.encode(payload, current_app.config.get('SECRET_KEY'), algorithm='HS256')
except Exception as e:
return e | def encode_auth_token(self):
'\n Generates the Auth Token\n :return: string\n '
try:
payload = {'exp': (datetime.datetime.utcnow() + datetime.timedelta(days=0, seconds=5)), 'iat': datetime.datetime.utcnow(), 'id': self.id, 'first_name': self.first_name, 'last_name': self.last_name, 'email': self.email}
return jwt.encode(payload, current_app.config.get('SECRET_KEY'), algorithm='HS256')
except Exception as e:
return e<|docstring|>Generates the Auth Token
:return: string<|endoftext|> |
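A possible decoding counterpart, not shown in the source file: PyJWT's jwt.decode verifies the signature and the exp claim set above, so the five-second lifetime surfaces as ExpiredSignatureError. The function name and messages are illustrative.
import jwt

def decode_auth_token(token, secret_key):
    # Illustrative helper, assuming the same HS256 secret used by encode_auth_token.
    try:
        payload = jwt.decode(token, secret_key, algorithms=['HS256'])
        return payload['id']
    except jwt.ExpiredSignatureError:
        return 'Signature expired. Please log in again.'
    except jwt.InvalidTokenError:
        return 'Invalid token. Please log in again.'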
bef673e4c1ebca059d0ea84a2057621c7562c2da5284e621033e57c97be8075b | def step_to(obs, new_position, lay_bomb=False):
'return: a copy of the new observation after stepping into new_position. \n If lay_bomb==True, it is actually a two-step change (i.e., lay bomb then go to new_position)\n '
new_obs = copy.deepcopy(obs)
sz = len(obs['board'])
old_board = obs['board']
old_position = obs['position']
old_position_value = constants.Item.Bomb.value
if (not lay_bomb):
old_position_value = (constants.Item.Bomb.value if (obs['bomb_life'][old_position] > 0) else constants.Item.Passage.value)
new_obs['position'] = new_position
new_obs['board'][old_position] = old_position_value
agent_id = old_board[old_position]
new_obs['board'][new_position] = agent_id
if lay_bomb:
new_obs['bomb_blast_strength'][old_position] = obs['blast_strength']
new_obs['bomb_life'][old_position] = constants.DEFAULT_BOMB_LIFE
for i in range(sz):
for j in range(sz):
time_step = (2 if lay_bomb else 1)
if (new_obs['bomb_life'][(i, j)] < 2):
continue
new_obs['bomb_life'][(i, j)] = max(1, (new_obs['bomb_life'][(i, j)] - time_step))
return new_obs | return: a copy of the new observation after stepping into new_position.
If lay_bomb==True, it is actually a two-step change (i.e., lay bomb then go to new_position) | pommerman/agents/direction_filter.py | step_to | eomiso/Deep-Pommerman | 0 | python | def step_to(obs, new_position, lay_bomb=False):
'return: a copy of the new observation after stepping into new_position. \n If lay_bomb==True, it is actually a two-step change (i.e., lay bomb then go to new_position)\n '
new_obs = copy.deepcopy(obs)
sz = len(obs['board'])
old_board = obs['board']
old_position = obs['position']
old_position_value = constants.Item.Bomb.value
if (not lay_bomb):
old_position_value = (constants.Item.Bomb.value if (obs['bomb_life'][old_position] > 0) else constants.Item.Passage.value)
new_obs['position'] = new_position
new_obs['board'][old_position] = old_position_value
agent_id = old_board[old_position]
new_obs['board'][new_position] = agent_id
if lay_bomb:
new_obs['bomb_blast_strength'][old_position] = obs['blast_strength']
new_obs['bomb_life'][old_position] = constants.DEFAULT_BOMB_LIFE
for i in range(sz):
for j in range(sz):
time_step = (2 if lay_bomb else 1)
if (new_obs['bomb_life'][(i, j)] < 2):
continue
new_obs['bomb_life'][(i, j)] = max(1, (new_obs['bomb_life'][(i, j)] - time_step))
return new_obs | def step_to(obs, new_position, lay_bomb=False):
'return: a copy of the new observation after stepping into new_position. \n If lay_bomb==True, it is actually a two-step change (i.e., lay bomb then go to new_position)\n '
new_obs = copy.deepcopy(obs)
sz = len(obs['board'])
old_board = obs['board']
old_position = obs['position']
old_position_value = constants.Item.Bomb.value
if (not lay_bomb):
old_position_value = (constants.Item.Bomb.value if (obs['bomb_life'][old_position] > 0) else constants.Item.Passage.value)
new_obs['position'] = new_position
new_obs['board'][old_position] = old_position_value
agent_id = old_board[old_position]
new_obs['board'][new_position] = agent_id
if lay_bomb:
new_obs['bomb_blast_strength'][old_position] = obs['blast_strength']
new_obs['bomb_life'][old_position] = constants.DEFAULT_BOMB_LIFE
for i in range(sz):
for j in range(sz):
time_step = (2 if lay_bomb else 1)
if (new_obs['bomb_life'][(i, j)] < 2):
continue
new_obs['bomb_life'][(i, j)] = max(1, (new_obs['bomb_life'][(i, j)] - time_step))
return new_obs<|docstring|>return: a copy of the new observation after stepping into new_position.
If lay_bomb==True, it is actually a two-step change (i.e., lay bomb then go to new_position)<|endoftext|>
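A minimal sketch of how a direction filter might probe candidate moves with step_to. The observation below is hand-built with only the keys the function reads; the board size, agent id, and import path are placeholder assumptions rather than values from the repository.
import numpy as np
from pommerman.agents.direction_filter import step_to  # path inferred from the record above

obs = {
    'board': np.zeros((11, 11), dtype=int),
    'position': (5, 5),
    'bomb_life': np.zeros((11, 11)),
    'bomb_blast_strength': np.zeros((11, 11)),
    'blast_strength': 2,
}
obs['board'][5, 5] = 10  # placeholder agent id at the current position
for candidate in [(4, 5), (6, 5), (5, 4), (5, 6)]:
    moved = step_to(obs, candidate)                  # plain one-step move
    bombed = step_to(obs, candidate, lay_bomb=True)  # lay a bomb, then move
    # score `moved` / `bombed` with whatever safety check the agent applies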
e668762ff6ea19f6c422db680533f088a76a12d5f0a02aa9ad5feea8f1daa6fe | def bomb_real_life(bomb_position, board, bomb_blast_st, bomb_life):
"One bomb's real life is the minimum life of its adjacent bomb. \n Not that this could be chained, so please call it on each bomb mulitple times until\n converge\n "
(x, y) = bomb_position
i = x
min_life = 900
sz = len(board)
while (i >= 0):
pos = (i, y)
dist = abs((i - x))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
i -= 1
i = x
while (i < sz):
pos = (i, y)
dist = abs((i - x))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
i += 1
j = y
while (j >= 0):
pos = (x, j)
dist = abs((j - y))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
j -= 1
j = y
while (j < sz):
pos = (x, j)
dist = abs((j - y))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
j += 1
return min_life | One bomb's real life is the minimum life of its adjacent bomb.
Note that this could be chained, so please call it on each bomb multiple times until
convergence | pommerman/agents/direction_filter.py | bomb_real_life | eomiso/Deep-Pommerman | 0 | python | def bomb_real_life(bomb_position, board, bomb_blast_st, bomb_life):
"One bomb's real life is the minimum life of its adjacent bomb. \n Not that this could be chained, so please call it on each bomb mulitple times until\n converge\n "
(x, y) = bomb_position
i = x
min_life = 900
sz = len(board)
while (i >= 0):
pos = (i, y)
dist = abs((i - x))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
i -= 1
i = x
while (i < sz):
pos = (i, y)
dist = abs((i - x))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
i += 1
j = y
while (j >= 0):
pos = (x, j)
dist = abs((j - y))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
j -= 1
j = y
while (j < sz):
pos = (x, j)
dist = abs((j - y))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
j += 1
return min_life | def bomb_real_life(bomb_position, board, bomb_blast_st, bomb_life):
"One bomb's real life is the minimum life of its adjacent bomb. \n Not that this could be chained, so please call it on each bomb mulitple times until\n converge\n "
(x, y) = bomb_position
i = x
min_life = 900
sz = len(board)
while (i >= 0):
pos = (i, y)
dist = abs((i - x))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
i -= 1
i = x
while (i < sz):
pos = (i, y)
dist = abs((i - x))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
i += 1
j = y
while (j >= 0):
pos = (x, j)
dist = abs((j - y))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
j -= 1
j = y
while (j < sz):
pos = (x, j)
dist = abs((j - y))
if utility.position_is_wall(board, pos):
break
if ((bomb_life[pos] > 0) and (dist <= (bomb_blast_st[pos] - 1))):
min_life = int(bomb_life[pos])
j += 1
return min_life<|docstring|>One bomb's real life is the minimum life of its adjacent bomb.
Note that this could be chained, so please call it on each bomb multiple times until
convergence<|endoftext|>
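Because bomb chains propagate, the docstring asks for repeated evaluation; below is an illustrative fixed-point loop, not code from the source file. Note that the helper assigns min_life from each covering bomb it encounters rather than taking a running minimum, which is another reason repeated passes are needed.
import numpy as np
from pommerman.agents.direction_filter import bomb_real_life  # path inferred from the record above

def evolve_bomb_lives(board, bomb_blast_st, bomb_life):
    # Re-evaluate every live bomb until no chained life changes any more.
    life = np.array(bomb_life, dtype=float)
    changed = True
    while changed:
        changed = False
        for pos in zip(*np.where(life > 0)):
            real = bomb_real_life(pos, board, bomb_blast_st, life)
            if real < life[pos]:
                life[pos] = real
                changed = True
    return life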
0cdfb7b2217149196a27ec0f1d8b83c22bda1648aa21215b77adb104b4729703 | def add_to_danger_positions(pos, danger_positions, board, blast_st, bomb_life, covered_bomb_positions):
'Due to bomb chaining, bombs with life >= 2 would still blow up if they are in the danger positions. '
sz = int(blast_st[pos])
(x, y) = pos
danger_positions.add(pos)
for i in range(1, sz):
pos2_list = [((x + i), y), ((x - i), y), (x, (y + i)), (x, (y - i))]
for pos2 in pos2_list:
if utility.position_on_board(board, pos2):
danger_positions.add(pos2)
if (bomb_life[pos2] > 1):
covered_bomb_positions.add(pos2) | Due to bomb chaining, bombs with life >= 2 would still blow up if they are in the danger positions. | pommerman/agents/direction_filter.py | add_to_danger_positions | eomiso/Deep-Pommerman | 0 | python | def add_to_danger_positions(pos, danger_positions, board, blast_st, bomb_life, covered_bomb_positions):
' '
sz = int(blast_st[pos])
(x, y) = pos
danger_positions.add(pos)
for i in range(1, sz):
pos2_list = [((x + i), y), ((x - i), y), (x, (y + i)), (x, (y - i))]
for pos2 in pos2_list:
if utility.position_on_board(board, pos2):
danger_positions.add(pos2)
if (bomb_life[pos2] > 1):
covered_bomb_positions.add(pos2) | def add_to_danger_positions(pos, danger_positions, board, blast_st, bomb_life, covered_bomb_positions):
' '
sz = int(blast_st[pos])
(x, y) = pos
danger_positions.add(pos)
for i in range(1, sz):
pos2_list = [((x + i), y), ((x - i), y), (x, (y + i)), (x, (y - i))]
for pos2 in pos2_list:
if utility.position_on_board(board, pos2):
danger_positions.add(pos2)
if (bomb_life[pos2] > 1):
covered_bomb_positions.add(pos2)<|docstring|>Due to bomb chaining, bombs with life >= 2 would still blow up if they are in the danger positions.<|endoftext|>
d2c198f0eeb808fdd132ec1b0ff0358d3ff5a39c9393fa1de576a1bcdffc90a7 | def isPalindrome(self, x):
'\n :type x: int\n :rtype: bool\n '
if (x < 0):
return False
temp = x
cnt = 0
while temp:
cnt += 1
temp //= 10
low = 0
high = (cnt - 1)
while (low < high):
if (((x % (10 ** (low + 1))) // (10 ** low)) != ((x % (10 ** (high + 1))) // (10 ** high))):
return False
low += 1
high -= 1
return True | :type x: int
:rtype: bool | Python/leetcode.009.palindrome-number.py | isPalindrome | tedye/leetcode | 4 | python | def isPalindrome(self, x):
'\n :type x: int\n :rtype: bool\n '
if (x < 0):
return False
temp = x
cnt = 0
while temp:
cnt += 1
temp //= 10
low = 0
high = (cnt - 1)
while (low < high):
if (((x % (10 ** (low + 1))) // (10 ** low)) != ((x % (10 ** (high + 1))) // (10 ** high))):
return False
low += 1
high -= 1
return True | def isPalindrome(self, x):
'\n :type x: int\n :rtype: bool\n '
if (x < 0):
return False
temp = x
cnt = 0
while temp:
cnt += 1
temp //= 10
low = 0
high = (cnt - 1)
while (low < high):
if (((x % (10 ** (low + 1))) // (10 ** low)) != ((x % (10 ** (high + 1))) // (10 ** high))):
return False
low += 1
high -= 1
return True<|docstring|>:type x: int
:rtype: bool<|endoftext|> |
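A quick self-check of the digit-arithmetic approach against a string-reversal baseline, assuming the method sits on the usual LeetCode-style Solution class:
s = Solution()
for value in [0, 7, 121, 1221, -121, 10, 12321]:
    assert s.isPalindrome(value) == (value >= 0 and str(value) == str(value)[::-1])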
7f732cb97f5fe15f6f0f1fc4cbefda8293b63723c10dd19365b3d038bd9f0354 | def __init__(self, save_dir: str, env: Env, policy: Policy, max_iter: int, num_rollouts: int, pop_size: [int, None]=None, num_sampler_envs: int=4, base_seed: [int, None]=None, logger: StepLogger=None):
'\n Constructor\n\n :param save_dir: directory to save the snapshots i.e. the results in\n :param env: the environment which the policy operates\n :param policy: policy to be updated\n :param max_iter: maximum number of iterations (i.e. policy updates) that this algorithm runs\n :param num_rollouts: number of rollouts per solution\n :param pop_size: number of solutions in the population, pass `None` to use a default that scales logarithmically\n with the number of policy parameters\n :param num_sampler_envs: number of parallel environments in the sampler\n :param base_seed: seed added to all other seeds in order to make the experiments distinct but repeatable\n :param logger: logger for every step of the algorithm, if `None` the default logger will be created\n '
if (not isinstance(env, Env)):
raise pyrado.TypeErr(given=env, expected_type=Env)
if (not (isinstance(pop_size, int) or (pop_size is None))):
raise pyrado.TypeErr(given=pop_size, expected_type=int)
if (isinstance(pop_size, int) and (pop_size <= 0)):
raise pyrado.ValueErr(given=pop_size, g_constraint='0')
super().__init__(save_dir, max_iter, policy, logger)
self._env = env
self.num_rollouts = num_rollouts
if (pop_size is None):
pop_size = (4 + int((3 * np.log(policy.num_param))))
print_cbt(f'Initialized population size to {pop_size}.', 'y')
self.pop_size = pop_size
self.sampler = ParameterExplorationSampler(env, policy, num_envs=num_sampler_envs, num_rollouts_per_param=num_rollouts, seed=base_seed)
self.ret_avg_stack = (1000.0 * np.random.randn(20))
self.thold_ret_std = 0.1
self.best_policy_param = policy.param_values.clone()
self._expl_strat = None | Constructor
:param save_dir: directory to save the snapshots i.e. the results in
:param env: the environment which the policy operates
:param policy: policy to be updated
:param max_iter: maximum number of iterations (i.e. policy updates) that this algorithm runs
:param num_rollouts: number of rollouts per solution
:param pop_size: number of solutions in the population, pass `None` to use a default that scales logarithmically
with the number of policy parameters
:param num_sampler_envs: number of parallel environments in the sampler
:param base_seed: seed added to all other seeds in order to make the experiments distinct but repeatable
:param logger: logger for every step of the algorithm, if `None` the default logger will be created | Pyrado/pyrado/algorithms/parameter_exploring.py | __init__ | jacarvalho/SimuRLacra | 0 | python | def __init__(self, save_dir: str, env: Env, policy: Policy, max_iter: int, num_rollouts: int, pop_size: [int, None]=None, num_sampler_envs: int=4, base_seed: [int, None]=None, logger: StepLogger=None):
'\n Constructor\n\n :param save_dir: directory to save the snapshots i.e. the results in\n :param env: the environment which the policy operates\n :param policy: policy to be updated\n :param max_iter: maximum number of iterations (i.e. policy updates) that this algorithm runs\n :param num_rollouts: number of rollouts per solution\n :param pop_size: number of solutions in the population, pass `None` to use a default that scales logarithmically\n with the number of policy parameters\n :param num_sampler_envs: number of parallel environments in the sampler\n :param base_seed: seed added to all other seeds in order to make the experiments distinct but repeatable\n :param logger: logger for every step of the algorithm, if `None` the default logger will be created\n '
if (not isinstance(env, Env)):
raise pyrado.TypeErr(given=env, expected_type=Env)
if (not (isinstance(pop_size, int) or (pop_size is None))):
raise pyrado.TypeErr(given=pop_size, expected_type=int)
if (isinstance(pop_size, int) and (pop_size <= 0)):
raise pyrado.ValueErr(given=pop_size, g_constraint='0')
super().__init__(save_dir, max_iter, policy, logger)
self._env = env
self.num_rollouts = num_rollouts
if (pop_size is None):
pop_size = (4 + int((3 * np.log(policy.num_param))))
print_cbt(f'Initialized population size to {pop_size}.', 'y')
self.pop_size = pop_size
self.sampler = ParameterExplorationSampler(env, policy, num_envs=num_sampler_envs, num_rollouts_per_param=num_rollouts, seed=base_seed)
self.ret_avg_stack = (1000.0 * np.random.randn(20))
self.thold_ret_std = 0.1
self.best_policy_param = policy.param_values.clone()
self._expl_strat = None | def __init__(self, save_dir: str, env: Env, policy: Policy, max_iter: int, num_rollouts: int, pop_size: [int, None]=None, num_sampler_envs: int=4, base_seed: [int, None]=None, logger: StepLogger=None):
'\n Constructor\n\n :param save_dir: directory to save the snapshots i.e. the results in\n :param env: the environment which the policy operates\n :param policy: policy to be updated\n :param max_iter: maximum number of iterations (i.e. policy updates) that this algorithm runs\n :param num_rollouts: number of rollouts per solution\n :param pop_size: number of solutions in the population, pass `None` to use a default that scales logarithmically\n with the number of policy parameters\n :param num_sampler_envs: number of parallel environments in the sampler\n :param base_seed: seed added to all other seeds in order to make the experiments distinct but repeatable\n :param logger: logger for every step of the algorithm, if `None` the default logger will be created\n '
if (not isinstance(env, Env)):
raise pyrado.TypeErr(given=env, expected_type=Env)
if (not (isinstance(pop_size, int) or (pop_size is None))):
raise pyrado.TypeErr(given=pop_size, expected_type=int)
if (isinstance(pop_size, int) and (pop_size <= 0)):
raise pyrado.ValueErr(given=pop_size, g_constraint='0')
super().__init__(save_dir, max_iter, policy, logger)
self._env = env
self.num_rollouts = num_rollouts
if (pop_size is None):
pop_size = (4 + int((3 * np.log(policy.num_param))))
print_cbt(f'Initialized population size to {pop_size}.', 'y')
self.pop_size = pop_size
self.sampler = ParameterExplorationSampler(env, policy, num_envs=num_sampler_envs, num_rollouts_per_param=num_rollouts, seed=base_seed)
self.ret_avg_stack = (1000.0 * np.random.randn(20))
self.thold_ret_std = 0.1
self.best_policy_param = policy.param_values.clone()
self._expl_strat = None<|docstring|>Constructor
:param save_dir: directory to save the snapshots i.e. the results in
:param env: the environment which the policy operates
:param policy: policy to be updated
:param max_iter: maximum number of iterations (i.e. policy updates) that this algorithm runs
:param num_rollouts: number of rollouts per solution
:param pop_size: number of solutions in the population, pass `None` to use a default that scales logarithmically
with the number of policy parameters
:param num_sampler_envs: number of parallel environments in the sampler
:param base_seed: seed added to all other seeds in order to make the experiments distinct but repeatable
:param logger: logger for every step of the algorithm, if `None` the default logger will be created<|endoftext|> |
7ec580cc2c3e509c1c41204bc628735882226dc4197b69bd7b8b23028f2cfa12 | def stopping_criterion_met(self) -> bool:
'\n Check if the average reward of the mean policy did not change more than the specified threshold over the\n last iterations.\n '
if (np.std(self.ret_avg_stack) < self.thold_ret_std):
return True
else:
return False | Check if the average reward of the mean policy did not change more than the specified threshold over the
last iterations. | Pyrado/pyrado/algorithms/parameter_exploring.py | stopping_criterion_met | jacarvalho/SimuRLacra | 0 | python | def stopping_criterion_met(self) -> bool:
'\n Check if the average reward of the mean policy did not change more than the specified threshold over the\n last iterations.\n '
if (np.std(self.ret_avg_stack) < self.thold_ret_std):
return True
else:
return False | def stopping_criterion_met(self) -> bool:
'\n Check if the average reward of the mean policy did not change more than the specified threshold over the\n last iterations.\n '
if (np.std(self.ret_avg_stack) < self.thold_ret_std):
return True
else:
return False<|docstring|>Check if the average reward of the mean policy did not change more than the specified threshold over the
last iterations.<|endoftext|> |
78741961634485bdf14ade7c4c6ad91c3714b96f8d2d3ac81be2f1fee308ba82 | @abstractmethod
def update(self, param_results: ParameterSamplingResult, ret_avg_curr: float):
'\n Update the policy from the given samples.\n\n :param param_results: Sampled parameters with evaluation\n :param ret_avg_curr: Average return for the current parameters\n '
raise NotImplementedError | Update the policy from the given samples.
:param param_results: Sampled parameters with evaluation
:param ret_avg_curr: Average return for the current parameters | Pyrado/pyrado/algorithms/parameter_exploring.py | update | jacarvalho/SimuRLacra | 0 | python | @abstractmethod
def update(self, param_results: ParameterSamplingResult, ret_avg_curr: float):
'\n Update the policy from the given samples.\n\n :param param_results: Sampled parameters with evaluation\n :param ret_avg_curr: Average return for the current parameters\n '
raise NotImplementedError | @abstractmethod
def update(self, param_results: ParameterSamplingResult, ret_avg_curr: float):
'\n Update the policy from the given samples.\n\n :param param_results: Sampled parameters with evaluation\n :param ret_avg_curr: Average return for the current parameters\n '
raise NotImplementedError<|docstring|>Update the policy from the given samples.
:param param_results: Sampled parameters with evaluation
:param ret_avg_curr: Average return for the current parameters<|endoftext|> |
355a8b2bedc474e0cacd57072f19baccb992a9b688c9c053f4e0f4d42cdb692a | def deploy_cmd():
'Creates deploy command'
@ersilia_cli.command(short_help='Deploy model to the cloud', help='Deploy model in a cloud service. This option is only for developers and requires credentials.')
@click.argument('model', type=click.STRING)
@click.option('--cloud', default='heroku', type=click.STRING)
def deploy(model, cloud):
model_id = ModelBase(model).model_id
dp = Deployer(cloud=cloud)
if (dp.dep is None):
click.echo(click.style('Please enter a valid cloud option', fg='red'))
click.echo(click.style("Only 'heroku' and 'local' are available for the moment...", fg='yellow'))
return
dp.deploy(model_id) | Creates deploy command | ersilia/cli/commands/deploy.py | deploy_cmd | ersilia-os/ersilia | 32 | python | def deploy_cmd():
@ersilia_cli.command(short_help='Deploy model to the cloud', help='Deploy model in a cloud service. This option is only for developers and requires credentials.')
@click.argument('model', type=click.STRING)
@click.option('--cloud', default='heroku', type=click.STRING)
def deploy(model, cloud):
model_id = ModelBase(model).model_id
dp = Deployer(cloud=cloud)
if (dp.dep is None):
click.echo(click.style('Please enter a valid cloud option', fg='red'))
click.echo(click.style("Only 'heroku' and 'local' are available for the moment...", fg='yellow'))
return
dp.deploy(model_id) | def deploy_cmd():
@ersilia_cli.command(short_help='Deploy model to the cloud', help='Deploy model in a cloud service. This option is only for developers and requires credentials.')
@click.argument('model', type=click.STRING)
@click.option('--cloud', default='heroku', type=click.STRING)
def deploy(model, cloud):
model_id = ModelBase(model).model_id
dp = Deployer(cloud=cloud)
if (dp.dep is None):
click.echo(click.style('Please enter a valid cloud option', fg='red'))
click.echo(click.style("Only 'heroku' and 'local' are available for the moment...", fg='yellow'))
return
dp.deploy(model_id)<|docstring|>Creates deploy command<|endoftext|> |
3738a2e4e6742a792ccc3c969795dc08db0c2a737172037d4aca122c3735460c | def Gauss_resample(x, y, N):
'\n Resample features based on Gaussian approximation\n\n Note we divide the covariance by 2!\n\n Parameters\n ----------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n N : int\n Total samples to simulate (to be added to original sample)\n\n Returns\n -------\n newX : numpy.ndarray\n New Feature array\n newY : numpy.ndarray\n New label array\n '
uys = np.unique(y)
newX = np.zeros((int((N * len(uys))), np.size(x, axis=1)))
newy = np.zeros((int((N * len(uys))),))
for (i, uy) in enumerate(uys):
gind = np.where((y == uy))
newX[((i * N):((i * N) + len(gind[0])), :)] = x[(gind[0], :)]
newy[(i * N):((i + 1) * N)] = uy
cx = x[(gind[0], :)]
mean = np.mean(cx, axis=0)
cov = np.cov(cx, rowvar=False)
newX[((i * N) + len(gind[0])):((i + 1) * N)] = np.random.multivariate_normal(mean, (cov / 2.0), size=(N - len(gind[0])))
return (newX, newy) | Resample features based on Gaussian approximation
Note we divide the covariance by 2!
Parameters
----------
X : numpy.ndarray
Feature array
y : numpy.ndarray
Label array
N : int
Total samples to simulate (to be added to original sample)
Returns
-------
newX : numpy.ndarray
New Feature array
newY : numpy.ndarray
New label array | vraenn/classify.py | Gauss_resample | villrv/agnvae | 2 | python | def Gauss_resample(x, y, N):
'\n Resample features based on Gaussian approximation\n\n Note we divide the covariance by 2!\n\n Parameters\n ----------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n N : int\n Total samples to simulate (to be added to original sample)\n\n Returns\n -------\n newX : numpy.ndarray\n New Feature array\n newY : numpy.ndarray\n New label array\n '
uys = np.unique(y)
newX = np.zeros((int((N * len(uys))), np.size(x, axis=1)))
newy = np.zeros((int((N * len(uys))),))
for (i, uy) in enumerate(uys):
gind = np.where((y == uy))
newX[((i * N):((i * N) + len(gind[0])), :)] = x[(gind[0], :)]
newy[(i * N):((i + 1) * N)] = uy
cx = x[(gind[0], :)]
mean = np.mean(cx, axis=0)
cov = np.cov(cx, rowvar=False)
newX[((i * N) + len(gind[0])):((i + 1) * N)] = np.random.multivariate_normal(mean, (cov / 2.0), size=(N - len(gind[0])))
return (newX, newy) | def Gauss_resample(x, y, N):
'\n Resample features based on Gaussian approximation\n\n Note we divide the covariance by 2!\n\n Parameters\n ----------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n N : int\n Total samples to simulate (to be added to original sample)\n\n Returns\n -------\n newX : numpy.ndarray\n New Feature array\n newY : numpy.ndarray\n New label array\n '
uys = np.unique(y)
newX = np.zeros((int((N * len(uys))), np.size(x, axis=1)))
newy = np.zeros((int((N * len(uys))),))
for (i, uy) in enumerate(uys):
gind = np.where((y == uy))
newX[((i * N):((i * N) + len(gind[0])), :)] = x[(gind[0], :)]
newy[(i * N):((i + 1) * N)] = uy
cx = x[(gind[0], :)]
mean = np.mean(cx, axis=0)
cov = np.cov(cx, rowvar=False)
newX[((i * N) + len(gind[0])):((i + 1) * N)] = np.random.multivariate_normal(mean, (cov / 2.0), size=(N - len(gind[0])))
return (newX, newy)<|docstring|>Resample features based on Gaussian approximation
Note we divide the covariance by 2!
Parameters
----------
X : numpy.ndarray
Feature array
y : numpy.ndarray
Label array
N : int
Total samples to simulate (to be added to original sample)
Returns
-------
newX : numpy.ndarray
New Feature array
newY : numpy.ndarray
New label array<|endoftext|> |
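A small usage sketch: oversampling a toy two-class feature matrix so each class ends up with N rows; the original rows are kept at the start of each class block. Shapes, labels, and the import path are placeholders.
import numpy as np
from vraenn.classify import Gauss_resample  # path inferred from the record above

x = np.random.randn(30, 5)              # 30 objects, 5 features
y = np.array([0] * 20 + [1] * 10)       # imbalanced labels
newX, newy = Gauss_resample(x, y, 100)  # 100 rows per class
# newX.shape == (200, 5); rows 0-19 and 100-109 are the original samples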
c4a4703277bd230b638eff4e3016e94de98757e149cfb693b172bc5374864a6a | def KDE_resample(x, y, N, bandwidth=0.5):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n N : int\n Total samples to simulate (to be added to original sample)\n\n Returns\n -------\n newX : numpy.ndarray\n New Feature array\n newY : numpy.ndarray\n New label array\n '
uys = np.unique(y)
newX = np.zeros((int((N * len(uys))), np.size(x, axis=1)))
newy = np.zeros((int((N * len(uys))),))
for (i, uy) in enumerate(uys):
gind = np.where((y == uy))
newX[((i * N):((i * N) + len(gind[0])), :)] = x[(gind[0], :)]
newy[(i * N):((i + 1) * N)] = uy
cx = x[(gind[0], :)]
kde = KernelDensity(kernel='gaussian', bandwidth=bandwidth).fit(cx)
newX[((i * N) + len(gind[0])):((i + 1) * N)] = kde.sample(n_samples=(N - len(gind[0])))
return (newX, newy) | Resample features based on Kernel Density approximation
Parameters
----------
X : numpy.ndarray
Feature array
y : numpy.ndarray
Label array
N : int
Total samples to simulate (to be added to original sample)
Returns
-------
newX : numpy.ndarray
New Feature array
newY : numpy.ndarray
New label array | vraenn/classify.py | KDE_resample | villrv/agnvae | 2 | python | def KDE_resample(x, y, N, bandwidth=0.5):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n N : int\n Total samples to simulate (to be added to original sample)\n\n Returns\n -------\n newX : numpy.ndarray\n New Feature array\n newY : numpy.ndarray\n New label array\n '
uys = np.unique(y)
newX = np.zeros((int((N * len(uys))), np.size(x, axis=1)))
newy = np.zeros((int((N * len(uys))),))
for (i, uy) in enumerate(uys):
gind = np.where((y == uy))
newX[((i * N):((i * N) + len(gind[0])), :)] = x[(gind[0], :)]
newy[(i * N):((i + 1) * N)] = uy
cx = x[(gind[0], :)]
kde = KernelDensity(kernel='gaussian', bandwidth=bandwidth).fit(cx)
newX[((i * N) + len(gind[0])):((i + 1) * N)] = kde.sample(n_samples=(N - len(gind[0])))
return (newX, newy) | def KDE_resample(x, y, N, bandwidth=0.5):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n N : int\n Total samples to simulate (to be added to original sample)\n\n Returns\n -------\n newX : numpy.ndarray\n New Feature array\n newY : numpy.ndarray\n New label array\n '
uys = np.unique(y)
newX = np.zeros((int((N * len(uys))), np.size(x, axis=1)))
newy = np.zeros((int((N * len(uys))),))
for (i, uy) in enumerate(uys):
gind = np.where((y == uy))
newX[((i * N):((i * N) + len(gind[0])), :)] = x[(gind[0], :)]
newy[(i * N):((i + 1) * N)] = uy
cx = x[(gind[0], :)]
kde = KernelDensity(kernel='gaussian', bandwidth=bandwidth).fit(cx)
newX[((i * N) + len(gind[0])):((i + 1) * N)] = kde.sample(n_samples=(N - len(gind[0])))
return (newX, newy)<|docstring|>Resample features based on Kernel Density approximation
Parameters
----------
X : numpy.ndarray
Feature array
y : numpy.ndarray
Label array
N : int
Total samples to simulate (to be added to original sample)
Returns
-------
newX : numpy.ndarray
New Feature array
newY : numpy.ndarray
New label array<|endoftext|> |
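The KDE variant is called the same way; bandwidth is the only extra knob, and the default of 0.5 presumably targets whitened features, so treat the value below as a guess to tune per dataset.
import numpy as np
from vraenn.classify import KDE_resample  # path inferred from the record above

x = np.random.randn(30, 5)
y = np.array([0] * 20 + [1] * 10)
newX_kde, newy_kde = KDE_resample(x, y, 100, bandwidth=0.25)  # tighter kernels than the default 0.5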
8f46e605bc071a9415dc2f51bfe1f4eb9852f3db023de9852a3488d9923e6260 | def prep_data_for_classifying(featurefile, means, stds, whiten=True, verbose=False):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n featurefile : str\n File with pre-processed features\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n whiten : bool\n Whiten features before classification\n verbose : bool\n Print if SNe fail\n\n Returns\n -------\n X : numpy.ndarray\n Feature array\n final_sn_names : numpy.ndarray\n Label array\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n feat_names : numpy.ndarray\n Array of feature names\n '
feat_data = np.load(featurefile, allow_pickle=True)
ids = feat_data['ids']
features = feat_data['features']
feat_names = feat_data['feat_names']
X = []
final_sn_names = []
for sn_name in ids:
gind = np.where((sn_name == ids))
if verbose:
if (len(gind[0]) == 0):
print('SN not found')
sys.exit(0)
if (not np.isfinite(features[gind][0]).all()):
print('Warning: ', sn_name, ' has a non-finite feature')
if (X == []):
X = features[gind][0]
else:
X = np.vstack((X, features[gind][0]))
final_sn_names.append(sn_name)
gind = np.where(np.isnan(X))
if (len(gind) > 0):
X[(gind[0], gind[1])] = means[gind[1]]
if whiten:
X = ((X - means) / stds)
return (X, final_sn_names, means, stds, feat_names) | Resample features based on Kernel Density approximation
Parameters
----------
featurefile : str
File with pre-processed features
means : numpy.ndarray
Means of features, used to whiten
stds : numpy.ndarray
St. dev of features, used to whiten
whiten : bool
Whiten features before classification
verbose : bool
Print if SNe fail
Returns
-------
X : numpy.ndarray
Feature array
final_sn_names : numpy.ndarray
Label array
means : numpy.ndarray
Means of features, used to whiten
stds : numpy.ndarray
St. dev of features, used to whiten
feat_names : numpy.ndarray
Array of feature names | vraenn/classify.py | prep_data_for_classifying | villrv/agnvae | 2 | python | def prep_data_for_classifying(featurefile, means, stds, whiten=True, verbose=False):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n featurefile : str\n File with pre-processed features\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n whiten : bool\n Whiten features before classification\n verbose : bool\n Print if SNe fail\n\n Returns\n -------\n X : numpy.ndarray\n Feature array\n final_sn_names : numpy.ndarray\n Label array\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n feat_names : numpy.ndarray\n Array of feature names\n '
feat_data = np.load(featurefile, allow_pickle=True)
ids = feat_data['ids']
features = feat_data['features']
feat_names = feat_data['feat_names']
X = []
final_sn_names = []
for sn_name in ids:
gind = np.where((sn_name == ids))
if verbose:
if (len(gind[0]) == 0):
print('SN not found')
sys.exit(0)
if (not np.isfinite(features[gind][0]).all()):
print('Warning: ', sn_name, ' has a non-finite feature')
if (X == []):
X = features[gind][0]
else:
X = np.vstack((X, features[gind][0]))
final_sn_names.append(sn_name)
gind = np.where(np.isnan(X))
if (len(gind) > 0):
X[(gind[0], gind[1])] = means[gind[1]]
if whiten:
X = ((X - means) / stds)
return (X, final_sn_names, means, stds, feat_names) | def prep_data_for_classifying(featurefile, means, stds, whiten=True, verbose=False):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n featurefile : str\n File with pre-processed features\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n whiten : bool\n Whiten features before classification\n verbose : bool\n Print if SNe fail\n\n Returns\n -------\n X : numpy.ndarray\n Feature array\n final_sn_names : numpy.ndarray\n Label array\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n feat_names : numpy.ndarray\n Array of feature names\n '
feat_data = np.load(featurefile, allow_pickle=True)
ids = feat_data['ids']
features = feat_data['features']
feat_names = feat_data['feat_names']
X = []
final_sn_names = []
for sn_name in ids:
gind = np.where((sn_name == ids))
if verbose:
if (len(gind[0]) == 0):
print('SN not found')
sys.exit(0)
if (not np.isfinite(features[gind][0]).all()):
print('Warning: ', sn_name, ' has a non-finite feature')
if (X == []):
X = features[gind][0]
else:
X = np.vstack((X, features[gind][0]))
final_sn_names.append(sn_name)
gind = np.where(np.isnan(X))
if (len(gind) > 0):
X[(gind[0], gind[1])] = means[gind[1]]
if whiten:
X = ((X - means) / stds)
return (X, final_sn_names, means, stds, feat_names)<|docstring|>Resample features based on Kernel Density approximation
Parameters
----------
featurefile : str
File with pre-processed features
means : numpy.ndarray
Means of features, used to whiten
stds : numpy.ndarray
St. dev of features, used to whiten
whiten : bool
Whiten features before classification
verbose : bool
Print if SNe fail
Returns
-------
X : numpy.ndarray
Feature array
final_sn_names : numpy.ndarray
Label array
means : numpy.ndarray
Means of features, used to whiten
stds : numpy.ndarray
St. dev of features, used to whiten
feat_names : numpy.ndarray
Array of feature names<|endoftext|> |
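Despite the recycled summary line, this function loads a saved feature file and whitens it against the supplied means and stds. A sketch of the expected .npz layout, with keys inferred from the loader above and all values as placeholders:
import numpy as np
from vraenn.classify import prep_data_for_classifying  # path inferred from the record above

np.savez('features.npz',
         ids=np.array(['SN1', 'SN2']),
         features=np.random.randn(2, 4),
         feat_names=np.array(['f1', 'f2', 'f3', 'f4']))
X, names, means, stds, feat_names = prep_data_for_classifying(
    'features.npz', means=np.zeros(4), stds=np.ones(4))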
7e46e1cdc40d3e0631b02c719d3a955e68533ea81e5dc84455e19c17694f4af5 | def prep_data_for_training(featurefile, metatable, whiten=True):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n featurefile : str\n File with pre-processed features\n metatable : numpy.ndarray\n Table which must include: Object Name, Redshift, Type, Estimate\n Peak Time, and EBV_MW\n whiten : bool\n Whiten features before classification\n\n Returns\n -------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n final_sn_names : numpy.ndarray\n Label array\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n feat_names : numpy.ndarray\n Array of feature names\n '
feat_data = np.load(featurefile, allow_pickle=True)
ids = feat_data['ids']
features = feat_data['features']
feat_names = feat_data['feat_names']
metadata = np.loadtxt(metatable, dtype=str, usecols=(0, 2))
sn_dict = {'SLSN': 0, 'SLSN-I': 0, 'SNIIL': 0, 'SNII': 1, 'SNIIP': 1, 'SNIIb': 1, 'SNII-pec': 1, 'SNIIn': 2, 'SLSN-II': 2, 'SNIa': 3, 'SNIa-91bg-like': 3, 'SNIa-91T-like': 3, 'SNIax[02cx-like]': 3, 'SNIa-pec': 3, 'SNIa-CSM': 3, 'SNIbc': 4, 'SNIc': 4, 'SNIb': 4, 'SNIbn': 4, 'SNIc-BL': 4, 'SNIb/c': 4, 'SNIb-Ca-rich': 4}
X = []
y = []
final_sn_names = []
for (sn_name, sn_type) in metadata:
gind = np.where((sn_name == ids))
if ('SN' not in sn_type):
continue
else:
sn_num = sn_dict[sn_type]
if (len(gind[0]) == 0):
continue
if (not np.isfinite(features[gind][0]).all()):
continue
if (X == []):
X = features[gind][0]
y = sn_num
else:
X = np.vstack((X, features[gind][0]))
y = np.append(y, sn_num)
final_sn_names.append(sn_name)
means = np.mean(X, axis=0)
stds = np.std(X, axis=0)
if whiten:
X = preprocessing.scale(X)
return (X, y, final_sn_names, means, stds, feat_names) | Resample features based on Kernel Density approximation
Parameters
----------
featurefile : str
File with pre-processed features
metatable : numpy.ndarray
Table which must include: Object Name, Redshift, Type, Estimate
Peak Time, and EBV_MW
whiten : bool
Whiten features before classification
Returns
-------
X : numpy.ndarray
Feature array
y : numpy.ndarray
Label array
final_sn_names : numpy.ndarray
Label array
means : numpy.ndarray
Means of features, used to whiten
stds : numpy.ndarray
St. dev of features, used to whiten
feat_names : numpy.ndarray
Array of feature names | vraenn/classify.py | prep_data_for_training | villrv/agnvae | 2 | python | def prep_data_for_training(featurefile, metatable, whiten=True):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n featurefile : str\n File with pre-processed features\n metatable : numpy.ndarray\n Table which must include: Object Name, Redshift, Type, Estimate\n Peak Time, and EBV_MW\n whiten : bool\n Whiten features before classification\n\n Returns\n -------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n final_sn_names : numpy.ndarray\n Label array\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n feat_names : numpy.ndarray\n Array of feature names\n '
feat_data = np.load(featurefile, allow_pickle=True)
ids = feat_data['ids']
features = feat_data['features']
feat_names = feat_data['feat_names']
metadata = np.loadtxt(metatable, dtype=str, usecols=(0, 2))
sn_dict = {'SLSN': 0, 'SLSN-I': 0, 'SNIIL': 0, 'SNII': 1, 'SNIIP': 1, 'SNIIb': 1, 'SNII-pec': 1, 'SNIIn': 2, 'SLSN-II': 2, 'SNIa': 3, 'SNIa-91bg-like': 3, 'SNIa-91T-like': 3, 'SNIax[02cx-like]': 3, 'SNIa-pec': 3, 'SNIa-CSM': 3, 'SNIbc': 4, 'SNIc': 4, 'SNIb': 4, 'SNIbn': 4, 'SNIc-BL': 4, 'SNIb/c': 4, 'SNIb-Ca-rich': 4}
X = []
y = []
final_sn_names = []
for (sn_name, sn_type) in metadata:
gind = np.where((sn_name == ids))
if ('SN' not in sn_type):
continue
else:
sn_num = sn_dict[sn_type]
if (len(gind[0]) == 0):
continue
if (not np.isfinite(features[gind][0]).all()):
continue
if (X == []):
X = features[gind][0]
y = sn_num
else:
X = np.vstack((X, features[gind][0]))
y = np.append(y, sn_num)
final_sn_names.append(sn_name)
means = np.mean(X, axis=0)
stds = np.std(X, axis=0)
if whiten:
X = preprocessing.scale(X)
return (X, y, final_sn_names, means, stds, feat_names) | def prep_data_for_training(featurefile, metatable, whiten=True):
'\n Resample features based on Kernel Density approximation\n\n Parameters\n ----------\n featurefile : str\n File with pre-processed features\n metatable : numpy.ndarray\n Table which must include: Object Name, Redshift, Type, Estimate\n Peak Time, and EBV_MW\n whiten : bool\n Whiten features before classification\n\n Returns\n -------\n X : numpy.ndarray\n Feature array\n y : numpy.ndarray\n Label array\n final_sn_names : numpy.ndarray\n Label array\n means : numpy.ndarray\n Means of features, used to whiten\n stds : numpy.ndarray\n St. dev of features, used to whiten\n feat_names : numpy.ndarray\n Array of feature names\n '
feat_data = np.load(featurefile, allow_pickle=True)
ids = feat_data['ids']
features = feat_data['features']
feat_names = feat_data['feat_names']
metadata = np.loadtxt(metatable, dtype=str, usecols=(0, 2))
sn_dict = {'SLSN': 0, 'SLSN-I': 0, 'SNIIL': 0, 'SNII': 1, 'SNIIP': 1, 'SNIIb': 1, 'SNII-pec': 1, 'SNIIn': 2, 'SLSN-II': 2, 'SNIa': 3, 'SNIa-91bg-like': 3, 'SNIa-91T-like': 3, 'SNIax[02cx-like]': 3, 'SNIa-pec': 3, 'SNIa-CSM': 3, 'SNIbc': 4, 'SNIc': 4, 'SNIb': 4, 'SNIbn': 4, 'SNIc-BL': 4, 'SNIb/c': 4, 'SNIb-Ca-rich': 4}
X = []
y = []
final_sn_names = []
for (sn_name, sn_type) in metadata:
gind = np.where((sn_name == ids))
if ('SN' not in sn_type):
continue
else:
sn_num = sn_dict[sn_type]
if (len(gind[0]) == 0):
continue
if (not np.isfinite(features[gind][0]).all()):
continue
if (X == []):
X = features[gind][0]
y = sn_num
else:
X = np.vstack((X, features[gind][0]))
y = np.append(y, sn_num)
final_sn_names.append(sn_name)
means = np.mean(X, axis=0)
stds = np.std(X, axis=0)
if whiten:
X = preprocessing.scale(X)
return (X, y, final_sn_names, means, stds, feat_names)<|docstring|>Resample features based on Kernel Density approximation
Parameters
----------
featurefile : str
File with pre-processed features
metatable : numpy.ndarray
Table which must include: Object Name, Redshift, Type, Estimate
Peak Time, and EBV_MW
whiten : bool
Whiten features before classification
Returns
-------
X : numpy.ndarray
Feature array
y : numpy.ndarray
Label array
final_sn_names : numpy.ndarray
Label array
means : numpy.ndarray
Means of features, used to whiten
stds : numpy.ndarray
St. dev of features, used to whiten
feat_names : numpy.ndarray
Array of feature names<|endoftext|> |
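Likewise, this routine builds the labelled training set: the type read from column 2 of the metadata table is mapped through sn_dict to one of five classes. A sketch of matching inputs, with file names and values as placeholders:
import numpy as np
from vraenn.classify import prep_data_for_training  # path inferred from the record above

np.savez('features.npz',
         ids=np.array(['SN2018aaa', 'SN2018bbb']),
         features=np.random.randn(2, 4),
         feat_names=np.array(['f1', 'f2', 'f3', 'f4']))
with open('meta.txt', 'w') as f:
    f.write('SN2018aaa 0.03 SNIa 58200.0 0.02\n')   # name, redshift, type, est. peak time, EBV_MW
    f.write('SN2018bbb 0.10 SNII 58300.0 0.05\n')
X, y, names, means, stds, feat_names = prep_data_for_training('features.npz', 'meta.txt')
# y maps to [3, 1] under sn_dict (SNIa -> 3, SNII -> 1)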
ade4f231910c01fd1c384fd42db684eb640456b73acd594b61a541e5a0f1d0ac | def main():
' Calls the other functions to demonstrate and/or test them. '
circle_and_rectangle() | Calls the other functions to demonstrate and/or test them. | src/m9_using_objects.py | main | whitemg1/03-AccumulatorsAndFunctionsWithParameters | 0 | python | def main():
' '
circle_and_rectangle() | def main():
' '
circle_and_rectangle()<|docstring|>Calls the other functions to demonstrate and/or test them.<|endoftext|> |
cc3d780d65b0ec08b2b9767611c0b7d2797c122aac4ab36edb93e216c1a75f9f | def two_circles():
'\n -- Constructs an rg.RoseWindow.\n -- Constructs and draws two rg.Circle objects on the window\n such that:\n -- They fit in the window and are easily visible.\n -- They have different radii.\n -- One is filled with some color and one is not filled.\n -- Waits for the user to press the mouse, then closes the window.\n '
window = rg.RoseWindow()
center_point1 = rg.Point(300, 100)
radius1 = 50
circle1 = rg.Circle(center_point1, radius1)
circle1.fill_color = 'green'
circle1.attach_to(window)
center_point2 = rg.Point(200, 100)
radius2 = 15
circle2 = rg.Circle(center_point2, radius2)
circle2.attach_to(window)
window.render()
window.close_on_mouse_click() | -- Constructs an rg.RoseWindow.
-- Constructs and draws two rg.Circle objects on the window
such that:
-- They fit in the window and are easily visible.
-- They have different radii.
-- One is filled with some color and one is not filled.
-- Waits for the user to press the mouse, then closes the window. | src/m9_using_objects.py | two_circles | whitemg1/03-AccumulatorsAndFunctionsWithParameters | 0 | python | def two_circles():
'\n -- Constructs an rg.RoseWindow.\n -- Constructs and draws two rg.Circle objects on the window\n such that:\n -- They fit in the window and are easily visible.\n -- They have different radii.\n -- One is filled with some color and one is not filled.\n -- Waits for the user to press the mouse, then closes the window.\n '
window = rg.RoseWindow()
center_point1 = rg.Point(300, 100)
radius1 = 50
circle1 = rg.Circle(center_point1, radius1)
circle1.fill_color = 'green'
circle1.attach_to(window)
center_point2 = rg.Point(200, 100)
radius2 = 15
circle2 = rg.Circle(center_point2, radius2)
circle2.attach_to(window)
window.render()
window.close_on_mouse_click() | def two_circles():
'\n -- Constructs an rg.RoseWindow.\n -- Constructs and draws two rg.Circle objects on the window\n such that:\n -- They fit in the window and are easily visible.\n -- They have different radii.\n -- One is filled with some color and one is not filled.\n -- Waits for the user to press the mouse, then closes the window.\n '
window = rg.RoseWindow()
center_point1 = rg.Point(300, 100)
radius1 = 50
circle1 = rg.Circle(center_point1, radius1)
circle1.fill_color = 'green'
circle1.attach_to(window)
center_point2 = rg.Point(200, 100)
radius2 = 15
circle2 = rg.Circle(center_point2, radius2)
circle2.attach_to(window)
window.render()
window.close_on_mouse_click()<|docstring|>-- Constructs an rg.RoseWindow.
-- Constructs and draws two rg.Circle objects on the window
such that:
-- They fit in the window and are easily visible.
-- They have different radii.
-- One is filled with some color and one is not filled.
-- Waits for the user to press the mouse, then closes the window.<|endoftext|> |
6b176be1a8cc6fab57617cd01ef5c548fa1a1006b142318e4a95a3f1f1b79303 | def circle_and_rectangle():
"\n -- Constructs an rg.RoseWindow.\n -- Constructs and draws a rg.Circle and rg.Rectangle\n on the window such that:\n -- They fit in the window and are easily visible.\n -- The rg.Circle is filled with 'blue'\n -- Prints (on the console, on SEPARATE lines) the following data\n associated with your rg.Circle:\n -- Its outline thickness.\n -- Its fill color.\n -- Its center.\n -- Its center's x coordinate.\n -- Its center's y coordinate.\n -- Prints (on the console, on SEPARATE lines) the same data\n but for your rg.Rectangle.\n -- Waits for the user to press the mouse, then closes the window.\n\n Here is an example of the output on the console,\n for one particular circle and rectangle:\n\n 1\n blue\n Point(180.0, 115.0)\n 180\n 115\n 1\n None\n Point(75.0, 150.0)\n 75.0\n 150.0\n "
window = rg.RoseWindow()
x = 300
y = 100
center_point1 = rg.Point(x, y)
radius1 = 50
circle1 = rg.Circle(center_point1, radius1)
circle1.outline_thickness = 5
circle1.fill_color = 'blue'
circle1.attach_to(window)
thickness2 = 5
corner1 = rg.Point(100, 100)
corner2 = rg.Point(200, 200)
rectangle = rg.Rectangle(corner1, corner2)
rectangle.outline_thickness = 3
rectangle.attach_to(window)
print(circle1.outline_thickness)
print(circle1.fill_color)
print(center_point1)
print(x)
print(y)
print(rectangle.outline_thickness)
print(rectangle.fill_color)
print(corner1)
print('100')
print('100')
window.render()
window.close_on_mouse_click() | -- Constructs an rg.RoseWindow.
-- Constructs and draws a rg.Circle and rg.Rectangle
on the window such that:
-- They fit in the window and are easily visible.
-- The rg.Circle is filled with 'blue'
-- Prints (on the console, on SEPARATE lines) the following data
associated with your rg.Circle:
-- Its outline thickness.
-- Its fill color.
-- Its center.
-- Its center's x coordinate.
-- Its center's y coordinate.
-- Prints (on the console, on SEPARATE lines) the same data
but for your rg.Rectangle.
-- Waits for the user to press the mouse, then closes the window.
Here is an example of the output on the console,
for one particular circle and rectangle:
1
blue
Point(180.0, 115.0)
180
115
1
None
Point(75.0, 150.0)
75.0
150.0 | src/m9_using_objects.py | circle_and_rectangle | whitemg1/03-AccumulatorsAndFunctionsWithParameters | 0 | python | def circle_and_rectangle():
"\n -- Constructs an rg.RoseWindow.\n -- Constructs and draws a rg.Circle and rg.Rectangle\n on the window such that:\n -- They fit in the window and are easily visible.\n -- The rg.Circle is filled with 'blue'\n -- Prints (on the console, on SEPARATE lines) the following data\n associated with your rg.Circle:\n -- Its outline thickness.\n -- Its fill color.\n -- Its center.\n -- Its center's x coordinate.\n -- Its center's y coordinate.\n -- Prints (on the console, on SEPARATE lines) the same data\n but for your rg.Rectangle.\n -- Waits for the user to press the mouse, then closes the window.\n\n Here is an example of the output on the console,\n for one particular circle and rectangle:\n\n 1\n blue\n Point(180.0, 115.0)\n 180\n 115\n 1\n None\n Point(75.0, 150.0)\n 75.0\n 150.0\n "
window = rg.RoseWindow()
x = 300
y = 100
center_point1 = rg.Point(x, y)
radius1 = 50
circle1 = rg.Circle(center_point1, radius1)
circle1.outline_thickness = 5
circle1.fill_color = 'blue'
circle1.attach_to(window)
thickness2 = 5
corner1 = rg.Point(100, 100)
corner2 = rg.Point(200, 200)
rectangle = rg.Rectangle(corner1, corner2)
rectangle.outline_thickness = 3
rectangle.attach_to(window)
print(circle1.outline_thickness)
print(circle1.fill_color)
print(center_point1)
print(x)
print(y)
print(rectangle.outline_thickness)
print(rectangle.fill_color)
print(corner1)
print('100')
print('100')
window.render()
window.close_on_mouse_click() | def circle_and_rectangle():
"\n -- Constructs an rg.RoseWindow.\n -- Constructs and draws a rg.Circle and rg.Rectangle\n on the window such that:\n -- They fit in the window and are easily visible.\n -- The rg.Circle is filled with 'blue'\n -- Prints (on the console, on SEPARATE lines) the following data\n associated with your rg.Circle:\n -- Its outline thickness.\n -- Its fill color.\n -- Its center.\n -- Its center's x coordinate.\n -- Its center's y coordinate.\n -- Prints (on the console, on SEPARATE lines) the same data\n but for your rg.Rectangle.\n -- Waits for the user to press the mouse, then closes the window.\n\n Here is an example of the output on the console,\n for one particular circle and rectangle:\n\n 1\n blue\n Point(180.0, 115.0)\n 180\n 115\n 1\n None\n Point(75.0, 150.0)\n 75.0\n 150.0\n "
window = rg.RoseWindow()
x = 300
y = 100
center_point1 = rg.Point(x, y)
radius1 = 50
circle1 = rg.Circle(center_point1, radius1)
circle1.outline_thickness = 5
circle1.fill_color = 'blue'
circle1.attach_to(window)
thickness2 = 5
corner1 = rg.Point(100, 100)
corner2 = rg.Point(200, 200)
rectangle = rg.Rectangle(corner1, corner2)
rectangle.outline_thickness = 3
rectangle.attach_to(window)
print(circle1.outline_thickness)
print(circle1.fill_color)
print(center_point1)
print(x)
print(y)
print(rectangle.outline_thickness)
print(rectangle.fill_color)
print(corner1)
print('100')
print('100')
window.render()
window.close_on_mouse_click()<|docstring|>-- Constructs an rg.RoseWindow.
-- Constructs and draws a rg.Circle and rg.Rectangle
on the window such that:
-- They fit in the window and are easily visible.
-- The rg.Circle is filled with 'blue'
-- Prints (on the console, on SEPARATE lines) the following data
associated with your rg.Circle:
-- Its outline thickness.
-- Its fill color.
-- Its center.
-- Its center's x coordinate.
-- Its center's y coordinate.
-- Prints (on the console, on SEPARATE lines) the same data
but for your rg.Rectangle.
-- Waits for the user to press the mouse, then closes the window.
Here is an example of the output on the console,
for one particular circle and rectangle:
1
blue
Point(180.0, 115.0)
180
115
1
None
Point(75.0, 150.0)
75.0
150.0<|endoftext|> |
8225bc2072cbbfd9a6874e268aaf8cdfa2fca632e1a5cd5141823134736ab896 | def lines():
'\n -- Constructs a rg.RoseWindow.\n -- Constructs and draws on the window two rg.Lines such that:\n -- They both fit in the window and are easily visible.\n -- One rg.Line has the default thickness.\n -- The other rg.Line is thicker (i.e., has a bigger width).\n -- Uses a rg.Line method to get the midpoint (center) of the\n thicker rg.Line.\n -- Then prints (on the console, on SEPARATE lines):\n -- the midpoint itself\n -- the x-coordinate of the midpoint\n -- the y-coordinate of the midpoint\n\n Here is an example of the output on the console, if the two\n endpoints of the thicker line are at (100, 100) and (121, 200):\n Point(110.5, 150.0)\n 110.5\n 150.0\n\n -- Waits for the user to press the mouse, then closes the window.\n ' | -- Constructs a rg.RoseWindow.
-- Constructs and draws on the window two rg.Lines such that:
-- They both fit in the window and are easily visible.
-- One rg.Line has the default thickness.
-- The other rg.Line is thicker (i.e., has a bigger width).
-- Uses a rg.Line method to get the midpoint (center) of the
thicker rg.Line.
-- Then prints (on the console, on SEPARATE lines):
-- the midpoint itself
-- the x-coordinate of the midpoint
-- the y-coordinate of the midpoint
Here is an example of the output on the console, if the two
endpoints of the thicker line are at (100, 100) and (121, 200):
Point(110.5, 150.0)
110.5
150.0
-- Waits for the user to press the mouse, then closes the window. | src/m9_using_objects.py | lines | whitemg1/03-AccumulatorsAndFunctionsWithParameters | 0 | python | def lines():
'\n -- Constructs a rg.RoseWindow.\n -- Constructs and draws on the window two rg.Lines such that:\n -- They both fit in the window and are easily visible.\n -- One rg.Line has the default thickness.\n -- The other rg.Line is thicker (i.e., has a bigger width).\n -- Uses a rg.Line method to get the midpoint (center) of the\n thicker rg.Line.\n -- Then prints (on the console, on SEPARATE lines):\n -- the midpoint itself\n -- the x-coordinate of the midpoint\n -- the y-coordinate of the midpoint\n\n Here is an example of the output on the console, if the two\n endpoints of the thicker line are at (100, 100) and (121, 200):\n Point(110.5, 150.0)\n 110.5\n 150.0\n\n -- Waits for the user to press the mouse, then closes the window.\n ' | def lines():
'\n -- Constructs a rg.RoseWindow.\n -- Constructs and draws on the window two rg.Lines such that:\n -- They both fit in the window and are easily visible.\n -- One rg.Line has the default thickness.\n -- The other rg.Line is thicker (i.e., has a bigger width).\n -- Uses a rg.Line method to get the midpoint (center) of the\n thicker rg.Line.\n -- Then prints (on the console, on SEPARATE lines):\n -- the midpoint itself\n -- the x-coordinate of the midpoint\n -- the y-coordinate of the midpoint\n\n Here is an example of the output on the console, if the two\n endpoints of the thicker line are at (100, 100) and (121, 200):\n Point(110.5, 150.0)\n 110.5\n 150.0\n\n -- Waits for the user to press the mouse, then closes the window.\n '<|docstring|>-- Constructs a rg.RoseWindow.
-- Constructs and draws on the window two rg.Lines such that:
-- They both fit in the window and are easily visible.
-- One rg.Line has the default thickness.
-- The other rg.Line is thicker (i.e., has a bigger width).
-- Uses a rg.Line method to get the midpoint (center) of the
thicker rg.Line.
-- Then prints (on the console, on SEPARATE lines):
-- the midpoint itself
-- the x-coordinate of the midpoint
-- the y-coordinate of the midpoint
Here is an example of the output on the console, if the two
endpoints of the thicker line are at (100, 100) and (121, 200):
Point(110.5, 150.0)
110.5
150.0
-- Waits for the user to press the mouse, then closes the window.<|endoftext|> |
c41523d06022d9f2ffbae7dd786e387ede47ad04e52fb7d7568dffe74d885af5 | def sample(self, assign_result, boxes, gt_boxes, gt_labels=None, **kwargs):
'Sample positive and negative boxes.\n\n This is a simple implementation of bbox sampling given candidates,\n assigning results and ground truth boxes.\n\n Args:\n assign_result (:obj:`AssignResult`): Bbox assigning results.\n boxes (Tensor): Boxes to be sampled from.\n gt_boxes (Tensor): Ground truth boxes.\n gt_labels (Tensor, optional): Class labels of ground truth boxes.\n\n Returns:\n :obj:`SamplingResult`: Sampling result.\n '
boxes = boxes[:, :6]
gt_flags = boxes.new_zeros((boxes.shape[0],), dtype=torch.uint8)
if self.add_gt_as_proposals:
boxes = torch.cat([gt_boxes, boxes], dim=0)
assign_result.add_gt_(gt_labels)
gt_ones = boxes.new_ones(gt_boxes.shape[0], dtype=torch.uint8)
gt_flags = torch.cat([gt_ones, gt_flags])
num_expected_pos = int((self.num * self.pos_fraction))
pos_inds = self.pos_sampler._sample_pos(assign_result, num_expected_pos, boxes=boxes, **kwargs)
pos_inds = pos_inds.unique()
num_sampled_pos = pos_inds.numel()
num_expected_neg = (self.num - num_sampled_pos)
if (self.neg_pos_ub >= 0):
_pos = max(1, num_sampled_pos)
neg_upper_bound = int((self.neg_pos_ub * _pos))
if (num_expected_neg > neg_upper_bound):
num_expected_neg = neg_upper_bound
neg_inds = self.neg_sampler._sample_neg(assign_result, num_expected_neg, boxes=boxes, **kwargs)
neg_inds = neg_inds.unique()
return SamplingResult(pos_inds, neg_inds, boxes, gt_boxes, assign_result, gt_flags) | Sample positive and negative boxes.
This is a simple implementation of bbox sampling given candidates,
assigning results and ground truth boxes.
Args:
assign_result (:obj:`AssignResult`): Bbox assigning results.
boxes (Tensor): Boxes to be sampled from.
gt_boxes (Tensor): Ground truth boxes.
gt_labels (Tensor, optional): Class labels of ground truth boxes.
Returns:
:obj:`SamplingResult`: Sampling result. | model/pointgroup/box/samplers/base_sampler.py | sample | thangvubk/SphereRPN | 8 | python | def sample(self, assign_result, boxes, gt_boxes, gt_labels=None, **kwargs):
'Sample positive and negative boxes.\n\n This is a simple implementation of bbox sampling given candidates,\n assigning results and ground truth boxes.\n\n Args:\n assign_result (:obj:`AssignResult`): Bbox assigning results.\n boxes (Tensor): Boxes to be sampled from.\n gt_boxes (Tensor): Ground truth boxes.\n gt_labels (Tensor, optional): Class labels of ground truth boxes.\n\n Returns:\n :obj:`SamplingResult`: Sampling result.\n '
boxes = boxes[:, :6]
gt_flags = boxes.new_zeros((boxes.shape[0],), dtype=torch.uint8)
if self.add_gt_as_proposals:
boxes = torch.cat([gt_boxes, boxes], dim=0)
assign_result.add_gt_(gt_labels)
gt_ones = boxes.new_ones(gt_boxes.shape[0], dtype=torch.uint8)
gt_flags = torch.cat([gt_ones, gt_flags])
num_expected_pos = int((self.num * self.pos_fraction))
pos_inds = self.pos_sampler._sample_pos(assign_result, num_expected_pos, boxes=boxes, **kwargs)
pos_inds = pos_inds.unique()
num_sampled_pos = pos_inds.numel()
num_expected_neg = (self.num - num_sampled_pos)
if (self.neg_pos_ub >= 0):
_pos = max(1, num_sampled_pos)
neg_upper_bound = int((self.neg_pos_ub * _pos))
if (num_expected_neg > neg_upper_bound):
num_expected_neg = neg_upper_bound
neg_inds = self.neg_sampler._sample_neg(assign_result, num_expected_neg, boxes=boxes, **kwargs)
neg_inds = neg_inds.unique()
return SamplingResult(pos_inds, neg_inds, boxes, gt_boxes, assign_result, gt_flags) | def sample(self, assign_result, boxes, gt_boxes, gt_labels=None, **kwargs):
'Sample positive and negative boxes.\n\n This is a simple implementation of bbox sampling given candidates,\n assigning results and ground truth boxes.\n\n Args:\n assign_result (:obj:`AssignResult`): Bbox assigning results.\n boxes (Tensor): Boxes to be sampled from.\n gt_boxes (Tensor): Ground truth boxes.\n gt_labels (Tensor, optional): Class labels of ground truth boxes.\n\n Returns:\n :obj:`SamplingResult`: Sampling result.\n '
boxes = boxes[:, :6]
gt_flags = boxes.new_zeros((boxes.shape[0],), dtype=torch.uint8)
if self.add_gt_as_proposals:
boxes = torch.cat([gt_boxes, boxes], dim=0)
assign_result.add_gt_(gt_labels)
gt_ones = boxes.new_ones(gt_boxes.shape[0], dtype=torch.uint8)
gt_flags = torch.cat([gt_ones, gt_flags])
num_expected_pos = int((self.num * self.pos_fraction))
pos_inds = self.pos_sampler._sample_pos(assign_result, num_expected_pos, boxes=boxes, **kwargs)
pos_inds = pos_inds.unique()
num_sampled_pos = pos_inds.numel()
num_expected_neg = (self.num - num_sampled_pos)
if (self.neg_pos_ub >= 0):
_pos = max(1, num_sampled_pos)
neg_upper_bound = int((self.neg_pos_ub * _pos))
if (num_expected_neg > neg_upper_bound):
num_expected_neg = neg_upper_bound
neg_inds = self.neg_sampler._sample_neg(assign_result, num_expected_neg, boxes=boxes, **kwargs)
neg_inds = neg_inds.unique()
return SamplingResult(pos_inds, neg_inds, boxes, gt_boxes, assign_result, gt_flags)<|docstring|>Sample positive and negative boxes.
This is a simple implementation of bbox sampling given candidates,
assigning results and ground truth boxes.
Args:
assign_result (:obj:`AssignResult`): Bbox assigning results.
boxes (Tensor): Boxes to be sampled from.
gt_boxes (Tensor): Ground truth boxes.
gt_labels (Tensor, optional): Class labels of ground truth boxes.
Returns:
:obj:`SamplingResult`: Sampling result.<|endoftext|> |
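A minimal sketch of the sampling-quota arithmetic used in the sampler above; the num, pos_fraction and neg_pos_ub values are assumed for illustration and are not taken from this record:
# Assumed example values, mirroring the quota logic in sample() above.
num = 512            # total proposals to sample
pos_fraction = 0.25  # fraction reserved for positives
neg_pos_ub = 3       # cap on the negative:positive ratio (a negative value disables the cap)

num_expected_pos = int(num * pos_fraction)             # 128
num_sampled_pos = 40                                   # suppose only 40 positives were found
num_expected_neg = num - num_sampled_pos               # 472 before the cap
if neg_pos_ub >= 0:
    neg_upper_bound = int(neg_pos_ub * max(1, num_sampled_pos))
    num_expected_neg = min(num_expected_neg, neg_upper_bound)
print(num_expected_pos, num_expected_neg)              # 128 120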
c5687d31584dd2310a0d079bc03da67a67f892675d3ec2770da5add79d891dbf | def store_repo_info(self, response: Dict) -> None:
'Find and store information for each repo in response'
for repo in response:
if ((repo['fork'] is False) and self.language_match(repo)):
(fork_count, star_count, default_branch) = self.get_repo_specific_info(repo)
self.repo_info[repo['id']] = {'repo_name': get_repo_name(repo), 'owner': get_owner_login(repo), 'html_url': get_html_url(repo), 'repo_url': get_repo_url(repo), 'fork_count': fork_count, 'star_count': star_count, 'default_branch': default_branch, 'contributor_count': self.get_contributor_count(repo), 'collaborator_count': self.get_collaborator_count(repo)} | Find and store information for each repo in response | snippeteer/crawl/GithubCrawler.py | store_repo_info | malyvsen/snippeteer | 0 | python | def store_repo_info(self, response: Dict) -> None:
for repo in response:
if ((repo['fork'] is False) and self.language_match(repo)):
(fork_count, star_count, default_branch) = self.get_repo_specific_info(repo)
self.repo_info[repo['id']] = {'repo_name': get_repo_name(repo), 'owner': get_owner_login(repo), 'html_url': get_html_url(repo), 'repo_url': get_repo_url(repo), 'fork_count': fork_count, 'star_count': star_count, 'default_branch': default_branch, 'contributor_count': self.get_contributor_count(repo), 'collaborator_count': self.get_collaborator_count(repo)} | def store_repo_info(self, response: Dict) -> None:
for repo in response:
if ((repo['fork'] is False) and self.language_match(repo)):
(fork_count, star_count, default_branch) = self.get_repo_specific_info(repo)
self.repo_info[repo['id']] = {'repo_name': get_repo_name(repo), 'owner': get_owner_login(repo), 'html_url': get_html_url(repo), 'repo_url': get_repo_url(repo), 'fork_count': fork_count, 'star_count': star_count, 'default_branch': default_branch, 'contributor_count': self.get_contributor_count(repo), 'collaborator_count': self.get_collaborator_count(repo)}<|docstring|>Find and store information for each repo in response<|endoftext|>
8fd00b93a4cc2964438b6764871935dcb792f88e3c96fdf3524f6bfcd2e46645 | def perform_searches(self, queries, page_limit=5):
'Retrieve the '
for query in queries:
i = 0
r = requests.get(f'https://api.github.com/search/repositories?q={query}', headers=self.header)
while ((r.status_code == 200) and (i < page_limit)):
print(f'Query: {query}, page: {i}')
r_json = json.loads(r.text)
self.store_query_info(r_json)
if ('next' in r.links):
r = requests.get(r.links['next']['url'], headers=self.header)
i += 1
else:
break
if (r.status_code != 200):
self.check_sleep()
save_obj(self.repo_info, file_name=f'crawled_info/query_{query}')
self.repo_info.clear() | Retrieve the | snippeteer/crawl/GithubCrawler.py | perform_searches | malyvsen/snippeteer | 0 | python | def perform_searches(self, queries, page_limit=5):
' '
for query in queries:
i = 0
r = requests.get(f'https://api.github.com/search/repositories?q={query}', headers=self.header)
while ((r.status_code == 200) and (i < page_limit)):
print(f'Query: {query}, page: {i}')
r_json = json.loads(r.text)
self.store_query_info(r_json)
if ('next' in r.links):
r = requests.get(r.links['next']['url'], headers=self.header)
i += 1
else:
break
if (r.status_code != 200):
self.check_sleep()
save_obj(self.repo_info, file_name=f'crawled_info/query_{query}')
self.repo_info.clear() | def perform_searches(self, queries, page_limit=5):
' '
for query in queries:
i = 0
r = requests.get(f'https://api.github.com/search/repositories?q={query}', headers=self.header)
while ((r.status_code == 200) and (i < page_limit)):
print(f'Query: {query}, page: {i}')
r_json = json.loads(r.text)
self.store_query_info(r_json)
if ('next' in r.links):
r = requests.get(r.links['next']['url'], headers=self.header)
i += 1
else:
break
if (r.status_code != 200):
self.check_sleep()
save_obj(self.repo_info, file_name=f'crawled_info/query_{query}')
self.repo_info.clear()<|docstring|>Retrieve the<|endoftext|> |
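A small, self-contained sketch of the Link-header pagination that perform_searches relies on; the query string is an assumption and unauthenticated calls are subject to strict GitHub rate limits:
import requests

url = 'https://api.github.com/search/repositories?q=language:python+stars:%3E10000'
r = requests.get(url)
page = 0
while r.status_code == 200 and page < 2:
    print(len(r.json()['items']), 'repositories on page', page)
    if 'next' not in r.links:        # requests parses the Link header into r.links
        break
    r = requests.get(r.links['next']['url'])
    page += 1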
b08f4ccbfe1bd0e5dd033f30a3ee4e19803ab61266ca010b491be3ea065291d9 | def store_query_info(self, response: Dict) -> None:
'Find and store information for each repo in response'
for repo in response['items']:
if ((repo['fork'] is False) and self.language_match(repo)):
(fork_count, star_count, default_branch) = self.get_repo_specific_info(repo)
self.repo_info[repo['id']] = {'repo_name': get_repo_name(repo), 'owner': get_owner_login(repo), 'html_url': get_html_url(repo), 'repo_url': get_repo_url(repo), 'fork_count': fork_count, 'star_count': star_count, 'default_branch': default_branch, 'contributor_count': self.get_contributor_count(repo), 'collaborator_count': self.get_collaborator_count(repo)} | Find and store information for each repo in response | snippeteer/crawl/GithubCrawler.py | store_query_info | malyvsen/snippeteer | 0 | python | def store_query_info(self, response: Dict) -> None:
for repo in response['items']:
if ((repo['fork'] is False) and self.language_match(repo)):
(fork_count, star_count, default_branch) = self.get_repo_specific_info(repo)
self.repo_info[repo['id']] = {'repo_name': get_repo_name(repo), 'owner': get_owner_login(repo), 'html_url': get_html_url(repo), 'repo_url': get_repo_url(repo), 'fork_count': fork_count, 'star_count': star_count, 'default_branch': default_branch, 'contributor_count': self.get_contributor_count(repo), 'collaborator_count': self.get_collaborator_count(repo)} | def store_query_info(self, response: Dict) -> None:
for repo in response['items']:
if ((repo['fork'] is False) and self.language_match(repo)):
(fork_count, star_count, default_branch) = self.get_repo_specific_info(repo)
self.repo_info[repo['id']] = {'repo_name': get_repo_name(repo), 'owner': get_owner_login(repo), 'html_url': get_html_url(repo), 'repo_url': get_repo_url(repo), 'fork_count': fork_count, 'star_count': star_count, 'default_branch': default_branch, 'contributor_count': self.get_contributor_count(repo), 'collaborator_count': self.get_collaborator_count(repo)}<|docstring|>Find and store information for each repo in response<|endoftext|>
534e3ec08d1bf98d631c51628c7e2dd384fb17a4292b85bfc19a9a3875d6d09d | def index_points(points, idx):
'\n\n Input:\n points: input points data, [B, N, C]\n idx: sample index data, [B, S]\n Return:\n new_points:, indexed points data, [B, S, C] S是N的一个子集\n '
device = points.device
B = points.shape[0]
view_shape = list(idx.shape)
view_shape[1:] = ([1] * (len(view_shape) - 1))
repeat_shape = list(idx.shape)
repeat_shape[0] = 1
batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)
new_points = points[batch_indices, idx, :]
return new_points | Input:
points: input points data, [B, N, C]
idx: sample index data, [B, S]
Return:
new_points:, indexed points data, [B, S, C] S是N的一个子集 | 3d_point_cls/models/pointnet_util.py | index_points | ZJZAC/Passport-aware-Normalization | 16 | python | def index_points(points, idx):
'\n\n Input:\n points: input points data, [B, N, C]\n idx: sample index data, [B, S]\n Return:\n new_points:, indexed points data, [B, S, C] S是N的一个子集\n '
device = points.device
B = points.shape[0]
view_shape = list(idx.shape)
view_shape[1:] = ([1] * (len(view_shape) - 1))
repeat_shape = list(idx.shape)
repeat_shape[0] = 1
batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)
new_points = points[batch_indices, idx, :]
return new_points | def index_points(points, idx):
'\n\n Input:\n points: input points data, [B, N, C]\n idx: sample index data, [B, S]\n Return:\n new_points:, indexed points data, [B, S, C] S是N的一个子集\n '
device = points.device
B = points.shape[0]
view_shape = list(idx.shape)
view_shape[1:] = ([1] * (len(view_shape) - 1))
repeat_shape = list(idx.shape)
repeat_shape[0] = 1
batch_indices = torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape)
new_points = points[batch_indices, idx, :]
return new_points<|docstring|>Input:
points: input points data, [B, N, C]
idx: sample index data, [B, S]
Return:
new_points:, indexed points data, [B, S, C] S是N的一个子集<|endoftext|> |
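A runnable sketch (shapes assumed) of the batched gather that index_points performs for a [B, S] index; torch.gather reproduces the same result:
import torch

B, N, C, S = 2, 8, 3, 4
points = torch.randn(B, N, C)
idx = torch.randint(0, N, (B, S))

batch = torch.arange(B).view(B, 1).expand(B, S)      # same broadcasting trick as index_points
picked = points[batch, idx]                           # [B, S, C]

picked_gather = torch.gather(points, 1, idx.unsqueeze(-1).expand(B, S, C))
assert torch.equal(picked, picked_gather)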
e6bd2c88c6321cecafb65b60673b0b3bf35e4a2ce2710fad97742b12df00b74e | def farthest_point_sample(xyz, npoint):
'\n Input:\n xyz: pointcloud data, [B, N, 3]\n npoint: number of samples\n Return:\n centroids: sampled pointcloud index, [B, npoint]\n '
device = xyz.device
(B, N, C) = xyz.shape
centroids = torch.zeros(B, npoint, dtype=torch.long).to(device)
distance = (torch.ones(B, N).to(device) * 10000000000.0)
farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device)
batch_indices = torch.arange(B, dtype=torch.long).to(device)
for i in range(npoint):
centroids[:, i] = farthest
centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)
dist = torch.sum(((xyz - centroid) ** 2), (- 1))
mask = (dist < distance)
distance[mask] = dist[mask]
farthest = torch.max(distance, (- 1))[1]
return centroids | Input:
xyz: pointcloud data, [B, N, 3]
npoint: number of samples
Return:
centroids: sampled pointcloud index, [B, npoint] | 3d_point_cls/models/pointnet_util.py | farthest_point_sample | ZJZAC/Passport-aware-Normalization | 16 | python | def farthest_point_sample(xyz, npoint):
'\n Input:\n xyz: pointcloud data, [B, N, 3]\n npoint: number of samples\n Return:\n centroids: sampled pointcloud index, [B, npoint]\n '
device = xyz.device
(B, N, C) = xyz.shape
centroids = torch.zeros(B, npoint, dtype=torch.long).to(device)
distance = (torch.ones(B, N).to(device) * 10000000000.0)
farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device)
batch_indices = torch.arange(B, dtype=torch.long).to(device)
for i in range(npoint):
centroids[:, i] = farthest
centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)
dist = torch.sum(((xyz - centroid) ** 2), (- 1))
mask = (dist < distance)
distance[mask] = dist[mask]
farthest = torch.max(distance, (- 1))[1]
return centroids | def farthest_point_sample(xyz, npoint):
'\n Input:\n xyz: pointcloud data, [B, N, 3]\n npoint: number of samples\n Return:\n centroids: sampled pointcloud index, [B, npoint]\n '
device = xyz.device
(B, N, C) = xyz.shape
centroids = torch.zeros(B, npoint, dtype=torch.long).to(device)
distance = (torch.ones(B, N).to(device) * 10000000000.0)
farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device)
batch_indices = torch.arange(B, dtype=torch.long).to(device)
for i in range(npoint):
centroids[:, i] = farthest
centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)
dist = torch.sum(((xyz - centroid) ** 2), (- 1))
mask = (dist < distance)
distance[mask] = dist[mask]
farthest = torch.max(distance, (- 1))[1]
return centroids<|docstring|>Input:
xyz: pointcloud data, [B, N, 3]
npoint: number of samples
Return:
centroids: sampled pointcloud index, [B, npoint]<|endoftext|> |
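For intuition, a compact single-cloud NumPy restatement (not this record's code) of the same farthest point sampling loop:
import numpy as np

def fps_single(xyz, npoint, seed=0):
    # xyz: [N, 3]; returns the indices of npoint farthest-point samples.
    rng = np.random.default_rng(seed)
    N = xyz.shape[0]
    chosen = np.empty(npoint, dtype=np.int64)
    dist = np.full(N, np.inf)                      # distance to the nearest chosen point so far
    farthest = int(rng.integers(N))
    for i in range(npoint):
        chosen[i] = farthest
        d = np.sum((xyz - xyz[farthest]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        farthest = int(np.argmax(dist))            # next sample: the point farthest from the chosen set
    return chosen

print(fps_single(np.random.rand(1024, 3), 16))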
aa1a490d038f695798d7cf1d4115642a2610064666c99b67309dafe31204a747 | def query_ball_point(radius, nsample, xyz, new_xyz):
'\n Input:\n radius: local region radius\n nsample: max sample number in local region\n xyz: all points, [B, N, 3]\n new_xyz: query points, [B, S, 3]\n Return:\n group_idx: grouped points index, [B, S, nsample]\n '
device = xyz.device
(B, N, C) = xyz.shape
(_, S, _) = new_xyz.shape
group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])
sqrdists = square_distance(new_xyz, xyz)
group_idx[(sqrdists > (radius ** 2))] = N
group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]
group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
mask = (group_idx == N)
group_idx[mask] = group_first[mask]
return group_idx | Input:
radius: local region radius
nsample: max sample number in local region
xyz: all points, [B, N, 3]
new_xyz: query points, [B, S, 3]
Return:
group_idx: grouped points index, [B, S, nsample] | 3d_point_cls/models/pointnet_util.py | query_ball_point | ZJZAC/Passport-aware-Normalization | 16 | python | def query_ball_point(radius, nsample, xyz, new_xyz):
'\n Input:\n radius: local region radius\n nsample: max sample number in local region\n xyz: all points, [B, N, 3]\n new_xyz: query points, [B, S, 3]\n Return:\n group_idx: grouped points index, [B, S, nsample]\n '
device = xyz.device
(B, N, C) = xyz.shape
(_, S, _) = new_xyz.shape
group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])
sqrdists = square_distance(new_xyz, xyz)
group_idx[(sqrdists > (radius ** 2))] = N
group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]
group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])
mask = (group_idx == N)
group_idx[mask] = group_first[mask]
return group_idx | def query_ball_point(radius, nsample, xyz, new_xyz):
'\n Input:\n radius: local region radius\n nsample: max sample number in local region\n xyz: all points, [B, N, 3]\n new_xyz: query points, [B, S, 3]\n Return:\n group_idx: grouped points index, [B, S, nsample]\n '
device = xyz.device
(B, N, C) = xyz.shape
(_, S, _) = new_xyz.shape
group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])
sqrdists = square_distance(new_xyz, xyz)
group_idx[(sqrdists > (radius ** 2))] = N
group_idx = group_idx.sort(dim=(- 1))[0][(:, :, :nsample)]
group_first = group_idx[(:, :, 0)].view(B, S, 1).repeat([1, 1, nsample])
mask = (group_idx == N)
group_idx[mask] = group_first[mask]
return group_idx<|docstring|>Input:
radius: local region radius
nsample: max sample number in local region
xyz: all points, [B, N, 3]
new_xyz: query points, [B, S, 3]
Return:
group_idx: grouped points index, [B, S, nsample]<|endoftext|> |
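The same ball-query idea isolated on one unbatched cloud (radius, group size and shapes assumed), including the sentinel-then-backfill trick used above:
import torch

N, S, nsample, radius = 64, 4, 8, 0.3
xyz = torch.rand(N, 3)
new_xyz = xyz[:S]                                   # pretend these are the query centres

sqrdists = torch.cdist(new_xyz, xyz) ** 2           # [S, N]
group_idx = torch.arange(N).repeat(S, 1)            # [S, N]
group_idx[sqrdists > radius ** 2] = N               # sentinel N marks out-of-radius points
group_idx = group_idx.sort(dim=-1)[0][:, :nsample]

first = group_idx[:, 0:1].expand(-1, nsample)       # closest in-radius index per query
group_idx = torch.where(group_idx == N, first, group_idx)
print(group_idx.shape)                              # torch.Size([4, 8])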
6309de3a383de6dd06af7ea4a59de87bbe90f39d64dc2c17ffb3503fe109026e | def sample_and_group(npoint, radius, nsample, xyz, points, returnfps=False):
'\n Input:\n npoint:\n radius:\n nsample:\n xyz: input points position data, [B, N, 3]\n points: input points data, [B, N, D]\n Return:\n new_xyz: sampled points position data, [B, npoint, nsample, 3]\n new_points: sampled points data, [B, npoint, nsample, 3+D]\n '
(B, N, C) = xyz.shape
S = npoint
fps_idx = farthest_point_sample(xyz, npoint)
torch.cuda.empty_cache()
new_xyz = index_points(xyz, fps_idx)
torch.cuda.empty_cache()
idx = query_ball_point(radius, nsample, xyz, new_xyz)
torch.cuda.empty_cache()
grouped_xyz = index_points(xyz, idx)
torch.cuda.empty_cache()
grouped_xyz_norm = (grouped_xyz - new_xyz.view(B, S, 1, C))
torch.cuda.empty_cache()
if (points is not None):
grouped_points = index_points(points, idx)
new_points = torch.cat([grouped_xyz_norm, grouped_points], dim=(- 1))
else:
new_points = grouped_xyz_norm
if returnfps:
return (new_xyz, new_points, grouped_xyz, fps_idx)
else:
return (new_xyz, new_points) | Input:
npoint:
radius:
nsample:
xyz: input points position data, [B, N, 3]
points: input points data, [B, N, D]
Return:
new_xyz: sampled points position data, [B, npoint, nsample, 3]
new_points: sampled points data, [B, npoint, nsample, 3+D] | 3d_point_cls/models/pointnet_util.py | sample_and_group | ZJZAC/Passport-aware-Normalization | 16 | python | def sample_and_group(npoint, radius, nsample, xyz, points, returnfps=False):
'\n Input:\n npoint:\n radius:\n nsample:\n xyz: input points position data, [B, N, 3]\n points: input points data, [B, N, D]\n Return:\n new_xyz: sampled points position data, [B, npoint, nsample, 3]\n new_points: sampled points data, [B, npoint, nsample, 3+D]\n '
(B, N, C) = xyz.shape
S = npoint
fps_idx = farthest_point_sample(xyz, npoint)
torch.cuda.empty_cache()
new_xyz = index_points(xyz, fps_idx)
torch.cuda.empty_cache()
idx = query_ball_point(radius, nsample, xyz, new_xyz)
torch.cuda.empty_cache()
grouped_xyz = index_points(xyz, idx)
torch.cuda.empty_cache()
grouped_xyz_norm = (grouped_xyz - new_xyz.view(B, S, 1, C))
torch.cuda.empty_cache()
if (points is not None):
grouped_points = index_points(points, idx)
new_points = torch.cat([grouped_xyz_norm, grouped_points], dim=(- 1))
else:
new_points = grouped_xyz_norm
if returnfps:
return (new_xyz, new_points, grouped_xyz, fps_idx)
else:
return (new_xyz, new_points) | def sample_and_group(npoint, radius, nsample, xyz, points, returnfps=False):
'\n Input:\n npoint:\n radius:\n nsample:\n xyz: input points position data, [B, N, 3]\n points: input points data, [B, N, D]\n Return:\n new_xyz: sampled points position data, [B, npoint, nsample, 3]\n new_points: sampled points data, [B, npoint, nsample, 3+D]\n '
(B, N, C) = xyz.shape
S = npoint
fps_idx = farthest_point_sample(xyz, npoint)
torch.cuda.empty_cache()
new_xyz = index_points(xyz, fps_idx)
torch.cuda.empty_cache()
idx = query_ball_point(radius, nsample, xyz, new_xyz)
torch.cuda.empty_cache()
grouped_xyz = index_points(xyz, idx)
torch.cuda.empty_cache()
grouped_xyz_norm = (grouped_xyz - new_xyz.view(B, S, 1, C))
torch.cuda.empty_cache()
if (points is not None):
grouped_points = index_points(points, idx)
new_points = torch.cat([grouped_xyz_norm, grouped_points], dim=(- 1))
else:
new_points = grouped_xyz_norm
if returnfps:
return (new_xyz, new_points, grouped_xyz, fps_idx)
else:
return (new_xyz, new_points)<|docstring|>Input:
npoint:
radius:
nsample:
xyz: input points position data, [B, N, 3]
points: input points data, [B, N, D]
Return:
new_xyz: sampled points position data, [B, npoint, nsample, 3]
new_points: sampled points data, [B, npoint, nsample, 3+D]<|endoftext|> |
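The shape bookkeeping in sample_and_group, restated as comments with assumed sizes:
# Assumed sizes: B=8 clouds, N=1024 points, C=3 coords, D=32 features,
# npoint=128 centroids, nsample=32 neighbours per ball.
# fps_idx          = farthest_point_sample(xyz, npoint)      -> [8, 128]
# new_xyz          = index_points(xyz, fps_idx)               -> [8, 128, 3]
# idx              = query_ball_point(r, 32, xyz, new_xyz)    -> [8, 128, 32]
# grouped_xyz      = index_points(xyz, idx)                   -> [8, 128, 32, 3]
# grouped_xyz_norm = grouped_xyz - new_xyz.view(8, 128, 1, 3) -> [8, 128, 32, 3], centred on each centroid
# new_points       = cat([grouped_xyz_norm, grouped_points])  -> [8, 128, 32, 3 + 32]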
4cee877ea58d32bbfc6fba91c81a18cc50dc7c045e657c5aafc81663e2bfedeb | def sample_and_group_all(xyz, points):
'\n Input:\n xyz: input points position data, [B, N, 3]\n points: input points data, [B, N, D]\n Return:\n new_xyz: sampled points position data, [B, 1, 3]\n new_points: sampled points data, [B, 1, N, 3+D]\n '
device = xyz.device
(B, N, C) = xyz.shape
new_xyz = torch.zeros(B, 1, C).to(device)
grouped_xyz = xyz.view(B, 1, N, C)
if (points is not None):
new_points = torch.cat([grouped_xyz, points.view(B, 1, N, (- 1))], dim=(- 1))
else:
new_points = grouped_xyz
return (new_xyz, new_points) | Input:
xyz: input points position data, [B, N, 3]
points: input points data, [B, N, D]
Return:
new_xyz: sampled points position data, [B, 1, 3]
new_points: sampled points data, [B, 1, N, 3+D] | 3d_point_cls/models/pointnet_util.py | sample_and_group_all | ZJZAC/Passport-aware-Normalization | 16 | python | def sample_and_group_all(xyz, points):
'\n Input:\n xyz: input points position data, [B, N, 3]\n points: input points data, [B, N, D]\n Return:\n new_xyz: sampled points position data, [B, 1, 3]\n new_points: sampled points data, [B, 1, N, 3+D]\n '
device = xyz.device
(B, N, C) = xyz.shape
new_xyz = torch.zeros(B, 1, C).to(device)
grouped_xyz = xyz.view(B, 1, N, C)
if (points is not None):
new_points = torch.cat([grouped_xyz, points.view(B, 1, N, (- 1))], dim=(- 1))
else:
new_points = grouped_xyz
return (new_xyz, new_points) | def sample_and_group_all(xyz, points):
'\n Input:\n xyz: input points position data, [B, N, 3]\n points: input points data, [B, N, D]\n Return:\n new_xyz: sampled points position data, [B, 1, 3]\n new_points: sampled points data, [B, 1, N, 3+D]\n '
device = xyz.device
(B, N, C) = xyz.shape
new_xyz = torch.zeros(B, 1, C).to(device)
grouped_xyz = xyz.view(B, 1, N, C)
if (points is not None):
new_points = torch.cat([grouped_xyz, points.view(B, 1, N, (- 1))], dim=(- 1))
else:
new_points = grouped_xyz
return (new_xyz, new_points)<|docstring|>Input:
xyz: input points position data, [B, N, 3]
points: input points data, [B, N, D]
Return:
new_xyz: sampled points position data, [B, 1, 3]
new_points: sampled points data, [B, 1, N, 3+D]<|endoftext|> |
86ee2d10cbbe1bf6fe34b7fbb07b060f0752fa426906d0647ae86d250cbf0fe0 | def init_params(net):
'Init layer parameters.'
for m in net.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal(m.weight, mode='fan_out')
if m.bias:
init.constant(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
init.constant(m.weight, 1)
init.constant(m.bias, 0)
elif isinstance(m, nn.Linear):
init.normal(m.weight, std=0.001)
if m.bias:
init.constant(m.bias, 0) | Init layer parameters. | 3d_point_cls/models/pointnet_util.py | init_params | ZJZAC/Passport-aware-Normalization | 16 | python | def init_params(net):
for m in net.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal(m.weight, mode='fan_out')
if m.bias:
init.constant(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
init.constant(m.weight, 1)
init.constant(m.bias, 0)
elif isinstance(m, nn.Linear):
init.normal(m.weight, std=0.001)
if m.bias:
init.constant(m.bias, 0) | def init_params(net):
for m in net.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal(m.weight, mode='fan_out')
if m.bias:
init.constant(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
init.constant(m.weight, 1)
init.constant(m.bias, 0)
elif isinstance(m, nn.Linear):
init.normal(m.weight, std=0.001)
if m.bias:
init.constant(m.bias, 0)<|docstring|>Init layer parameters.<|endoftext|> |
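The helper above uses the pre-0.4 torch.nn.init names and truth-tests bias tensors; a sketch of the same policy written against the current in-place API, with an explicit None check:
import torch.nn as nn
from torch.nn import init

def init_params_current(net: nn.Module) -> None:
    # Same initialisation policy as init_params above, modern API.
    for m in net.modules():
        if isinstance(m, nn.Conv2d):
            init.kaiming_normal_(m.weight, mode='fan_out')
            if m.bias is not None:
                init.constant_(m.bias, 0)
        elif isinstance(m, nn.BatchNorm2d):
            init.constant_(m.weight, 1)
            init.constant_(m.bias, 0)
        elif isinstance(m, nn.Linear):
            init.normal_(m.weight, std=1e-3)
            if m.bias is not None:
                init.constant_(m.bias, 0)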
f0d29687e270885ce78c7cebf4ff8d5cbd24bc0d394f69ccc5dd2cc16b4e135a | def re_initializer_layer(model, num_classes, layer=None):
'remove the last layer and add a new one'
indim = model.fc3.in_features
private_key = model.fc3
if layer:
model.fc3 = layer
else:
model.fc3 = nn.Linear(indim, num_classes).cuda()
return (model, private_key) | remove the last layer and add a new one | 3d_point_cls/models/pointnet_util.py | re_initializer_layer | ZJZAC/Passport-aware-Normalization | 16 | python | def re_initializer_layer(model, num_classes, layer=None):
indim = model.fc3.in_features
private_key = model.fc3
if layer:
model.fc3 = layer
else:
model.fc3 = nn.Linear(indim, num_classes).cuda()
return (model, private_key) | def re_initializer_layer(model, num_classes, layer=None):
indim = model.fc3.in_features
private_key = model.fc3
if layer:
model.fc3 = layer
else:
model.fc3 = nn.Linear(indim, num_classes).cuda()
return (model, private_key)<|docstring|>remove the last layer and add a new one<|endoftext|> |
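A usage sketch of the head-swapping helper above; the TinyCls model and class counts are hypothetical, and the .cuda() call is dropped so the sketch runs on CPU:
import torch.nn as nn

class TinyCls(nn.Module):                        # hypothetical stand-in with an fc3 head
    def __init__(self, feat_dim=256, num_classes=40):
        super().__init__()
        self.backbone = nn.Linear(3, feat_dim)
        self.fc3 = nn.Linear(feat_dim, num_classes)

model = TinyCls()
old_head = model.fc3                             # keep the trained head
model.fc3 = nn.Linear(old_head.in_features, 10)  # fresh 10-class head, as re_initializer_layer does
# ... fine-tune on the new task ...
model.fc3 = old_head                             # restore the original head afterwards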
f957e7ee4eca9df9080b757cffaf138a9ac00209931b0b8d6a7501500fd0fdf5 | def re_initializer_passport_layer(model, num_classes, layer=None):
'remove the last layer and add a new one'
indim = model.p3.in_features
private_key = model.p3.key_private
private_skey = model.p3.skey_private
private_layer = model.p3
if layer:
model.p3 = layer
else:
model.p3 = fc3_ft(indim, num_classes).cuda()
model.p3.key_private = private_key
model.p3.skey_private = private_skey
return (model, private_layer) | remove the last layer and add a new one | 3d_point_cls/models/pointnet_util.py | re_initializer_passport_layer | ZJZAC/Passport-aware-Normalization | 16 | python | def re_initializer_passport_layer(model, num_classes, layer=None):
indim = model.p3.in_features
private_key = model.p3.key_private
private_skey = model.p3.skey_private
private_layer = model.p3
if layer:
model.p3 = layer
else:
model.p3 = fc3_ft(indim, num_classes).cuda()
model.p3.key_private = private_key
model.p3.skey_private = private_skey
return (model, private_layer) | def re_initializer_passport_layer(model, num_classes, layer=None):
indim = model.p3.in_features
private_key = model.p3.key_private
private_skey = model.p3.skey_private
private_layer = model.p3
if layer:
model.p3 = layer
else:
model.p3 = fc3_ft(indim, num_classes).cuda()
model.p3.key_private = private_key
model.p3.skey_private = private_skey
return (model, private_layer)<|docstring|>remove the last layer and add a new one<|endoftext|> |
9cb4e6bd6f3e4fbf19c3a895ca9264811cb0830a1d93ee2375ce8cc69fe2e0b9 | def forward(self, xyz, points):
"\n Input:\n xyz: input points position data, [B, C, N]\n points: input points data, [B, D, N]\n Return:\n new_xyz: sampled points position data, [B, C, S]\n new_points_concat: sample points feature data, [B, D', S]\n "
xyz = xyz.permute(0, 2, 1)
if (points is not None):
points = points.permute(0, 2, 1)
if self.group_all:
(new_xyz, new_points) = sample_and_group_all(xyz, points)
else:
(new_xyz, new_points) = sample_and_group(self.npoint, self.radius, self.nsample, xyz, points)
new_points = new_points.permute(0, 3, 2, 1)
for (i, conv) in enumerate(self.mlp_convs):
bn = self.mlp_bns[i]
new_points = F.relu(bn(conv(new_points)))
new_points = torch.max(new_points, 2)[0]
new_xyz = new_xyz.permute(0, 2, 1)
return (new_xyz, new_points) | Input:
xyz: input points position data, [B, C, N]
points: input points data, [B, D, N]
Return:
new_xyz: sampled points position data, [B, C, S]
new_points_concat: sample points feature data, [B, D', S] | 3d_point_cls/models/pointnet_util.py | forward | ZJZAC/Passport-aware-Normalization | 16 | python | def forward(self, xyz, points):
"\n Input:\n xyz: input points position data, [B, C, N]\n points: input points data, [B, D, N]\n Return:\n new_xyz: sampled points position data, [B, C, S]\n new_points_concat: sample points feature data, [B, D', S]\n "
xyz = xyz.permute(0, 2, 1)
if (points is not None):
points = points.permute(0, 2, 1)
if self.group_all:
(new_xyz, new_points) = sample_and_group_all(xyz, points)
else:
(new_xyz, new_points) = sample_and_group(self.npoint, self.radius, self.nsample, xyz, points)
new_points = new_points.permute(0, 3, 2, 1)
for (i, conv) in enumerate(self.mlp_convs):
bn = self.mlp_bns[i]
new_points = F.relu(bn(conv(new_points)))
new_points = torch.max(new_points, 2)[0]
new_xyz = new_xyz.permute(0, 2, 1)
return (new_xyz, new_points) | def forward(self, xyz, points):
"\n Input:\n xyz: input points position data, [B, C, N]\n points: input points data, [B, D, N]\n Return:\n new_xyz: sampled points position data, [B, C, S]\n new_points_concat: sample points feature data, [B, D', S]\n "
xyz = xyz.permute(0, 2, 1)
if (points is not None):
points = points.permute(0, 2, 1)
if self.group_all:
(new_xyz, new_points) = sample_and_group_all(xyz, points)
else:
(new_xyz, new_points) = sample_and_group(self.npoint, self.radius, self.nsample, xyz, points)
new_points = new_points.permute(0, 3, 2, 1)
for (i, conv) in enumerate(self.mlp_convs):
bn = self.mlp_bns[i]
new_points = F.relu(bn(conv(new_points)))
new_points = torch.max(new_points, 2)[0]
new_xyz = new_xyz.permute(0, 2, 1)
return (new_xyz, new_points)<|docstring|>Input:
xyz: input points position data, [B, C, N]
points: input points data, [B, D, N]
Return:
new_xyz: sampled points position data, [B, C, S]
new_points_concat: sample points feature data, [B, D', S]<|endoftext|> |
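The pooling step at the end of this forward pass, isolated with assumed shapes:
import torch

B, C_out, nsample, npoint = 2, 64, 32, 128
new_points = torch.randn(B, C_out, nsample, npoint)   # output of the Conv2d/BN/ReLU stack
pooled = torch.max(new_points, 2)[0]                   # max over the nsample axis
print(pooled.shape)                                    # torch.Size([2, 64, 128])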
6924e9addf417c3a2acebf9eb4fcd8c5f6c8f70a9709c23e0ddeb439d0c9bebe | def forward(self, xyz, points):
"\n Input:\n xyz: input points position data, [B, C, N]\n points: input points data, [B, D, N]\n Return:\n new_xyz: sampled points position data, [B, C, S]\n new_points_concat: sample points feature data, [B, D', S]\n "
xyz = xyz.permute(0, 2, 1)
if (points is not None):
points = points.permute(0, 2, 1)
(B, N, C) = xyz.shape
S = self.npoint
new_xyz = index_points(xyz, farthest_point_sample(xyz, S))
new_points_list = []
for (i, radius) in enumerate(self.radius_list):
K = self.nsample_list[i]
group_idx = query_ball_point(radius, K, xyz, new_xyz)
grouped_xyz = index_points(xyz, group_idx)
grouped_xyz -= new_xyz.view(B, S, 1, C)
if (points is not None):
grouped_points = index_points(points, group_idx)
grouped_points = torch.cat([grouped_points, grouped_xyz], dim=(- 1))
else:
grouped_points = grouped_xyz
grouped_points = grouped_points.permute(0, 3, 2, 1)
for j in range(len(self.conv_blocks[i])):
conv = self.conv_blocks[i][j]
bn = self.bn_blocks[i][j]
grouped_points = F.relu(bn(conv(grouped_points)))
new_points = torch.max(grouped_points, 2)[0]
new_points_list.append(new_points)
new_xyz = new_xyz.permute(0, 2, 1)
new_points_concat = torch.cat(new_points_list, dim=1)
return (new_xyz, new_points_concat) | Input:
xyz: input points position data, [B, C, N]
points: input points data, [B, D, N]
Return:
new_xyz: sampled points position data, [B, C, S]
new_points_concat: sample points feature data, [B, D', S] | 3d_point_cls/models/pointnet_util.py | forward | ZJZAC/Passport-aware-Normalization | 16 | python | def forward(self, xyz, points):
"\n Input:\n xyz: input points position data, [B, C, N]\n points: input points data, [B, D, N]\n Return:\n new_xyz: sampled points position data, [B, C, S]\n new_points_concat: sample points feature data, [B, D', S]\n "
xyz = xyz.permute(0, 2, 1)
if (points is not None):
points = points.permute(0, 2, 1)
(B, N, C) = xyz.shape
S = self.npoint
new_xyz = index_points(xyz, farthest_point_sample(xyz, S))
new_points_list = []
for (i, radius) in enumerate(self.radius_list):
K = self.nsample_list[i]
group_idx = query_ball_point(radius, K, xyz, new_xyz)
grouped_xyz = index_points(xyz, group_idx)
grouped_xyz -= new_xyz.view(B, S, 1, C)
if (points is not None):
grouped_points = index_points(points, group_idx)
grouped_points = torch.cat([grouped_points, grouped_xyz], dim=(- 1))
else:
grouped_points = grouped_xyz
grouped_points = grouped_points.permute(0, 3, 2, 1)
for j in range(len(self.conv_blocks[i])):
conv = self.conv_blocks[i][j]
bn = self.bn_blocks[i][j]
grouped_points = F.relu(bn(conv(grouped_points)))
new_points = torch.max(grouped_points, 2)[0]
new_points_list.append(new_points)
new_xyz = new_xyz.permute(0, 2, 1)
new_points_concat = torch.cat(new_points_list, dim=1)
return (new_xyz, new_points_concat) | def forward(self, xyz, points):
"\n Input:\n xyz: input points position data, [B, C, N]\n points: input points data, [B, D, N]\n Return:\n new_xyz: sampled points position data, [B, C, S]\n new_points_concat: sample points feature data, [B, D', S]\n "
xyz = xyz.permute(0, 2, 1)
if (points is not None):
points = points.permute(0, 2, 1)
(B, N, C) = xyz.shape
S = self.npoint
new_xyz = index_points(xyz, farthest_point_sample(xyz, S))
new_points_list = []
for (i, radius) in enumerate(self.radius_list):
K = self.nsample_list[i]
group_idx = query_ball_point(radius, K, xyz, new_xyz)
grouped_xyz = index_points(xyz, group_idx)
grouped_xyz -= new_xyz.view(B, S, 1, C)
if (points is not None):
grouped_points = index_points(points, group_idx)
grouped_points = torch.cat([grouped_points, grouped_xyz], dim=(- 1))
else:
grouped_points = grouped_xyz
grouped_points = grouped_points.permute(0, 3, 2, 1)
for j in range(len(self.conv_blocks[i])):
conv = self.conv_blocks[i][j]
bn = self.bn_blocks[i][j]
grouped_points = F.relu(bn(conv(grouped_points)))
new_points = torch.max(grouped_points, 2)[0]
new_points_list.append(new_points)
new_xyz = new_xyz.permute(0, 2, 1)
new_points_concat = torch.cat(new_points_list, dim=1)
return (new_xyz, new_points_concat)<|docstring|>Input:
xyz: input points position data, [B, C, N]
points: input points data, [B, D, N]
Return:
new_xyz: sampled points position data, [B, C, S]
new_points_concat: sample points feature data, [B, D', S]<|endoftext|> |
a0c8779b195819037c801640f2b434502018a6fcd5abea5b653ec986cd3b68bd | def forward(self, xyz1, xyz2, points1, points2):
"\n Input:\n xyz1: input points position data, [B, C, N]\n xyz2: sampled input points position data, [B, C, S]\n points1: input points data, [B, D, N]\n points2: input points data, [B, D, S]\n Return:\n new_points: upsampled points data, [B, D', N]\n "
xyz1 = xyz1.permute(0, 2, 1)
xyz2 = xyz2.permute(0, 2, 1)
points2 = points2.permute(0, 2, 1)
(B, N, C) = xyz1.shape
(_, S, _) = xyz2.shape
if (S == 1):
interpolated_points = points2.repeat(1, N, 1)
else:
dists = square_distance(xyz1, xyz2)
(dists, idx) = dists.sort(dim=(- 1))
(dists, idx) = (dists[:, :, :3], idx[:, :, :3])
dist_recip = (1.0 / (dists + 1e-08))
norm = torch.sum(dist_recip, dim=2, keepdim=True)
weight = (dist_recip / norm)
interpolated_points = torch.sum((index_points(points2, idx) * weight.view(B, N, 3, 1)), dim=2)
if (points1 is not None):
points1 = points1.permute(0, 2, 1)
new_points = torch.cat([points1, interpolated_points], dim=(- 1))
else:
new_points = interpolated_points
new_points = new_points.permute(0, 2, 1)
for (i, conv) in enumerate(self.mlp_convs):
bn = self.mlp_bns[i]
new_points = F.relu(bn(conv(new_points)))
return new_points | Input:
xyz1: input points position data, [B, C, N]
xyz2: sampled input points position data, [B, C, S]
points1: input points data, [B, D, N]
points2: input points data, [B, D, S]
Return:
new_points: upsampled points data, [B, D', N] | 3d_point_cls/models/pointnet_util.py | forward | ZJZAC/Passport-aware-Normalization | 16 | python | def forward(self, xyz1, xyz2, points1, points2):
"\n Input:\n xyz1: input points position data, [B, C, N]\n xyz2: sampled input points position data, [B, C, S]\n points1: input points data, [B, D, N]\n points2: input points data, [B, D, S]\n Return:\n new_points: upsampled points data, [B, D', N]\n "
xyz1 = xyz1.permute(0, 2, 1)
xyz2 = xyz2.permute(0, 2, 1)
points2 = points2.permute(0, 2, 1)
(B, N, C) = xyz1.shape
(_, S, _) = xyz2.shape
if (S == 1):
interpolated_points = points2.repeat(1, N, 1)
else:
dists = square_distance(xyz1, xyz2)
(dists, idx) = dists.sort(dim=(- 1))
(dists, idx) = (dists[:, :, :3], idx[:, :, :3])
dist_recip = (1.0 / (dists + 1e-08))
norm = torch.sum(dist_recip, dim=2, keepdim=True)
weight = (dist_recip / norm)
interpolated_points = torch.sum((index_points(points2, idx) * weight.view(B, N, 3, 1)), dim=2)
if (points1 is not None):
points1 = points1.permute(0, 2, 1)
new_points = torch.cat([points1, interpolated_points], dim=(- 1))
else:
new_points = interpolated_points
new_points = new_points.permute(0, 2, 1)
for (i, conv) in enumerate(self.mlp_convs):
bn = self.mlp_bns[i]
new_points = F.relu(bn(conv(new_points)))
return new_points | def forward(self, xyz1, xyz2, points1, points2):
"\n Input:\n xyz1: input points position data, [B, C, N]\n xyz2: sampled input points position data, [B, C, S]\n points1: input points data, [B, D, N]\n points2: input points data, [B, D, S]\n Return:\n new_points: upsampled points data, [B, D', N]\n "
xyz1 = xyz1.permute(0, 2, 1)
xyz2 = xyz2.permute(0, 2, 1)
points2 = points2.permute(0, 2, 1)
(B, N, C) = xyz1.shape
(_, S, _) = xyz2.shape
if (S == 1):
interpolated_points = points2.repeat(1, N, 1)
else:
dists = square_distance(xyz1, xyz2)
(dists, idx) = dists.sort(dim=(- 1))
(dists, idx) = (dists[(:, :, :3)], idx[(:, :, :3)])
dist_recip = (1.0 / (dists + 1e-08))
norm = torch.sum(dist_recip, dim=2, keepdim=True)
weight = (dist_recip / norm)
interpolated_points = torch.sum((index_points(points2, idx) * weight.view(B, N, 3, 1)), dim=2)
if (points1 is not None):
points1 = points1.permute(0, 2, 1)
new_points = torch.cat([points1, interpolated_points], dim=(- 1))
else:
new_points = interpolated_points
new_points = new_points.permute(0, 2, 1)
for (i, conv) in enumerate(self.mlp_convs):
bn = self.mlp_bns[i]
new_points = F.relu(bn(conv(new_points)))
return new_points<|docstring|>Input:
xyz1: input points position data, [B, C, N]
xyz2: sampled input points position data, [B, C, S]
points1: input points data, [B, D, N]
points2: input points data, [B, D, S]
Return:
new_points: upsampled points data, [B, D', N]<|endoftext|> |
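The three-nearest-neighbour inverse-distance interpolation performed above, isolated with assumed shapes:
import torch

B, N, S, D = 2, 1024, 256, 64
xyz1 = torch.rand(B, N, 3)        # dense positions to interpolate onto
xyz2 = torch.rand(B, S, 3)        # sparse positions that carry features
points2 = torch.randn(B, S, D)

dists = torch.cdist(xyz1, xyz2) ** 2                  # squared distances, [B, N, S]
dists, idx = dists.topk(3, dim=-1, largest=False)     # three nearest sparse neighbours
weight = 1.0 / (dists + 1e-8)
weight = weight / weight.sum(dim=-1, keepdim=True)    # normalised inverse-distance weights

gathered = torch.gather(points2.unsqueeze(1).expand(B, N, S, D), 2,
                        idx.unsqueeze(-1).expand(B, N, 3, D))  # [B, N, 3, D]
interpolated = (gathered * weight.unsqueeze(-1)).sum(dim=2)    # [B, N, D]
print(interpolated.shape)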
ef62d22cd8758eb49092dc3e5ba73b3c7fb1b3f6c876051c27c4b7a20799ebc5 | @distributed_trace
def list(self, resource_group_name: str, environment_name: str, **kwargs: Any) -> Iterable['_models.DaprComponentsCollection']:
'Get the Dapr Components for a managed environment.\n\n Get the Dapr Components for a managed environment.\n\n :param resource_group_name: The name of the resource group. The name is case insensitive.\n :type resource_group_name: str\n :param environment_name: Name of the Managed Environment.\n :type environment_name: str\n :keyword callable cls: A custom type or function that will be passed the direct response\n :return: An iterator like instance of either DaprComponentsCollection or the result of\n cls(response)\n :rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.appcontainers.models.DaprComponentsCollection]\n :raises: ~azure.core.exceptions.HttpResponseError\n '
api_version = kwargs.pop('api_version', '2022-03-01')
cls = kwargs.pop('cls', None)
error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
def prepare_request(next_link=None):
if (not next_link):
request = build_list_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, api_version=api_version, template_url=self.list.metadata['url'])
request = _convert_request(request)
request.url = self._client.format_url(request.url)
else:
request = build_list_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, api_version=api_version, template_url=next_link)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
request.method = 'GET'
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('DaprComponentsCollection', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return ((deserialized.next_link or None), iter(list_of_elem))
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if (response.status_code not in [200]):
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return ItemPaged(get_next, extract_data) | Get the Dapr Components for a managed environment.
Get the Dapr Components for a managed environment.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param environment_name: Name of the Managed Environment.
:type environment_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DaprComponentsCollection or the result of
cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.appcontainers.models.DaprComponentsCollection]
:raises: ~azure.core.exceptions.HttpResponseError | sdk/appcontainers/azure-mgmt-appcontainers/azure/mgmt/appcontainers/operations/_dapr_components_operations.py | list | AikoBB/azure-sdk-for-python | 1 | python | @distributed_trace
def list(self, resource_group_name: str, environment_name: str, **kwargs: Any) -> Iterable['_models.DaprComponentsCollection']:
'Get the Dapr Components for a managed environment.\n\n Get the Dapr Components for a managed environment.\n\n :param resource_group_name: The name of the resource group. The name is case insensitive.\n :type resource_group_name: str\n :param environment_name: Name of the Managed Environment.\n :type environment_name: str\n :keyword callable cls: A custom type or function that will be passed the direct response\n :return: An iterator like instance of either DaprComponentsCollection or the result of\n cls(response)\n :rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.appcontainers.models.DaprComponentsCollection]\n :raises: ~azure.core.exceptions.HttpResponseError\n '
api_version = kwargs.pop('api_version', '2022-03-01')
cls = kwargs.pop('cls', None)
error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
def prepare_request(next_link=None):
if (not next_link):
request = build_list_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, api_version=api_version, template_url=self.list.metadata['url'])
request = _convert_request(request)
request.url = self._client.format_url(request.url)
else:
request = build_list_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, api_version=api_version, template_url=next_link)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
request.method = 'GET'
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('DaprComponentsCollection', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return ((deserialized.next_link or None), iter(list_of_elem))
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if (response.status_code not in [200]):
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return ItemPaged(get_next, extract_data) | @distributed_trace
def list(self, resource_group_name: str, environment_name: str, **kwargs: Any) -> Iterable['_models.DaprComponentsCollection']:
'Get the Dapr Components for a managed environment.\n\n Get the Dapr Components for a managed environment.\n\n :param resource_group_name: The name of the resource group. The name is case insensitive.\n :type resource_group_name: str\n :param environment_name: Name of the Managed Environment.\n :type environment_name: str\n :keyword callable cls: A custom type or function that will be passed the direct response\n :return: An iterator like instance of either DaprComponentsCollection or the result of\n cls(response)\n :rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.appcontainers.models.DaprComponentsCollection]\n :raises: ~azure.core.exceptions.HttpResponseError\n '
api_version = kwargs.pop('api_version', '2022-03-01')
cls = kwargs.pop('cls', None)
error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
def prepare_request(next_link=None):
if (not next_link):
request = build_list_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, api_version=api_version, template_url=self.list.metadata['url'])
request = _convert_request(request)
request.url = self._client.format_url(request.url)
else:
request = build_list_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, api_version=api_version, template_url=next_link)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
request.method = 'GET'
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('DaprComponentsCollection', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return ((deserialized.next_link or None), iter(list_of_elem))
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if (response.status_code not in [200]):
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
return pipeline_response
return ItemPaged(get_next, extract_data)<|docstring|>Get the Dapr Components for a managed environment.
Get the Dapr Components for a managed environment.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param environment_name: Name of the Managed Environment.
:type environment_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DaprComponentsCollection or the result of
cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.appcontainers.models.DaprComponentsCollection]
:raises: ~azure.core.exceptions.HttpResponseError<|endoftext|> |
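A hedged usage sketch for this paging operation, assuming the ContainerAppsAPIClient entry point from azure-mgmt-appcontainers and azure-identity for credentials; the resource names are placeholders:
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

client = ContainerAppsAPIClient(credential=DefaultAzureCredential(),
                                subscription_id="<subscription-id>")   # placeholder
for component in client.dapr_components.list(resource_group_name="<resource-group>",
                                             environment_name="<environment>"):
    print(component.name)    # ItemPaged follows the continuation links transparently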
8731c9c9a2b14ef7ead0c39e9e539d1b8213f274d04b0ccdfee15c507c4f94e7 | @distributed_trace
def get(self, resource_group_name: str, environment_name: str, component_name: str, **kwargs: Any) -> '_models.DaprComponent':
'Get a dapr component.\n\n Get a dapr component.\n\n :param resource_group_name: The name of the resource group. The name is case insensitive.\n :type resource_group_name: str\n :param environment_name: Name of the Managed Environment.\n :type environment_name: str\n :param component_name: Name of the Dapr Component.\n :type component_name: str\n :keyword callable cls: A custom type or function that will be passed the direct response\n :return: DaprComponent, or the result of cls(response)\n :rtype: ~azure.mgmt.appcontainers.models.DaprComponent\n :raises: ~azure.core.exceptions.HttpResponseError\n '
cls = kwargs.pop('cls', None)
error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
api_version = kwargs.pop('api_version', '2022-03-01')
request = build_get_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, component_name=component_name, api_version=api_version, template_url=self.get.metadata['url'])
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if (response.status_code not in [200]):
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('DaprComponent', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized | Get a dapr component.
Get a dapr component.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param environment_name: Name of the Managed Environment.
:type environment_name: str
:param component_name: Name of the Dapr Component.
:type component_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DaprComponent, or the result of cls(response)
:rtype: ~azure.mgmt.appcontainers.models.DaprComponent
:raises: ~azure.core.exceptions.HttpResponseError | sdk/appcontainers/azure-mgmt-appcontainers/azure/mgmt/appcontainers/operations/_dapr_components_operations.py | get | AikoBB/azure-sdk-for-python | 1 | python | @distributed_trace
def get(self, resource_group_name: str, environment_name: str, component_name: str, **kwargs: Any) -> '_models.DaprComponent':
'Get a dapr component.\n\n Get a dapr component.\n\n :param resource_group_name: The name of the resource group. The name is case insensitive.\n :type resource_group_name: str\n :param environment_name: Name of the Managed Environment.\n :type environment_name: str\n :param component_name: Name of the Dapr Component.\n :type component_name: str\n :keyword callable cls: A custom type or function that will be passed the direct response\n :return: DaprComponent, or the result of cls(response)\n :rtype: ~azure.mgmt.appcontainers.models.DaprComponent\n :raises: ~azure.core.exceptions.HttpResponseError\n '
cls = kwargs.pop('cls', None)
error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
api_version = kwargs.pop('api_version', '2022-03-01')
request = build_get_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, component_name=component_name, api_version=api_version, template_url=self.get.metadata['url'])
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if (response.status_code not in [200]):
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('DaprComponent', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized | @distributed_trace
def get(self, resource_group_name: str, environment_name: str, component_name: str, **kwargs: Any) -> '_models.DaprComponent':
'Get a dapr component.\n\n Get a dapr component.\n\n :param resource_group_name: The name of the resource group. The name is case insensitive.\n :type resource_group_name: str\n :param environment_name: Name of the Managed Environment.\n :type environment_name: str\n :param component_name: Name of the Dapr Component.\n :type component_name: str\n :keyword callable cls: A custom type or function that will be passed the direct response\n :return: DaprComponent, or the result of cls(response)\n :rtype: ~azure.mgmt.appcontainers.models.DaprComponent\n :raises: ~azure.core.exceptions.HttpResponseError\n '
cls = kwargs.pop('cls', None)
error_map = {401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError}
error_map.update(kwargs.pop('error_map', {}))
api_version = kwargs.pop('api_version', '2022-03-01')
request = build_get_request(subscription_id=self._config.subscription_id, resource_group_name=resource_group_name, environment_name=environment_name, component_name=component_name, api_version=api_version, template_url=self.get.metadata['url'])
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if (response.status_code not in [200]):
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
deserialized = self._deserialize('DaprComponent', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized<|docstring|>Get a dapr component.
Get a dapr component.
:param resource_group_name: The name of the resource group. The name is case insensitive.
:type resource_group_name: str
:param environment_name: Name of the Managed Environment.
:type environment_name: str
:param component_name: Name of the Dapr Component.
:type component_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DaprComponent, or the result of cls(response)
:rtype: ~azure.mgmt.appcontainers.models.DaprComponent
:raises: ~azure.core.exceptions.HttpResponseError<|endoftext|> |
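Continuing the same assumed client, a one-call sketch of fetching a single component by name (all identifiers are placeholders):
component = client.dapr_components.get(resource_group_name="<resource-group>",
                                       environment_name="<environment>",
                                       component_name="<dapr-component>")
print(component.name)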