# Introduction

**What?** Operators

# Basic Python Semantics: Operators

In the previous section, we began to look at the semantics of Python variables and objects; here we'll dig into the semantics of the various *operators* included in the language. By the end of this section, you'll have the basic tools to begin comparing and operating on data in Python.

## Arithmetic Operations

Python implements seven basic binary arithmetic operators, two of which can double as unary operators. They are summarized in the following table:

| Operator   | Name           | Description                                            |
|------------|----------------|--------------------------------------------------------|
| ``a + b``  | Addition       | Sum of ``a`` and ``b``                                 |
| ``a - b``  | Subtraction    | Difference of ``a`` and ``b``                          |
| ``a * b``  | Multiplication | Product of ``a`` and ``b``                             |
| ``a / b``  | True division  | Quotient of ``a`` and ``b``                            |
| ``a // b`` | Floor division | Quotient of ``a`` and ``b``, removing fractional parts |
| ``a % b``  | Modulus        | Integer remainder after division of ``a`` by ``b``     |
| ``a ** b`` | Exponentiation | ``a`` raised to the power of ``b``                     |
| ``-a``     | Negation       | The negative of ``a``                                  |
| ``+a``     | Unary plus     | ``a`` unchanged (rarely used)                          |

These operators can be used and combined in intuitive ways, using standard parentheses to group operations. For example:
# addition, subtraction, multiplication
(4 + 8) * (6.5 - 3)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
Floor division is true division with fractional parts truncated:
# True division
print(11 / 2)

# Floor division
print(11 // 2)
5.5
5
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
The floor division operator was added in Python 3; you should be aware if working in Python 2 that the standard division operator (``/``) acts like floor division for integers and like true division for floating-point numbers.

Finally, I'll mention an eighth arithmetic operator that was added in Python 3.5: the ``a @ b`` operator, which is meant to indicate the *matrix product* of ``a`` and ``b``, for use in various linear algebra packages.

## Bitwise Operations

In addition to the standard numerical operations, Python includes operators to perform bitwise logical operations on integers. These are much less commonly used than the standard arithmetic operations, but it's useful to know that they exist. The six bitwise operators are summarized in the following table:

| Operator   | Name            | Description                                  |
|------------|-----------------|----------------------------------------------|
| ``a & b``  | Bitwise AND     | Bits defined in both ``a`` and ``b``         |
| ``a \| b`` | Bitwise OR      | Bits defined in ``a`` or ``b`` or both       |
| ``a ^ b``  | Bitwise XOR     | Bits defined in ``a`` or ``b`` but not both  |
| ``a << b`` | Bit shift left  | Shift bits of ``a`` left by ``b`` units      |
| ``a >> b`` | Bit shift right | Shift bits of ``a`` right by ``b`` units     |
| ``~a``     | Bitwise NOT     | Bitwise negation of ``a``                    |

These bitwise operators only make sense in terms of the binary representation of numbers, which you can see using the built-in ``bin`` function:
bin(10)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
The result is prefixed with ``'0b'``, which indicates a binary representation. The rest of the digits indicate that the number 10 is expressed as the sum $1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0$. Similarly, we can write:
bin(4)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
Now, using bitwise OR, we can find the number which combines the bits of 4 and 10:
4 | 10

bin(4 | 10)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
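As a quick illustration of the remaining entries in the bitwise table above, the following sketch (illustrative only) applies each operator to small integers and shows the binary form of the results:

```python
a, b = 10, 4

# AND, XOR: combine the bits of 10 (0b1010) and 4 (0b0100)
print(bin(a & b))   # 0b0    -> no bits are set in both
print(bin(a ^ b))   # 0b1110 -> bits set in exactly one of the two

# Shifts: move the bits of 10 two places left or right
print(bin(a << 2))  # 0b101000 (equivalent to 10 * 4)
print(bin(a >> 2))  # 0b10     (equivalent to 10 // 4)

# NOT: flip every bit (more on the resulting sign below)
print(~a)           # -11
```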
These bitwise operators are not as immediately useful as the standard arithmetic operators, but it's helpful to see them at least once to understand what class of operation they perform. In particular, users from other languages are sometimes tempted to use XOR (i.e., ``a ^ b``) when they really mean exponentiation (i.e., ``a ** b``).

## Assignment Operations

We've seen that variables can be assigned with the "``=``" operator, and the values stored for later use. For example:
a = 24
print(a)
24
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
We can use these variables in expressions with any of the operators mentioned earlier. For example, to add 2 to ``a`` we write:
a + 2
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
We might want to update the variable ``a`` with this new value; in this case, we could combine the addition and the assignment and write ``a = a + 2``. Because this type of combined operation and assignment is so common, Python includes built-in update operators for all of the arithmetic operations:
a += 2  # equivalent to a = a + 2
print(a)
26
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
There is an augmented assignment operator corresponding to each of the binary operators listed earlier; in brief, they are:

|             |            |             |             |
|-------------|------------|-------------|-------------|
| ``a += b``  | ``a -= b`` | ``a *= b``  | ``a /= b``  |
| ``a //= b`` | ``a %= b`` | ``a **= b`` | ``a &= b``  |
| ``a \|= b`` | ``a ^= b`` | ``a <<= b`` | ``a >>= b`` |

Each one is equivalent to the corresponding operation followed by assignment: that is, for any operator "``■``", the expression ``a ■= b`` is equivalent to ``a = a ■ b``, with a slight catch. For mutable objects like lists, arrays, or DataFrames, these augmented assignment operations are actually subtly different than their more verbose counterparts: they modify the contents of the original object rather than creating a new object to store the result (a short demonstration follows the next example).

## Comparison Operations

Another type of operation which can be very useful is comparison of different values. For this, Python implements standard comparison operators, which return the Boolean values ``True`` and ``False``. The comparison operations are listed in the following table:

| Operation  | Description                       | Operation  | Description                          |
|------------|-----------------------------------|------------|--------------------------------------|
| ``a == b`` | ``a`` equal to ``b``              | ``a != b`` | ``a`` not equal to ``b``             |
| ``a < b``  | ``a`` less than ``b``             | ``a > b``  | ``a`` greater than ``b``             |
| ``a <= b`` | ``a`` less than or equal to ``b`` | ``a >= b`` | ``a`` greater than or equal to ``b`` |

These comparison operators can be combined with the arithmetic and bitwise operators to express a virtually limitless range of tests for the numbers. For example, we can check if a number is odd by checking that the modulus with 2 returns 1:
# check whether 25 is odd
25 % 2 == 1

# check whether 66 is odd
66 % 2 == 1
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
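Here is the in-place behavior of augmented assignment promised above: for a mutable object such as a list, ``+=`` modifies the existing object, while the verbose form rebinds the name to a brand-new object. A minimal sketch (the ``id`` values themselves will differ from run to run):

```python
lst = [1, 2, 3]
before = id(lst)

lst += [4]                  # in-place: extends the same list object
print(id(lst) == before)    # True

lst = lst + [5]             # creates a new list and rebinds the name
print(id(lst) == before)    # False
```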
We can string together multiple comparisons to check more complicated relationships:
# check if a is between 15 and 30
a = 25
15 < a < 30
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
And, just to make your head hurt a bit, take a look at this comparison:
-1 == ~0
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
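The comparison above evaluates to ``True``. Before the explanation below, here is a small illustrative sketch that probes the same behavior on a few more integers by checking the identity ``~x == -x - 1``, which holds for Python's signed integers:

```python
for x in [0, 1, 5, 255]:
    print(x, ~x, ~x == -x - 1)   # the last column is always True
```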
Recall that ``~`` is the bit-flip operator, and evidently when you flip all the bits of zero you end up with -1. If you're curious as to why this is, look up the *two's complement* integer encoding scheme, which is what Python uses to encode signed integers, and think about what happens when you start flipping all the bits of integers encoded this way.

## Boolean Operations

When working with Boolean values, Python provides operators to combine the values using the standard concepts of "and", "or", and "not". Predictably, these operators are expressed using the words ``and``, ``or``, and ``not``:
x = 4

(x < 6) and (x > 2)

(x > 10) or (x % 2 == 0)

not (x < 6)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
Boolean algebra aficionados might notice that the XOR operator is not included; this can of course be constructed in several ways from a compound statement of the other operators. Alternatively, a clever trick you can use for XOR of Boolean values is the following:
# (x > 1) xor (x < 10)
(x > 1) != (x < 10)
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
These sorts of Boolean operations will become extremely useful when we begin discussing *control flow statements* such as conditionals and loops.

One sometimes confusing thing about the language is when to use Boolean operators (``and``, ``or``, ``not``), and when to use bitwise operations (``&``, ``|``, ``~``). The answer lies in their names: Boolean operators should be used when you want to compute *Boolean values (i.e., truth or falsehood) of entire statements*. Bitwise operations should be used when you want to *operate on individual bits or components of the objects in question*.

## Identity and Membership Operators

Like ``and``, ``or``, and ``not``, Python also contains prose-like operators to check for identity and membership. They are the following:

| Operator       | Description                                       |
|----------------|---------------------------------------------------|
| ``a is b``     | True if ``a`` and ``b`` are identical objects     |
| ``a is not b`` | True if ``a`` and ``b`` are not identical objects |
| ``a in b``     | True if ``a`` is a member of ``b``                |
| ``a not in b`` | True if ``a`` is not a member of ``b``            |

### Identity Operators: "``is``" and "``is not``"

The identity operators, "``is``" and "``is not``", check for *object identity*. Object identity is different than equality, as we can see here:
a = [1, 2, 3]
b = [1, 2, 3]

a == b

a is b

a is not b
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
What do identical objects look like? Here is an example:
a = [1, 2, 3]
b = a

a is b
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
The difference between the two cases here is that in the first, ``a`` and ``b`` point to *different objects*, while in the second they point to the *same object*. As we saw in the previous section, Python variables are pointers. The "``is``" operator checks whether the two variables are pointing to the same container (object), rather than referring to what the container contains. With this in mind, in most cases where a beginner is tempted to use "``is``", what they really mean is ``==``.

### Membership Operators

Membership operators check for membership within compound objects. So, for example, we can write:
1 in [1, 2, 3]

2 not in [1, 2, 3]
_____no_output_____
OLDAP-2.3
GitHub_MD_rendering/Operators.ipynb
kyaiooiayk/Python-Programming
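Membership tests are not limited to lists; they work on any compound object, including strings, tuples, sets, and dictionaries (where they check the keys). A short illustrative sketch:

```python
print('th' in 'python')            # True: substring test
print(3 in (1, 2, 3))              # True: tuple membership
print('age' in {'age': 30})        # True: dictionary keys are tested
print(30 in {'age': 30}.values())  # True: values must be asked for explicitly
```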
**Note about the above cell:** There are several options for this particular step depending on the computational resources available and the network size. If the network is sufficiently small (<250K edges), it is recommended to use the 'small_network_AUPRC_wrapper' function, as it can be much faster, especially when run in parallel (at least 8G per core is recommended). If you would like to parallelize the AUPRC calculation with a larger network (between 250K and 2.5M edges), at least 16G per core is recommended, or 32G per core if the network contains more than 2.5M edges. For larger networks, it is recommended to use the 'large_network_AUPRC_wrapper', which may be a slightly slower function but is better equipped to handle the larger memory footprint required. To change the parallelization of the function, set the 'cores' option to the number of threads you would like to utilize.
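The size thresholds described above can be encoded directly so the notebook picks a wrapper automatically. This is only an illustrative sketch: it assumes `large_network_AUPRC_wrapper` accepts the same arguments as the small variant used below, which should be verified against the network evaluation library before use.

```python
# Pick an AUPRC wrapper based on the edge-count thresholds described above.
n_edges = len(network.edges())
if n_edges < 250_000:
    auprc_wrapper = nef.small_network_AUPRC_wrapper   # faster; >=8G per core suggested
else:
    auprc_wrapper = nef.large_network_AUPRC_wrapper   # >=16G per core (32G above 2.5M edges)
```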
# Construct null networks and calculate the AUPRC of the gene sets of the null networks
# We can use the AUPRC wrapper function for this
null_AUPRCs = []
for i in range(10):
    shuffNet = nef.shuffle_network(network, max_tries_n=10, verbose=True)
    shuffNet_kernel = nef.construct_prop_kernel(shuffNet, alpha=alpha, verbose=False)
    shuffNet_AUPRCs = nef.small_network_AUPRC_wrapper(shuffNet_kernel, genesets, genesets_p, n=30, cores=4, verbose=False)
    null_AUPRCs.append(shuffNet_AUPRCs)
    print('shuffNet', repr(i+1), 'AUPRCs calculated')
_____no_output_____
MIT
Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb
jdtibochab/network_bisb
**Note about the above cell:** We use a small number of shuffled networks to calculate the null AUPRC values for this example, but a larger number may give a better representation of the true null AUPRC distribution, especially if the resulting distribution of null AUPRCs has a high variance relative to the actual AUPRC values. In our experience, however, the variance remains relatively small even with a small number of shuffled networks.
# Construct table of null AUPRCs
null_AUPRCs_table = pd.concat(null_AUPRCs, axis=1)
null_AUPRCs_table.columns = ['shuffNet'+repr(i+1) for i in range(len(null_AUPRCs))]

# Calculate performance metric of gene sets
network_performance = nef.calculate_network_performance_score(AUPRC_values, null_AUPRCs_table, verbose=True)
network_performance.name = 'Test Network'

# Calculate network performance gain over median null AUPRC
network_perf_gain = nef.calculate_network_performance_gain(AUPRC_values, null_AUPRCs_table, verbose=True)
network_perf_gain.name = 'Test Network'
network_performance

# Rank network on average performance across gene sets vs performance on same gene sets in previous network set
all_network_performance = pd.read_csv('~/Data/Network_Performance.csv', index_col=0)
all_network_performance_filt = pd.concat([network_performance, all_network_performance.loc[network_performance.index]], axis=1)
network_performance_rank_table = all_network_performance_filt.rank(axis=1, ascending=False)
network_performance_rankings = network_performance_rank_table['Test Network']

# Rank network on average performance gain across gene sets vs performance gain on same gene sets in previous network set
all_network_perf_gain = pd.read_csv('~/Data/Network_Performance_Gain.csv', index_col=0)
all_network_perf_gain_filt = pd.concat([network_perf_gain, all_network_perf_gain.loc[network_perf_gain.index]], axis=1)
network_perf_gain_rank_table = all_network_perf_gain_filt.rank(axis=1, ascending=False)
network_perf_gain_rankings = network_perf_gain_rank_table['Test Network']

# Network Performance
network_performance_metric_ranks = pd.concat([network_performance, network_performance_rankings, network_perf_gain, network_perf_gain_rankings], axis=1)
network_performance_metric_ranks.columns = ['Network Performance', 'Network Performance Rank', 'Network Performance Gain', 'Network Performance Gain Rank']
network_performance_metric_ranks.sort_values(by=['Network Performance Rank', 'Network Performance', 'Network Performance Gain Rank', 'Network Performance Gain'],
                                              ascending=[True, False, True, False])

# Construct network summary table
network_summary = {}
network_summary['Nodes'] = int(len(network.nodes()))
network_summary['Edges'] = int(len(network.edges()))
network_summary['Avg Node Degree'] = np.mean(list(dict(network.degree()).values()))
network_summary['Edge Density'] = 2*network_summary['Edges'] / float((network_summary['Nodes']*(network_summary['Nodes']-1)))
network_summary['Avg Network Performance Rank'] = network_performance_rankings.mean()
network_summary['Avg Network Performance Rank, Rank'] = int(network_performance_rank_table.mean().rank().loc['Test Network'])
network_summary['Avg Network Performance Gain Rank'] = network_perf_gain_rankings.mean()
network_summary['Avg Network Performance Gain Rank, Rank'] = int(network_perf_gain_rank_table.mean().rank().loc['Test Network'])
for item in ['Nodes', 'Edges', 'Avg Node Degree', 'Edge Density',
             'Avg Network Performance Rank', 'Avg Network Performance Rank, Rank',
             'Avg Network Performance Gain Rank', 'Avg Network Performance Gain Rank, Rank']:
    print(item + ':\t' + repr(network_summary[item]))
Nodes:	9432
Edges:	151352
Avg Node Degree:	32.093299406276508
Edge Density:	0.0034029582659608213
Avg Network Performance Rank:	6.53125
Avg Network Performance Rank, Rank:	7
Avg Network Performance Gain Rank:	6.53125
Avg Network Performance Gain Rank, Rank:	7
MIT
Network Evaluation Examples/Network Evaluation Example-BioPlex.ipynb
jdtibochab/network_bisb
gym-anytrading `AnyTrading` is a collection of [OpenAI Gym](https://github.com/openai/gym) environments for reinforcement learning-based trading algorithms. Trading algorithms are mostly implemented in two markets: [FOREX](https://en.wikipedia.org/wiki/Foreign_exchange_market) and [Stock](https://en.wikipedia.org/wiki/Stock). AnyTrading aims to provide some Gym environments to improve and facilitate the procedure of developing and testing RL-based algorithms in this area. This purpose is obtained by implementing three Gym environments: **TradingEnv**, **ForexEnv**, and **StocksEnv**. TradingEnv is an abstract environment which is defined to support all kinds of trading environments. ForexEnv and StocksEnv are simply two environments that inherit and extend TradingEnv. In the future sections, more explanations will be given about them but before that, some environment properties should be discussed. **Note:** For experts, it is recommended to check out the [gym-mtsim](https://github.com/AminHP/gym-mtsim) project. Installation Via PIP ```bash pip install gym-anytrading ``` From Repository ```bash git clone https://github.com/AminHP/gym-anytrading cd gym-anytrading pip install -e . or pip install --upgrade --no-deps --force-reinstall https://github.com/AminHP/gym-anytrading/archive/master.zip ``` Environment Properties First of all, **you can't simply expect an RL agent to do everything for you and just sit back on your chair in such complex trading markets!** Things need to be simplified as much as possible in order to let the agent learn in a faster and more efficient way. In all trading algorithms, the first thing that should be done is to define **actions** and **positions**. In the two following subsections, I will explain these actions and positions and how to simplify them. Trading Actions If you search on the Internet for trading algorithms, you will find them using numerous actions such as **Buy**, **Sell**, **Hold**, **Enter**, **Exit**, etc. Referring to the first statement of this section, a typical RL agent can only solve a part of the main problem in this area. If you work in trading markets you will learn that deciding whether to hold, enter, or exit a pair (in FOREX) or stock (in Stocks) is a statistical decision depending on many parameters such as your budget, pairs or stocks you trade, your money distribution policy in multiple markets, etc. It's a massive burden for an RL agent to consider all these parameters and may take years to develop such an agent! In this case, you certainly will not use this environment but you will extend your own. So after months of work, I finally found out that these actions just make things complicated with no real positive impact. In fact, they just increase the learning time and an action like **Hold** will be barely used by a well-trained agent because it doesn't want to miss a single penny. Therefore there is no need to have such numerous actions and only `Sell=0` and `Buy=1` actions are adequate to train an agent just as well. Trading Positions If you're not familiar with trading positions, refer [here](https://en.wikipedia.org/wiki/Position_\(finance\)). It's a very important concept and you should learn it as soon as possible. In a simple vision: **Long** position wants to buy shares when prices are low and profit by sticking with them while their value is going up, and **Short** position wants to sell shares with high value and use this value to buy shares at a lower value, keeping the difference as profit. 
Again, in some trading algorithms, you may find numerous positions such as **Short**, **Long**, **Flat**, etc. As discussed earlier, I use only `Short=0` and `Long=1` positions. Trading Environments As I noticed earlier, now it's time to introduce the three environments. Before creating this project, I spent so much time to search for a simple and flexible Gym environment for any trading market but didn't find one. They were almost a bunch of complex codes with many unclear parameters that you couldn't simply look at them and comprehend what's going on. So I concluded to implement this project with a great focus on simplicity, flexibility, and comprehensiveness. In the three following subsections, I will introduce our trading environments and in the next section, some IPython examples will be mentioned and briefly explained. TradingEnv TradingEnv is an abstract class which inherits `gym.Env`. This class aims to provide a general-purpose environment for all kinds of trading markets. Here I explain its public properties and methods. But feel free to take a look at the complete [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/trading_env.py). * Properties: > `df`: An abbreviation for **DataFrame**. It's a **pandas'** DataFrame which contains your dataset and is passed in the class' constructor. > > `prices`: Real prices over time. Used to calculate profit and render the environment. > > `signal_features`: Extracted features over time. Used to create *Gym observations*. > > `window_size`: Number of ticks (current and previous ticks) returned as a *Gym observation*. It is passed in the class' constructor. > > `action_space`: The *Gym action_space* property. Containing discrete values of **0=Sell** and **1=Buy**. > > `observation_space`: The *Gym observation_space* property. Each observation is a window on `signal_features` from index **current_tick - window_size + 1** to **current_tick**. So `_start_tick` of the environment would be equal to `window_size`. In addition, initial value for `_last_trade_tick` is **window_size - 1** . > > `shape`: Shape of a single observation. > > `history`: Stores the information of all steps. * Methods: > `seed`: Typical *Gym seed* method. > > `reset`: Typical *Gym reset* method. > > `step`: Typical *Gym step* method. > > `render`: Typical *Gym render* method. Renders the information of the environment's current tick. > > `render_all`: Renders the whole environment. > > `close`: Typical *Gym close* method. * Abstract Methods: > `_process_data`: It is called in the constructor and returns `prices` and `signal_features` as a tuple. In different trading markets, different features need to be obtained. So this method enables our TradingEnv to be a general-purpose environment and specific features can be returned for specific environments such as *FOREX*, *Stocks*, etc. > > `_calculate_reward`: The reward function for the RL agent. > > `_update_profit`: Calculates and updates total profit which the RL agent has achieved so far. Profit indicates the amount of units of currency you have achieved by starting with *1.0* unit (Profit = FinalMoney / StartingMoney). > > `max_possible_profit`: The maximum possible profit that an RL agent can obtain regardless of trade fees. ForexEnv This is a concrete class which inherits TradingEnv and implements its abstract methods. Also, it has some specific properties for the *FOREX* market. 
For more information refer to the [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/forex_env.py). * Properties: > `frame_bound`: A tuple which specifies the start and end of `df`. It is passed in the class' constructor. > > `unit_side`: Specifies the side you start your trading. Containing string values of **left** (default value) and **right**. As you know, there are two sides in a currency pair in *FOREX*. For example in the *EUR/USD* pair, when you choose the `left` side, your currency unit is *EUR* and you start your trading with 1 EUR. It is passed in the class' constructor. > > `trade_fee`: A default constant fee which is subtracted from the real prices on every trade. StocksEnv Same as ForexEnv but for the *Stock* market. For more information refer to the [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/stocks_env.py). * Properties: > `frame_bound`: A tuple which specifies the start and end of `df`. It is passed in the class' constructor. > > `trade_fee_bid_percent`: A default constant fee percentage for bids. For example with trade_fee_bid_percent=0.01, you will lose 1% of your money every time you sell your shares. > > `trade_fee_ask_percent`: A default constant fee percentage for asks. For example with trade_fee_ask_percent=0.005, you will lose 0.5% of your money every time you buy some shares. Besides, you can create your own customized environment by extending TradingEnv or even ForexEnv or StocksEnv with your desired policies for calculating reward, profit, fee, etc. Examples Create an environment
import gym
import gym_anytrading

env = gym.make('forex-v0')
# env = gym.make('stocks-v0')
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
- This will create the default environment. You can change any parameters such as dataset, frame_bound, etc.

Create an environment with custom parameters

I put two default datasets for [*FOREX*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/FOREX_EURUSD_1H_ASK.csv) and [*Stocks*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/STOCKS_GOOGL.csv), but you can use your own.
from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL

custom_env = gym.make(
    'forex-v0',
    df=FOREX_EURUSD_1H_ASK,
    window_size=10,
    frame_bound=(10, 300),
    unit_side='right'
)

# custom_env = gym.make(
#     'stocks-v0',
#     df=STOCKS_GOOGL,
#     window_size=10,
#     frame_bound=(10, 300)
# )
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
- It is to be noted that the first element of `frame_bound` should be greater than or equal to `window_size`.

Print some information
print("env information:")
print("> shape:", env.shape)
print("> df.shape:", env.df.shape)
print("> prices.shape:", env.prices.shape)
print("> signal_features.shape:", env.signal_features.shape)
print("> max_possible_profit:", env.max_possible_profit())
print()
print("custom_env information:")
print("> shape:", custom_env.shape)
print("> df.shape:", env.df.shape)
print("> prices.shape:", custom_env.prices.shape)
print("> signal_features.shape:", custom_env.signal_features.shape)
print("> max_possible_profit:", custom_env.max_possible_profit())
env information:
> shape: (24, 2)
> df.shape: (6225, 5)
> prices.shape: (6225,)
> signal_features.shape: (6225, 2)
> max_possible_profit: 4.054414887146586

custom_env information:
> shape: (10, 2)
> df.shape: (6225, 5)
> prices.shape: (300,)
> signal_features.shape: (300, 2)
> max_possible_profit: 1.122900180008982
MIT
README.ipynb
kaiserho/gym-anytrading
- Here `max_possible_profit` signifies that if the market didn't have trade fees, you could have earned **4.054414887146586** (or **1.122900180008982**) units of currency by starting with **1.0**. In other words, your money is almost *quadrupled*.

Plot the environment
env.reset()
env.render()
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
- **Short** and **Long** positions are shown in `red` and `green` colors.
- As you see, the starting *position* of the environment is always **Short**.

A complete example
import gym
import gym_anytrading
from gym_anytrading.envs import TradingEnv, ForexEnv, StocksEnv, Actions, Positions
from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL
import matplotlib.pyplot as plt

env = gym.make('forex-v0', frame_bound=(50, 100), window_size=10)
# env = gym.make('stocks-v0', frame_bound=(50, 100), window_size=10)

observation = env.reset()
while True:
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    # env.render()
    if done:
        print("info:", info)
        break

plt.cla()
env.render_all()
plt.show()
info: {'total_reward': -173.10000000000602, 'total_profit': 0.980652456904312, 'position': 0}
MIT
README.ipynb
kaiserho/gym-anytrading
- You can use the `render_all` method to avoid rendering on each step and prevent time-wasting.
- As you see, the first **10** points (`window_size`=10) on the plot don't have a *position*, because they aren't involved in calculating reward, profit, etc. They just display the first observations. So the environment's `_start_tick` and initial `_last_trade_tick` are **10** and **9**.

Mix with `stable-baselines` and `quantstats`

[Here](https://github.com/AminHP/gym-anytrading/blob/master/examples/a2c_quantstats.ipynb) is an example that mixes `gym-anytrading` with these well-known libraries and shows how to utilize our trading environments in other RL or trading libraries.

Extend and manipulate TradingEnv

In case you want to process data and extract features outside the environment, it can simply be done with two methods:

**Method 1 (Recommended):**
def my_process_data(env):
    start = env.frame_bound[0] - env.window_size
    end = env.frame_bound[1]
    prices = env.df.loc[:, 'Low'].to_numpy()[start:end]
    signal_features = env.df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end]
    return prices, signal_features


class MyForexEnv(ForexEnv):
    _process_data = my_process_data


env = MyForexEnv(df=FOREX_EURUSD_1H_ASK, window_size=12, frame_bound=(12, len(FOREX_EURUSD_1H_ASK)))
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
**Method 2:**
def my_process_data(df, window_size, frame_bound):
    start = frame_bound[0] - window_size
    end = frame_bound[1]
    prices = df.loc[:, 'Low'].to_numpy()[start:end]
    signal_features = df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end]
    return prices, signal_features


class MyStocksEnv(StocksEnv):
    def __init__(self, prices, signal_features, **kwargs):
        self._prices = prices
        self._signal_features = signal_features
        super().__init__(**kwargs)

    def _process_data(self):
        return self._prices, self._signal_features


prices, signal_features = my_process_data(df=STOCKS_GOOGL, window_size=30, frame_bound=(30, len(STOCKS_GOOGL)))
env = MyStocksEnv(prices, signal_features, df=STOCKS_GOOGL, window_size=30, frame_bound=(30, len(STOCKS_GOOGL)))
_____no_output_____
MIT
README.ipynb
kaiserho/gym-anytrading
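As noted earlier, the same extension pattern works for the reward and fee policies, not just `_process_data`. The sketch below overrides `_calculate_reward`; it assumes the method receives the chosen action and that internal attributes named `_current_tick` and `_last_trade_tick` exist alongside the documented `prices` property, which should be verified against the library source before use.

```python
from gym_anytrading.envs import ForexEnv


class MyRewardForexEnv(ForexEnv):
    def _calculate_reward(self, action):
        # Hypothetical reward: raw price change since the last trade tick.
        # Attribute names are assumptions based on the properties described above.
        current_price = self.prices[self._current_tick]
        last_trade_price = self.prices[self._last_trade_tick]
        return current_price - last_trade_price
```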
Generate some training data

Sample points from the true underlying function and add some noise.
import numpy as np

def black_box(x):
    return x * np.sin(x)

x_lim = [0, 20]
n_train = 10
sigma_n = 0.0

x_train = np.random.uniform(x_lim[0], x_lim[1], n_train)[None, :]
y_train = black_box(x_train).T  # Get some samples from the black-box function
y_train += np.random.normal(0, sigma_n, n_train)[:, None]  # Add noise
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb
tsitsimis/gau-pro
Pick some test points for evaluation
# test inputs
n_test = 200
x_test = np.linspace(x_lim[0], x_lim[1], n_test)[None, :]
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb
tsitsimis/gau-pro
Fit and Evaluate Gaussian Process
# fit
regressor = gp.Regressor(kernels.se_kernel())
regressor.fit(x_train, y_train)
mu, cov = regressor.predict(x_test)

plt.figure(figsize=(12, 6))
t = np.linspace(x_lim[0], x_lim[1], 100)
plt.plot(t, black_box(t), c='k', linestyle=':', label="True function")
plt.scatter(x_train, y_train, marker='+', c='r', s=220, zorder=100, label="Samples")
plt.plot(x_test[0], regressor.mu.T[0], c='k', zorder=10, label="Mean")
plt.fill_between(x_test[0],
                 regressor.mu.T[0] - 3 * np.sqrt(regressor.cov[np.diag_indices_from(regressor.cov)]),
                 regressor.mu.T[0] + 3 * np.sqrt(regressor.cov[np.diag_indices_from(regressor.cov)]),
                 facecolor='gray', alpha=0.5, label="Covariance")
plt.legend()
plt.show()
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/Regression-checkpoint.ipynb
tsitsimis/gau-pro
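For readers who want to see what the regressor above computes, here is a minimal NumPy-only sketch of the Gaussian-process posterior with a squared-exponential kernel. It is independent of the `gp`/`kernels` package used above and only illustrates the standard equations $\mu_* = K_*^T K^{-1} y$ and $\Sigma_* = K_{**} - K_*^T K^{-1} K_*$:

```python
import numpy as np

def se_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d2 = (a.reshape(-1, 1) - b.reshape(1, -1)) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(x_train, y_train, x_test, sigma_n=1e-8):
    """Posterior mean and covariance of a zero-mean GP at the test inputs."""
    K = se_kernel(x_train, x_train) + sigma_n * np.eye(len(x_train.ravel()))
    K_s = se_kernel(x_train, x_test)
    K_ss = se_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train          # posterior mean
    cov = K_ss - K_s.T @ K_inv @ K_s      # posterior covariance
    return mu, cov

# Example usage with the arrays defined above:
# mu_np, cov_np = gp_posterior(x_train.ravel(), y_train, x_test.ravel())
```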
How to define a scikit-learn pipeline and visualize it

The goal of keeping this notebook is to:
- make it available for users that want to reproduce it locally
- archive the script in the event we want to rerecord this video with an update in the UI of scikit-learn in a future release.

First we load the dataset. We need to define our data and target. In this case we will build a classification model.
import pandas as pd

ames_housing = pd.read_csv("../datasets/house_prices.csv", na_values='?')

target_name = "SalePrice"
data, target = ames_housing.drop(columns=target_name), ames_housing[target_name]
target = (target > 200_000).astype(int)
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
We inspect the first rows of the dataframe
data
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
We can cherry-pick some features and only retain this subset of data
numeric_features = ['LotArea', 'FullBath', 'HalfBath']
categorical_features = ['Neighborhood', 'HouseStyle']
data = data[numeric_features + categorical_features]
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Then we create the pipeline. The first step is to define the preprocessing steps.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='median')),
    ('scaler', StandardScaler()),
])

categorical_transformer = OneHotEncoder(handle_unknown='ignore')
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
The next step is to apply the transformations using `ColumnTransformer`
from sklearn.compose import ColumnTransformer

preprocessor = ColumnTransformer(transformers=[
    ('num', numeric_transformer, numeric_features),
    ('cat', categorical_transformer, categorical_features),
])
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Then we define the model and join the steps in order
from sklearn.linear_model import LogisticRegression

model = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression()),
])
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Let's visualize it!
from sklearn import set_config
set_config(display='diagram')
model
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
Finally we score the model
from sklearn.model_selection import cross_validate

cv_results = cross_validate(model, data, target, cv=5)
scores = cv_results["test_score"]
print("The mean cross-validation accuracy is: "
      f"{scores.mean():.3f} +/- {scores.std():.3f}")
_____no_output_____
CC-BY-4.0
notebooks/video_pipeline.ipynb
Imarcos/scikit-learn-mooc
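Cross-validation gives an aggregate score; if you also want a single fitted model to inspect or reuse, a held-out split works the same way. A small illustrative sketch using the `model`, `data`, and `target` defined above:

```python
from sklearn.model_selection import train_test_split

data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0)

model.fit(data_train, target_train)
print(f"Held-out accuracy: {model.score(data_test, target_test):.3f}")
```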
Pipeline: Clean Continuous Features

Using the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition.

This dataset contains information about 891 people who were on board the ship when it sank on April 15th, 1912. As noted in the description on Kaggle's website, some people aboard the ship were more likely to survive the wreck than others. There were not enough lifeboats for everybody, so women, children, and the upper class were prioritized. Using the information about these 891 passengers, the challenge is to build a model to predict which people would survive based on the following fields:

- **Name** (str) - Name of the passenger
- **Pclass** (int) - Ticket class
- **Sex** (str) - Sex of the passenger
- **Age** (float) - Age in years
- **SibSp** (int) - Number of siblings and spouses aboard
- **Parch** (int) - Number of parents and children aboard
- **Ticket** (str) - Ticket number
- **Fare** (float) - Passenger fare
- **Cabin** (str) - Cabin number
- **Embarked** (str) - Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)

**This notebook will implement some of the cleaning that was done in Section 2: EDA & Data Cleaning**

![Clean Data](../../img/clean_data.png)

Read in Data
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline

titanic = pd.read_csv('../../../titanic.csv')
titanic.head()
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Clean continuous variables

1. Fill in missing values for `Age`
2. Combine `SibSp` & `Parch`
3. Drop irrelevant/repetitive variables (`SibSp`, `Parch`, `PassengerId`)

Fill missing for `Age`
titanic.isnull().sum()

titanic['Age'].fillna(titanic['Age'].mean(), inplace=True)
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Combine `SibSp` & `Parch`
for i, col in enumerate(['SibSp', 'Parch']):
    plt.figure(i)
    sns.catplot(x=col, y='Survived', data=titanic, kind='point', aspect=2)

titanic['Family_cnt'] = titanic['SibSp'] + titanic['Parch']
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Drop unnnecessary variables
titanic.drop(['PassengerId', 'SibSp', 'Parch'], axis=1, inplace=True)
titanic.head(10)
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
Write out cleaned data
titanic.to_csv('../../../titanic_cleaned.csv', index=False)
_____no_output_____
MIT
ml_foundations/05_Pipeline/05_02/End/05_02.ipynb
joejoeyjoseph/playground
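Since the same cleaning will be needed whenever new passenger data arrives, it can help to wrap the steps above in a single function. This is an illustrative sketch of the three steps performed in this notebook, not part of the original code:

```python
def clean_continuous_features(df):
    """Apply the cleaning steps from this notebook to a raw Titanic-style frame."""
    df = df.copy()
    df['Age'] = df['Age'].fillna(df['Age'].mean())               # fill missing ages
    df['Family_cnt'] = df['SibSp'] + df['Parch']                  # combine family fields
    return df.drop(['PassengerId', 'SibSp', 'Parch'], axis=1)     # drop redundant columns

# Example: rebuild the cleaned dataset from the raw file
# titanic_cleaned = clean_continuous_features(pd.read_csv('../../../titanic.csv'))
```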
WeatherPy
----
Note: Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import time from random import uniform from scipy.stats import linregress from scipy import stats # Import API key from config import weather_api_key # Incorporated citipy to determine city based on latitude and longitude from citipy import citipy # Range of latitudes and longitudes lat_range = (-90, 90) lng_range = (-180, 180)
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Generate Cities List
# List for holding lat_lngs and cities lat_lngs = [] cities = [] # Create a set of random lat and lng combinations lats = np.random.uniform(low=-90.000, high=90.000, size=1500) lngs = np.random.uniform(low=-180.000, high=180.000, size=1500) lat_lngs = zip(lats, lngs) # Identify nearest city for each lat, lng combination for lat_lng in lat_lngs: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name # If the city is unique, then add it to a our cities list if city not in cities: cities.append(city) # Print the city count to confirm sufficient count len(cities) cities
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Perform API Calls

* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
# Set up the url url = "http://api.openweathermap.org/data/2.5/weather?" units = "imperial" query_url = f"{url}appid={weather_api_key}&units={units}&q=" print(query_url) #creating lists to store extracted values per city city_name = [] country = [] date = [] lat = [] lng = [] temp = [] humidity = [] cloudiness = [] wind = [] city_id = [] #setting the counter values record = 0 print("Beginning Data Retrieval") print("--------------------------------") #creating loop to extract values per city and add them to the lists above for city in cities: try: response = requests.get(f"{query_url}{city}").json() country.append(response["sys"]["country"]) date.append(response["dt"]) lat.append(response["coord"]["lat"]) lng.append(response["coord"]["lon"]) temp.append(response["main"]["temp_max"]) humidity.append(response["main"]["humidity"]) cloudiness.append(response["clouds"]["all"]) wind.append(response["wind"]["speed"]) city_record = response["name"] city_id = response["id"] #creating an if statment to print the if record > 600: break else: record += 1 city_name.append(response["name"]) print(f"The city is {city_record} and the city id is {city_id}.") #using time.sleep to create time delay time.sleep(3) except : print("City not found. Skipping...") continue print("-------------------------------") print("Data Retrieval Complete") print("-------------------------------")
Beginning Data Retrieval -------------------------------- The city is Mehrān and the city id is 124291. The city is Busselton and the city id is 2075265. The city is Bulgan and the city id is 2032201. The city is Bethel and the city id is 5282297. City not found. Skipping... The city is Mar del Plata and the city id is 3430863. The city is Isangel and the city id is 2136825. The city is Tuktoyaktuk and the city id is 6170031. The city is Kirakira and the city id is 2178753. The city is Meulaboh and the city id is 1214488. The city is Rikitea and the city id is 4030556. The city is Nālūt and the city id is 2214432. The city is Riohacha and the city id is 3670745. City not found. Skipping... The city is Pacific Grove and the city id is 5380437. The city is Ati and the city id is 2436400. The city is Anadyr and the city id is 2127202. The city is Avarua and the city id is 4035715. The city is Souillac and the city id is 933995. The city is Punta Arenas and the city id is 3874787. City not found. Skipping... The city is Mataura and the city id is 6201424. The city is East London and the city id is 1006984. The city is Inuvik and the city id is 5983607. The city is San Patricio and the city id is 3985168. The city is Nikolskoye and the city id is 546105. The city is Puerto Escondido and the city id is 3520994. The city is Bandarbeyla and the city id is 64814. The city is Hilo and the city id is 5855927. The city is Castro and the city id is 3466704. The city is Al Bardīyah and the city id is 80509. The city is Hermanus and the city id is 3366880. The city is Vaini and the city id is 4032243. The city is Dakar and the city id is 2253354. The city is Luba and the city id is 2309528. The city is Katsuura and the city id is 2112309. The city is Tiznit Province and the city id is 2527087. The city is Mount Gambier and the city id is 2156643. The city is Cape Town and the city id is 3369157. The city is Gamba and the city id is 2400547. The city is Altay and the city id is 1529651. The city is Payson and the city id is 5779548. The city is Tezu and the city id is 1254709. The city is Erbaa and the city id is 747489. The city is Qaanaaq and the city id is 3831208. The city is Marabá and the city id is 3395503. The city is Bluff and the city id is 2206939. The city is Thompson and the city id is 6165406. The city is Khatanga and the city id is 2022572. The city is Albany and the city id is 5106841. The city is Port Blair and the city id is 1259385. The city is Esperance and the city id is 2071860. The city is Ushuaia and the city id is 3833367. The city is Cabo San Lucas and the city id is 3985710. The city is Omboué and the city id is 2396853. The city is Faanui and the city id is 4034551. The city is Arraial do Cabo and the city id is 3471451. The city is Rodrigues Alves and the city id is 3665210. The city is Port Macquarie and the city id is 2152659. The city is Catalina and the city id is 5288784. City not found. Skipping... The city is Beloha and the city id is 1067565. The city is Wanning and the city id is 1791779. The city is Hof and the city id is 2902768. The city is Morondava and the city id is 1058381. The city is Carnarvon and the city id is 2074865. The city is Shache and the city id is 1280037. The city is Constitución and the city id is 3893726. The city is Abhā and the city id is 110690. The city is Kapaa and the city id is 5848280. The city is Neiafu and the city id is 4032420. City not found. Skipping... The city is Itoman and the city id is 1861280. 
The city is Kamenka and the city id is 553766. City not found. Skipping... The city is Arman' and the city id is 2127060. The city is George Town and the city id is 1735106. The city is Xai-Xai and the city id is 1024552. The city is Naze and the city id is 1855540. The city is Butaritari and the city id is 2110227. The city is Maţāy and the city id is 352628. The city is Margate and the city id is 2643044. The city is Saskylakh and the city id is 2017155. The city is Bengkulu and the city id is 1649150. The city is Mahébourg and the city id is 934322. The city is Torbay and the city id is 6167817. City not found. Skipping... The city is Prince Rupert and the city id is 6113406. The city is Port Alfred and the city id is 964432. The city is Tasiilaq and the city id is 3424607. The city is Provideniya and the city id is 4031574. The city is Den Helder and the city id is 2757220. The city is Peniche and the city id is 2264923. The city is Noumea and the city id is 2139521. The city is Hofn and the city id is 2630299. The city is Atuona and the city id is 4020109. The city is Lebu and the city id is 3883457. The city is Ponta do Sol and the city id is 2264557. The city is Tazovsky and the city id is 1489853. The city is Kavieng and the city id is 2094342. The city is Chingola and the city id is 919009. The city is San Quintín and the city id is 3984997. The city is Jamestown and the city id is 5122534. The city is Isetskoye and the city id is 1505466. The city is Cherskiy and the city id is 2126199. The city is Erzin and the city id is 296852. The city is Dikson and the city id is 1507390. The city is Longyearbyen and the city id is 2729907. The city is Sampit and the city id is 1628884. The city is Hithadhoo and the city id is 1282256. City not found. Skipping... The city is Novy Urengoy and the city id is 1496511. The city is Moussoro and the city id is 2427336. The city is Kruisfontein and the city id is 986717. The city is Russell and the city id is 4047434. The city is Sabha and the city id is 2212775. The city is Alice Springs and the city id is 2077895. The city is Fortuna and the city id is 5563839. The city is Chokurdakh and the city id is 2126123. The city is Sinop Province and the city id is 739598. The city is Haikou and the city id is 1809078. The city is Sitka and the city id is 5557293. The city is Lorengau and the city id is 2092164. City not found. Skipping... The city is Vao and the city id is 2137773. The city is Igarka and the city id is 1505991. City not found. Skipping... The city is Hobart and the city id is 2163355. The city is Healdsburg and the city id is 5356012. The city is Eloy and the city id is 5294167. The city is Upernavik and the city id is 3418910. The city is Hervey Bay and the city id is 2146219. The city is Øksfjord and the city id is 778362. The city is Phek and the city id is 1259784. The city is Verkhovazh'ye and the city id is 474354. The city is Leningradskiy and the city id is 2123814. The city is Haines Junction and the city id is 5969025. The city is Rabak and the city id is 368277. The city is Plettenberg Bay and the city id is 964712. The city is Safi and the city id is 2537881. The city is Tondano and the city id is 1623424. The city is Ribeira Grande and the city id is 3372707. The city is Port Elizabeth and the city id is 964420. The city is Wonthaggi and the city id is 2154826. The city is Clyde River and the city id is 5924351. The city is Nizhniy Baskunchak and the city id is 520798. The city is Airai and the city id is 1651810. 
The city is Am Timan and the city id is 245338. The city is Vila Franca do Campo and the city id is 3372472. The city is Aklavik and the city id is 5882953. The city is Dukat and the city id is 2125906. The city is Saint-Philippe and the city id is 935215. The city is New Norfolk and the city id is 2155415. The city is Cassilândia and the city id is 3466750. The city is Guerrero Negro and the city id is 4021858. The city is Galle and the city id is 1246294. The city is Coyhaique and the city id is 3894426. The city is Kloulklubed and the city id is 7671223. The city is Bambous Virieux and the city id is 1106677. The city is Hrubieszów and the city id is 770966. The city is Ahipara and the city id is 2194098. The city is Mount Isa and the city id is 2065594. The city is Nome and the city id is 5870133. The city is Ostrovnoy and the city id is 556268. The city is Broome and the city id is 5110365. The city is Severo-Kuril'sk and the city id is 2121385. The city is Yellowknife and the city id is 6185377. The city is Tautira and the city id is 4033557. The city is Broken Hill and the city id is 2173911. The city is Zabaykal'sk and the city id is 2012780.
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Convert Raw Data to DataFrame

* Export the city data into a .csv.
* Display the DataFrame.
# Create a data frame from the data weather_dict = { #key on left, right side is values "City": city_name, "Cloudiness": cloudiness, "Country": country, "Date": date, "Humidity": humidity, "Lat": lat, "Lng": lng, "Max Temp": temp, "Wind Speed": wind } # Put data into data frame weather_data_df = pd.DataFrame(weather_dict) # Push the new Data Frame to a new CSV file weather_data_df.to_csv("../weather_data.csv", encoding="utf-8", index=False, header=True) # Display the new data frame weather_data_df.head() #perform count on data frame, to make sure all columns are filled weather_data_df.count()
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Inspect the data and remove the cities where the humidity > 100%.

----

Skip this step if there are no cities that have humidity > 100%.
weather_data_df[weather_data_df["Humidity"]>100] # Get the indices of cities that have humidity over 100%. weather_data_df = weather_data_df.loc[(weather_data_df["Humidity"] < 100)] weather_data_df # Make a new DataFrame equal to the city data to drop all humidity outliers by index. # Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data". clean_city_data_df = weather_data_df.dropna(how='any') clean_city_data_df.count()
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Plotting the Data

* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.

Latitude vs. Temperature Plot
# Plot the graph plt.scatter(lat, temp, marker="o", facecolors="tab:blue", edgecolors="black") # Setting the title and axises plt.title("City Latitude vs. Max Temperature (9/2020)") plt.xlabel("Latitude") plt.ylabel("Max Temperature (F)") # Add in a grid for the chart plt.grid() # Save our graph and show the grap plt.tight_layout() plt.savefig("../Images/city_lat_vs_max_temp.png") plt.show()
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Latitude vs. Humidity Plot
# Plot the graph plt.scatter(lat, humidity, marker="o", facecolors="tab:blue", edgecolors="black") # Setting the title and axises plt.title("City Latitude vs. Humidity (9/2020)") plt.xlabel("Latitude") plt.ylabel("Humidity (%)") # Add in a grid for the chart plt.grid() # Setting graph limits plt.xlim(-60, 85) plt.ylim(0, 105) # Save our graph and show the grap #plt.tight_layout() plt.savefig("../Images/city_lat_vs_humidity.png") plt.show()
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Latitude vs. Cloudiness Plot
# Plot the graph plt.scatter(lat, cloudiness, marker="o", facecolors="tab:blue", edgecolors="black") # Setting the title and axises plt.title("City Latitude vs. Cloudiness (9/2020)") plt.xlabel("Latitude") plt.ylabel("Cloudiness (%)") # Add in a grid for the chart plt.grid() # Save our graph and show the grap plt.tight_layout() plt.savefig("../Images/city_lat_vs_cloudiness.png") plt.show()
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Latitude vs. Wind Speed Plot
# Plot the graph plt.scatter(lat, wind, marker="o", facecolors="tab:blue", edgecolors="black") # Setting the title and axises plt.title("City Latitude vs. Wind Speed (9/2020)") plt.xlabel("Latitude") plt.ylabel("Wind Speed (MPH)") # Add in a grid for the chart plt.grid() # Save our graph and show the grap plt.tight_layout() plt.savefig("../Images/city_lat_vs_wind_speed.png") plt.show()
_____no_output_____
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Linear Regression

Northern Hemisphere - Max Temp vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0]

plt.scatter(north_hem['Lat'], north_hem['Max Temp'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Max Temp')
plt.title("Northern Hemisphere - Max Temp vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_values = north_hem['Max Temp']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(x_values, regress_values, "r-")

# Printing R Value
print(f"R Val is {rvalue**2}")
R Val is 0.694055871357037
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Max Temp vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0]

plt.scatter(south_hem['Lat'], south_hem['Max Temp'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Max Temp (F)')
plt.title("Southern Hemisphere - Max Temp vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_values = south_hem['Max Temp']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(x_values, regress_values, "r-")

# Printing R Value
print(f"R Val is {rvalue**2}")
R Val is 0.694055871357037
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0]

plt.scatter(north_hem['Lat'], north_hem['Humidity'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.title("Northern Hemisphere - Humidity vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_values = north_hem['Humidity']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(x_values, regress_values, "r-")

# Printing R Value
print(f"R Val is {rvalue**2}")
R Val is 0.00679787383915855
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0]

plt.scatter(south_hem['Lat'], south_hem['Humidity'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.title("Southern Hemisphere - Humidity vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_values = south_hem['Humidity']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(x_values, regress_values, "r-")

# Printing R Value
print(f"R Val is {rvalue**2}")
R Val is 0.00679787383915855
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0]

plt.scatter(north_hem['Lat'], north_hem['Cloudiness'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.title("Northern Hemisphere - Cloudiness vs. Latitude Linear Regression")

x_values = north_hem['Lat']
y_values = north_hem['Cloudiness']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(x_values, regress_values, "r-")

# Printing R Value
print(f"R Val is {rvalue**2}")
R Val is 0.00044310179247288993
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0]

plt.scatter(south_hem['Lat'], south_hem['Cloudiness'], marker="o", facecolors="dodgerblue")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.title("Southern Hemisphere - Cloudiness vs. Latitude Linear Regression")

x_values = south_hem['Lat']
y_values = south_hem['Cloudiness']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.plot(x_values, regress_values, "r-")

# Printing R Value
print(f"R Val is {rvalue**2}")
R Val is 0.00044310179247288993
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
north_hem = weather_data_df.loc[weather_data_df['Lat'] > 0,] plt.scatter(north_hem['Lat'], north_hem['Wind Speed'], marker="o", facecolors="dodgerblue") plt.xlabel('Latitude') plt.ylabel('Wind Speed (MPH)') plt.title("Northern Hemisphere - Wind Speed vs. Latitude Linear Regression") x_values = north_hem['Lat'] y_values = north_hem['Wind Speed'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.plot(x_values,regress_values,"r-") #Printing R Value print(f"R Val is {rvalue**2}")
R Val is 0.02084202630425654
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
south_hem = weather_data_df.loc[weather_data_df['Lat'] < 0,] plt.scatter(south_hem['Lat'], south_hem['Wind Speed'], marker="o", facecolors="dodgerblue") plt.xlabel('Latitude') plt.ylabel('Wind Speed (MPH)') plt.title("Southern Hemisphere - Wind Speed vs. Latitude Linear Regression") x_values = south_hem['Lat'] y_values = south_hem['Wind Speed'] (slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values) regress_values = x_values * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) plt.plot(x_values,regress_values,"r-") #Printing R Value print(f"R Val is {rvalue**2}")
R Val is 0.02084202630425654
ADSL
**WeatherPy**/WeatherPy.ipynb
drupps/python-api-challenge
Figure 6
from sympy import symbols, exp, solve, logcombine, simplify, Piecewise, lambdify, N, init_printing, Eq import numpy import scipy.stats as ss from sympy.physics.units import seconds, siemens, volts, farads, amperes, milli, micro, nano, pico, ms, s, kg, meters from matplotlib import pyplot as plt import matplotlib from matplotlib.colors import LinearSegmentedColormap import matplotlib.patches as patches plt.style.use('neuron_color') import os import sys sys.path.append('../') from Linearity import Neuron import lmfit from pickle import dump def simpleaxis(axes, every=False, outward=False, hideTitle=True): if not isinstance(axes, (list, numpy.ndarray)): axes = [axes] for ax in axes: ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) if (outward): ax.spines['bottom'].set_position(('outward', 10)) ax.spines['left'].set_position(('outward', 10)) if every: ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() if hideTitle: ax.set_title('') from IPython.display import display, Markdown, Image init_printing()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 A Circuit diagram
prefix = '/home/bhalla/Documents/Codes/data'
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 B: Fitting voltage clamp data to get parameters
analysisFile = prefix + '/media/sahil/NCBS_Shares_BGStim/patch_data/170530/c1_EI/plots/c1_EI.pkl' plotDir = os.path.dirname(analysisFile) neuron = Neuron.load(analysisFile)
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
$g(t) = \bar{g}\frac{( e^\frac{\delta_{onset} - t }{\tau_{decay}} - e^\frac{\delta_{onset} - t }{\tau_{rise}})}{- \left(\frac{\tau_{rise}}{\tau_{decay}}\right)^{\frac{\tau_{decay}}{\tau_{decay} - \tau_{rise}}} + \left(\frac{\tau_{rise}}{\tau_{decay}}\right)^{\frac{\tau_{rise}}{\tau_{decay} - \tau_{rise}}}}$
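A minimal numpy sketch of the normalized double-exponential above, assuming the averaged excitatory rise/decay constants (7 ms and 16 ms) used later in the notebook; the peak time and amplitude factor follow from setting $dg/dt = 0$ so that the peak equals $\bar{g}$. The function name is illustrative, not the notebook's.

```python
# Sketch only: evaluates the normalized double-exponential conductance above.
# t_peak and the amplitude factor A follow from dg/dt = 0, so g(t_peak) = g_max.
# tau values (7 ms rise, 16 ms decay) mirror the averaged excitatory estimates below.
import numpy as np

def double_exponential(t, g_max, tau_rise, tau_decay, delta=0.0):
    t_peak = delta + (tau_decay * tau_rise / (tau_decay - tau_rise)) * np.log(tau_decay / tau_rise)
    A = 1.0 / (np.exp(-(t_peak - delta) / tau_decay) - np.exp(-(t_peak - delta) / tau_rise))
    g = g_max * A * (np.exp(-(t - delta) / tau_decay) - np.exp(-(t - delta) / tau_rise))
    return np.where(t >= delta, g, 0.0)

t = np.arange(0.0, 100.0, 0.05)                       # ms
g = double_exponential(t, g_max=1.0, tau_rise=7.0, tau_decay=16.0)
assert abs(g.max() - 1.0) < 1e-4                      # peak is normalized to g_max
```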
def fitFunctionToPSP(time, vector, t_0=0, g_max=0): ''' Fits using lmfit ''' def _doubleExponentialFunction(t, t_0, tOn, tOff, g_max): ''' Returns the shape of an EPSP as a double exponential function ''' tPeak = t_0 + float(((tOff * tOn)/(tOff-tOn)) * numpy.log(tOff/tOn)) A = 1./(numpy.exp(-(tPeak-t_0)/tOff) - numpy.exp(-(tPeak-t_0)/tOn)) g = [ g_max * A * (numpy.exp(-(t_point-t_0)/tOff) - numpy.exp(-(t_point-t_0)/tOn)) if t_point >= t_0 else 0. for t_point in t] return numpy.array(g) model = lmfit.Model(_doubleExponentialFunction) # Fixing values of variables from data # Onset time if not t_0: model.set_param_hint('t_0', value =max(time)/10., min=0., max = max(time)) else: model.set_param_hint('t_0', value = t_0, vary=False) # g_max if not g_max: model.set_param_hint('g_max', value = max(vector)/10., min = 0., max = max(vector)) else: model.set_param_hint('g_max', value = g_max, vary=False) model.set_param_hint('tOn', value =max(time)/5.1 , min = 0., max = max(time)) model.set_param_hint('t_ratio', value =10., min=1.05) model.set_param_hint('tOff', min = 0., expr='tOn*t_ratio') model.set_param_hint('t_peak', expr = 't_0 + ((tOff * tOn)/(tOff-tOn)) * log(tOff/tOn)') pars = model.make_params() result = model.fit(vector, pars, t=time) # print (result.fit_report()) return result n = {key:value for key,value in neuron} for numSq in set(n[1]).intersection(set(n[2])): for i in set(n[1][numSq].trial).intersection(set(n[2][numSq].trial)): if i == 3 and numSq == 7: exc = -1e9*n[1][numSq].trial[i].interestWindow inh = 1e9*n[2][numSq].trial[i].interestWindow time = numpy.arange(len(n[1][numSq].trial[i].interestWindow))*n[1][numSq].trial[i].samplingTime exc_fit = fitFunctionToPSP(time, exc) inh_fit = fitFunctionToPSP(time, inh) print (exc_fit.fit_report()) print (inh_fit.fit_report()) fig,ax = plt.subplots() ax.plot(time*1e3, exc, alpha=0.6, c='indigo') ax.set_xlabel("Time (ms)") ax.set_ylabel("Current (pA)") ax.plot(time*1e3, exc_fit.best_fit, label="Excitation", c='indigo') ax.plot(time*1e3, -inh, alpha=0.6, c='k') ax.plot(time*1e3, -inh_fit.best_fit, label="Inhibition", c='k') ax.annotate('Excitation', (50,100), (50,100), xycoords='data',color='indigo') ax.annotate('Inhibition', (50,-300), (50,-300), xycoords='data',color='k') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6b.pkl','wb')) plt.show() samplingRate = 20 # kHz, to get milliseconds sample_every = 10 # points timeStep, maxTime = (sample_every*1.)/ samplingRate, 100. # ms trange = numpy.arange( 0., maxTime, timeStep) # We will always use 100. ms timecourse of PSPs. #### Range of $g_e$ explored emax = 3. 
e_step = 0.5 erange = numpy.arange(0., emax, e_step) #### Range of proportionality ($P$) between $E$ and $I$ prop_array = numpy.arange(0, 6, 1) ## Setting up the variables, parameters and units for simulation t, P, e_r, e_d, delta_e, rho_e, g_e, i_r, i_d, delta_i, rho_i, g_i, b, Cm, g_L = symbols( 't P \\tau_{er} \\tau_{ed} \\delta_e \\rho_e \\bar{g}_e \\tau_{ir} \\tau_{id} \\delta_i \\rho_i \\bar{g}_i \\beta C_m \\bar{g}_L', positive=True, real=True) leak_rev, e_rev, i_rev, Vm = symbols( 'Leak_{rev} Exc_{rev} Inh_{rev} V_m', real=True) SymbolDict = { t: "Time (ms)", P: "Proportion of $g_i/g_e$", e_r: "Excitatory Rise (ms)", e_d: "Excitatory Fall (ms)", delta_e: "Excitatory onset time (ms)", rho_e: "Excitatory $tau$ ratio (fall/rise)", g_e: "Excitatory max conductance", i_r: "Inhibitory Rise (ms)", i_d: "Inhibitory Fall(ms)", delta_i: "Inhibitory onset time(ms)", rho_i: "Inhibitory $tau$ ratio (fall/rise)", g_i: "Inhibitory max conductance", b: "Inhibitory/Excitatory $tau$ rise ratio" } unitsDict = { 's': seconds, 'exp': exp, 'S': siemens, 'V': volts, 'A': amperes, 'm': meters, 'kg': kg } # This is for lamdify nS, pF, mV, pA = nano * siemens, pico * farads, milli * volts, pico*amperes ### Estimates from data and averaging them to get a number estimateDict = { P: (1,5), #e_r: (1.5 * ms, 5 * ms), #e_d: (8. * ms, 20. * ms), e_r: (7. * ms, 7. * ms), e_d: (16. * ms, 16. * ms), delta_e: (0. * ms, 0. * ms), rho_e: (2., 7.), g_e: (0.02 * nS, 0.25 * nS), #i_r: (1.5 * ms, 5. * ms), #i_d: (14. * ms, 60. * ms), i_r: (13. * ms, 13. * ms), i_d: (27. * ms, 27. * ms), delta_i: (2. * ms, 4. * ms), rho_i: (5., 20.), g_i: (0.04 * nS, 0.5 * nS), b: (0.5, 5.) } averageEstimateDict = { key: value[0] + value[1] / 2 for key, value in estimateDict.items() } ### Approximating the rest from literature approximateDict = { g_L: 6.25 * nS, # Changing from 10 to 6.25 e_rev: 0. * mV, i_rev: -70. * mV, leak_rev: -65. * mV, Cm: 100 * pF } sourceDict = { g_L: "None", e_rev: "None", i_rev: "None", leak_rev: "None", Cm: "Neuroelectro.org" }
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
| Variable | Meaning | Range ||---|---|---||$t$|Time (ms)|0-100||$P$|Proportion of $g_i/g_e$|2-4||$\tau_{er}$|Excitatory Rise (ms)|1.5-5||$\tau_{ed}$|Excitatory Fall (ms)|8-20||$\delta_e$|Excitatory onset time (ms)|0-0||$\rho_e$|Excitatory $\tau$ ratio (fall/rise)|2-7||$\bar{g}_e$|Excitatory max conductance|0.02-0.25||$\tau_{ir}$|Inhibitory Rise (ms)|1.5-5||$\tau_{id}$|Inhibitory Fall (ms)|14-60||$\delta_i$|Inhibitory onset time (ms)|3-15||$\rho_i$|Inhibitory $\tau$ ratio (fall/rise)|5-20||$\bar{g}_i$|Inhibitory max conductance|0.04-0.5||$\beta$|Inhibitory/Excitatory $\tau$ rise ratio|0.5-5| | Variable | Meaning | Source | Value ||---|---|---|---||$g_L$|Leak conductance| Fernandos and White, J. Neuro. (2010) | 10 nS ||$Exc_{rev}$|Excitatory reversal|Calculated (Methods)| 0 mV||$Inh_{rev}$|Inhibitory reversal |Calculated (Methods)| -70 mV ||$Leak_{rev}$|Leak reversal |Fernandos and White, J. Neuro. (2010)| -65 mV ||$C_m$|Membrane capacitance |neuroelectro.org| 100 pF| ---
### Double exponential to explain the net synaptic conductance. alpha = exp(-(t - delta_e) / e_d) - exp(-(t - delta_e) / e_r) alpha_prime = alpha.diff(t) theta_e = solve(alpha_prime, t) # Time to peak theta_e = logcombine(theta_e[0]) simplify(theta_e.subs(averageEstimateDict)) alpha_star = simplify(alpha.subs(t, theta_e).doit()) ### Finding maximum of the curve and substituting ratio of taus g_E = Piecewise((0. * nS, t / ms < delta_e / ms), (g_e * (alpha / alpha_star), True)) ### Final equation for Excitation normalized to be maximum at $g_e$ ### Doing the same with inhibition g_I = g_E.xreplace({ g_e: g_i, rho_e: rho_i, e_r: i_r, e_d: i_d, delta_e: delta_i }) alpha_I = alpha.xreplace({e_r: i_r, e_d: i_d, delta_e: delta_i}) alpha_star_I = alpha_star.xreplace({e_r: i_r, e_d: i_d}) g_I = Piecewise((0. * nS, t / ms < delta_i / ms), (g_i * (alpha_I / alpha_star_I), True)) ### Now finding the control peak using difference of these double-exponentials compartment = Eq((1 / Cm) * (g_E * (Vm - e_rev) + g_I * (Vm - i_rev) + g_L * (Vm - leak_rev)), Vm.diff(t)) Vm_t = solve(compartment, Vm, rational=False, simplify=True) check_vm_t = Vm_t[0].subs({ i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e,g_i, P] }).subs(approximateDict).subs({ g_i: P * g_e }) ### Now finding the control peak using difference of these double-exponentials (checking with this form of the equation) compartment = Eq((1 / Cm) * (g_E * (e_rev - Vm) + g_I * (i_rev - Vm) + g_L * (leak_rev - Vm)), Vm.diff(t)) Vm_t = solve(compartment, Vm, rational=False, simplify=True) check_vm_t = Vm_t[0].subs({ i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e,g_i, P] }).subs(approximateDict).subs({ g_i: P * g_e }) f = lambdify((g_e, P, t), check_vm_t/mV, (unitsDict, "numpy"))
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 C Divisive Inhibition: Inhibition proportional to Excitation, or $g_i = P \times g_e$
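As an aside, the divisive effect of proportional inhibition can already be seen in a steady-state approximation of the single-compartment equation (setting $dV_m/dt = 0$); the sketch below is only an illustration using the values from `approximateDict`, not the time-dependent solution the notebook actually evaluates.

```python
# Illustrative steady-state sketch (assumes dVm/dt = 0), not the notebook's
# time-dependent symbolic solution. With g_i = P * g_e the inhibitory term grows
# with excitation and scales the denominator, which is what makes it divisive.
import numpy as np

g_L, E_L, E_e, E_i = 6.25, -65.0, 0.0, -70.0      # nS and mV, as in approximateDict
g_e = np.arange(0.0, 3.0, 0.5)                    # nS, same range as erange

for P in range(6):                                # proportionality g_i / g_e, as in prop_array
    g_i = P * g_e
    Vm = (g_L * E_L + g_e * E_e + g_i * E_i) / (g_L + g_e + g_i)
    print(P, np.round(Vm - E_L, 2))               # steady-state depolarization (mV)
```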
di_exc = [[float(f(e * nS, 0., dt * ms)) for dt in trange] for e in erange] di_control = {prop: [[float(f(e * nS, prop, dt * ms)) for dt in trange] for e in erange] for prop in prop_array} fig, ax = plt.subplots() # plt.style.context('neuron-color') handles, labels = [], [] for prop in prop_array: v_max, e_max = [], [] for con_trace,e_t in zip(di_control[prop], di_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max, v_max, s=10)) ax.plot(e_max, v_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Control $V_{max}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, di_control[prop][5]) #ax2.set_xlabel("Time") #ax2.set_ylabel("Membrane potential (mV)") # fig.legend(handles, labels, loc ='center right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6c.pkl','wb')) # ax.set_title("Divisive Inhibition") plt.show() fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: ttp, e_max = [], [] for con_trace,e_t in zip(di_control[prop], di_exc): ttp.append(numpy.argmax(con_trace)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max[1:], ttp[1:], s=10)) ax.plot(e_max[1:], ttp[1:], '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Time to peak $t_{peak}$") ax.set_xlim(0,15) ax.set_ylim(0,55) # fig.legend(handles, labels, loc ='center right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) # dump(fig,file('figures/fig6/6g.pkl','wb')) plt.show() handles, labels = [], [] for prop in prop_array: fig, ax = plt.subplots() ttp, e_max = [], [] for con_trace,e_t in zip(di_control[prop], di_exc): ax.plot(trange, con_trace,c='k') fig.set_figwidth(12) plt.show() threshold = 5.5 fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: v_max, e_max, spk_t = [], [], [] for con_trace,e_t in zip(di_control[prop], di_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) spiking = numpy.where((numpy.array(con_trace) - float(approximateDict[leak_rev]/mV)) > threshold)[0] if len(spiking): spk_t.append(spiking[0]) else: spk_t.append(numpy.nan) # print(numpy.where(e_t>threshold)) handles.append(ax.plot(erange, spk_t, '.-')) # ax.plot(e_max, v_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Spike Time $t_{sp}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, dn_control[prop][5]) # ax.hlines(y=threshold, linestyle='--') #ax2.set_xlabel("Time") #ax2.set_ylabel("Membrane potential (mV)") # fig.legend(handles, labels, loc ='right') fig.set_figwidth(2) fig.set_figheight(2) simpleaxis(ax) #dump(fig,file('figures/fig6/6e.pkl','wb')) # ax.set_title("Divisive Normalization", fontsize=18) plt.show() print ( "Constant $delta_i$ was {:.1f} ms".format(averageEstimateDict[delta_i]/ms))
Constant $delta_i$ was 4.0 ms
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 DEF: Divisive Normalization: Inhibition proportional to Excitation, or $g_i = P \times g_e$, and $\delta_i$ decreasing exponentially with $g_e$ 6 D Changing $\delta_i = \delta_{min} + m e^{-k\times g_e}$
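A quick numeric check of this delay rule, using the constants defined in the next cell ($\delta_{min}$ = 2.54 ms, $m$ = 18.15 ms, $k$ = 1.43 per nS); this is just a plain-float sketch of the relation, separate from the unit-aware sympy lambda below.

```python
# Numeric sketch of the delay rule above, using the constants fitted below
# (minDelay = 2.54 ms, m = 18.15 ms, k = 1.43 per nS); plain floats, no units.
import numpy as np

delta_min, m, k = 2.54, 18.15, 1.43
for g_e in [0.0, 0.5, 1.0, 2.0, 4.0]:             # nS
    print(g_e, round(delta_min + m * np.exp(-k * g_e), 2))
```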
time_erange = numpy.linspace(0.,4.,10) d = lambda minDelay,k,e: minDelay + m*exp(-(k*e)) nS = nano*siemens k, m, minDelay = 1.43/nS, 18.15*ms, 2.54*ms maxDelay = (minDelay + m)/ms fig, ax = plt.subplots() ax.scatter(time_erange, [d(minDelay,k,e*nS)/ms for e in time_erange], s=40, facecolor='k', edgecolor='k') ax.set_xlabel("$g_{exc}$ (nS)") ax.set_ylabel("$\\delta_i$ (ms)") fig.set_figwidth(1.5) fig.set_figheight(1.5) ax.set_xlim(0,4.5) ax.set_ylim(0, 13 ) ax.set_xticks(range(4)) ax.set_yticks(range(0,13,2)) simpleaxis(ax) dump(fig,file('figures/fig6/6d.pkl','wb')) plt.show() check_vm = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e, delta_i: d(minDelay,k,g_e)}).evalf()) f = lambdify((g_e, P, t), check_vm/mV, (unitsDict, "numpy")) dn_exc = [[float(f(e * nS, 0., dt * ms)) for dt in trange] for e in erange] dn_control = {prop: [[float(f(e * nS, prop, dt * ms)) for dt in trange] for e in erange] for prop in prop_array}
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 E Divisive Normalization
fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: v_max, e_max = [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max, v_max, s=10)) ax.plot(e_max, v_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Control $V_{max}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, dn_control[prop][5]) #ax2.set_xlabel("Time") #ax2.set_ylabel("Membrane potential (mV)") # fig.legend(handles, labels, loc ='right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6e.pkl','wb')) # ax.set_title("Divisive Normalization", fontsize=18) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Synapses to threshold
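The spike-time readout below reduces to finding the first sample whose depolarization from rest crosses a fixed threshold; here is a minimal stand-alone sketch of that logic (function name and defaults are illustrative, with rest = -65 mV and threshold = 5.5 mV as in the cell).

```python
# Minimal sketch of the threshold-crossing readout used below: the "spike time"
# is the index of the first sample whose depolarization from rest exceeds the
# threshold, or NaN if the trace never crosses. Defaults mirror the cell below.
import numpy as np

def first_crossing(trace, rest=-65.0, threshold=5.5):
    above = np.where(np.array(trace) - rest > threshold)[0]
    return above[0] if len(above) else np.nan
```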
threshold = 5.5 fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: v_max, e_max, spk_t = [], [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) spiking = numpy.where((numpy.array(con_trace) - float(approximateDict[leak_rev]/mV)) > threshold)[0] if len(spiking): spk_t.append(spiking[0]) else: spk_t.append(numpy.nan) # print(numpy.where(e_t>threshold)) handles.append(ax.plot(erange, spk_t, '.-')) # ax.plot(e_max, v_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Spike Time $t_{sp}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, dn_control[prop][5]) # ax.hlines(y=threshold, linestyle='--') #ax2.set_xlabel("Time") #ax2.set_ylabel("Membrane potential (mV)") # fig.legend(handles, labels, loc ='right') fig.set_figwidth(2) fig.set_figheight(2) simpleaxis(ax) #dump(fig,file('figures/fig6/6e.pkl','wb')) # ax.set_title("Divisive Normalization", fontsize=18) plt.show() fig, ax = plt.subplots() threshold = 5.5 handles, labels = [], [] for prop in prop_array[:1]: v_max, e_max, spk_t = [], [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) spiking = numpy.where((numpy.array(con_trace) - float(approximateDict[leak_rev]/mV)) >= threshold)[0] time = numpy.linspace(0., 100., len(con_trace)) if len(spiking): spk_t.append(spiking[0]) # print(spiking[0]) ax.plot(time[:len(time)/4], numpy.array(con_trace[:len(time)/4]) - float(approximateDict[leak_rev]/mV)) #ax.plot(time[spiking[0]], con_trace[spiking[0]] - float(approximateDict[leak_rev]/mV), 'o',markersize=4, color='k') else: spk_t.append(numpy.nan) # print(numpy.where(e_t>threshold)) #handles.append(ax.plot(erange, spk_t, '.-')) # ax.plot(e_max, v_max, '--') #labels.append("$P= {}$".format(prop)) ax.hlines(y=5, xmin=0, xmax=ax.get_xlim()[1], linestyles='--') ax.set_ylim(0,10.) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Spike Time $t_{sp}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, dn_control[prop][5]) # ax.hlines(y=threshold, linestyle='--') #ax2.set_xlabel("Time") #ax2.set_ylabel("Membrane potential (mV)") # fig.legend(handles, labels, loc ='right') fig.set_figwidth(2) fig.set_figheight(2) simpleaxis(ax) dump(fig,file('figures/fig6/6e.pkl','wb')) ax.set_title("Divisive Normalization", fontsize=18) plt.show() fig, ax = plt.subplots() for prop in prop_array: v_max, e_max = [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) e_max = numpy.array(e_max) v_max = numpy.array(v_max) handles.append(ax.scatter(erange, e_max/v_max, s=10)) ax.plot(erange, e_max/v_max, '--') ax.set_xlabel("Excitation $g_{exc}$") ax.set_ylabel("Gain") fig.set_figwidth(2) fig.set_figheight(2) simpleaxis(ax) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
5 B Model subtraction scheme
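The subtraction scheme itself is a one-liner: the derived IPSP is the control PSP minus the excitation-only EPSP, re-referenced to the leak reversal. A toy sketch follows (the arrays here are made up; the real traces in the cell below are `dn_control[prop][-1]` and `dn_exc[-1]`).

```python
# Toy sketch of the subtraction scheme; the arrays here are made up, the real
# traces in the cell below are dn_control[prop][-1] and dn_exc[-1].
import numpy as np

leak_rev = -65.0                                    # mV, as in approximateDict
trace_c = np.array([-65.0, -63.5, -62.0, -63.0])    # control PSP (toy)
trace_e = np.array([-65.0, -62.0, -59.5, -61.0])    # excitation-only EPSP (toy)
derived_ipsp = leak_rev + (trace_c - trace_e)       # derived inhibitory component
```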
fig, ax = plt.subplots() handles, labels = [], [] prop = 4 i_max, e_max = [], [] trace_c, trace_e = numpy.array(dn_control[prop][-1]), numpy.array(dn_exc[-1]) ax.plot(trange, trace_c, label="PSP") ax.plot(trange, trace_e, label="EPSP") trace_i = float(approximateDict[leak_rev]/mV) + (trace_c - trace_e) ax.plot(trange, trace_i, label="Derived IPSP") ax.set_xlabel("Time") ax.set_ylabel("$V_m$") fig.set_figwidth(3) fig.set_figheight(3) simpleaxis(ax) dump(fig,file('figures/fig5/5b.pkl','wb')) plt.legend() plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 F Excitation - Derived Inhibition plot
fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: i_max, e_max = [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): i_t = numpy.array(e_t) - numpy.array(con_trace) i_max.append(numpy.max(i_t)) # i_max.append(max(e_t) - max(con_trace)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max, i_max, s=10)) ax.plot(e_max, i_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Derived Inhibition $V_{max}$") xlim = ax.get_xlim() ax.set_ylim (xlim) ax.plot(xlim, xlim, '--') # fig.legend(handles, labels, loc ='center right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6f.pkl','wb')) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 G Time to peak
fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: ttp, e_max = [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): ttp.append(numpy.argmax(con_trace)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max[1:], ttp[1:], s=10)) ax.plot(e_max[1:], ttp[1:], '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Time to peak $t_{peak}$") ax.set_xlim(0,15) ax.set_ylim(0,55) # fig.legend(handles, labels, loc ='center right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6g.pkl','wb')) plt.show() handles, labels = [], [] for prop in prop_array: fig, ax = plt.subplots() ttp, e_max = [], [] for con_trace,e_t in zip(dn_control[prop], dn_exc): ax.plot(trange, con_trace,c='k') fig.set_figwidth(12) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 H Permutation of P
check_vm = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({delta_i: d(minDelay,k,g_e)}).evalf()) f = lambdify((g_e, g_i, t), check_vm/mV, (unitsDict, "numpy")) p_perm_dn_exc = [[float(f(e * nS, 0., dt * ms)) for dt in trange] for e in erange] p_perm_dn_control = {prop: [[float(f(e * nS, i * nS, dt * ms)) for dt in trange] for (e,i) in zip(erange, numpy.random.permutation(erange*prop))] for prop in prop_array} fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: v_max, e_max = [], [] for con_trace,e_t in zip(p_perm_dn_control[prop], p_perm_dn_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max, v_max, s=10)) ax.plot(e_max, v_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Control $V_{max}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, p_perm_dn_control[prop][5]) #ax2.set_xlabel("Time") #ax2.set_ylabel("Membrane potential (mV)") # fig.legend(handles, labels, loc ='center right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6h.pkl','wb')) # ax.set_title("Divisive Normalization with E and I balance permuted", fontsize=18) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 I Permutation of $\delta_i$
check_vm = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e}).evalf()) f = lambdify((g_e, P, delta_i, t), check_vm/mV, (unitsDict, "numpy")) d_perm_dn_exc = [[float(f(e * nS, 0., d(minDelay,k, e* nS), dt * ms)) for dt in trange] for e in erange] d_perm_dn_control = {prop: [[float(f(e * nS, prop, delay, dt * ms)) for dt in trange] for e,delay in zip(erange, numpy.random.permutation([d(minDelay,k, e* nS) for e in erange])) ] for prop in prop_array} fig, ax = plt.subplots() handles, labels = [], [] for prop in prop_array: v_max, e_max = [], [] for con_trace,e_t in zip(d_perm_dn_control[prop], d_perm_dn_exc): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) handles.append(ax.scatter(e_max, v_max, s=10)) ax.plot(e_max, v_max, '--') labels.append("$P= {}$".format(prop)) ax.set_xlabel("Excitation $V_{max}$") ax.set_ylabel("Control $V_{max}$") # left, bottom, width, height = [0.25, 0.6, 0.2, 0.2] # ax2 = fig.add_axes([left, bottom, width, height]) # for prop in prop_array: # ax2.plot(trange, d_perm_dn_control[prop][5]) # fig.legend(handles, labels, loc ='center right') fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) dump(fig,file('figures/fig6/6i.pkl','wb')) # ax.set_title("Divisive Normalization", fontsize=18) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 J Phase plot Divisive Normalization
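The phase plot is built by fitting each excitation-vs-control curve with the divisive-normalization form $y = \gamma x/(x+\gamma)$ and colouring by the fitted $\gamma$. The notebook does this with lmfit; the sketch below shows the same fit with `scipy.optimize.curve_fit` on toy data, purely as an illustration.

```python
# Sketch of the gamma fit behind the phase plot, on toy data. The notebook fits
# the same curve with lmfit; curve_fit is used here only as an illustration.
import numpy as np
from scipy.optimize import curve_fit

def dn_curve(x, gamma):
    return gamma * x / (x + gamma)

x = np.linspace(0.1, 15, 30)
y = dn_curve(x, 6.0) + np.random.normal(0, 0.1, x.size)   # toy "observed" peaks
(gamma_fit,), _ = curve_fit(dn_curve, x, y, p0=[1.0])
print(gamma_fit)                                           # roughly recovers 6
```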
import lmfit def DN_model(x,a=1): # Divisive normalization model return (a*x)/(x+a) DN_Model = lmfit.Model(DN_model) check_vm = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e}).evalf()) f = lambdify((g_e, P, delta_i, t), check_vm/mV, (unitsDict, "numpy")) inhib = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).evalf()) g = lambdify((g_e, g_i, delta_i, t), inhib/mV, (unitsDict, "numpy")) phase_dn_control = {} phase_dn_exc = {} phase_dn_inh = {} # prop_array = numpy.logspace(-1,1,7) # k_array = numpy.logspace(-1,1,7) prop_array = numpy.linspace(0,6,7) k_array = numpy.linspace(0.,3.,7) for k in k_array: phase_dn_exc[k] = [[float(f(e * nS, 0., d(minDelay,k/nS, e* nS), dt * ms)) for dt in trange] for e in erange] phase_dn_control[k] = {prop: [[float(f(e * nS, prop, delay, dt * ms)) for dt in trange] for e,delay in zip(erange, [d(minDelay,k/nS, e* nS) for e in erange]) ] for prop in prop_array} # phase_dn_inh[k] = {prop: [[float(g(0 * nS, prop*e, delay, dt * ms)) for dt in trange] for e,delay in zip(erange, [d(minDelay,k/nS, e* nS) for e in erange]) ] for prop in prop_array} phase_dn_inh = {} for k in k_array: phase_dn_inh[k] = {prop: [[float(g(0 * nS, prop*e* nS, delay, dt * ms)) for dt in trange] for e,delay in zip(erange, [d(minDelay,k/nS, e* nS) for e in erange]) ] for prop in prop_array} phaseMat_init = numpy.zeros((len(k_array),len(prop_array))) for ind1, k in enumerate(k_array): for ind2, prop in enumerate(prop_array): v_max, e_max = [], [] for con_trace,e_t in zip(phase_dn_control[k][prop], phase_dn_exc[k]): v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) X, y = e_max, v_max DN_pars = DN_Model.make_params() DN_result = DN_Model.fit(y, DN_pars, x=X) # plt.plot(X, y, 'bo') # plt.plot(X, DN_result.best_fit, 'r-') # plt.xlim(0,1.2*max(e_max)) # plt.ylim(0,1.2*max(e_max)) # plt.show() phaseMat_init[ind1][ind2] = DN_result.params['a'] # print(DN_result.fit_report()) # x,y = numpy.meshgrid(prop_array, k_array) #cmap = LinearSegmentedColormap.from_list('gamma_purple', [(0.,'purple' ), (1., 'white')]) cmap = matplotlib.cm.inferno_r cmap.set_bad(color='white') print ("Max gamma is {}".format(numpy.max(phaseMat_init))) gamma_cutOff = 40 cutOffmask = numpy.ma.masked_where(phaseMat_init > gamma_cutOff, phaseMat_init) phaseMat = numpy.ma.masked_where(numpy.isnan(phaseMat_init), cutOffmask) vmax = numpy.nanmax(phaseMat) vmin = numpy.nanmin(phaseMat) fig, ax = plt.subplots() phaseMat #heatmap = ax.pcolormesh(phaseMat, norm=matplotlib.colors.LogNorm(vmin=vmin, vmax=vmax), cmap = cmap, edgecolor='k', linewidths=.05) heatmap = ax.pcolormesh(phaseMat, vmin=0, vmax=gamma_cutOff, cmap = cmap, edgecolor='k', linewidths=.05) # ax.grid(True, which='minor', axis='both', linestyle='--', alpha=0.1, color='k') ax.invert_yaxis() ticks = numpy.arange(0,len(prop_array),2) ax.xaxis.set_ticks(ticks+0.5) ax.yaxis.set_ticks(ticks+0.5) ax.yaxis.set(ticklabels=["{:.0f}".format(j) for j in k_array[ticks]]) ax.xaxis.set(ticklabels=["{:.0f}".format(j) for j in prop_array[ticks]]) # ax.axis([int(k_array.min()),int(k_array.max()),int(prop_array.min()),int(prop_array.max())]) # for axis in [ax.xaxis, ax.yaxis]: # axis.set_ticks([0,10,10], minor=True) # axis.set(ticks=[0,10,10], ticklabels=numpy.linspace(0,10,10)) #Skipping square labels # ax.set_xlim((-1,1)) # 
ax.set_ylim((-1,1)) #Colorbar stuff cbar = plt.colorbar(heatmap, label = "$\\gamma$", ticks=[0,20,40]) cbar.ax.get_yaxis().labelpad = 6 # tick_locator = matplotlib.ticker.MaxNLocator(nbins=5) # cbar.locator = tick_locator # cbar.update_ticks() # ax.patch.set(hatch='xx', edgecolor='purple') simpleaxis(ax,every=True,outward=False) ax.set_aspect(1) fig.set_figwidth(2.) fig.set_figheight(2.) ax.set_ylabel("K") ax.set_xlabel("I/E") # ax.set_title("Divisive Normalization", fontsize=18) dump(fig,file('figures/supplementary/11a.pkl','wb')) plt.show() print (k_array)
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Delay plots
d = lambda minDelay,k,e: minDelay + m*exp(-(k*e)) nS = nano*siemens m, minDelay = 18.15*ms, 2.54*ms maxDelay = (minDelay + m)/ms k_sample_indices = [1,3,5] fig, ax = plt.subplots(len(k_array[k_sample_indices]),1,sharey=True) for axis,k in zip(ax,k_array[k_sample_indices]): axis.plot(time_erange, [d(minDelay,k/nS,e*nS)/ms for e in time_erange], '.-', c='k', markersize=5) axis.set_xlim(0,4.5) axis.set_ylim(0, 13 ) axis.set_xticks(range(4)) axis.set_yticks(range(0,13,6)) axis.set_title("k={}".format(k)) ax[0].set_ylabel("$\\delta_i$ (ms)") ax[-1].set_xlabel("$g_{exc}$ (nS)") simpleaxis(ax,hideTitle=False) fig.set_figwidth(1) fig.set_figheight(3) dump(fig,file('figures/supplementary/11b.pkl','wb')) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
I/E differences
ie_sample_indices = [1,3,6] fig, ax = plt.subplots(1,3,sharey=True) for axis,i_by_e in zip(ax, prop_array[ie_sample_indices]): axis.plot(erange, i_by_e * erange, '.-', c='k', markersize=5) axis.set_xlabel("$g_{exc}$ (nS)") axis.set_xlim(0,4.5) axis.set_xticks(range(4)) # axis.set_yticks(range(0,13,2)) # axis.set_title("I/E={}".format(i_by_e)) ax[0].set_ylabel("$g_{inh}$ (nS)") simpleaxis(ax,hideTitle=False) fig.set_figwidth(3) fig.set_figheight(1) dump(fig,file('figures/supplementary/11c.pkl','wb')) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
DN traces for these values
fig, ax = plt.subplots(len(k_sample_indices), len(ie_sample_indices), sharex=True, sharey=True) sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=gamma_cutOff)) for ind1,k_index in enumerate(k_sample_indices): for ind2,prop_index in enumerate(ie_sample_indices): k, prop = k_array[k_index], prop_array[prop_index] for trace in phase_dn_control[k][prop]: if phaseMat[k_index][prop_index]: ax[ind1][ind2].plot(trange, trace, c=sm.to_rgba(float(phaseMat[k_index][prop_index])), linewidth=1) else: ax[ind1][ind2].plot(trange, trace, c='k', linewidth=1) # ax[ind1][ind2].set_title("K={},I/E={}".format(k,prop)) simpleaxis(fig.get_axes(),hideTitle=False) fig.set_figwidth(3) fig.set_figheight(3) dump(fig,file('figures/supplementary/11d.pkl','wb')) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
SDN curve for these values
sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=gamma_cutOff)) fig, ax = plt.subplots(len(k_sample_indices), len(ie_sample_indices), sharex=True, sharey=True) for ind1,k_index in enumerate(k_sample_indices): for ind2,prop_index in enumerate(ie_sample_indices): k, prop = k_array[k_index], prop_array[prop_index] obs_sdn = numpy.array([numpy.max(trace) for trace in phase_dn_control[k][prop]]) - float(approximateDict[leak_rev]/mV) exp_sdn = numpy.array([numpy.max(trace) for trace in phase_dn_exc[k]]) - float(approximateDict[leak_rev]/mV) if phaseMat[k_index][prop_index]: ax[ind1][ind2].plot(exp_sdn, obs_sdn, '.-', c=sm.to_rgba(float(phaseMat[k_index][prop_index])), markersize=5, linewidth=1) ax[ind1][ind2].set_title("$\gamma$ = " + "{:.2f}".format(phaseMat_init[k_index][prop_index])) # ax[ind1][ind2].set_title("K={}, I/E={}, ".format(k,prop) + "$\gamma$ = " + "{:.2e}".format(phaseMat_init[k_index][prop_index])) else: ax[ind1][ind2].plot(exp_sdn, obs_sdn, '.-', c='k', markersize=5, linewidth=1) #ax[ind1][ind2].set_title("$\gamma$ > 40") # ax[ind1][ind2].set_title("K={}, I/E={}, ".format(k,prop) + "$\gamma$ = " + "{:.2e}".format(phaseMat_init[k_index][prop_index])) # if phaseMat[k_index][prop_index]: # print (k_index, prop_index) # ax[ind1][ind2].set_title("$\gamma$ = " + "{:.2f}".format(phaseMat_init[k_index][prop_index])) # else: # print ("Didn't work, {},{}".format(k_index, prop_index)) # ax[ind1][ind2].set_title("$\gamma$ > 40") simpleaxis(fig.get_axes(),hideTitle=False) fig.set_figwidth(3) fig.set_figheight(3) dump(fig,file('figures/supplementary/11e.pkl','wb')) plt.show() exp_sdn, obs_sdn k = k_array[4] p = prop_array[4] numColors = 10 cm = matplotlib.cm.viridis_r cgen = (cm(1.*i/numColors) for i in range(numColors)) maxTime = 200 fig, ax = plt.subplots() for con_trace,exc_trace,inh_trace in zip(phase_dn_control[k][prop][1:], phase_dn_exc[k][1:], phase_dn_inh[k][prop][1:]): c = cgen.next() ax.plot(con_trace[:maxTime], '-', linewidth=2, c=c) ax.plot(exc_trace[:maxTime], '-', linewidth=2, c=c) ax.plot( [-65 - (a - b) for a,b in zip(exc_trace[:maxTime],con_trace[:maxTime])], '-', linewidth=2, c=c) # ax.plot(inh_trace[:maxTime], '-', linewidth=2, c=c) ax.hlines(y=max(con_trace[:maxTime]), xmin=0, xmax=maxTime, linestyles='--') # ax.hlines(y=max(con_trace[:maxTime])) simpleaxis(ax,every=True) fig.set_figheight(15) fig.set_figwidth(15) plt.show() fig, ax = plt.subplots() for inh_trace in phase_dn_inh[k][p]: ax.plot(inh_trace) plt.show() len(phase_dn_inh[8]) # for ind1, k in enumerate(k_array): # for ind2, prop in enumerate(prop_array): # v_max, e_max = [], [] # for con_trace,e_t in zip(phase_dn_control[k][prop], phase_dn_exc[k]): # v_max.append(max(con_trace) - float(approximateDict[leak_rev]/mV)) # e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV)) # X, y = e_max, v_max # DN_pars = DN_Model.make_params() # DN_result = DN_Model.fit(y, DN_pars, x=X) # print (k, prop) # print(DN_result.fit_report()) # f,ax = plt.subplots() # DN_result.plot_fit(ax) # plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
6 K $\delta_i$ as a function of $g_e$
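Onset times in this section come from `findOnsetTime`, which slides a short window back from the response peak and uses a two-sample KS test against the pre-stimulus baseline. A heavily simplified sketch of that idea is given here; the parameter values and function name are illustrative, and the real implementation below has additional tolerance-relaxation bookkeeping.

```python
# Heavily simplified sketch of the onset-detection idea in findOnsetTime below:
# slide a short window back from the response peak and report the point where it
# first becomes statistically indistinguishable (KS test) from the baseline.
import numpy as np
import scipy.stats as ss

def rough_onset(trace, fs=20000.0, base_ms=2.0, win_ms=0.5, p_tol=0.5):
    trace = np.asarray(trace)
    base = trace[:int(fs * base_ms * 1e-3)]           # pre-stimulus baseline
    right = int(np.argmax(np.abs(trace)))             # response peak
    win = int(fs * win_ms * 1e-3)
    while right - win > len(base):
        _, p = ss.ks_2samp(trace[right - win:right], base)
        if p > p_tol:                                  # window looks like baseline
            return right / fs
        right -= 1
    return np.nan
```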
prefix = '/home/bhalla/Documents/Codes/data' n = Neuron.load(prefix + '/media/sahil/NCBS_Shares_BGStim/patch_data/170720/c5_EI/plots/c5_EI.pkl') def delay_excitation(x, a=1., b=1., c=1.): # Delay as a function of excitation # return a + b*numpy.exp(-c*x) return a+(x/b) def findOnsetTime(trial, step=0.5, slide = 0.05, minOnset = 2., maxOnset = 50., initpValTolerance=1.0, pValMinTolerance = 0.1): maxIndex = int(trial.F_sample*maxOnset*1e-3) if expType == 1: maxOnsetIndex = numpy.argmax(-trial.interestWindow[:maxIndex]) elif expType == 2: maxOnsetIndex = numpy.argmax(trial.interestWindow[:maxIndex]) else: maxOnsetIndex = numpy.argmax(trial.interestWindow[:maxIndex]) window_size = len(trial.interestWindow) step_size = int(trial.F_sample*step*1e-3) overlap = int(trial.F_sample*slide*1e-3) index_right = maxOnsetIndex index_left = index_right - step_size minOnsetIndex = int(trial.F_sample*minOnset*1e-3) baseMean = numpy.mean(trial.interestWindow[:minOnsetIndex]) factor = 5 thresholdGradient = 0.01 pValTolerance = initpValTolerance l_window = trial.interestWindow[:minOnsetIndex] while (index_left>minOnset): r_window = trial.interestWindow[index_left:index_right] #, trial.baselineWindow #trial.interestWindow[index_left - step_size:index_left] stat, pVal = ss.ks_2samp(r_window, l_window) if pVal>pValTolerance: return float(index_right)/trial.F_sample else: index_left-=overlap index_right-=overlap if index_left<=minOnsetIndex: pValTolerance/=2 if pValTolerance<pValMinTolerance: # print ("Returning Nan") return numpy.nan else: index_right = maxOnsetIndex index_left = maxOnsetIndex - step_size # avg_exc_onset = {} # avg_inh_onset = {} # avg_exc_max = {} # exc_onsets, inh_onsets = {}, {} # exc_max,inh_max = {}, {} # err_inh_onsets = {} # scalingFactor = 1e6 # for expType, experiment in n: # for sqr in experiment: # for coord in experiment[sqr].coordwise: # if expType == 1: # for trial in experiment[sqr].coordwise[coord].trials: # exc_onsets[(sqr,trial.index)] = findOnsetTime(trial)*1e3 # exc_max[(sqr,trial.index)] = -trial.feature[5]*scalingFactor # #exp[sqr].coordwise[coord].average_feature[5] # if expType == 2: # list_inh_onset = [] # for trial in experiment[sqr].coordwise[coord].trials: # inh_onsets[(sqr,trial.index)] = findOnsetTime(trial)*1e3 # list_inh_onset.append(inh_onsets[(sqr,trial.index)]) # inh_max[(sqr,trial.index)] = trial.feature[0]*scalingFactor # avg_onset = numpy.nanmean([onset for onset in list_inh_onset if onset]) # err_onset = numpy.nanstd([onset for onset in list_inh_onset if onset]) # for trial in experiment[sqr].coordwise[coord].trials: # avg_inh_onset[(sqr,trial.index)] = avg_onset # err_inh_onsets[(sqr,trial.index)] = err_onset #print (avg_exc_max, avg_exc_onset, avg_inh_onset) avg_exc_onset = {} avg_inh_onset = {} avg_exc_max = {} exc_onsets, inh_onsets = {}, {} exc_max,inh_max = {}, {} err_exc_onset, err_inh_onset = {}, {} scalingFactor = 1e6 for expType, experiment in n: for sqr in experiment: for coord in experiment[sqr].coordwise: if expType == 1: list_exc_onset = [] list_exc_max = [] for trial in experiment[sqr].coordwise[coord].trials: onsetTime = findOnsetTime(trial) if onsetTime: exc_onsets[(sqr,trial.index)] = onsetTime*1e3 list_exc_onset.append(exc_onsets[(sqr,trial.index)]) list_exc_max.append(-trial.feature[5]*scalingFactor) #exp[sqr].coordwise[coord].average_feature[5] avg_exc_onset[coord] = numpy.nanmean([onset for onset in list_exc_onset if onset]) err_exc_onset[coord] = numpy.nanstd([onset for onset in list_exc_onset if onset]) exc_max[coord] = 
numpy.nanmean([maxC for maxC in list_exc_max if maxC]) # for trial in experiment[sqr].coordwise[coord].trials: # avg_exc_onset[(sqr,trial.index)] = avg_onset # err_exc_onsets[(sqr,trial.index)] = err_onset if expType == 2: list_inh_onset = [] for trial in experiment[sqr].coordwise[coord].trials: onsetTime = findOnsetTime(trial) if onsetTime: inh_onsets[(sqr,trial.index)] = onsetTime*1e3 list_inh_onset.append(inh_onsets[(sqr,trial.index)]) inh_max[(sqr,trial.index)] = trial.feature[0]*scalingFactor avg_inh_onset[coord] = numpy.nanmean([onset for onset in list_inh_onset if onset]) err_inh_onset[coord] = numpy.nanstd([onset for onset in list_inh_onset if onset]) # for trial in experiment[sqr].coordwise[coord].trials: # avg_inh_onset[(sqr,trial.index)] = avg_onset # err_inh_onsets[(sqr,trial.index)] = err_onset delay, max_current = [], [] del_err, max_err= [], [] inhibOnset = [] conductanceConversion = 70e-3 for key in set(avg_exc_onset).intersection(set(avg_inh_onset)): if avg_inh_onset[key] and avg_exc_onset[key]: if not numpy.isnan(avg_inh_onset[key]) and not numpy.isnan (avg_exc_onset[key]) and not numpy.isnan (exc_max[key]): delay.append(avg_inh_onset[key]- avg_exc_onset[key]) max_current.append(exc_max[key]) # del_err.append(err_inh_onset[key]) inhibOnset.append(avg_inh_onset[key]) maxConductance = numpy.array(max_current)/conductanceConversion # del_err.append() # max_err.append() delay_Model = lmfit.Model(delay_excitation) delay_pars = delay_Model.make_params() delay = numpy.array(delay) maxConductance = numpy.array(maxConductance) # print (delay_result.params) # print (delay_result.aic) # print (delay_result.redchi) delay_result = delay_Model.fit(delay, delay_pars, x=maxConductance) fig, ax = plt.subplots() ax.scatter(maxConductance, delay) ax.set_ylim(0,) plt.show() delay_result = delay_Model.fit(delay, delay_pars, x=maxConductance) fig, ax = plt.subplots() indices = numpy.argsort(maxConductance) ax.scatter(maxConductance[indices], delay[indices], s=30, facecolor='k', edgecolor='k') ax.plot(maxConductance[indices], delay_result.best_fit[indices], '-') # print(conductance_std, delay_std) # ax.errorbar(conductance_mean, delay_mean, xerr = conductance_std, yerr= delay_std, linestyle='',c='k') ax.set_xticks(range(4)) ax.set_yticks(range(0,12,2)) ax.set_xlim(0,4.5) ax.set_ylim(-3,12.5) ax.set_xlabel("$g_e$ (nS)") ax.set_ylabel("$\\delta_i$ (ms)") fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) # dump(fig,file('figures/fig6/6k.pkl','wb')) plt.show() # print ("{:.2f} + {:.2f}e^-{:.2f}E".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value)) print ("{:.2f} + E^-{:.2f}".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value)) print(delay_result.fit_report())
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Binning delays here
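The binning below groups delays by excitatory-conductance bin with `numpy.digitize` and averages within each bin; a toy sketch of the pattern (the arrays here are made up):

```python
# Toy sketch of the binning below: bucket delays by conductance with
# numpy.digitize and take per-bin means (the arrays here are made up).
import numpy as np

conductance = np.array([0.2, 0.7, 1.1, 1.6, 2.3, 3.4])
delays = np.array([9.0, 7.5, 6.0, 5.2, 4.1, 3.0])
bins = np.linspace(0, conductance.max(), 4)
idx = np.digitize(conductance, bins)
bin_means = [delays[idx == i].mean() for i in np.unique(idx)]
```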
bins = numpy.linspace(0,max(maxConductance),6) digitized = numpy.digitize(maxConductance, bins) conductance_mean = [maxConductance[digitized == i].mean() for i in range(len(bins))] delay_mean = [delay[digitized == i].mean() for i in range(len(bins))] conductance_std = [maxConductance[digitized == i].std(ddof=1) for i in range(len(bins))] delay_std = [delay[digitized == i].std(ddof=1) for i in range(len(bins))] delay_mean, conductance_mean, delay_std, conductance_std = map(list, zip(*[ (d,c,sd,sc) for d,c,sd,sc in zip(delay_mean, conductance_mean, delay_std, conductance_std) if not any(numpy.isnan([d,c,sd,sc]))])) print ("{:.2f} + {:.2f}e^-{:.2f}E".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value)) delay_result = delay_Model.fit(delay_mean, delay_pars, x=conductance_mean) fig, ax = plt.subplots() ax.scatter(conductance_mean, delay_mean, s=30, facecolor='k', edgecolor='k') # ax.plot(conductance_mean, delay_result.best_fit, '-') print(conductance_std, delay_std) ax.errorbar(conductance_mean, delay_mean, xerr = conductance_std, yerr= delay_std, linestyle='',c='k') ax.set_xticks(range(4)) ax.set_yticks(range(0,12,2)) ax.set_xlim(0,4.5) ax.set_ylim(0,12.5) ax.set_xlabel("$g_e$ (nS)") ax.set_ylabel("$\\delta_i$ (ms)") fig.set_figwidth(1.5) fig.set_figheight(1.5) simpleaxis(ax) # dump(fig,file('figures/fig6/6k.pkl','wb')) plt.show() print ("{:.2f} + {:.2f}e^-{:.2f}E".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value)) delay_result = delay_Model.fit(delay, delay_pars, x=maxConductance) fig, ax = plt.subplots() ax.errorbar(numpy.array(maxConductance), numpy.array(delay), fmt ='o', markersize=2, alpha=0.4) #ax.scatter(numpy.array(maxConductance)*1e6, numpy.array(delay)*1e3) current_linspace= numpy.linspace(0,1.1*numpy.max(maxConductance)) ax.plot(current_linspace, delay_result.eval(x=current_linspace), '-', label="${:.2f} + {:.2f} \\times e^{{-{:.2f} \\times E }}$".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value)) ax.plot(1./(delay_result.params['c'].value), delay_result.eval(x=1./(delay_result.params['c'].value)), 'ko', markersize=2) xmin, xmax = ax.get_xlim() ax.hlines(y=0, xmin=xmin, xmax=xmax, linestyles='--', alpha=0.5) ax.hlines(y=delay_result.params['a'].value, xmin=xmin, xmax=xmax, linestyles='--', alpha=0.5) ax.set_xlabel("$g_{max}^{exc}$") ax.set_ylabel("Delay $(\\delta_{inh})$") ax.annotate("", xy=(xmax, 0.), xycoords='data', xytext=(xmax, delay_result.params['a'].value), textcoords='data', arrowprops=dict(arrowstyle="<->", connectionstyle="arc3"), ) ax.text(1.01*xmax, 1., "$\\delta_{min}$") ax.annotate("", xy=(0, 0), xycoords='data', xytext=(0, delay_result.params['b'].value + delay_result.params['a'].value), textcoords='data', arrowprops=dict(arrowstyle="<->", connectionstyle="arc3"), ) ax.text(xmin*1.5, 10., "$\\delta_{max}$") ax.annotate("", xy=(xmax, delay_result.params['a'].value), xycoords='data', xytext=(xmax, delay_result.params['b'].value + delay_result.params['a'].value), textcoords='data', arrowprops=dict(arrowstyle="<->", connectionstyle="arc3"), ) ax.text(1.01*xmax, 10., "$m$") # ax.text(0.006, 6., "$k$") ax.set_xlim(xmax= xmax*1.1) simpleaxis(ax) plt.legend() fig.set_figwidth(6) fig.set_figheight(6) # dump(fig,file('figures/fig6/6k.pkl','wb')) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Over all EI cells
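Per-cell slopes in this section use the Theil-Sen estimator (`scipy.stats.mstats.theilslopes`), which returns the median slope with confidence bounds and is more robust to outliers than ordinary least squares. A small usage sketch on toy data:

```python
# Usage sketch of the Theil-Sen slope used per cell below; toy data only.
import numpy as np
import scipy.stats as ss

g = np.array([0.3, 0.8, 1.4, 2.1, 2.9, 3.6])      # excitatory conductance (toy)
d = np.array([8.0, 6.5, 5.5, 4.2, 3.9, 3.1])      # E-I delay (toy)
slope, intercept, lo, hi = ss.mstats.theilslopes(y=d, x=g)
print(slope, intercept)
```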
voltageClampFiles = '/media/sahil/NCBS_Shares_BGStim/patch_data/voltage_clamp_files.txt' with open (voltageClampFiles,'r') as r: dirnames = r.read().splitlines() a = ['161220 c2_EI', '170510 c2_EI', '170524 c3_EI', '170524 c1_EI', '170530 c2_EI', '170530 c1_EI', '170531 c2_EI', '170531 c4_EI', '170531 c1_EI', '170720 c5_EI', '170720 c3_EI', '170720 c4_EI', '170720 c2_EI'] dirnames = (['/home/bhalla/Documents/Codes/data/media/sahil/NCBS_Shares_BGStim/patch_data/' + '/'.join(j.split(' ')) + '/' for j in a]) #Colorscheme for cells color_cell = matplotlib.cm.plasma(numpy.linspace(0,1,len(dirnames))) neurons = [] for dirname in dirnames: cellIndex = dirname.split('/')[-2] filename = dirname + 'plots/' + cellIndex + '.pkl' neurons.append(Neuron.load(filename)) all_delays = [] all_conductances = [] all_inh_conductances = [] scalingFactor = 1e6 for index, n in enumerate(neurons): avg_exc_onset = {} avg_inh_onset = {} avg_exc_max = {} exc_onsets, inh_onsets = {}, {} exc_max,inh_max = {}, {} err_exc_onset, err_inh_onset = {}, {} for expType, experiment in n: for sqr in experiment: for coord in experiment[sqr].coordwise: if expType == 1: list_exc_onset = [] list_exc_max = [] for trial in experiment[sqr].coordwise[coord].trials: onsetTime = findOnsetTime(trial) if onsetTime: exc_onsets[(sqr,trial.index)] = onsetTime*1e3 list_exc_onset.append(exc_onsets[(sqr,trial.index)]) list_exc_max.append(-trial.feature[5]*scalingFactor) #exp[sqr].coordwise[coord].average_feature[5] avg_exc_onset[coord] = numpy.nanmean([onset for onset in list_exc_onset if onset]) err_exc_onset[coord] = numpy.nanstd([onset for onset in list_exc_onset if onset]) exc_max[coord] = numpy.nanmean([maxC for maxC in list_exc_max if maxC]) # for trial in experiment[sqr].coordwise[coord].trials: # avg_exc_onset[(sqr,trial.index)] = avg_onset # err_exc_onsets[(sqr,trial.index)] = err_onset if expType == 2: list_inh_onset = [] list_inh_max = [] for trial in experiment[sqr].coordwise[coord].trials: onsetTime = findOnsetTime(trial) if onsetTime: inh_onsets[(sqr,trial.index)] = onsetTime*1e3 list_inh_onset.append(inh_onsets[(sqr,trial.index)]) list_inh_max.append(trial.feature[0]*scalingFactor) avg_inh_onset[coord] = numpy.nanmean([onset for onset in list_inh_onset if onset]) err_inh_onset[coord] = numpy.nanstd([onset for onset in list_inh_onset if onset]) inh_max[coord] = numpy.nanmean([maxC for maxC in list_inh_max if maxC]) delay, max_conductance, max_inh_conductance = [], [], [] inhibOnset = [] conductanceConversion = 70e-3 for key in set(avg_exc_onset).intersection(set(avg_inh_onset)): if avg_inh_onset[key] and avg_exc_onset[key]: if not numpy.isnan(avg_inh_onset[key]) and not numpy.isnan (avg_exc_onset[key]) and not numpy.isnan (exc_max[key]) and not numpy.isnan (inh_max[key]): delay.append(avg_inh_onset[key]- avg_exc_onset[key]) max_conductance.append(exc_max[key]/conductanceConversion) max_inh_conductance.append(inh_max[key]/conductanceConversion) all_delays.append(delay) all_conductances.append(max_conductance) all_inh_conductances.append(max_inh_conductance) print ("Done {}".format(index)) # all_delays = [] # all_conductances = [] # all_inh_conductances = [] # scalingFactor = 1e6 # for index, n in enumerate(neurons): # avg_exc_onset = {} # avg_inh_onset = {} # avg_exc_max = {} # exc_onsets, inh_onsets = {}, {} # exc_max,inh_max = {}, {} # err_inh_onsets = {} # for expType, experiment in n: # for sqr in experiment: # for coord in experiment[sqr].coordwise: # if expType == 1: # exc_onsets[(sqr,coord)] = [] # exc_max[(sqr,coord)] = [] # 
for trial in experiment[sqr].coordwise[coord].trials: # onsetTime = findOnsetTime(trial) # if onsetTime: # exc_onsets[(sqr,coord)].append(onsetTime*1e3) # exc_max[(sqr,coord)].append(-trial.feature[5]*scalingFactor) # #exp[sqr].coordwise[coord].average_feature[5] # exc_onsets[(sqr,coord)] = numpy.nanmean(exc_onsets[(sqr,coord)]) # exc_max[(sqr,coord)] = numpy.nanmean(exc_max[(sqr,coord)]) # if expType == 2: # inh_onsets[(sqr,coord)] = [] # inh_max[(sqr,coord)] = [] # #list_inh_onset = [] # for trial in experiment[sqr].coordwise[coord].trials: # onsetTime = findOnsetTime(trial) # if onsetTime: # inh_onsets[(sqr,coord)].append(onsetTime*1e3) # #list_inh_onset.append(onsetTime*1e3) # inh_max[(sqr,coord)].append(trial.feature[0]*scalingFactor) # #avg_onset = numpy.nanmean([onset for onset in list_inh_onset if onset]) # #err_onset = numpy.nanstd([onset for onset in list_inh_onset if onset]) # # for trial in exp[sqr].coordwise[coord].trials: # # avg_inh_onset[(sqr,trial.index)] = avg_onset # # err_inh_onsets[(sqr,trial.index)] = err_onset # inh_onsets[(sqr,coord)] = numpy.nanmean(inh_onsets[(sqr,coord)]) # inh_max[(sqr,coord)] = numpy.nanmean(inh_max[(sqr,coord)]) # delay, max_conductance, max_inh_conductance = [], [], [] # # del_err, max_err= [], [] # inhibOnset = [] # conductanceConversion = 70e-3 # for key in set(exc_onsets).intersection(set(inh_onsets)): # if inh_onsets[key] and exc_onsets[key]: # # print ("Doing {}".format(index)) # # print (inh_onsets[key], exc_onsets[key], exc_max[key]) # if not numpy.isnan(inh_onsets[key]) and not numpy.isnan (exc_onsets[key]) and not numpy.isnan (exc_max[key]) and not numpy.isnan (inh_max[key]): # # print ("Delay is {}".format(inh_onsets[key]- exc_onsets[key])) # delay.append(inh_onsets[key]- exc_onsets[key]) # max_conductance.append(exc_max[key]/conductanceConversion) # max_inh_conductance.append(inh_max[key]/conductanceConversion) # all_delays.append(delay) # all_conductances.append(max_conductance) # all_inh_conductances.append(max_inh_conductance) # print ("Done {}".format(index)) fig, ax = plt.subplots() cmap = matplotlib.cm.viridis colors = matplotlib.cm.viridis(numpy.linspace(0, 1, len(all_inh_conductances))) # norm = matplotlib.colors.Normalize(vmin=1, vmax=6) slopeArr = [] for i, (g, gi, d, c) in enumerate(zip(all_conductances, all_inh_conductances, all_delays, colors)): g, gi, d = numpy.array(g), numpy.array(gi), numpy.array(d) indices = numpy.argsort(g) #slope, intercept, rval, pvalue, err = ss.linregress(g[indices], gi[indices]) #cbar = ax.scatter(g,d, c= [slope]*len(g), s= 10, cmap='viridis', vmin=1.5, vmax=3.2) slope, intercept, lowConf, upperConf = ss.mstats.theilslopes(x=gi[indices], y=d[indices]) #slope, intercept, rval, pvalue, err = ss.linregress(g[indices], d[indices]) cbar = ax.scatter(gi,d, s=4, c=c, alpha=0.4, cmap=cmap) ax.plot(gi, slope*gi + intercept,'--', color='gray', linewidth=0.1) slopeArr.append(slope) flattened_g = numpy.array([g for sublist in all_conductances for g in sublist]) flattened_d = numpy.array([d for sublist in all_delays for d in sublist]) ax.set_xlabel("$g_e$ (nS)") ax.set_ylabel("$\\delta_i$ (ms)") # plt.colorbar(cbar) ax.set_ylim(ymin=-5) simpleaxis(ax) fig.set_figwidth(1.5) fig.set_figheight(1.5) dump(fig,file('figures/fig6/6l_1.pkl','wb')) plt.show() fig, (ax1, ax2, ax3) = plt.subplots(ncols=3) cmap = matplotlib.cm.viridis colors = matplotlib.cm.viridis(numpy.linspace(0, 1, len(all_inh_conductances))) # norm = matplotlib.colors.Normalize(vmin=1, vmax=6) slopeArr = [] for i, (g, gi, d, c) in 
enumerate(zip(all_conductances, all_inh_conductances, all_delays, colors)): g, gi, d = numpy.array(g), numpy.array(gi), numpy.array(d) ax1.scatter(gi/g,d,s=.1,color='k') ax2.scatter(g,d,s=.1,color='k') ax3.scatter(gi,d,s=.1,color='k') flattened_g = numpy.array([g for sublist in all_conductances for g in sublist]) flattened_gi = numpy.array([g for sublist in all_inh_conductances for g in sublist]) flattened_gi_by_g = numpy.array([g for sublist in zip(all_conductances,all_inh_conductances) for g in sublist]) flattened_d = numpy.array([d for sublist in all_delays for d in sublist]) slope, intercept, rval, pvalue, err = ss.linregress(flattened_gi,flattened_d) ax1.plot(gi/g, slope*(gi/g) + intercept) slope, intercept, rval, pvalue, err = ss.linregress(g, d) ax2.plot(gi/g, slope*(gi/g) + intercept) slope, intercept, rval, pvalue, err = ss.linregress(gi, d) ax3.plot(gi/g, slope*(gi/g) + intercept) ax.set_xlabel("I/E") ax.set_ylabel("$\\delta_i$ (ms)") # plt.colorbar(cbar) ax1.set_ylim(ymin=-5) ax2.set_ylim(ymin=-5) ax3.set_ylim(ymin=-5) simpleaxis([ax1, ax2, ax3]) fig.set_figwidth(4.5) fig.set_figheight(1.5) plt.show() fig, ax = plt.subplots() bins = numpy.linspace(-3,0.25,13) print(bins) ax.hist(slopeArr,bins=bins,color='k') ax.vlines(x=0,ymin=0,ymax=7.,color='r') simpleaxis(ax) fig.set_figwidth(1.5) fig.set_figheight(1.5) dump(fig,file('figures/fig6/6l_2.pkl','wb')) plt.show()
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Fitting through all cells
cmap = matplotlib.cm.viridis
colors = matplotlib.cm.viridis(numpy.linspace(0, 1, len(all_inh_conductances)))
fig, ax = plt.subplots()
# norm = matplotlib.colors.Normalize(vmin=1, vmax=6)
slopeArr = []
adist, bdist = [],[]
flattened_g, flattened_d = [], []
for i, (g, gi, d, c) in enumerate(zip(all_conductances, all_inh_conductances, all_delays, colors)):
    g, gi, d = numpy.array(g), numpy.array(gi), numpy.array(d)
    indices = numpy.where(d>0)
    g, gi, d = g[indices], gi[indices], d[indices]
    flattened_g += list(g)
    flattened_d += list(d)
    indices = numpy.argsort(g)
    # delay_Model = lmfit.Model(delay_excitation)
    # delay_pars = delay_Model.make_params()
    # delay_result = delay_Model.fit(d, delay_pars, x=g)
    # indices = numpy.argsort(g)
    # # ax.scatter(g[indices], 1./d[indices], s=30, facecolor='k', edgecolor='k')
    ax.scatter(g[indices],1./d[indices], s=5, facecolor=colors[i], edgecolor=colors[i])
    # ax.plot(g[indices], delay_result.best_fit[indices], '--', color=colors[i], linewidth=1)
    # print ("{:.2f} + {:.2f}g_e^{:.2f}".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value))
    # adist.append(delay_result.params['a'].value)
    # bdist.append(delay_result.params['b'].value)
    # print(delay_result.fit_report())
# ax.set_xlabel("$g_e$ (nS)")
# ax.set_ylabel("$\\delta_i$ (ms)")
# dump(fig,file('figures/fig6/6k.pkl','wb'))
# flattened_g = numpy.array([g for sublist in all_conductances for g in sublist])
# flattened_d = 1./numpy.array([d for sublist in all_delays for d in sublist])
# delay_Model = lmfit.Model(delay_excitation)
# delay_pars = delay_Model.make_params()
flattened_d_nonInv = flattened_d[:]
flattened_g = numpy.array(flattened_g)
flattened_d = 1./numpy.array(flattened_d)
#delay_result = delay_Model.fit(flattened_d, delay_pars, x=flattened_g)
slope, intercept, lowerr, higherr = ss.mstats.theilslopes(y=flattened_d,x=flattened_g)
indices = numpy.argsort(flattened_g)
ax.plot(flattened_g[indices], slope*flattened_g[indices] + intercept, '-',color='k')
# ax.scatter(g[indices], d[indices], s=30, facecolor='k', edgecolor='k')
#ax.plot(flattened_g[indices], delay_result.best_fit[indices], '-',color='k')
#print ("{:.2f} * g_e^{:.2f}".format(delay_result.params['a'].value, delay_result.params['b'].value))#, delay_result.params['c'].value))
#print(delay_result.fit_report())
ax.set_xlabel("$g_e$ (nS)")
ax.set_ylabel("$\\delta_i$ (ms)")
fig.set_figwidth(4.5)
fig.set_figheight(4.5)
ax.set_xlim(0,4.5)
ax.set_ylim(0,15)
simpleaxis(ax)
plt.show()
# ax.set_xticks(range(4))
# ax.set_yticks(range(0,12,2))
# print ("{:.2f} + {:.2f}e^-{:.2f}E".format(delay_result.params['a'].value, delay_result.params['b'].value, delay_result.params['c'].value))

from scipy.optimize import curve_fit

fig, ax = plt.subplots()
xArr = numpy.linspace(0.01,12,100)
ax.scatter(flattened_g,flattened_d_nonInv,s=8)
popt, pcov = curve_fit(delay_excitation, flattened_g, flattened_d_nonInv, bounds=(0, [2., 2, 1]))
ax.plot(xArr, delay_excitation(xArr, *popt), 'r-')
# ax.plot(xArr, (1.5/xArr) + 1.5, 'k--')
ax.set_xlim(-1,15)
ax.set_ylim(-1,15)
plt.show()
print (popt, pcov)
print (slope, intercept)
print(lowerr,higherr)

residuals = (flattened_d - 1.5*flattened_g[indices]+1.5)
plt.hist(residuals,bins=30)
plt.vlines(x=0,ymin=0,ymax=200)
plt.show()

residualArr = []
for i, (g, gi, d, c) in enumerate(zip(all_conductances, all_inh_conductances, all_delays, colors)):
    g, gi, d = numpy.array(g), numpy.array(gi), numpy.array(d)
    indices = numpy.where(d>0)
    g, gi, d = g[indices], gi[indices], d[indices]
    residuals = (d - 1.5*g+1.5)
    residualArr.append(residuals)
plt.hist(residualArr,stacked=True)
plt.show()

fig, ax = plt.subplots(3,1)
ax[0].hist(adist)
ax[1].hist(bdist,bins=5)
plt.show()
print(numpy.mean(adist))
print(numpy.mean(bdist))

slopeArr

fig, ax = plt.subplots()
cmap = matplotlib.cm.viridis
colors = matplotlib.cm.viridis(numpy.linspace(0, 1, len(all_inh_conductances)))
# norm = matplotlib.colors.Normalize(vmin=1, vmax=6)
for i, (g, gi, d, c) in enumerate(zip(all_conductances, all_inh_conductances, all_delays, colors)):
    g, gi, d = numpy.array(g), numpy.array(gi), numpy.array(d)
    indices = numpy.argsort(g)
    # slope, intercept, rval, pvalue, err = ss.linregress(g[indices], gi[indices])
    # print(slope)
    #cbar = ax.scatter(g,d, c= [slope]*len(g), s= 10, cmap='viridis', vmin=1.5, vmax=3.2)
    # print(g)
    gmax = numpy.ceil(max(g))
    gmin = numpy.floor(min(g))
    print(gmin, min(g), gmax, max(g))
    # bins = numpy.linspace(gmin,gmax,(gmax - gmin) +1)
    print (gmin, gmax)
    bins = numpy.arange(gmin,gmax,1)
    indices = numpy.argsort(g)
    digitized = numpy.digitize(g[indices], bins)
    # bins = range(8)
    g_means = numpy.array([g[indices][digitized == i].mean() for i in bins])
    g_err = numpy.array([g[indices][digitized == i].std() for i in bins])
    d_means = numpy.array([d[indices][digitized == i].mean() for i in bins])
    d_err = numpy.array([d[indices][digitized == i].std() for i in bins])
    finiteYmask = numpy.isfinite(g_means)
    d_means = d_means[finiteYmask]
    g_means = g_means[finiteYmask]
    d_err = d_err[finiteYmask]
    g_err = g_err[finiteYmask]
    slope, intercept, rval, pvalue, err = ss.linregress(g_means, d_means)
    ax.errorbar(g_means, d_means, xerr = g_err, yerr = d_err, linestyle='')
    cbar = ax.scatter(g_means, d_means, s=10, c=c, alpha=0.5, cmap='viridis')
    # indices = numpy.argsort(g_means)
    print(g_means, d_means, intercept, slope)
    ax.plot(g_means, intercept + slope*g_means, c=c)
plt.show()

delay_Model = lmfit.Model(delay_excitation)
delay_pars = delay_Model.make_params()
indices = numpy.argsort(flattened_g)
flattened_g = flattened_g[indices]
flattened_d_fit = delay_result.eval(x=flattened_g)
delay_result = delay_Model.fit(flattened_d, delay_pars, x=flattened_g)
fig, ax = plt.subplots()
ax.scatter(flattened_g, flattened_d, s=10, alpha=0.2,c='k')
print(delay_result.fit_report())
ax.plot(flattened_g, flattened_d_fit)
plt.show()
# slope, intercept, rval, pvalue, err = ss.linregress(flattened_g[indices], flattened_d[indices])
# x_axis = numpy.linspace(numpy.min(flattened_g), numpy.max(flattened_g), 100)
# y_axis = slope * x_axis + intercept
# ax.set_xlim(0,6)
# ax.set_ylim(-3,10)
# ax.plot(x_axis, y_axis, '--')
print ( delay_result.params['a'],delay_result.params['b'],delay_result.params['c'])

keySet = set(inh_onsets).intersection(exc_onsets)
for key in keySet:
    print (inh_onsets[key], exc_onsets[key])
_____no_output_____
MIT
Figure 6 (Conductance model), Figure 5B (Model derived inhibition).ipynb
sahilm89/PreciseBalance
Importing Libraries
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os.path as op
import pickle
import tensorflow as tf
from tensorflow import keras
from keras.models import Model,Sequential,load_model
from keras.layers import Input, Embedding
from keras.layers import Dense, Bidirectional
from keras.layers.recurrent import LSTM
import keras.metrics as metrics
import itertools
from tensorflow.python.keras.utils.data_utils import Sequence
from decimal import Decimal
from keras import backend as K
from keras.layers import Conv1D,MaxPooling1D,Flatten,Dense
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Data Fetching
A1=np.empty((0,5),dtype='float32')
U1=np.empty((0,7),dtype='float32')

node=['150','149','147','144','142','140','136','61']
mon=['Apr','Mar','Aug','Jun','Jul','Sep','May','Oct']

for j in node:
    for i in mon:
        inp= pd.read_csv('../../../data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[1,2,3,15,16])
        out= pd.read_csv('../../../data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[5,6,7,8,17,18,19])
        inp=np.array(inp,dtype='float32')
        out=np.array(out,dtype='float32')
        A1=np.append(A1, inp, axis=0)
        U1=np.append(U1, out, axis=0)

print(A1)
print(U1)
[[1.50000e+02 1.90401e+05 7.25000e+02 2.75500e+01 8.03900e+01] [1.50000e+02 1.90401e+05 8.25000e+02 2.75600e+01 8.03300e+01] [1.50000e+02 1.90401e+05 9.25000e+02 2.75800e+01 8.02400e+01] ... [6.10000e+01 1.91020e+05 1.94532e+05 2.93700e+01 7.52100e+01] [6.10000e+01 1.91020e+05 1.94632e+05 2.93500e+01 7.52700e+01] [6.10000e+01 1.91020e+05 1.94732e+05 2.93400e+01 7.53000e+01]] [[ 28. 3. -52. ... 16.97 19.63 20.06] [ 28. 15. -53. ... 16.63 19.57 23.06] [ 31. 16. -55. ... 17.24 19.98 20.24] ... [ 76. 12. -76. ... 3.47 3.95 4.35] [ 75. 13. -76. ... 3.88 4.33 4.42] [ 76. 12. -75. ... 3.46 4.07 4.28]]
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
SparsePCA Decomposition
from sklearn.decomposition import SparsePCA
import warnings

# Decompose inputs and targets onto sparse principal components
scaler_obj1=SparsePCA()
scaler_obj2=SparsePCA()
X1=scaler_obj1.fit_transform(A1)
Y1=scaler_obj2.fit_transform(U1)
warnings.filterwarnings(action='ignore', category=UserWarning)

# Add a singleton sequence axis so each sample is shaped (1, n_features) for the Conv1D model
X1=X1[:,np.newaxis,:]
Y1=Y1[:,np.newaxis,:]

# Custom Keras metrics: root-mean-squared error and coefficient of determination (R^2)
def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))

def coeff_determination(y_true, y_pred):
    SS_res = K.sum(K.square( y_true-y_pred ))
    SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
    return ( 1 - SS_res/(SS_tot + K.epsilon()) )
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Model
inp=keras.Input(shape=(1,5))
l=keras.layers.Conv1D(16,1,padding="same",activation="tanh",kernel_initializer="glorot_uniform")(inp)
output = keras.layers.Conv1D(7,4,padding="same",activation='sigmoid')(l)
model1=keras.Model(inputs=inp,outputs=output)
model1.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='binary_crossentropy',metrics=['accuracy','mse','mae',rmse])
model1.summary()

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)

history1 = model1.fit(x_train,y_train,batch_size=256,epochs=50, validation_data=(x_test, y_test),verbose = 2, shuffle= False)

model1.evaluate(x_test,y_test)
13518/13518 [==============================] - 99s 7ms/step - loss: -140.0141 - accuracy: 0.3129 - mse: 1403845.8750 - mae: 73.5453 - rmse: 139.6234
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Saving the Model and Training History to File
model1.evaluate(x_train,y_train)

df1=pd.DataFrame(history1.history['loss'],columns=["Loss"])
df1=df1.join(pd.DataFrame(history1.history["val_loss"],columns=["Val Loss"]))
df1=df1.join(pd.DataFrame(history1.history["accuracy"],columns=['Accuracy']))
df1=df1.join(pd.DataFrame(history1.history["val_accuracy"],columns=['Val Accuracy']))
df1=df1.join(pd.DataFrame(history1.history["mse"],columns=['MSE']))
df1=df1.join(pd.DataFrame(history1.history["val_mse"],columns=['Val MSE']))
df1=df1.join(pd.DataFrame(history1.history["mae"],columns=['MAE']))
df1=df1.join(pd.DataFrame(history1.history["val_mae"],columns=['Val MAE']))
df1=df1.join(pd.DataFrame(history1.history["rmse"],columns=['RMSE']))
df1=df1.join(pd.DataFrame(history1.history["val_mse"],columns=['Val RMSE']))
df1

df1.to_excel("GRU_tanh_mse.xlsx")

model_json = model1.to_json()
with open("cnn_relu.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model1.save_weights("cnn_relu.h5")
print("Saved model to disk")

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)

from keras.models import model_from_json
json_file = open('cnn_relu.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("cnn_relu.h5")
print("Loaded model from disk")

loaded_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='mse',metrics=['accuracy','mse','mae',rmse])
loaded_model.evaluate(x_train, y_train, verbose=0)
loaded_model.evaluate(x_test, y_test, verbose=0)
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Error Analysis
# summarize history for loss
plt.plot(history1.history['loss'])
plt.plot(history1.history['val_loss'])
plt.title('Model Loss',fontweight ='bold',fontsize = 15)
plt.ylabel('Loss',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

# summarize history for accuracy
plt.plot(history1.history['accuracy'])
plt.plot(history1.history['val_accuracy'])
plt.title('Model accuracy',fontweight ='bold',fontsize = 15)
plt.ylabel('Accuracy',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)

y_test_pred=loaded_model.predict(x_test)
y_test_pred
y_test

y_test=y_test[:,0]
y_test_pred=y_test_pred[:,0]

from numpy import savetxt
savetxt('cnn_y_test_pred.csv', y_test_pred[:1001], delimiter=',')

from numpy import savetxt
savetxt('cnn_y_test.csv', y_test[:1001], delimiter=',')

#completed
_____no_output_____
MIT
OPC_Sensor/Models With Decompositions/Models with SparsePCA/CNN/CNN_tanh_binary.ipynb
utkarshkanswal/Machine-Learning-application-on-Air-quality-dataset
Carreau-Yasuda
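The cells below sweep the Carreau-Yasuda parameters $k$ and $n$ at three channel velocities and plot the resulting shear-rate-dependent viscosity. The model they evaluate (the same expression that appears in the first figure title and in the `nu = ...` line) is

$$\mu(\dot{\gamma}) = \mu_{\infty} + (\mu_{0}-\mu_{\infty})\left[1+(k\dot{\gamma})^{a}\right]^{\frac{n-1}{a}},$$

where $\mu_0$ and $\mu_\infty$ are the zero- and infinite-shear viscosities, $a$ is the Yasuda exponent, and $n$ is the power-law index; the dashed vertical line marks the nominal shear rate $\dot{\gamma} = U/h$ for each velocity.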
(nu*x).shape

import pylab
import matplotlib.pyplot as plt
import numpy

def get_cmap(n,name='viridis'):
    return plt.cm.get_cmap(name,n)

a = 2
nu0=5e-3
nuinf=1e-3
h_channel = 1.2 #m
shear_rate_order = 10
x = numpy.logspace(-shear_rate_order,shear_rate_order,10000)

k=[5e-1,5e2,5e8]
n=[0.8]
fig, ax = plt.subplots(figsize=(6,4),dpi=250,facecolor='w')
velocity = 2.0 #m/s
shear_rate = velocity/h_channel #1/s
dim=len(k)*len(n)
cmap=get_cmap(dim)
right, bottom, width, height = [0.6, 0.6, 0.28, 0.25]
#ax2 = fig.add_axes([right, bottom, width, height])
for i_k in range(len(k)):
    for i_n in range(len(n)):
        index = i_k*len(n)+i_n
        nu = nuinf+(nu0-nuinf)*(1+(k[i_k]*x)**a)**((n[i_n]-1)/a)
        ax.loglog(x,nu,':',c=cmap(int(index)),mfc='w',markersize=2,label="k=%2.1e, n=%2.1e" % (k[i_k],n[i_n]))
ax.set_ylabel(r'$\mu$'+' (Pa.s)')
ax.set_xlabel(r'$\dot{\gamma} \ (\frac{1}{s})$')
ax.axvline(x=(shear_rate),c='r',lw=0.5,linestyle='--',label=(r'$\dot{\gamma} = %2.2f $')%shear_rate)
ax.legend(loc='center',bbox_to_anchor=(1.2,0.5))
ax.set_title('Carreau-Yasuda model parameters \n ' + r'$\mu=\mu_{\,\infty}+(\mu_{\,0}-\mu_{\,\infty})\big[{1+(k\dot{\gamma})^a}\big]^{\frac{{(n-1)}_{ }}{a}}$')
pylab.show()

k=[1,30,1e5]
n=[0.5]
f, ax = plt.subplots(figsize=(6,4),dpi=250,facecolor='w')
velocity = 0.5 #m/s
shear_rate = velocity/h_channel #1/s
dim=len(k)*len(n)
cmap=get_cmap(dim)
for i_k in range(len(k)):
    for i_n in range(len(n)):
        index = i_k*len(n)+i_n
        nu = nuinf+(nu0-nuinf)*(1+(k[i_k]*x)**a)**((n[i_n]-1)/a)
        ax.loglog(x,nu,':',c=cmap(int(index)),mfc='w',markersize=2,label="k=%2.1e, n=%2.1e" % (k[i_k],n[i_n]))
ax.set_ylabel(r'$\mu$'+' (Pa.s)')
ax.set_xlabel(r'$\dot{\gamma} \ (\frac{1}{s})$')
ax.axvline(x=(shear_rate),c='g',lw=0.5,linestyle='--',label=(r'$\dot{\gamma} = %2.2f $')%shear_rate)
ax.legend(loc='center',bbox_to_anchor=(1.2,0.5))
pylab.show()

k=[2,30,1e3]
n=[0.2]
f, ax = plt.subplots(figsize=(6,4),dpi=250,facecolor='w')
velocity = 0.2 #m/s
shear_rate = velocity/h_channel #1/s
dim=len(k)*len(n)
cmap=get_cmap(dim)
for i_k in range(len(k)):
    for i_n in range(len(n)):
        index = i_k*len(n)+i_n
        nu = nuinf+(nu0-nuinf)*(1+(k[i_k]*x)**a)**((n[i_n]-1)/a)
        ax.loglog(x,nu,':',c=cmap(int(index)),mfc='w',markersize=2,label="k=%2.1e, n=%2.1e" % (k[i_k],n[i_n]))
ax.set_ylabel(r'$\mu$'+' (Pa.s)')
ax.set_xlabel(r'$\dot{\gamma} (\frac{1}{s})$')
ax.axvline(x=shear_rate,c='b',lw=0.5,linestyle='--',label=(r'$\dot{\gamma} = %2.2f $')%shear_rate)
ax.legend(loc='center',bbox_to_anchor=(1.2,0.5))
pylab.show()
_____no_output_____
MIT
notebook/safa.ipynb
alineu/elastic_beams_in_shear_flow
Herschel-Bulkley
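Here the same sweep is repeated for a regularized Herschel-Bulkley law. As in the figure title and the `numpy.minimum(...)` line below, the apparent viscosity is capped at the zero-shear value $\mu_0$:

$$\mu(\dot{\gamma}) = \min\left(\mu_{0},\ \frac{\tau_{0}}{\dot{\gamma}} + k\,\dot{\gamma}^{\,n-1}\right),$$

with $\tau_0$ the yield stress, $k$ the consistency index, and $n$ the flow index. Note that the code evaluates the $\tau_0/\dot{\gamma}$ term at the fixed nominal shear rate rather than at each $\dot{\gamma}$ on the x-axis.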
import pylab
import matplotlib.pyplot as plt
import numpy

def get_cmap(n,name='viridis'):
    return plt.cm.get_cmap(name,n)

a = 2
npoints = 10000
h_channel = 1.2 #m
velocity = 2.0 #m/s
shear_rate = velocity/h_channel #1/s
nu0=5e-3*numpy.ones(npoints)
nuinf=1e-3
# Newtonian regime
tau0=shear_rate*nuinf*numpy.ones(npoints)
shear_rate_order = 10
x = numpy.logspace(-shear_rate_order,shear_rate_order,npoints)

f, ax = plt.subplots(figsize=(6,4),dpi=250,facecolor='w')
k=[4.4e-3,1.5e-3,1e-5]
n=[0.8]
dim=len(k)*len(n)
cmap=get_cmap(dim)
for i_k in range(len(k)):
    for i_n in range(len(n)):
        index = i_k*len(n)+i_n
        nu = numpy.minimum(nu0,(tau0/shear_rate)+k[i_k]*(x**(n[i_n]-1)))
        ax.loglog(x,nu,':',c=cmap(int(index)),mfc='w',markersize=2,label="k=%2.1e, n=%2.1e" % (k[i_k],n[i_n]))
ax.set_ylabel(r'$\mu$'+' (Pa.s)')
ax.set_xlabel(r'$\dot{\gamma} \ (\frac{1}{s})$')
ax.axvline(x=(shear_rate),c='r',lw=0.5,linestyle='--',label=(r'$\dot{\gamma} = %2.2f $')%shear_rate)
ax.set_title("Herschel-Bulkley model parameters \n " + r'$\mu = \mathrm{min}(\mu_0,\frac{\tau_0}{\dot{\gamma}}+k\,\dot{\gamma}^{\,n-1})$')
ax.legend(loc='center',bbox_to_anchor=(1.2,0.5))
pylab.show()

f, ax = plt.subplots(figsize=(6,4),dpi=250,facecolor='w')
k=[2.5e-3,8e-4,1e-5]
n=[0.5]
velocity = 0.5 #m/s
shear_rate = velocity/h_channel #1/s
tau0=shear_rate*nuinf*numpy.ones(npoints)
dim=len(k)*len(n)
cmap=get_cmap(dim)
for i_k in range(len(k)):
    for i_n in range(len(n)):
        index = i_k*len(n)+i_n
        nu = numpy.minimum(nu0,(tau0/shear_rate)+k[i_k]*(x**(n[i_n]-1)))
        ax.loglog(x,nu,':',c=cmap(int(index)),mfc='w',markersize=2,label="k=%2.1e, n=%2.1e" % (k[i_k],n[i_n]))
ax.set_ylabel(r'$\mu$'+' (Pa.s)')
ax.set_xlabel(r'$\dot{\gamma} \ (\frac{1}{s})$')
ax.axvline(x=(shear_rate),c='g',lw=0.5,linestyle='--',label=(r'$\dot{\gamma} = %2.2f $')%shear_rate)
ax.legend(loc='center',bbox_to_anchor=(1.2,0.5))
pylab.show()

f, ax = plt.subplots(figsize=(6,4),dpi=250,facecolor='w')
k=[9e-4,3e-4,1e-5]
n=[0.2]
velocity = 0.2 #m/s
shear_rate = velocity/h_channel #1/s
tau0=shear_rate*nuinf*numpy.ones(npoints)
shear_rate = velocity/h_channel #1/s
dim=len(k)*len(n)
cmap=get_cmap(dim)
for i_k in range(len(k)):
    for i_n in range(len(n)):
        index = i_k*len(n)+i_n
        nu = numpy.minimum(nu0,(tau0/shear_rate)+k[i_k]*(x**(n[i_n]-1)))
        ax.loglog(x,nu,':',c=cmap(int(index)),mfc='w',markersize=2,label="k=%2.1e, n=%2.1e" % (k[i_k],n[i_n]))
ax.set_ylabel(r'$\mu$'+' (Pa.s)')
ax.set_xlabel(r'$\dot{\gamma} \ (\frac{1}{s})$')
ax.axvline(x=(shear_rate),c='b',lw=0.5,linestyle='--',label=(r'$\dot{\gamma} = %2.2f $')%shear_rate)
ax.legend(loc='center',bbox_to_anchor=(1.2,0.5))
pylab.show()

(tau0/shear_rate)+k[i_k]*(x**(n[i_n]-1))

min(nu0,k[i_k]*(x**(n[i_n]-1)))
_____no_output_____
MIT
notebook/safa.ipynb
alineu/elastic_beams_in_shear_flow
Vertex client library: AutoML video action recognition model for batch prediction

Run in Colab View on GitHub

Overview

This tutorial demonstrates how to use the Vertex client library for Python to create video action recognition models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users).

Dataset

The dataset used for this tutorial is the golf swing recognition portion of the [Human Motion dataset](https://todo) from [MIT](http://cbcl.mit.edu/publications/ps/Kuehne_etal_iccv11.pdf). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model will predict the start frame where a golf swing begins.

Objective

In this tutorial, you create an AutoML video action recognition model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.

The steps performed include:

- Create a Vertex `Dataset` resource.
- Train the model.
- View the model evaluation.
- Make a batch prediction.

There is one key difference between using batch prediction and using online prediction:

* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.

Costs

This tutorial uses billable components of Google Cloud (GCP):

* Vertex AI
* Cloud Storage

Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

Installation

Install the latest version of Vertex client library.
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = '--user'
else:
    USER_FLAG = ''

! pip3 install -U google-cloud-aiplatform $USER_FLAG
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Install the latest GA version of the *google-cloud-storage* library as well.
! pip3 install -U google-cloud-storage $USER_FLAG
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Restart the kernel

Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Before you begin

GPU runtime

*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**

Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
PROJECT_ID = "[your-project-id]" #@param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Region

You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.

- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`

You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
REGION = 'us-central1' #@param {type: "string"}
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources which will be created in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Authenticate your Google Cloud account

**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.

**Otherwise**, follow these steps:

In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.

Click **Create service account**.

In the **Service account name** field, enter a name, and click **Create**.

In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.

Click **Create**. A JSON file that contains your key downloads to your local environment.

Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.

Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
BUCKET_NAME = "gs://[your-bucket-name]" #@param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Set up variables

Next, set up some variables used throughout the tutorial.

Import libraries and define constants

Import Vertex client library

Import the Vertex client library into our Python environment.
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
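The cell above imports the low-level gapic client used in the remainder of the tutorial. As a forward-looking illustration of the batch prediction step described in the Overview, here is a minimal sketch using the higher-level `google.cloud.aiplatform` SDK instead of the gapic client; the project, region, bucket, model resource name, and JSONL input file are placeholder assumptions, not values produced by this tutorial.

from google.cloud import aiplatform

# Placeholder values -- substitute your own project, region, bucket, and trained model.
aiplatform.init(project="my-project", location="us-central1", staging_bucket="gs://my-bucket")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# Submit a batch prediction job: instances are read from a JSONL file in Cloud Storage
# and the predictions are written back under a Cloud Storage prefix when the job completes.
batch_job = model.batch_predict(
    job_display_name="golf_swing_batch_prediction",
    gcs_source="gs://my-bucket/batch_instances.jsonl",
    gcs_destination_prefix="gs://my-bucket/batch_output/",
    instances_format="jsonl",
    predictions_format="jsonl",
    sync=True,
)
print(batch_job.resource_name, batch_job.state)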