Create a CTX Example

If the context (CTX) for the operation does not yet exist, it can be created using the following methods.

Creating a CTX in a List or Map:
* `listIndexCreate`: Create a list at the base list's index offset.
* `mapKeyCreate`: Create a map at the base map's key.

The following examples create a list CTX and a map CTX, then write data to each new CTX.
ArrayList<Value> newWhaleMigration = new ArrayList<Value>();
newWhaleMigration.add(Value.get(1449));
newWhaleMigration.add(Value.get("sei whale"));
newWhaleMigration.add(Value.get("Greenland"));
newWhaleMigration.add(Value.get("Gulf of Maine"));

Integer whaleIndex = 5;

HashMap<Value, Value> mapCoords3 = new HashMap<Value, Value>();
mapCoords3.put(Value.get("lat"), Value.get(95));
mapCoords3.put(Value.get("long"), Value.get(110));
Integer newObsKey = 15678;

// Create the list and map contexts and write the new data in one transaction
Record createCTX = client.operate(client.writePolicyDefault, whaleKey,
    ListOperation.insertItems(listWhaleBinName, 0, newWhaleMigration,
        CTX.listIndexCreate(whaleIndex, ListOrder.UNORDERED, true)),
    MapOperation.putItems(mapObsPolicy, mapObsBinName, mapCoords3,
        CTX.mapKeyCreate(Value.get(newObsKey), MapOrder.KEY_ORDERED))
);
Record postCreate = client.get(null, whaleKey);

System.out.println("Before, the whale migration list was: " + theRecord.getValue(listWhaleBinName) + "\n");
System.out.println("After the addition, it is: " + postCreate.getValue(listWhaleBinName) + "\n\n");
System.out.println("Before, the observation map was: " + theRecord.getValue(mapObsBinName) + "\n");
System.out.println("After the addition, it is: " + postCreate.getValue(mapObsBinName));
Before, the whale migration list was: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos]]

After the addition, it is: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], [1449, sei whale, Greenland, Gulf of Maine]]

Before, the observation map was: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}}

After the addition, it is: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}, 15678={lat=95, long=110}}
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Choosing the Return Type Options for CDTs

Operations on CDTs can return different types of data, depending on the return type specified. A return type can be combined with the INVERTED flag to return all data from the CDT that was not selected by the operation. The following are the [Return Types for Lists](https://docs.aerospike.com/apidocs/java/com/aerospike/client/cdt/ListReturnType.html) and [Maps](https://docs.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapReturnType.html).

Standard Return Type Options for CDTs

Aerospike Lists and Maps both provide the following return type options.
* `COUNT`: Return the count of items selected.
* `INDEX`: Return the index offsets of the selected items.
* `NONE`: Do not return a result.
* `RANK`: Return the value order (rank) of the selected items. If the list/map is not ordered, Aerospike will JIT-sort the list/map.
* `REVERSE_INDEX`: Return the index offsets counted from the end of the list/map.
* `REVERSE_RANK`: Return the rank from a version of the list sorted from maximum to minimum value. If the list is not ordered, Aerospike will JIT-sort the list.
* `VALUE`: Return the value for a single-item read and a list of values for a range read.

All indexes are 0-based, with the last element accessible by index -1. The following example demonstrates each possible return type from the same operation.
ArrayList<Value> lowTuple = new ArrayList<Value>();
lowTuple.add(Value.get(1400));
lowTuple.add(Value.NULL);

ArrayList<Value> highTuple = new ArrayList<Value>();
highTuple.add(Value.get(3500));
highTuple.add(Value.NULL);

Record between1400and3500 = client.operate(client.writePolicyDefault, whaleKey,
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.COUNT),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.INDEX),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.NONE),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.RANK),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.REVERSE_INDEX),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.REVERSE_RANK),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.VALUE)
);
List<?> returnWhaleRange = between1400and3500.getList(listWhaleBinName);

System.out.println("The current whale migration list is: " + postCreate.getValue(listWhaleBinName) + "\n");
System.out.println("For the whales who migrate between 1400 and 3500 miles...");
System.out.println("Return COUNT: " + returnWhaleRange.get(0));
System.out.println("Return INDEX: " + returnWhaleRange.get(1));
System.out.println("Return NONE: has no return value.");
System.out.println("Return RANK: " + returnWhaleRange.get(2));
System.out.println("Return REVERSE_INDEX: " + returnWhaleRange.get(3));
System.out.println("Return REVERSE_RANK: " + returnWhaleRange.get(4));
System.out.println("Return Values: " + returnWhaleRange.get(5));
The current whale migration list is: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], [1449, sei whale, Greenland, Gulf of Maine]]

For the whales who migrate between 1400 and 3500 miles...
Return COUNT: 3
Return INDEX: [0, 4, 5]
Return NONE: has no return value.
Return RANK: [1, 2, 3]
Return REVERSE_INDEX: [5, 1, 0]
Return REVERSE_RANK: [2, 3, 4]
Return Values: [[1420, beluga whale, Beaufort Sea, Bering Sea], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], [1449, sei whale, Greenland, Gulf of Maine]]
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Additional Return Type Options for Maps

Because Maps have a key/value structure, Aerospike provides options to return map keys or key/value pairs, in addition to values.
* `KEY`: Return the key for a single-key read and a key list for a range read.
* `KEY_VALUE`: Return key/value pairs for the selected items.

The following example demonstrates returning a key and a key/value pair.
Integer latestObsRank = -1;

Record latestWhaleObs = client.operate(client.writePolicyDefault, whaleKey,
    MapOperation.getByRank(mapObsBinName, latestObsRank, MapReturnType.KEY),
    MapOperation.getByRank(mapObsBinName, latestObsRank, MapReturnType.KEY_VALUE)
);
List<?> latestObs = latestWhaleObs.getList(mapObsBinName);

System.out.println("The current whale observations map is: " + postCreate.getValue(mapObsBinName) + "\n");
System.out.println("For the most recent observation...");
System.out.println("Return the key: " + latestObs.get(0));
System.out.println("Return key/value pair: " + latestObs.get(1));
The current whale observations map is: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}, 15678={lat=95, long=110}}

For the most recent observation...
Return the key: 15678
Return key/value pair: [15678={lat=95, long=110}]
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Invert the Operation Results for CDT Operations

Aerospike also provides the `INVERTED` flag for CDT operations. When `INVERTED` is combined with the return type using a bitwise OR, the flag instructs a list or map operation to return the return type data for the list or map elements that were not selected by the operation. In effect, the operation behaves as though a logical NOT were applied to the selection. The following example demonstrates inverted return values.
ArrayList<Value> lowTuple = new ArrayList<Value>();
lowTuple.add(Value.get(1400));
lowTuple.add(Value.NULL);

ArrayList<Value> highTuple = new ArrayList<Value>();
highTuple.add(Value.get(3500));
highTuple.add(Value.NULL);

Record between1400and3500 = client.operate(client.writePolicyDefault, whaleKey,
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.COUNT | ListReturnType.INVERTED),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.INDEX | ListReturnType.INVERTED),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.NONE | ListReturnType.INVERTED),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.RANK | ListReturnType.INVERTED),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.REVERSE_INDEX | ListReturnType.INVERTED),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.REVERSE_RANK | ListReturnType.INVERTED),
    ListOperation.getByValueRange(listWhaleBinName, Value.get(lowTuple), Value.get(highTuple), ListReturnType.VALUE | ListReturnType.INVERTED)
);
List<?> returnWhaleRange = between1400and3500.getList(listWhaleBinName);

System.out.println("The current whale migration list is: " + postCreate.getValue(listWhaleBinName) + "\n");
System.out.println("For the whales who migrate between 1400 and 3500 miles...");
System.out.println("Return INVERTED COUNT: " + returnWhaleRange.get(0));
System.out.println("Return INVERTED INDEX: " + returnWhaleRange.get(1));
System.out.println("Return INVERTED NONE: has no return value.");
System.out.println("Return INVERTED RANK: " + returnWhaleRange.get(2));
System.out.println("Return INVERTED REVERSE_INDEX: " + returnWhaleRange.get(3));
System.out.println("Return INVERTED REVERSE_RANK: " + returnWhaleRange.get(4));
System.out.println("Return INVERTED Values: " + returnWhaleRange.get(5));
The current whale migration list is: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], [1449, sei whale, Greenland, Gulf of Maine]]

For the whales who migrate between 1400 and 3500 miles...
Return INVERTED COUNT: 3
Return INVERTED INDEX: [1, 2, 3]
Return INVERTED NONE: has no return value.
Return INVERTED RANK: [0, 4, 5]
Return INVERTED REVERSE_INDEX: [4, 3, 2]
Return INVERTED REVERSE_RANK: [5, 0, 1]
Return INVERTED Values: [[13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula]]
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Highlighting how policies shape application transactions

Each data type operation has a write policy, which can be set per CDT write/put operation to optionally:
* Just-in-time sort the data being operated on.
* Apply flags that instruct Aerospike's transaction write behavior.

Create and set a MapPolicy or ListPolicy with the proper sort and write flags to change how Aerospike processes a transaction.

MapOrder and ListOrder, Just-in-time Sorting for an Operation

By default, Maps and Lists are stored unordered. There are explicit techniques to store a list or map in order. The Map data in this notebook is key sorted; refer to the code snippet creating the map data (above) for an example of this. There are examples of ordering lists in the notebook [Modeling Using Lists](./java-modeling_using_lists.ipynb). Applying a MapOrder or ListOrder affects operation performance, which can be a reason to apply one when working with data. To understand the relative worst-case time complexity of Aerospike operations, go [here for lists](https://docs.aerospike.com/docs/guide/cdt-list-performance.html) and [here for maps](https://docs.aerospike.com/docs/guide/cdt-map-performance.html). Whether to allow duplicates in a list is a function of ListOrder.

**Note:** Aerospike finds that worst-case time complexity can be helpful in prioritizing application use cases against one another, but it does not set realistic performance expectations for Aerospike Database. Worst-case figures help in asking tough questions like: "The worst-case time complexity for operation A is X; is operation A important enough to run daily, or just monthly, in light of the other workloads that are more time sensitive?"

Write Flags

The following are lists of [write flags for Lists](https://docs.aerospike.com/apidocs/java/com/aerospike/client/cdt/ListWriteFlags.html) and [Maps](https://docs.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapWriteFlags.html). Beneath each are example transactions.

A powerful use case for Aerospike is to group operations together into single-record atomic transactions using the `operate` method. This technique is used above in this notebook. When applying transactions to data, there are common circumstances where:
* All possible operations should be executed in a fault-tolerant manner.
* A specific operation failure should cause all operations to fail.

Write flags can be used in any combination, as appropriate to the application and the Aerospike operation being applied.

Write Flags for all CDTs
* `DEFAULT`
    * For Lists, allow duplicate values and insertions at any index.
    * For Maps, allow map creates or updates.
* `NO_FAIL`: Do not raise an error if a CDT item is denied due to write flag constraints.
* `PARTIAL`: Allow other valid CDT items to be committed if a CDT item is denied due to write flag constraints.

These flags provide fault tolerance to transactions. Apply some combination of the above three flags (`DEFAULT`, `NO_FAIL`, and `PARTIAL`) to operations by combining them with a bitwise OR, as demonstrated below. All other write flags set conditions for operations.

**Note:** Without `NO_FAIL`, operations that fail due to the below policies will throw [either error code 24 or 26](https://docs.aerospike.com/docs/dev_reference/error_codes.html).

Default Examples

All of the above code snippets use the Default write flag policy. These operations are unrestricted by write policies.
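For illustration, here is a minimal sketch of what those implicit defaults look like when constructed explicitly. This snippet is not from the original notebook, but it uses the same Aerospike Java client policy classes seen throughout it.

// A minimal sketch: the default policies that the earlier snippets rely on implicitly.
// Passing these policies is equivalent to calling the operations without a policy argument.
ListPolicy listDefaultPolicy = new ListPolicy(ListOrder.UNORDERED, ListWriteFlags.DEFAULT);
MapPolicy mapDefaultPolicy = new MapPolicy(MapOrder.UNORDERED, MapWriteFlags.DEFAULT);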
No Fail Examples

All of the examples in the following sections show an exception caused by a write flag, and then pair the demonstrated write flag with No Fail to show how the same operation can fail silently.

Partial Flag Example

Partial is generally used only in a transaction containing operations that use the No Fail write flag; otherwise, the transaction would contain no failures to overlook. The following example is a transaction combining failing and successful map and list operations.
// create policy to apply and data to trigger operation failure
Integer inBoundsIndex = 0;
Integer outOfBoundsIndex = 20;

HashMap<Value, Value> mapCoords4 = new HashMap<Value, Value>();
mapCoords4.put(Value.get("lat"), Value.get(0));
mapCoords4.put(Value.get("long"), Value.get(0));
Integer existingObsKey = 13456;

Integer listPartialWriteFlags = ListWriteFlags.INSERT_BOUNDED | ListWriteFlags.NO_FAIL | ListWriteFlags.PARTIAL;
ListPolicy listPartialWritePolicy = new ListPolicy(ListOrder.UNORDERED, listPartialWriteFlags);

Integer mapPartialWriteFlags = MapWriteFlags.CREATE_ONLY | MapWriteFlags.NO_FAIL | MapWriteFlags.PARTIAL;
MapPolicy mapPartialWritePolicy = new MapPolicy(MapOrder.KEY_ORDERED, mapPartialWriteFlags);

// create fresh record
Integer partialFlagKeyName = 6;
Key partialFlagKey = new Key(nestedCDTNamespaceName, nestedCDTSetName, partialFlagKeyName);
Bin bin1 = new Bin(listWhaleBinName, whaleMigration);
Record putDataIn = client.operate(null, partialFlagKey,
    Operation.put(bin1),
    MapOperation.putItems(mapObsPolicy, mapObsBinName, mapObs)
);
Record partialDataPutIn = client.get(client.writePolicyDefault, partialFlagKey);

// one failed and one successful operation for both list and map
Record partialSuccessOp = client.operate(null, partialFlagKey,
    ListOperation.insert(listPartialWritePolicy, listWhaleBinName, outOfBoundsIndex, Value.get(newWhaleMigration)),
    ListOperation.set(listPartialWritePolicy, listWhaleBinName, inBoundsIndex, Value.get(newWhaleMigration)),
    MapOperation.put(mapPartialWritePolicy, mapObsBinName, Value.get(existingObsKey), Value.get(mapCoords4)),
    MapOperation.put(mapPartialWritePolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords3))
);
Record partialSuccessData = client.get(client.writePolicyDefault, partialFlagKey);

System.out.println("Failed to add a 5th item.\nSucceeded at changing the first item.\n");
System.out.println("Original List: " + partialDataPutIn.getValue(listWhaleBinName) + "\n");
System.out.println("Updated List: " + partialSuccessData.getValue(listWhaleBinName) + "\n\n");
System.out.println("Failed to modify an existing observation.\nSucceeded at adding a new observation.\n");
System.out.println("Original Map: " + partialDataPutIn.getValue(mapObsBinName) + "\n");
System.out.println("Updated Map: " + partialSuccessData.getValue(mapObsBinName) + "\n\nFor more about the failed operations, see the examples below.");

Boolean partialExampleRecordDeleted = client.delete(null, partialFlagKey);
Failed to add a 5th item.
Succeeded at changing the first item.

Original List: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos]]

Updated List: [[1449, sei whale, Greenland, Gulf of Maine], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos]]

Failed to modify an existing observation.
Succeeded at adding a new observation.

Original Map: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}}

Updated Map: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}, 15678={lat=95, long=110}}

For more about the failed operations, see the examples below.
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Write Flags for Lists Only:
* `INSERT_BOUNDED`: Enforce list boundaries when inserting. Do not allow values to be inserted at an index outside the current list boundaries.
* `ADD_UNIQUE`: Only add unique values.

Insert Bounded Example
// create policy to apply and data to break policy
Integer outOfBoundsIndex = 20;
ListPolicy listInsertBoundedPolicy = new ListPolicy(ListOrder.UNORDERED, ListWriteFlags.INSERT_BOUNDED);
ListPolicy listBoundedNoFailPolicy = new ListPolicy(ListOrder.UNORDERED, ListWriteFlags.INSERT_BOUNDED | ListWriteFlags.NO_FAIL);

// create fresh record
Integer whaleBoundedKeyName = 7;
Bin bin1 = new Bin(listWhaleBinName, whaleMigration);
Key whaleBoundedKey = new Key(nestedCDTNamespaceName, nestedCDTSetName, whaleBoundedKeyName);
client.put(client.writePolicyDefault, whaleBoundedKey, bin1);
Record ibDataPutIn = client.get(null, whaleBoundedKey);
System.out.println("Data in the record: " + ibDataPutIn.getValue(listWhaleBinName) + "\n");

// fail for INSERT_BOUNDED
try {
    Record ibFail = client.operate(client.writePolicyDefault, whaleBoundedKey,
        ListOperation.insert(listInsertBoundedPolicy, listWhaleBinName, outOfBoundsIndex, Value.get(newWhaleMigration))
    );
    System.out.println("The code does not get here.");
}
catch(Exception e) {
    System.out.println("Out of Bounds Attempt 1: Exception caught.");
    Record ibNoFail = client.operate(client.writePolicyDefault, whaleBoundedKey,
        ListOperation.insert(listBoundedNoFailPolicy, listWhaleBinName, outOfBoundsIndex, Value.get(newWhaleMigration))
    );
    Record ibNoFailData = client.get(client.writePolicyDefault, whaleBoundedKey);
    if(ibNoFailData.getValue(listWhaleBinName).equals(ibDataPutIn.getValue(listWhaleBinName))) {
        System.out.println("Out of Bounds Attempt 2: No operation was executed. Error was suppressed by NO_FAIL.\n");
    }
}

Record noIB = client.operate(client.writePolicyDefault, whaleBoundedKey,
    ListOperation.insert(listWhaleBinName, outOfBoundsIndex, Value.get(newWhaleMigration))
);
Record noIBData = client.get(null, whaleBoundedKey);
System.out.println("Without Insert Bounded, a series of nulls is inside the Bin: " + noIBData.getValue(listWhaleBinName));
Data in the record: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos]]

Out of Bounds Attempt 1: Exception caught.
Out of Bounds Attempt 2: No operation was executed. Error was suppressed by NO_FAIL.

Without Insert Bounded, a series of nulls is inside the Bin: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, [1449, sei whale, Greenland, Gulf of Maine]]
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Add Unique Example
// create policy to apply
ListPolicy listAddUniquePolicy = new ListPolicy(ListOrder.UNORDERED, ListWriteFlags.ADD_UNIQUE);
ListPolicy listAddUniqueNoFailPolicy = new ListPolicy(ListOrder.UNORDERED, ListWriteFlags.ADD_UNIQUE | ListWriteFlags.NO_FAIL);

// create fresh record
Integer whaleAddUniqueKeyName = 8;
Bin bin1 = new Bin(listWhaleBinName, whaleMigration);
Key whaleAddUniqueKey = new Key(nestedCDTNamespaceName, nestedCDTSetName, whaleAddUniqueKeyName);
client.put(client.writePolicyDefault, whaleAddUniqueKey, bin1);
Record auDataPutIn = client.get(null, whaleAddUniqueKey);

// successful ADD_UNIQUE operation
Record auSuccess = client.operate(client.writePolicyDefault, whaleAddUniqueKey,
    ListOperation.append(listAddUniquePolicy, listWhaleBinName, Value.get(newWhaleMigration))
);
Record auSuccessData = client.get(null, whaleAddUniqueKey);
System.out.println("Data after the unique add of " + newWhaleMigration + ": " + auSuccessData.getValue(listWhaleBinName) + "\n");

// fail for 2nd ADD_UNIQUE
try {
    Record auFail = client.operate(client.writePolicyDefault, whaleAddUniqueKey,
        ListOperation.append(listAddUniquePolicy, listWhaleBinName, Value.get(newWhaleMigration))
    );
    System.out.println("The code does not get here.");
}
catch(Exception e) {
    System.out.println("Non-Unique Add 1: Exception caught.");
    Record auNoFail = client.operate(client.writePolicyDefault, whaleAddUniqueKey,
        ListOperation.append(listAddUniqueNoFailPolicy, listWhaleBinName, Value.get(newWhaleMigration))
    );
    Record auNoFailData = client.get(null, whaleAddUniqueKey);
    if(auNoFailData.getValue(listWhaleBinName).equals(auSuccessData.getValue(listWhaleBinName))) {
        System.out.println("Non-Unique Add 2: No operation was executed. Error was suppressed by NO_FAIL.\n");
    }
}

Record noAU = client.operate(client.writePolicyDefault, whaleAddUniqueKey,
    ListOperation.append(listWhaleBinName, Value.get(newWhaleMigration))
);
Record noAUData = client.get(null, whaleAddUniqueKey);
System.out.println("Without Add Unique here, the tuple for a sei whale is there 2x: " + noAUData.getValue(listWhaleBinName));
Data after the unique add of [1449, sei whale, Greenland, Gulf of Maine]: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], [1449, sei whale, Greenland, Gulf of Maine]]

Non-Unique Add 1: Exception caught.
Non-Unique Add 2: No operation was executed. Error was suppressed by NO_FAIL.

Without Add Unique here, the tuple for a sei whale is there 2x: [[1420, beluga whale, Beaufort Sea, Bering Sea], [13988, gray whale, Baja California, Chukchi Sea], [1278, north pacific right whale, Japan, Sea of Okhotsk], [5100, humpback whale, Columbia, Antarctic Peninsula], [3100, southern hemisphere blue whale, Corcovado Gulf, The Galapagos], [1449, sei whale, Greenland, Gulf of Maine], [1449, sei whale, Greenland, Gulf of Maine]]
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Write Flags for Maps Only:
* `CREATE_ONLY`: If the key already exists, the item will be denied.
* `UPDATE_ONLY`: If the key already exists, the item will be overwritten. If the key does not exist, the item will be denied.

Create Only Example
// create modify data and policy to apply
HashMap<Value, Value> mapCoords4 = new HashMap<Value, Value>();
mapCoords4.put(Value.get("lat"), Value.get(0));
mapCoords4.put(Value.get("long"), Value.get(0));

MapPolicy mapCreateOnlyPolicy = new MapPolicy(MapOrder.KEY_ORDERED, MapWriteFlags.CREATE_ONLY);
MapPolicy mapCreateOnlyNoFailPolicy = new MapPolicy(MapOrder.KEY_ORDERED, MapWriteFlags.CREATE_ONLY | MapWriteFlags.NO_FAIL);

// create fresh record
Integer obsCreateOnlyKeyName = 9;
Key obsCreateOnlyKey = new Key(nestedCDTNamespaceName, nestedCDTSetName, obsCreateOnlyKeyName);
Record putDataIn = client.operate(client.writePolicyDefault, obsCreateOnlyKey,
    MapOperation.putItems(mapObsPolicy, mapObsBinName, mapObs)
);
Record coDataPutIn = client.get(null, obsCreateOnlyKey);

// success for CREATE_ONLY
Record coSuccess = client.operate(client.writePolicyDefault, obsCreateOnlyKey,
    MapOperation.put(mapCreateOnlyPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords3))
);
Record coSuccessData = client.get(null, obsCreateOnlyKey);
System.out.println("Created record and new key " + newObsKey + ". The data is now: " + coSuccessData.getValue(mapObsBinName) + "\n");

// fail for CREATE_ONLY
try {
    Record coFail = client.operate(client.writePolicyDefault, obsCreateOnlyKey,
        MapOperation.put(mapCreateOnlyPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords4))
    );
    System.out.println("The code does not get here.");
}
catch(Exception e) {
    System.out.println("Update attempt 1: Exception caught.");
    Record coNoFail = client.operate(client.writePolicyDefault, obsCreateOnlyKey,
        MapOperation.put(mapCreateOnlyNoFailPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords4))
    );
    Record coNoFailData = client.get(null, obsCreateOnlyKey);
    if(coNoFailData.getValue(mapObsBinName).equals(coSuccessData.getValue(mapObsBinName))) {
        System.out.println("Update attempt 2: No operation was executed. Error was suppressed by NO_FAIL.\n");
    }
}

Record noCO = client.operate(client.writePolicyDefault, obsCreateOnlyKey,
    MapOperation.put(mapObsPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords4))
);
Record noCOData = client.get(null, obsCreateOnlyKey);
System.out.println("Without Create Only, the observation at 15678 is overwritten: " + noCOData.getValue(mapObsBinName));

Boolean createOnlyExampleRecordDeleted = client.delete(null, obsCreateOnlyKey);
Created record and new key 15678. The data is now: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}, 15678={lat=95, long=110}}

Update attempt 1: Exception caught.
Update attempt 2: No operation was executed. Error was suppressed by NO_FAIL.

Without Create Only, the observation at 15678 is overwritten: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}, 15678={lat=0, long=0}}
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Update Only Example
// create policy to apply
MapPolicy mapUpdateOnlyPolicy = new MapPolicy(MapOrder.KEY_ORDERED, MapWriteFlags.UPDATE_ONLY);
MapPolicy mapUpdateOnlyNoFailPolicy = new MapPolicy(MapOrder.KEY_ORDERED, MapWriteFlags.UPDATE_ONLY | MapWriteFlags.NO_FAIL);

// create Aerospike data elements for a fresh record
Integer obsUpdateOnlyKeyName = 10;
Key obsUpdateOnlyKey = new Key(nestedCDTNamespaceName, nestedCDTSetName, obsUpdateOnlyKeyName);
Record uoPutDataIn = client.operate(client.writePolicyDefault, obsUpdateOnlyKey,
    MapOperation.putItems(mapObsPolicy, mapObsBinName, mapObs)
);
Record uoDataPutIn = client.get(null, obsUpdateOnlyKey);
System.out.println("Created record: " + uoDataPutIn.getValue(mapObsBinName) + "\n");

// fail for UPDATE_ONLY
try {
    Record uoFail = client.operate(client.writePolicyDefault, obsUpdateOnlyKey,
        MapOperation.put(mapUpdateOnlyPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords3))
    );
    System.out.println("The code does not get here.");
}
catch(Exception e) {
    System.out.println("Create Attempt 1: Exception caught.");
    Record uoNoFail = client.operate(client.writePolicyDefault, obsUpdateOnlyKey,
        MapOperation.put(mapUpdateOnlyNoFailPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords3))
    );
    Record uoNoFailData = client.get(null, obsUpdateOnlyKey);
    if(uoNoFailData.getValue(mapObsBinName).equals(uoDataPutIn.getValue(mapObsBinName))) {
        System.out.println("Create Attempt 2: No operation was executed. Error was suppressed by NO_FAIL.\n");
    }
}

Record noUO = client.operate(client.writePolicyDefault, obsUpdateOnlyKey,
    MapOperation.put(mapObsPolicy, mapObsBinName, Value.get(newObsKey), Value.get(mapCoords3))
);
Record noUOData = client.get(null, obsUpdateOnlyKey);

// success for UPDATE_ONLY
Record uoSuccess = client.operate(client.writePolicyDefault, obsUpdateOnlyKey,
    MapOperation.put(mapUpdateOnlyPolicy, mapObsBinName, Value.get(existingObsKey), Value.get(mapCoords4))
);
Record uoSuccessData = client.get(null, obsUpdateOnlyKey);
System.out.println("Using update only, the value of an existing key " + existingObsKey + " can be updated: " + uoSuccessData.getValue(mapObsBinName) + "\n");

Boolean uoExampleRecordDeleted = client.delete(null, obsUpdateOnlyKey);
Created record: {12345={lat=-85, long=-130}, 13456={lat=-25, long=-50}, 14567={lat=35, long=30}}

Create Attempt 1: Exception caught.
Create Attempt 2: No operation was executed. Error was suppressed by NO_FAIL.

Using update only, the value of an existing key 13456 can be updated: {12345={lat=-85, long=-130}, 13456={lat=0, long=0}, 14567={lat=35, long=30}, 15678={lat=95, long=110}}
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Notebook Cleanup

Truncate the Set

Truncate the set in the Aerospike Database.
import com.aerospike.client.policy.InfoPolicy;

InfoPolicy infoPolicy = new InfoPolicy();
client.truncate(infoPolicy, nestedCDTNamespaceName, nestedCDTSetName, null);
System.out.println("Set Truncated.");
Set Truncated.
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
Close the Client connections to Aerospike
client.close();
System.out.println("Server connection(s) closed.");
Server connection(s) closed.
MIT
notebooks/java/java-advanced_collection_data_types.ipynb
markprincely/interactive-notebooks
MERFISH 10x comparison
import anndata
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.patches as mpatches
import scanpy as scanp
from scipy.stats import ks_2samp, ttest_ind
from scipy.sparse import csr_matrix
from scipy import stats
from scipy.spatial import ConvexHull
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
from umap import UMAP
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import LabelEncoder, normalize
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from matplotlib import cm

import warnings
warnings.filterwarnings('ignore')

import sys
sys.path.append('/home/sina/projects/mop/BYVSTZP_2020/trackfig')
from trackfig.utils import get_notebook_name
from trackfig.trackfig import trackfig

TRACKFIG = "/home/sina/projects/mop/BYVSTZP_2020/trackfig.txt"
NB = get_notebook_name()

fsize = 20
plt.rcParams.update({'font.size': fsize})
%config InlineBackend.figure_format = 'retina'

unique_map = {
    'Astrocytes': "Astro",
    'Endothelial': "Endo",
    'SMC': "SMC",
    'L23_IT': "L2/3 IT",
    'VLMC': "VLMC",
    'L6_CT': "L6 CT",
    'L45_IT': "L4/5 IT",
    'L5_PT': "L5 PT",
    'L5_IT': "L5 IT",
    'Sst': "Sst",
    'L6_IT': "L6 IT",
    'Sncg': "Sncg",
    'L6_IT_Car3': "L6 IT Car3",
    'Vip': "Vip",
    'L56_NP': "L5/6 NP",
    'Pvalb': "Pvalb",
    'L6b': "L6b",
    'Lamp5': "Lamp5"
}
inv_map = {v: k for k, v in unique_map.items()}

cluster_cmap = {
    "Astro": (0.38823529411764707, 0.4745098039215686, 0.2235294117647059),    # 637939
    "Endo": (0.5490196078431373, 0.6352941176470588, 0.3215686274509804),      # 8ca252
    "SMC": (0.7098039215686275, 0.8117647058823529, 0.4196078431372549),       # b5cf6b
    "VLMC": (0.807843137254902, 0.8588235294117647, 0.611764705882353),        # cedb9c
    "Low Quality": (0, 0, 0),
    "L2/3 IT": (0.9921568627450981, 0.6823529411764706, 0.4196078431372549),   # fdae6b
    "L5 PT": (0.9921568627450981, 0.8156862745098039, 0.6352941176470588),     # fdd0a2
    "L5 IT": (0.5176470588235295, 0.23529411764705882, 0.2235294117647059),    # 843c39
    "L5/6 NP": "#D43F3A",
    "L6 CT": (0.8392156862745098, 0.3803921568627451, 0.4196078431372549),     # d6616b
    "L6 IT": (0.9058823529411765, 0.5882352941176471, 0.611764705882353),      # e7969c
    "L6b": (1.0, 0.4980392156862745, 0.054901960784313725),                    # ff7f0e
    "L6 IT Car3": (1.0, 0.7333333333333333, 0.47058823529411764),              # ffbb78
    "Lamp5": (0.19215686274509805, 0.5098039215686274, 0.7411764705882353),    # 3182bd (blues)
    "Sncg": (0.4196078431372549, 0.6823529411764706, 0.8392156862745098),      # 6baed6
    "Vip": (0.6196078431372549, 0.792156862745098, 0.8823529411764706),        # 9ecae1
    "Sst": (0.7764705882352941, 0.8588235294117647, 0.9372549019607843),       # c6dbef
    "Pvalb": (0.7372549019607844, 0.7411764705882353, 0.8627450980392157),     # bcbddc
}

def trim_axs(axs, N):
    """Little helper to massage the axs list to have the correct length."""
    axs = axs.flat
    for ax in axs[N:]:
        ax.remove()
    return axs[:N]

def split_by_target(mat, targets, target, axis=0):
    """
    Split the rows (or columns) of mat by the proper assignment.

    mat : ndarray
    targets : length equal to number of components (axis=0) or features (axis=1)
    target : a singular element from unique(assignments/features)
    """
    if axis == 0 and len(targets) != mat.shape[axis]:
        return -1
    if axis == 1 and len(targets) != mat.shape[axis]:
        return -1

    mask = targets == target

    if axis == 0:
        t_mat = mat[mask]      # target matrix
        c_mat = mat[~mask]     # complement matrix
    elif axis == 1:
        t_mat = mat[:, mask]   # target matrix
        c_mat = mat[:, ~mask]  # complement matrix

    return (t_mat, c_mat)

def group_mtx_by_cluster(mtx, components, features, s2t, source_id="cell_id",
                         target_id="subclass_label", by="components"):
    """
    mtx : ndarray, components by features
    components : labels for rows of mtx
    features : labels for columns of mtx
    s2t : pandas dataframe mapping source (features or components) to the
        targets to group by
    target_id : column name in s2t to group by
    """
    if target_id not in s2t.columns:
        return -1

    ncomp = components.shape[0]
    nfeat = features.shape[0]
    ntarget = s2t[target_id].nunique()

    if by == "features":
        source = features
    elif by == "components":
        source = components

    # Map the source to an index
    source2idx = dict(zip(source, range(len(source))))
    # Map the target to a list of source indices
    target2idx = (s2t.groupby(target_id)[source_id]
                  .apply(lambda x: [source2idx[i] for i in x])).to_dict()

    # array of unique targets
    unique = s2t[target_id].unique().astype(str)
    nuniq = unique.shape[0]

    X = np.zeros((nuniq, mtx.shape[1]))
    for tidx, t in enumerate(unique):
        # Grab the matrix indices corresponding to the source rows to group by
        source_indices = target2idx[t]
        # print(source_indices)  # breaks generality
        sub_mtx = mtx[source_indices, :].mean(axis=0)  # Mean over source indices
        X[tidx, :] = sub_mtx  # place the averaged vector in the new matrix

    # Return matrix that is grouped by target
    return (X, components, unique)

def nd(arr):
    return np.asarray(arr).reshape(-1)

mfish = anndata.read_h5ad("../../data/notebook/revision/merfish-updated.h5ad")
mfish.obs["tenx_subclass"] = mfish.obs["subclass"].apply(lambda x: unique_map.get(x, "None"))
mfish = mfish[mfish.obs.tenx_subclass != "None"]

md = pd.read_csv("../../reference/10xv3_cluster_labels/sample_metadata.csv", index_col=0)
md["sex"] = md["Gender"].apply(lambda x: {"Male": "M", "Female": "F"}.get(x, "X"))

tenx = anndata.read_h5ad("../../data/notebook/revision/10xv3_gene.h5ad")
tenx.obs["date"] = tenx.obs.index.map(md["Amp_Date"])
tenx.obs["sex"] = tenx.obs.index.map(md["sex"])

tenx = tenx[:, tenx.var.gene_short_name.isin(mfish.var.index)]
tenx.var.index = tenx.var.gene_short_name.values

#tenx = tenx[tenx.obs.eval("date == '11/29/2018'").values]  # males
#tenx = tenx[tenx.obs.eval("date == '12/7/2018'").values]   # females
tenx = tenx[tenx.obs.eval("date == '4/26/2019'").values]    # females and males
#tenx = tenx[tenx.obs.subclass_label != "Low Quality"]

md.groupby("Amp_Date")["sex"].value_counts()

print(tenx)
print(mfish)

tenx.obs.subclass_label.value_counts()
mfish.obs.subclass.value_counts()
_____no_output_____
BSD-2-Clause
analysis_archive/notebooks/final-cmp_merfish_v_10x.ipynb
nmarkari/BYVSTZP_2020
Process
from sklearn.preprocessing import normalize

tenx.layers["X"] = tenx.X
tenx.layers["norm"] = normalize(tenx.X, norm='l1', axis=1) * 1000000
tenx.layers["log1p"] = csr_matrix(np.log1p(tenx.layers["norm"]))

from sklearn.preprocessing import scale

%%time
mat = tenx.layers["log1p"].todense()
mtx = scale(mat, axis=0, with_mean=True, with_std=True, copy=True)
tenx.X = mtx
del mat
_____no_output_____
BSD-2-Clause
analysis_archive/notebooks/final-cmp_merfish_v_10x.ipynb
nmarkari/BYVSTZP_2020
Cluster comparisons
tenx = tenx[:, tenx.var.sort_index().index]
mfish = mfish[:, mfish.var.sort_index().index]

tenx.var.head()
mfish.var.head()

mfish_mat = mfish.X
mfish_ass = mfish.obs.tenx_subclass.values

tenx_mat = tenx.X
tenx_ass = tenx.obs.subclass_label.values

features = mfish.var.index.values
unique = np.intersect1d(np.unique(mfish_ass), np.unique(tenx_ass))

%%time
rvals = []
tenx_x = []
mfish_x = []
for uidx, u in enumerate(unique):
    mfish_t_mat, _ = split_by_target(mfish_mat, mfish_ass, u)
    tenx_t_mat, _ = split_by_target(tenx_mat, tenx_ass, u)

    mf = np.asarray(mfish_t_mat.mean(axis=0)).reshape(-1)
    t = np.asarray(tenx_t_mat.mean(axis=0)).reshape(-1)

    tenx_x.append(t)
    mfish_x.append(mf)

    r, p = stats.pearsonr(mf, t)
    rvals.append(r)
    print("[{} of {}] {:,.2f}: {}".format(uidx + 1, unique.shape[0], r, u))

tenx_size = tenx.obs["subclass_label"].value_counts()[unique]

fig, ax = plt.subplots(figsize=(10, 7))
x = tenx_size
y = rvals
for i, txt in enumerate(unique):
    ax.annotate(i, (x[i], y[i]))
    ax.scatter(x[i], y[i], label="{}: {}".format(i, txt), color=cluster_cmap[txt])
ax.set_ylim((0, 1))
ax.set_xscale("log")
ax.set_xlabel("Number of 10xv3 cells")
ax.set_ylabel("Pearson correlation")
ax.legend(fontsize=15, loc='center left', bbox_to_anchor=(1, 0.5), markerscale=3)
ax.set_title("MERFISH v. 10xv3 gene subclass correlation")
plt.savefig(trackfig("../../figures/merfish-updated_10x_gene_subclass_size.png", TRACKFIG, NB),
            bbox_inches='tight', dpi=300)
plt.show()

# males
males = pd.DataFrame({"subclass": unique.tolist(),
                      "rvals": rvals,
                      "size": tenx.obs.subclass_label.value_counts()[unique]})
males

fig, ax = plt.subplots(figsize=(15, 15), ncols=4, nrows=5)
fig.subplots_adjust(hspace=0, wspace=0)
axs = trim_axs(ax, len(unique))
fig.suptitle('MERFISH v. 10xv3 gene subclass correlation', y=0.9)
#fig.subplots_adjust(top=1)
for cidx, (ax, c) in enumerate(zip(axs, unique)):
    x = tenx_x[cidx]
    y = mfish_x[cidx]
    ax.scatter(x, y, label="{}: {:,}".format(c, tenx_size[cidx]), color="k", alpha=0.1)

    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    minx = min(x)
    maxx = max(x)
    x = np.linspace(minx, maxx, 10)
    y = slope * x + intercept
    ax.plot(x, y, label="corr : {:,.2f}".format(r_value**2), color="red", linewidth=3)

    ax.legend(fontsize=15)
    ax.xaxis.set_ticklabels([])
    ax.yaxis.set_ticklabels([])
    ax.set_axis_off()

fig.text(0.5, 0.1, '10xv3 scaled $log(TPM+1)$', ha='center', va='center', fontsize=30)
fig.text(0.1, 0.5, 'MERFISH scaled $log(CPM+1)$', ha='center', va='center',
         rotation='vertical', fontsize=30)
plt.savefig(trackfig("../../figures/merfish-updated_10x_gene_subclass_correlation_scatter.png", TRACKFIG, NB),
            bbox_inches='tight', dpi=300)
plt.show()

tenx[tenx.obs.subclass_label == "L5 IT"].obs.cluster_label.value_counts()
mfish[mfish.obs.subclass == "L5_IT"].obs.label.value_counts()

rvals
unique.tolist()
_____no_output_____
BSD-2-Clause
analysis_archive/notebooks/final-cmp_merfish_v_10x.ipynb
nmarkari/BYVSTZP_2020
Random sampling of parameters

(c) 2019 Manuel Razo. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).

---
import os
import itertools
import pickle
import cloudpickle
import re
import glob
import git

# Our numerical workhorses
import numpy as np
import pandas as pd
import scipy as sp

# Import library to perform maximum entropy fits
from maxentropy.skmaxent import FeatureTransformer, MinDivergenceModel

# Import libraries to parallelize processes
from joblib import Parallel, delayed

# Import matplotlib stuff for plotting
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl

# Seaborn, useful for graphics
import seaborn as sns

# Increase DPI of displayed figures
%config InlineBackend.figure_format = 'retina'

# Import the project utils
import ccutils

# Find home directory for repo
repo = git.Repo("./", search_parent_directories=True)
homedir = repo.working_dir

# Define directories for data and figures
figdir = f'{homedir}/fig/MaxEnt_approx_joint/'
datadir = f'{homedir}/data/csv_maxEnt_dist'

# Set PBoC plotting format
ccutils.viz.set_plotting_style()
# Increase dpi
mpl.rcParams['figure.dpi'] = 110
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
$\LaTeX$ macros

$\newcommand{\kpon}{k^p_{\text{on}}}$
$\newcommand{\kpoff}{k^p_{\text{off}}}$
$\newcommand{\kron}{k^r_{\text{on}}}$
$\newcommand{\kroff}{k^r_{\text{off}}}$
$\newcommand{\rm}{r_m}$
$\newcommand{\rp}{r_p}$
$\newcommand{\gm}{\gamma_m}$
$\newcommand{\gp}{\gamma_p}$
$\newcommand{\mm}{\left\langle m \right\rangle}$
$\newcommand{\foldchange}{\text{fold-change}}$
$\newcommand{\ee}[1]{\left\langle #1 \right\rangle}$
$\newcommand{\var}[1]{\text{Var}\left( #1 \right)}$
$\newcommand{\bb}[1]{\mathbf{#1}}$
$\newcommand{\th}[1]{\text{th}}$

Variability in the kinetic parameters

An idea that could explain the systematic deviation between our theoretical predictions and the data is the stochasticity that could be associated with random variation of the kinetic parameters. For example, if cells happen to stochastically have different numbers of ribosomes, how much would that affect the final distribution? Another good example is the variability in repressor copy number, which would affect the $\kron$ rate.

To simplify things, what we will do is sample random variations to some of the kinetic parameters, run the dynamics with such parameters, and then reconstruct the corresponding MaxEnt distribution. We will then combine all of these distributions to see how different the result is compared to the one obtained with single parameter values.

Unregulated promoter parameter variation

Let's begin with the unregulated promoter. The parameters here are $\kpon, \kpoff, r_m, \gm, r_p$, and $\gp$. The simplest scenario would be to sample variations out of a Gaussian distribution. We will set these distributions to be centered at the current value of the parameter we are using, and allow a variation of some defined percentage.

Let's define a function that, given an array of parameters, samples random variations.
def param_normal_sample(param, n_samples, std=0.2):
    '''
    Function that samples variations to the parameter values out of a
    normal distribution.

    Parameters
    ----------
    param : array-like.
        List of parameters from which the samples will be generated.
    n_samples : int.
        Number of random samples to draw from the distribution.
    std : float or array-like.
        Fractional standard deviations for each of the samples to be
        taken. If a single value is given, then all of the distributions
        will have the same standard deviation proportional to the mean.

    Returns
    -------
    samples : array-like. Shape = len(param) x n_samples
        Random samples of the parameters.
    '''
    # Promote a scalar std to a one-element list so that both branches
    # below work (len() would fail on a bare float)
    if np.isscalar(std):
        std = [std]

    # Initialize array to save output
    samples = np.zeros([n_samples, len(param)])

    # Loop through parameters
    for i, par in enumerate(param):
        if len(std) == len(param):
            samples[:, i] = np.random.normal(par, par * std[i], n_samples)
        elif len(std) == 1:
            samples[:, i] = np.random.normal(par, par * std[0], n_samples)

    return samples
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Let's now load the parameters and generate random samples.
# Load parameter values
par = ccutils.model.load_constants()

# Define parameters for the unregulated promoter
par_names = ['kp_on', 'kp_off', 'rm', 'gm', 'rp']
param = [par[x] for x in par_names]

# Generate samples of all parameters with a 15% variability
n_samples = 999
std = [0.15]
param_sample = param_normal_sample(param, n_samples, std)

# Add reference parameters to the list
param_sample = np.append(np.array([[*param]]), param_sample, axis=0)
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Having sampled the parameters, let's go ahead and run the dynamics for each of these parameter sets. First we need to load the matrix that computes the moments of the distribution after cell division as a function of the moments before cell division.
# Read matrix into memory
with open(f'{homedir}/src/theory/pkl_files/binom_coeff_matrix.pkl', 'rb') as file:
    unpickler = pickle.Unpickler(file)
    Z_mat = unpickler.load()
    expo_binom = unpickler.load()
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Now let's load the matrix to compute the dynamics of the unregulated two-state promoter.
with open('../pkl_files/two_state_protein_dynamics_matrix.pkl', 'rb') as file:
    A_mat_unreg_lam = cloudpickle.load(file)
    expo_unreg = cloudpickle.load(file)
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Next let's define all of the parameters that we will need for the integration.
# Define doubling time
doubling_time = 100
# Define fraction of cell cycle spent with one copy
t_single_frac = 0.6
# Define time for single-promoter state
t_single = 60 * t_single_frac * doubling_time  # sec
t_double = 60 * (1 - t_single_frac) * doubling_time  # sec
n_cycles = 6

# Define names for dataframe columns
names = par_names + ['m' + str(m[0]) + 'p' + str(m[1]) for m in expo_unreg]

# Initialize DataFrame to save constraints
df_moments = pd.DataFrame([], columns=names)
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Now we are ready to run the dynamics in parallel. Let's define the function so that we can perform this numerical integration in parallel.
compute_dynamics = False

if compute_dynamics:
    # Define function for parallel computation
    def constraints_parallel(par):
        kp_on = par[0]
        kp_off = par[1]
        rm = par[2]
        gm = par[3]
        rp = par[4]

        # Single promoter
        gp_init = 1 / (60 * 60)
        rp_init = 500 * gp_init

        # Generate matrices for dynamics
        # Single promoter
        par_unreg_s = [kp_on, kp_off, rm, gm, rp, 0]
        # Two promoters
        par_unreg_d = [kp_on, kp_off, 2 * rm, gm, rp, 0]

        # Initial conditions
        A_unreg_s_init = A_mat_unreg_lam(kp_on, kp_off, rm, gm, rp_init, gp_init)

        # Define initial conditions
        mom_init = np.zeros(len(expo_unreg) * 2)
        # Set initial condition for zero moment, since this needs to add up to 1
        mom_init[0] = 1

        # Define time on which to perform integration
        t = np.linspace(0, 4000 * 60, 10000)

        # Numerically integrate equations
        m_init = sp.integrate.odeint(ccutils.model.rhs_dmomdt, mom_init, t,
                                     args=(A_unreg_s_init,))
        # Keep last time point as initial condition
        m_init = m_init[-1, :]

        # Integrate moment equations
        df = ccutils.model.dmomdt_cycles(m_init, t_single, t_double,
                                         A_mat_unreg_lam,
                                         par_unreg_s, par_unreg_d,
                                         expo_unreg, n_cycles, Z_mat,
                                         states=["A", "I"], n_steps=3000)

        # Keep only the last cycle
        df = df[df["cycle"] == df["cycle"].max()]

        # Extract time of last cell cycle
        time = np.sort(df["time"].unique())
        # Compute the time differences
        time_diff = np.diff(time)
        # Compute the cumulative time difference
        time_cumsum = np.cumsum(time_diff)
        time_cumsum = time_cumsum / time_cumsum[-1]

        # Define array for spacing of cell cycle
        a_array = np.zeros(len(time))
        a_array[1:] = time_cumsum

        # Compute probability based on this array
        p_a_array = np.log(2) * 2 ** (1 - a_array)

        # Initialize list to append moments
        moms = list()
        # Loop through moments computing the average moment
        for i, mom in enumerate(expo_unreg):
            # Generate string that finds the moment
            mom_name = "m" + str(mom[0]) + "p" + str(mom[1])
            # List rows with moment
            mom_bool = [x for x in df.columns if mom_name in x]
            # Extract data for this particular moment
            df_mom = df.loc[:, mom_bool].sum(axis=1)
            # Average moment and append it to list
            moms.append(sp.integrate.simps(df_mom * p_a_array, a_array))

        # Save results into series in order to append it to data frame
        series = pd.Series(list(par) + moms, index=names)
        return series

    # Run function in parallel
    constraint_series = Parallel(n_jobs=6)(
        delayed(constraints_parallel)(par) for par in param_sample
    )

    # Initialize data frame to save list of parameters
    df_moments = pd.DataFrame([], columns=names)
    for s in constraint_series:
        df_moments = df_moments.append(s, ignore_index=True)

    df_moments.to_csv(f"{homedir}/data/csv_maxEnt_dist/" + "MaxEnt_unreg_random.csv",
                      index=False)

df_moments = pd.read_csv(f"{homedir}/data/csv_maxEnt_dist/" + "MaxEnt_unreg_random.csv")
df_moments.head()
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Let's look at the distribution of means and standard deviations in mRNA count for these variations in parameters.
# Compute mRNA standard deviations
mRNA_std = np.sqrt(df_moments.m2p0 - df_moments.m1p0**2)

# Initialize figure
fig, ax = plt.subplots(1, 2, figsize=(7, 3))

# Generate ECDF for mean
x, y = ccutils.stats.ecdf(df_moments.m1p0)
ax[0].plot(x, y, lw=0, marker='.')
# add reference line
ax[0].axvline(df_moments.m1p0[0], color='black', linestyle='--')
# label axis
ax[0].set_xlabel(r'$\left\langle \right.$mRNA/cell$\left. \right\rangle$')
ax[0].set_ylabel('ECDF')

# Generate ECDF for standard deviation
x, y = ccutils.stats.ecdf(mRNA_std)
ax[1].plot(x, y, lw=0, marker='.')
# add reference line
ax[1].axvline(mRNA_std[0], color='black', linestyle='--')
# label axis
ax[1].set_xlabel('STD(mRNA/cell)')
ax[1].set_ylabel('ECDF');
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
There is quite a lot of variability compared to the reference value. Let's repeat these plots, but this time for the protein values.
# Compute protein standard deviations
protein_std = np.sqrt(df_moments.m0p2 - df_moments.m0p1**2)

# Initialize figure
fig, ax = plt.subplots(1, 2, figsize=(7, 3))

# Generate ECDF for mean
x, y = ccutils.stats.ecdf(df_moments.m0p1)
ax[0].plot(x, y, lw=0, marker=".")
# add reference line
ax[0].axvline(df_moments.m0p1[0], color="black", linestyle="--")
# label axis
ax[0].set_xlabel(r"$\left\langle \right.$protein/cell$\left. \right\rangle$")
ax[0].set_ylabel("ECDF")

# Generate ECDF for standard deviation
x, y = ccutils.stats.ecdf(protein_std)
ax[1].plot(x, y, lw=0, marker=".")
# add reference line
ax[1].axvline(protein_std[0], color="black", linestyle="--")
# label axis
ax[1].set_xlabel("STD(protein/cell)")
ax[1].set_ylabel("ECDF")
_____no_output_____
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
Moments of the conditional distribution

Let's now compare the mean, variance, and skewness of the resulting distribution. For this, all we have to use is the so-called [law of total expectation](https://en.wikipedia.org/wiki/Law_of_total_expectation), which states that

$$\ee{f(p)} = \ee{\ee{f(p) \mid \theta}_p}_\theta,$$

i.e. to compute the expected value of the function $f(p)$ (which could be something like $f(p) = p^2$), we first compute the expected value of the function for a given parameter set $\theta$, and then average that expected value over all values of $\theta$.

Let's first compare the resulting mean protein copy numbers for the original value and the one that includes the variability.
mean_delta = df_moments.m0p1[0]
mean_sample = df_moments.m0p1.mean()

print(f'mean delta: {np.round(mean_delta, 0)}')
print(f'mean sample: {np.round(mean_sample, 0)}')
print(f'fractional change: {(mean_sample - mean_delta) / mean_delta}')
mean delta: 7733.0
mean sample: 8075.0
fractional change: 0.04428981949722924
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
There is an increase of roughly 4%, which is pretty small. Let's now look at the variance.
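Concretely (a short restatement, using the law of total expectation and the $\ee{\cdot}$ notation from above), the mixture variance computed in the next cell combines the averaged second and first moments:

$$\var{p} = \ee{p^2} - \ee{p}^2 = \ee{\ee{p^2 \mid \theta}}_\theta - \left(\ee{\ee{p \mid \theta}}_\theta\right)^2,$$

where each $\ee{\ee{p^k \mid \theta}}_\theta$ is estimated by averaging the corresponding `m0pk` column of `df_moments` over all sampled parameter sets.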
var_delta = df_moments.m0p2[0] - df_moments.m0p1[0]**2
var_sample = df_moments.m0p2.mean() - df_moments.m0p1.mean()**2

print(f'variance delta: {np.round(var_delta, 0)}')
print(f'variance sample: {np.round(var_sample, 0)}')
print(f'fractional change: {(var_sample - var_delta) / var_delta}')
variance delta: 2607605.0
variance sample: 10714739.0
fractional change: 3.1090341346727075
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
The change in the variance is quite large! Let's see how this is reflected in the noise (std/mean).
noise_delta = np.sqrt(var_delta) / mean_delta
noise_sample = np.sqrt(var_sample) / mean_sample

print(f'noise delta: {np.round(noise_delta, 2)}')
print(f'noise sample: {np.round(noise_sample, 2)}')
noise delta: 0.21
noise sample: 0.41
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
The noise differs by a factor of two. That is quite interesting, since it matches the systematic deviation we saw in the data. Let's now see how the skewness changes.
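For reference, the skewness computed in the next cell is the third standardized moment, expanded in terms of the raw moments stored in `df_moments` (this is a standard identity, spelled out here rather than taken from the original notebook):

$$\text{skew}(p) = \frac{\ee{\left(p - \ee{p}\right)^3}}{\var{p}^{3/2}} = \frac{\ee{p^3} - 3\,\ee{p}\var{p} - \ee{p}^3}{\var{p}^{3/2}}.$$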
skew_delta = (df_moments.m0p3[0]
              - 3 * mean_delta * var_delta
              - mean_delta**3) / var_delta**(3 / 2)
skew_sample = (df_moments.m0p3.mean()
               - 3 * mean_sample * var_sample
               - mean_sample**3) / var_sample**(3 / 2)

print(f"skewness delta: {np.round(skew_delta, 2)}")
print(f"skewness sample: {np.round(skew_sample, 2)}")
skewness delta: 0.71
skewness sample: 1.26
MIT
src/theory/sandbox/random_parameter_sampling.ipynb
RPGroup-PBoC/chann_cap
https://spinningup.openai.com/en/latest/algorithms/ppo.html
%pylab inline
import random
import time

import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.models import Model, clone_model
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.layers import Input, Dense, Activation, Lambda
import gym

#env = gym.make("CartPole-v1")
env = gym.make("Lander")
env.observation_space, env.action_space, type(env.action_space)
_____no_output_____
MIT
.ipynb_checkpoints/Lunarlander-checkpoint.ipynb
ezztherose/notebooks
Time Series Cross Validation: Holt-Winters Exponential Smoothing with additive errors and seasonality.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')

# forecast error metrics
from forecast_tools.metrics import (mean_absolute_scaled_error,
                                    root_mean_squared_error,
                                    symmetric_mean_absolute_percentage_error)

import statsmodels as sm
from statsmodels.tsa.statespace.exponential_smoothing import ExponentialSmoothing

import warnings
warnings.filterwarnings('ignore')

print(sm.__version__)

# ensemble learning
from amb_forecast.ensemble import (Ensemble, UnweightedVote)
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Data Input The constants `TOP_LEVEL`, `STAGE`, `REGION` and `METHOD` are used to control data selection and the directory for outputting results. > The output file is `f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv'`, where `metric` will be smape, rmse, mase, coverage_80 or coverage_95. Note: `REGION` is also used to select the correct column from the input dataframe.
TOP_LEVEL = '../../../results/model_selection' STAGE = 'stage1' REGION = 'Trust' METHOD = 'hw' FILE_NAME = 'Daily_Responses_5_Years_2019_full.csv' #split training and test data. TEST_SPLIT_DATE = '2019-01-01' #second subdivide: train and val VAL_SPLIT_DATE = '2017-07-01' #discard data after 2020 due to coronavirus #this is the subject of a separate study. DISCARD_DATE = '2020-01-01' #read in path path = f'../../../data/{FILE_NAME}' def pre_process_daily_data(path, index_col, by_col, values, dayfirst=False): ''' Daily data is stored in long format. Read in and pivot to wide format so that there is a single column for each region's time series. ''' df = pd.read_csv(path, index_col=index_col, parse_dates=True, dayfirst=dayfirst) df.columns = map(str.lower, df.columns) df.index.rename(str(df.index.name).lower(), inplace=True) clean_table = pd.pivot_table(df, values=values.lower(), index=[index_col.lower()], columns=[by_col.lower()], aggfunc=np.sum) clean_table.index.freq = 'D' return clean_table clean = pre_process_daily_data(path, 'Actual_dt', 'ORA', 'Actual_Value', dayfirst=False) clean.head()
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Train Test Split
def ts_train_test_split(data, split_date): ''' Split time series into training and test data Parameters: ------- data - pd.DataFrame - time series data. Index expected as a DatetimeIndex split_date - the date on which to split the time series Returns: -------- tuple (len=2) 0. pandas.DataFrame - training dataset 1. pandas.DataFrame - test dataset ''' train = data.loc[data.index < split_date] test = data.loc[data.index >= split_date] return train, test train, test = ts_train_test_split(clean, split_date=TEST_SPLIT_DATE) #exclude data after 2020 due to coronavirus. test, discard = ts_train_test_split(test, split_date=DISCARD_DATE) #split train into train and validation train, val = ts_train_test_split(train, split_date=VAL_SPLIT_DATE) train.shape val.shape
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Test fitting and predicting with the model. The class below is a 'wrapper' class that provides the same interface for all methods and works with the time series cross-validation code.
class ExponentialSmoothingWrapper: ''' Facade for statsmodels exponential smoothing models. This wrapper provides a common interface for all models and allows interop with the custom time series cross validation code. ''' def __init__(self, trend=False, damped_trend=False, seasonal=None): self._trend = trend self._seasonal = seasonal self._damped_trend = damped_trend def _get_resids(self): return self._fitted.resid def _get_preds(self): return self._fitted.fittedvalues def fit(self, train): ''' Fit the model Parameters: train: array-like time series to fit. ''' self._model = ExponentialSmoothing(endog=train, trend=self._trend, damped_trend=self._damped_trend, seasonal=self._seasonal) self._fitted = self._model.fit() self._t = len(train) def predict(self, horizon, return_conf_int=False, alpha=0.2): ''' Forecast the time series from the final point in the fitted series. Parameters: ---------- horizon: int steps ahead to forecast return_conf_int: bool, optional (default=False) Return prediction interval? alpha: float Used if return_conf_int=True. 100(1-alpha) interval. ''' forecast = self._fitted.get_forecast(horizon) mean_forecast = forecast.summary_frame()['mean'].to_numpy() if return_conf_int: df = forecast.summary_frame(alpha=alpha) pi = df[['mean_ci_lower', 'mean_ci_upper']].to_numpy() return mean_forecast, pi else: return mean_forecast fittedvalues = property(_get_preds) resid = property(_get_resids)
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Example fitting and prediction with the combined (ensemble) model
model_1 = ExponentialSmoothingWrapper(trend=True, damped_trend=True, seasonal=7) estimators = {'shw': model_1} ens = Ensemble(estimators, UnweightedVote()) ens.fit(train[REGION]) H = 5 ens_preds = ens.predict(horizon=H) ens_preds, pi = ens.predict(horizon=H, return_conf_int=True) ens_preds pi
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Time Series Cross Validation `time_series_cv` implements rolling forecast origin cross-validation for time series. It does not calculate forecast error, but instead returns the predictions, prediction intervals and actuals in arrays that can be passed to any forecast error function. (This is for efficiency and allows additional metrics to be calculated if needed.)
def time_series_cv(model, train, val, horizons, alpha=0.2, step=1): ''' Time series cross validation across multiple horizons for a single model. Incrementally adds additional training data to the model and tests across a provided list of forecast horizons. Note that the function tests a model only against complete validation sets. E.g. if horizon = 15 and len(val) = 12 then no testing is done. In the case of multiple horizons e.g. [7, 14, 28] the function will use the maximum forecast horizon to calculate the number of iterations, i.e. if len(val) = 365 and step = 1 then no. iterations = len(val) - max(horizon) = 365 - 28 = 337. Parameters: -------- model - forecasting model train - np.array - vector of training data val - np.array - vector of validation data horizons - list of ints, forecast horizons e.g. [7, 14, 28] days alpha -- float, produces 100(1-alpha)% prediction intervals (default=0.2) step -- step taken in cross validation e.g. 1 in next cross validation training data includes next point from the validation set. e.g. 7 in the next cross validation training data includes next 7 points (default=1) Returns: ------- tuple (cv_preds, cv_actuals, cv_pis) - lists of the mean forecasts, actuals and prediction intervals for each CV split and horizon. ''' cv_preds = [] #mean forecast cv_actuals = [] # actuals cv_pis = [] #prediction intervals split = 0 print('split => ', end="") for i in range(0, len(val) - max(horizons) + 1, step): split += 1 print(f'{split}, ', end="") train_cv = np.concatenate([train, val[:i]], axis=0) model.fit(train_cv) #predict the maximum horizon preds, pis = model.predict(horizon=len(val[i:i+max(horizons)]), return_conf_int=True, alpha=alpha) cv_h_preds = [] cv_test = [] cv_h_pis = [] for h in horizons: #store the h-step prediction cv_h_preds.append(preds[:h]) #store the h-step actual value cv_test.append(val.iloc[i:i+h]) cv_h_pis.append(pis[:h]) cv_preds.append(cv_h_preds) cv_actuals.append(cv_test) cv_pis.append(cv_h_pis) print('done.\n') return cv_preds, cv_actuals, cv_pis
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Custom functions for calculating CV scores for point predictions and coverage. These functions have been written to work with the output of `time_series_cv`.
def split_cv_error(cv_preds, cv_test, error_func): ''' Forecast error in the current split Params: ----- cv_preds: np.array Split predictions cv_test: np.array actual ground truth observations error_func: object function with signature (y_true, y_preds) Returns: ------- np.ndarray cross validation errors for split ''' n_splits = len(cv_preds) cv_errors = [] for split in range(n_splits): pred_error = error_func(cv_test[split], cv_preds[split]) cv_errors.append(pred_error) return np.array(cv_errors) def forecast_errors_cv(cv_preds, cv_test, error_func): ''' Forecast errors by forecast horizon Params: ------ cv_preds: np.ndarray Array of arrays. Each array is of size h representing the forecast horizon specified. cv_test: np.ndarray Array of arrays. Each array is of size h representing the forecast horizon specified. error_func: object function with signature (y_true, y_preds) Returns: ------- np.ndarray ''' cv_test = np.array(cv_test) cv_preds = np.array(cv_preds) n_horizons = len(cv_test) horizon_errors = [] for h in range(n_horizons): split_errors = split_cv_error(cv_preds[h], cv_test[h], error_func) horizon_errors.append(split_errors) return np.array(horizon_errors) def split_coverage(cv_test, cv_intervals): n_splits = len(cv_test) cv_errors = [] for split in range(n_splits): val = np.asarray(cv_test[split]) lower = cv_intervals[split].T[0] upper = cv_intervals[split].T[1] coverage = len(np.where((val > lower) & (val < upper))[0]) coverage = coverage / len(val) cv_errors.append(coverage) return np.array(cv_errors) def prediction_int_coverage_cv(cv_test, cv_intervals): cv_test = np.array(cv_test) cv_intervals = np.array(cv_intervals) n_horizons = len(cv_test) horizon_coverage = [] for h in range(n_horizons): split_coverages = split_coverage(cv_test[h], cv_intervals[h]) horizon_coverage.append(split_coverages) return np.array(horizon_coverage) def split_cv_error_scaled(cv_preds, cv_test, y_train): n_splits = len(cv_preds) cv_errors = [] for split in range(n_splits): pred_error = mean_absolute_scaled_error(cv_test[split], cv_preds[split], y_train, period=7) cv_errors.append(pred_error) return np.array(cv_errors) def forecast_errors_cv_scaled(cv_preds, cv_test, y_train): cv_test = np.array(cv_test) cv_preds = np.array(cv_preds) n_horizons = len(cv_test) horizon_errors = [] for h in range(n_horizons): split_errors = split_cv_error_scaled(cv_preds[h], cv_test[h], y_train) horizon_errors.append(split_errors) return np.array(horizon_errors)
_____no_output_____
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Get model and conduct tscv.
def get_model(): ''' Create ensemble model ''' model_1 = ExponentialSmoothingWrapper(trend=True, damped_trend=True, seasonal=7) estimators = {'hw': model_1} return Ensemble(estimators, UnweightedVote()) horizons = [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 365] model = get_model() results = time_series_cv(model, train[REGION], val[REGION], horizons, alpha=0.2, step=7) cv_preds, cv_test, cv_intervals = results #CV point predictions smape cv_errors = forecast_errors_cv(cv_preds, cv_test, symmetric_mean_absolute_percentage_error) df = pd.DataFrame(cv_errors) df.columns = horizons df.describe() #output sMAPE results to file metric = 'smape' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') #CV point predictions rmse cv_errors = forecast_errors_cv(cv_preds, cv_test, root_mean_squared_error) df = pd.DataFrame(cv_errors) df.columns = horizons df.describe() #output rmse metric = 'rmse' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') #mase cv_errors = forecast_errors_cv_scaled(cv_preds, cv_test, train[REGION]) df = pd.DataFrame(cv_errors) df.columns = horizons df.describe() #output mase metric = 'mase' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') #80% PIs cv_coverage = prediction_int_coverage_cv(cv_test, cv_intervals) df = pd.DataFrame(cv_coverage) df.columns = horizons df.describe() #output 80% PI coverage metric = 'coverage_80' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')
../../../results/model_selection/stage1/Trust-hw_coverage_80.csv
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
Rerun for 95% PI coverage
horizons = [7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 365] model = get_model() results = time_series_cv(model, train[REGION], val[REGION], horizons, alpha=0.05, step=7) #95% PIs cv_preds, cv_test, cv_intervals = results cv_coverage = prediction_int_coverage_cv(cv_test, cv_intervals) df = pd.DataFrame(cv_coverage) df.columns = horizons df.describe() #output 95% PI coverage metric = 'coverage_95' print(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv') df.to_csv(f'{TOP_LEVEL}/{STAGE}/{REGION}-{METHOD}_{metric}.csv')
../../../results/model_selection/stage1/Trust-hw_coverage_95.csv
MIT
analysis/model_selection/stage1/01_hw-tscv.ipynb
TomMonks/swast-benchmarking
**Preparing data**
train = pd.read_csv('../input/human-activity-recognition-with-smartphones/train.csv') train.head() train.shape train.isnull().values.any() test = pd.read_csv('../input/human-activity-recognition-with-smartphones/test.csv') test.head() print(test.shape) test.isnull().values.any() X_train = train.iloc[:,:-2] Y_train = train.iloc[:,-1] print(X_train.shape) print(Y_train.shape) X_test = test.iloc[:,:-2] Y_test = test.iloc[:,-1] print(X_test.shape) print(Y_test.shape) Category_counts = np.array(Y_train.value_counts()) Category_counts
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**There are six different activities, i.e. 'Standing', 'Sitting', 'Laying', 'Walking', 'Walking_downstairs' and 'Walking_upstairs'.** **Plotting a count plot of each activity in the training data.**
import matplotlib.pyplot as plt import seaborn as sns plt.figure(figsize=(10,8)) sns.countplot(train.Activity) plt.xticks(rotation=45)
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Creating a scatter plot using t-SNE** Using t-SNE, data can be visualized from an extremely high-dimensional space in a low-dimensional space while still retaining much of the original information. Given that the training data has 561 unique features, let's use t-SNE to visualize it in 2D space.
from sklearn.manifold import TSNE tsne = TSNE(random_state = 42, n_components=2, verbose=1, perplexity=50, n_iter=1000).fit_transform(X_train) plt.figure(figsize=(12,8)) sns.scatterplot(x =tsne[:, 0], y = tsne[:, 1],data = train,hue = train["Activity"]) train['tBodyAcc-mean()-X'].hist() train['tBodyAcc-mean()-Y'].hist() train['tBodyAcc-mean()-Z'].hist() #Y_train = Y_train.reshape((-1,1)) #Y_test = Y_test.reshape((-1,1)) #print(Y_train.shape) #print(Y_test.shape)
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Scaling the data** **Creating labels for different classes**
from sklearn.preprocessing import LabelEncoder le = LabelEncoder() Y_train = le.fit_transform(Y_train) Y_test = le.transform(Y_test) le.classes_
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**It is necessary to one-hot encode the class labels to fit the data to the model.**
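Besides the pandas `get_dummies` approach in the next cell, an equivalent sketch uses Keras' built-in helper (assuming the integer labels produced by the `LabelEncoder` above; new variable names are used here so as not to clobber the cell below):

from tensorflow.keras.utils import to_categorical

# One-hot encode the six integer activity labels (0-5).
Y_train_oh = to_categorical(Y_train, num_classes=6)
Y_test_oh = to_categorical(Y_test, num_classes=6)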
Y_train = pd.get_dummies(Y_train).values Y_test = pd.get_dummies(Y_test).values Y_train Y_train.shape
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Creating our model**
from tensorflow.keras import models from tensorflow.keras.layers import Dense,Dropout model = models.Sequential() model.add(Dense(64,activation='relu',input_dim=X_train.shape[1])) model.add(Dropout(0.25)) model.add(Dense(128,activation='relu')) model.add(Dense(64,activation='relu')) model.add(Dense(32,activation='relu')) model.add(Dropout(0.25)) model.add(Dense(10,activation='relu')) model.add(Dense(6,activation='softmax')) model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 64) 35968 _________________________________________________________________ dropout (Dropout) (None, 64) 0 _________________________________________________________________ dense_1 (Dense) (None, 128) 8320 _________________________________________________________________ dense_2 (Dense) (None, 64) 8256 _________________________________________________________________ dense_3 (Dense) (None, 32) 2080 _________________________________________________________________ dropout_1 (Dropout) (None, 32) 0 _________________________________________________________________ dense_4 (Dense) (None, 10) 330 _________________________________________________________________ dense_5 (Dense) (None, 6) 66 ================================================================= Total params: 55,020 Trainable params: 55,020 Non-trainable params: 0 _________________________________________________________________
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Compiling and training the model.**
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy']) hist = model.fit(X_train,Y_train,epochs=30,batch_size = 128,validation_split=0.3)
Epoch 1/30 41/41 [==============================] - 0s 9ms/step - loss: 1.3677 - accuracy: 0.4131 - val_loss: 0.8933 - val_accuracy: 0.6668 Epoch 2/30 41/41 [==============================] - 0s 4ms/step - loss: 0.7899 - accuracy: 0.6731 - val_loss: 0.5051 - val_accuracy: 0.8314 Epoch 3/30 41/41 [==============================] - 0s 4ms/step - loss: 0.5186 - accuracy: 0.7928 - val_loss: 0.3797 - val_accuracy: 0.8613 Epoch 4/30 41/41 [==============================] - 0s 4ms/step - loss: 0.3697 - accuracy: 0.8581 - val_loss: 0.3890 - val_accuracy: 0.8708 Epoch 5/30 41/41 [==============================] - 0s 4ms/step - loss: 0.2890 - accuracy: 0.8877 - val_loss: 0.2865 - val_accuracy: 0.9130 Epoch 6/30 41/41 [==============================] - 0s 4ms/step - loss: 0.2318 - accuracy: 0.9162 - val_loss: 0.2936 - val_accuracy: 0.8867 Epoch 7/30 41/41 [==============================] - 0s 4ms/step - loss: 0.2054 - accuracy: 0.9217 - val_loss: 0.2423 - val_accuracy: 0.9229 Epoch 8/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1832 - accuracy: 0.9339 - val_loss: 0.2686 - val_accuracy: 0.9180 Epoch 9/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1609 - accuracy: 0.9403 - val_loss: 0.2350 - val_accuracy: 0.9234 Epoch 10/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1368 - accuracy: 0.9491 - val_loss: 0.2061 - val_accuracy: 0.9302 Epoch 11/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1432 - accuracy: 0.9493 - val_loss: 0.2020 - val_accuracy: 0.9311 Epoch 12/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1283 - accuracy: 0.9532 - val_loss: 0.1948 - val_accuracy: 0.9284 Epoch 13/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1163 - accuracy: 0.9571 - val_loss: 0.2416 - val_accuracy: 0.9374 Epoch 14/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1073 - accuracy: 0.9594 - val_loss: 0.2372 - val_accuracy: 0.9329 Epoch 15/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1043 - accuracy: 0.9637 - val_loss: 0.1879 - val_accuracy: 0.9306 Epoch 16/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1036 - accuracy: 0.9619 - val_loss: 0.3256 - val_accuracy: 0.9093 Epoch 17/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1034 - accuracy: 0.9633 - val_loss: 0.2749 - val_accuracy: 0.9248 Epoch 18/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0881 - accuracy: 0.9689 - val_loss: 0.2850 - val_accuracy: 0.9352 Epoch 19/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0807 - accuracy: 0.9695 - val_loss: 0.2876 - val_accuracy: 0.9266 Epoch 20/30 41/41 [==============================] - 0s 4ms/step - loss: 0.1229 - accuracy: 0.9536 - val_loss: 0.2350 - val_accuracy: 0.9361 Epoch 21/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0986 - accuracy: 0.9660 - val_loss: 0.2541 - val_accuracy: 0.9306 Epoch 22/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0909 - accuracy: 0.9666 - val_loss: 0.2134 - val_accuracy: 0.9343 Epoch 23/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0816 - accuracy: 0.9699 - val_loss: 0.2951 - val_accuracy: 0.9275 Epoch 24/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0827 - accuracy: 0.9709 - val_loss: 0.1860 - val_accuracy: 0.9406 Epoch 25/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0900 - accuracy: 0.9648 - val_loss: 0.2276 - val_accuracy: 0.9388 Epoch 26/30 41/41 
[==============================] - 0s 4ms/step - loss: 0.0711 - accuracy: 0.9738 - val_loss: 0.2106 - val_accuracy: 0.9393 Epoch 27/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0739 - accuracy: 0.9749 - val_loss: 0.2445 - val_accuracy: 0.9329 Epoch 28/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0848 - accuracy: 0.9675 - val_loss: 0.2974 - val_accuracy: 0.9288 Epoch 29/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0686 - accuracy: 0.9755 - val_loss: 0.1919 - val_accuracy: 0.9415 Epoch 30/30 41/41 [==============================] - 0s 4ms/step - loss: 0.0589 - accuracy: 0.9788 - val_loss: 0.2649 - val_accuracy: 0.9356
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Visualising the loss and accuracy curves of the model.**
plt.plot(hist.history['loss'],label='train_loss') plt.plot(hist.history['val_loss'],label='val_loss') plt.xlabel('Epochs',fontsize=18) plt.ylabel('Loss',fontsize=18) plt.legend() plt.title('Loss Curve',fontsize=22) plt.show() plt.plot(hist.history['accuracy'],label='train_accuracy') plt.plot(hist.history['val_accuracy'],label='val_accuracy') plt.xlabel('Epochs',fontsize=18) plt.ylabel('Accuracy',fontsize=18) plt.legend() plt.title('Accuracy Curve',fontsize=22) plt.show() model.save('my_model.h5')
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Making predictions on test data**
predict = model.predict(X_test) predictions = np.argmax(predict,axis=1) predictions Y_test = np.argmax(Y_test,axis=1)
_____no_output_____
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Calculating accuracy**
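For reference, the metrics computed below, where `average='weighted'` averages the per-class scores weighted by class support $n_c$ out of $N$ test samples (standard definitions):$$\text{accuracy} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}[\hat{y}_i = y_i], \qquad \text{precision}_w = \sum_{c} \frac{n_c}{N} \cdot \frac{TP_c}{TP_c + FP_c}, \qquad \text{recall}_w = \sum_{c} \frac{n_c}{N} \cdot \frac{TP_c}{TP_c + FN_c}$$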
from sklearn.metrics import accuracy_score,precision_score,recall_score,confusion_matrix from mlxtend.plotting import plot_confusion_matrix conf_matrix = confusion_matrix(Y_test,predictions) plot_confusion_matrix(conf_matrix) precision = precision_score(Y_test,predictions,average='weighted') recall = recall_score(Y_test, predictions,average='weighted') accuracy = accuracy_score(Y_test,predictions) print("Accuracy = "+str(accuracy)) print("Precision = "+str(precision)) print("Recall = "+str(recall))
Accuracy = 0.9216152019002375 Precision = 0.9282570445597496 Recall = 0.9216152019002375
MIT
human-activity-recognition.ipynb
varunsh20/Human-activity-recognition-
**Chapter 16 – Natural Language Processing with RNNs and Attention** _This notebook contains all the sample code in chapter 16._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
# Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x !pip install -q -U tensorflow-addons IS_COLAB = True except Exception: IS_COLAB = False # TensorFlow ≥2.0 is required import tensorflow as tf from tensorflow import keras assert tf.__version__ >= "2.0" if not tf.config.list_physical_devices('GPU'): print("No GPU was detected. LSTMs and CNNs can be very slow without a GPU.") if IS_COLAB: print("Go to Runtime > Change runtime and select a GPU hardware accelerator.") # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) tf.random.set_seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "nlp" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution)
No GPU was detected. LSTMs and CNNs can be very slow without a GPU.
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Char-RNN Splitting a sequence into batches of shuffled windows For example, let's split the sequence 0 to 14 into windows of length 5, each shifted by 2 (e.g.,`[0, 1, 2, 3, 4]`, `[2, 3, 4, 5, 6]`, etc.), then shuffle them, and split them into inputs (the first 4 steps) and targets (the last 4 steps) (e.g., `[2, 3, 4, 5, 6]` would be split into `[[2, 3, 4, 5], [3, 4, 5, 6]]`), then create batches of 3 such input/target pairs:
np.random.seed(42) tf.random.set_seed(42) n_steps = 5 dataset = tf.data.Dataset.from_tensor_slices(tf.range(15)) dataset = dataset.window(n_steps, shift=2, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(n_steps)) dataset = dataset.shuffle(10).map(lambda window: (window[:-1], window[1:])) dataset = dataset.batch(3).prefetch(1) for index, (X_batch, Y_batch) in enumerate(dataset): print("_" * 20, "Batch", index, "\nX_batch") print(X_batch.numpy()) print("=" * 5, "\nY_batch") print(Y_batch.numpy())
____________________ Batch 0 X_batch [[6 7 8 9] [2 3 4 5] [4 5 6 7]] ===== Y_batch [[ 7 8 9 10] [ 3 4 5 6] [ 5 6 7 8]] ____________________ Batch 1 X_batch [[ 0 1 2 3] [ 8 9 10 11] [10 11 12 13]] ===== Y_batch [[ 1 2 3 4] [ 9 10 11 12] [11 12 13 14]]
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Loading the Data and Preparing the Dataset
shakespeare_url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt" filepath = keras.utils.get_file("shakespeare.txt", shakespeare_url) with open(filepath) as f: shakespeare_text = f.read() print(shakespeare_text[:148]) "".join(sorted(set(shakespeare_text.lower()))) tokenizer = keras.preprocessing.text.Tokenizer(char_level=True) tokenizer.fit_on_texts(shakespeare_text) tokenizer.texts_to_sequences(["First"]) tokenizer.sequences_to_texts([[20, 6, 9, 8, 3]]) max_id = len(tokenizer.word_index) # number of distinct characters dataset_size = tokenizer.document_count # total number of characters [encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1 train_size = dataset_size * 90 // 100 dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size]) n_steps = 100 window_length = n_steps + 1 # target = input shifted 1 character ahead dataset = dataset.repeat().window(window_length, shift=1, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_length)) np.random.seed(42) tf.random.set_seed(42) batch_size = 32 dataset = dataset.shuffle(10000).batch(batch_size) dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:])) dataset = dataset.map( lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch)) dataset = dataset.prefetch(1) for X_batch, Y_batch in dataset.take(1): print(X_batch.shape, Y_batch.shape)
(32, 100, 39) (32, 100)
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Creating and Training the Model
model = keras.models.Sequential([ keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id], dropout=0.2, recurrent_dropout=0.2), keras.layers.GRU(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2), keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")) ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="adam") history = model.fit(dataset, steps_per_epoch=train_size // batch_size, epochs=10)
Train for 31370 steps Epoch 1/10 31370/31370 [==============================] - 7150s 228ms/step - loss: 1.4671 Epoch 2/10 31370/31370 [==============================] - 7094s 226ms/step - loss: 1.3614 Epoch 3/10 31370/31370 [==============================] - 7063s 225ms/step - loss: 1.3404 Epoch 4/10 31370/31370 [==============================] - 7039s 224ms/step - loss: 1.3311 Epoch 5/10 31370/31370 [==============================] - 7056s 225ms/step - loss: 1.3256 Epoch 6/10 31370/31370 [==============================] - 7049s 225ms/step - loss: 1.3209 Epoch 7/10 31370/31370 [==============================] - 7068s 225ms/step - loss: 1.3166 Epoch 8/10 31370/31370 [==============================] - 7030s 224ms/step - loss: 1.3138 Epoch 9/10 31370/31370 [==============================] - 7061s 225ms/step - loss: 1.3120 Epoch 10/10 31370/31370 [==============================] - 7177s 229ms/step - loss: 1.3105
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Using the Model to Generate Text
def preprocess(texts): X = np.array(tokenizer.texts_to_sequences(texts)) - 1 return tf.one_hot(X, max_id) X_new = preprocess(["How are yo"]) Y_pred = model.predict_classes(X_new) tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char tf.random.set_seed(42) tf.random.categorical([[np.log(0.5), np.log(0.4), np.log(0.1)]], num_samples=40).numpy() def next_char(text, temperature=1): X_new = preprocess([text]) y_proba = model.predict(X_new)[0, -1:, :] rescaled_logits = tf.math.log(y_proba) / temperature char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1 return tokenizer.sequences_to_texts(char_id.numpy())[0] tf.random.set_seed(42) next_char("How are yo", temperature=1) def complete_text(text, n_chars=50, temperature=1): for _ in range(n_chars): text += next_char(text, temperature) return text tf.random.set_seed(42) print(complete_text("t", temperature=0.2)) print(complete_text("t", temperature=1)) print(complete_text("t", temperature=2))
th no cyty use ffor was firive this toighingaber; b
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Stateful RNN
tf.random.set_seed(42) dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size]) dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_length)) dataset = dataset.repeat().batch(1) dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:])) dataset = dataset.map( lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch)) dataset = dataset.prefetch(1) batch_size = 32 encoded_parts = np.array_split(encoded[:train_size], batch_size) datasets = [] for encoded_part in encoded_parts: dataset = tf.data.Dataset.from_tensor_slices(encoded_part) dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_length)) datasets.append(dataset) dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows)) dataset = dataset.repeat().map(lambda windows: (windows[:, :-1], windows[:, 1:])) dataset = dataset.map( lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch)) dataset = dataset.prefetch(1) model = keras.models.Sequential([ keras.layers.GRU(128, return_sequences=True, stateful=True, dropout=0.2, recurrent_dropout=0.2, batch_input_shape=[batch_size, None, max_id]), keras.layers.GRU(128, return_sequences=True, stateful=True, dropout=0.2, recurrent_dropout=0.2), keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")) ]) class ResetStatesCallback(keras.callbacks.Callback): def on_epoch_begin(self, epoch, logs): self.model.reset_states() model.compile(loss="sparse_categorical_crossentropy", optimizer="adam") steps_per_epoch = train_size // batch_size // n_steps history = model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=50, callbacks=[ResetStatesCallback()])
Train for 313 steps Epoch 1/50 313/313 [==============================] - 62s 198ms/step - loss: 2.6189 Epoch 2/50 313/313 [==============================] - 58s 187ms/step - loss: 2.2091 Epoch 3/50 313/313 [==============================] - 56s 178ms/step - loss: 2.0775 Epoch 4/50 313/313 [==============================] - 56s 179ms/step - loss: 2.4689 Epoch 5/50 313/313 [==============================] - 56s 179ms/step - loss: 2.3274 Epoch 6/50 313/313 [==============================] - 57s 183ms/step - loss: 2.1412 Epoch 7/50 313/313 [==============================] - 57s 183ms/step - loss: 2.0748 Epoch 8/50 313/313 [==============================] - 56s 179ms/step - loss: 1.9850 Epoch 9/50 313/313 [==============================] - 56s 179ms/step - loss: 1.9465 Epoch 10/50 313/313 [==============================] - 56s 179ms/step - loss: 1.8995 Epoch 11/50 313/313 [==============================] - 57s 182ms/step - loss: 1.8576 Epoch 12/50 313/313 [==============================] - 56s 179ms/step - loss: 1.8510 Epoch 13/50 313/313 [==============================] - 57s 184ms/step - loss: 1.8038 Epoch 14/50 313/313 [==============================] - 56s 178ms/step - loss: 1.7867 Epoch 15/50 313/313 [==============================] - 56s 180ms/step - loss: 1.7635 Epoch 16/50 313/313 [==============================] - 56s 179ms/step - loss: 1.7270 Epoch 17/50 313/313 [==============================] - 58s 184ms/step - loss: 1.7097 <<31 more lines>> 313/313 [==============================] - 58s 185ms/step - loss: 1.5998 Epoch 34/50 313/313 [==============================] - 58s 184ms/step - loss: 1.5954 Epoch 35/50 313/313 [==============================] - 58s 185ms/step - loss: 1.5944 Epoch 36/50 313/313 [==============================] - 57s 183ms/step - loss: 1.5902 Epoch 37/50 313/313 [==============================] - 57s 183ms/step - loss: 1.5893 Epoch 38/50 313/313 [==============================] - 59s 187ms/step - loss: 1.5845 Epoch 39/50 313/313 [==============================] - 57s 183ms/step - loss: 1.5821 Epoch 40/50 313/313 [==============================] - 59s 187ms/step - loss: 1.5798 Epoch 41/50 313/313 [==============================] - 57s 181ms/step - loss: 1.5794 Epoch 42/50 313/313 [==============================] - 57s 182ms/step - loss: 1.5774 Epoch 43/50 313/313 [==============================] - 57s 182ms/step - loss: 1.5755 Epoch 44/50 313/313 [==============================] - 58s 186ms/step - loss: 1.5735 Epoch 45/50 313/313 [==============================] - 58s 186ms/step - loss: 1.5714 Epoch 46/50 313/313 [==============================] - 57s 181ms/step - loss: 1.5686 Epoch 47/50 313/313 [==============================] - 57s 181ms/step - loss: 1.5675 Epoch 48/50 313/313 [==============================] - 56s 180ms/step - loss: 1.5657 Epoch 49/50 313/313 [==============================] - 58s 185ms/step - loss: 1.5654 Epoch 50/50 313/313 [==============================] - 57s 182ms/step - loss: 1.5620
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
To use the model with different batch sizes, we need to create a stateless copy. We can get rid of dropout since it is only used during training:
stateless_model = keras.models.Sequential([ keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id]), keras.layers.GRU(128, return_sequences=True), keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")) ])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
To set the weights, we first need to build the model (so the weights get created):
stateless_model.build(tf.TensorShape([None, None, max_id])) stateless_model.set_weights(model.get_weights()) model = stateless_model tf.random.set_seed(42) print(complete_text("t"))
WARNING:tensorflow:5 out of the last 5 calls to <function _make_execution_function.<locals>.distributed_function at 0x7f8d44bc53b0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details. <<repeated retracing warnings omitted>> tor: in the negver up how it thou like him; when it
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
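The retracing warning above appears when a `tf.function` is called with new Python objects or shapes on every call. A minimal sketch of the usual remedies (an illustrative example, not part of the notebook; the function name and shapes are made up):
import tensorflow as tf

# Fixing the input signature makes tf.function trace the graph once,
# instead of retracing for each new Python-level argument.
@tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.int32)])
def embed_batch(token_ids):
    return tf.one_hot(token_ids, depth=39)

# Passing tensors (rather than Python lists or ints) also avoids retracing:
embed_batch(tf.constant([[1, 2, 3], [4, 5, 6]]))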
Sentiment Analysis
tf.random.set_seed(42)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
You can load the IMDB dataset easily:
(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data() X_train[0][:10] word_index = keras.datasets.imdb.get_word_index() id_to_word = {id_ + 3: word for word, id_ in word_index.items()} for id_, token in enumerate(("<pad>", "<sos>", "<unk>")): id_to_word[id_] = token " ".join([id_to_word[id_] for id_ in X_train[0][:10]]) import tensorflow_datasets as tfds datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True) datasets.keys() train_size = info.splits["train"].num_examples test_size = info.splits["test"].num_examples train_size, test_size for X_batch, y_batch in datasets["train"].batch(2).take(1): for review, label in zip(X_batch.numpy(), y_batch.numpy()): print("Review:", review.decode("utf-8")[:200], "...") print("Label:", label, "= Positive" if label else "= Negative") print() def preprocess(X_batch, y_batch): X_batch = tf.strings.substr(X_batch, 0, 300) X_batch = tf.strings.regex_replace(X_batch, rb"<br\s*/?>", b" ") X_batch = tf.strings.regex_replace(X_batch, b"[^a-zA-Z']", b" ") X_batch = tf.strings.split(X_batch) return X_batch.to_tensor(default_value=b"<pad>"), y_batch preprocess(X_batch, y_batch) from collections import Counter vocabulary = Counter() for X_batch, y_batch in datasets["train"].batch(32).map(preprocess): for review in X_batch: vocabulary.update(list(review.numpy())) vocabulary.most_common()[:3] len(vocabulary) vocab_size = 10000 truncated_vocabulary = [ word for word, count in vocabulary.most_common()[:vocab_size]] word_to_id = {word: index for index, word in enumerate(truncated_vocabulary)} for word in b"This movie was faaaaaantastic".split(): print(word_to_id.get(word) or vocab_size) words = tf.constant(truncated_vocabulary) word_ids = tf.range(len(truncated_vocabulary), dtype=tf.int64) vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids) num_oov_buckets = 1000 table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets) table.lookup(tf.constant([b"This movie was faaaaaantastic".split()])) def encode_words(X_batch, y_batch): return table.lookup(X_batch), y_batch train_set = datasets["train"].repeat().batch(32).map(preprocess) train_set = train_set.map(encode_words).prefetch(1) for X_batch, y_batch in train_set.take(1): print(X_batch) print(y_batch) embed_size = 128 model = keras.models.Sequential([ keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size, mask_zero=True, # not shown in the book input_shape=[None]), keras.layers.GRU(128, return_sequences=True), keras.layers.GRU(128), keras.layers.Dense(1, activation="sigmoid") ]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)
Train for 781 steps Epoch 1/5 781/781 [==============================] - 118s 152ms/step - loss: 0.5305 - accuracy: 0.7282 Epoch 2/5 781/781 [==============================] - 113s 145ms/step - loss: 0.3459 - accuracy: 0.8554 Epoch 3/5 781/781 [==============================] - 113s 145ms/step - loss: 0.1913 - accuracy: 0.9319 Epoch 4/5 781/781 [==============================] - 114s 146ms/step - loss: 0.1341 - accuracy: 0.9535 Epoch 5/5 781/781 [==============================] - 116s 148ms/step - loss: 0.1011 - accuracy: 0.9624
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Or using manual masking:
K = keras.backend embed_size = 128 inputs = keras.layers.Input(shape=[None]) mask = keras.layers.Lambda(lambda inputs: K.not_equal(inputs, 0))(inputs) z = keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size)(inputs) z = keras.layers.GRU(128, return_sequences=True)(z, mask=mask) z = keras.layers.GRU(128)(z, mask=mask) outputs = keras.layers.Dense(1, activation="sigmoid")(z) model = keras.models.Model(inputs=[inputs], outputs=[outputs]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)
Train for 781 steps Epoch 1/5 781/781 [==============================] - 118s 152ms/step - loss: 0.5425 - accuracy: 0.7155 Epoch 2/5 781/781 [==============================] - 112s 143ms/step - loss: 0.3479 - accuracy: 0.8558 Epoch 3/5 781/781 [==============================] - 112s 144ms/step - loss: 0.1761 - accuracy: 0.9388 Epoch 4/5 781/781 [==============================] - 115s 147ms/step - loss: 0.1281 - accuracy: 0.9531 Epoch 5/5 781/781 [==============================] - 116s 148ms/step - loss: 0.1088 - accuracy: 0.9603
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Reusing Pretrained Embeddings
tf.random.set_seed(42) TFHUB_CACHE_DIR = os.path.join(os.curdir, "my_tfhub_cache") os.environ["TFHUB_CACHE_DIR"] = TFHUB_CACHE_DIR import tensorflow_hub as hub model = keras.Sequential([ hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1", dtype=tf.string, input_shape=[], output_shape=[50]), keras.layers.Dense(128, activation="relu"), keras.layers.Dense(1, activation="sigmoid") ]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) for dirpath, dirnames, filenames in os.walk(TFHUB_CACHE_DIR): for filename in filenames: print(os.path.join(dirpath, filename)) import tensorflow_datasets as tfds datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True) train_size = info.splits["train"].num_examples batch_size = 32 train_set = datasets["train"].repeat().batch(batch_size).prefetch(1) history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5)
Train for 781 steps Epoch 1/5 781/781 [==============================] - 128s 164ms/step - loss: 0.5460 - accuracy: 0.7267 Epoch 2/5 781/781 [==============================] - 128s 164ms/step - loss: 0.5129 - accuracy: 0.7495 Epoch 3/5 781/781 [==============================] - 129s 165ms/step - loss: 0.5082 - accuracy: 0.7530 Epoch 4/5 781/781 [==============================] - 128s 164ms/step - loss: 0.5047 - accuracy: 0.7533 Epoch 5/5 781/781 [==============================] - 128s 164ms/step - loss: 0.5015 - accuracy: 0.7560
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Automatic Translation
tf.random.set_seed(42) vocab_size = 100 embed_size = 10 import tensorflow_addons as tfa encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) embeddings = keras.layers.Embedding(vocab_size, embed_size) encoder_embeddings = embeddings(encoder_inputs) decoder_embeddings = embeddings(decoder_inputs) encoder = keras.layers.LSTM(512, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] sampler = tfa.seq2seq.sampler.TrainingSampler() decoder_cell = keras.layers.LSTMCell(512) output_layer = keras.layers.Dense(vocab_size) decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer) final_outputs, final_state, final_sequence_lengths = decoder( decoder_embeddings, initial_state=encoder_state, sequence_length=sequence_lengths) Y_proba = tf.nn.softmax(final_outputs.rnn_output) model = keras.models.Model( inputs=[encoder_inputs, decoder_inputs, sequence_lengths], outputs=[Y_proba]) model.compile(loss="sparse_categorical_crossentropy", optimizer="adam") X = np.random.randint(100, size=10*1000).reshape(1000, 10) Y = np.random.randint(100, size=15*1000).reshape(1000, 15) X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]] seq_lengths = np.full([1000], 15) history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2)
Train on 1000 samples Epoch 1/2 1000/1000 [==============================] - 6s 6ms/sample - loss: 4.6053 Epoch 2/2 1000/1000 [==============================] - 3s 3ms/sample - loss: 4.6031
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Bidirectional Recurrent Layers
model = keras.models.Sequential([ keras.layers.GRU(10, return_sequences=True, input_shape=[None, 10]), keras.layers.Bidirectional(keras.layers.GRU(10, return_sequences=True)) ]) model.summary()
Model: "sequential_5" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= gru_10 (GRU) (None, None, 10) 660 _________________________________________________________________ bidirectional (Bidirectional (None, None, 20) 1320 ================================================================= Total params: 1,980 Trainable params: 1,980 Non-trainable params: 0 _________________________________________________________________
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Positional Encoding
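The layer below implements the sinusoidal encodings from the original Transformer paper: $PE_{(p,\,2i)} = \sin\left(p / 10000^{2i/d}\right)$ and $PE_{(p,\,2i+1)} = \cos\left(p / 10000^{2i/d}\right)$, where $p$ is the position in the sequence and $d$ is the embedding dimensionality.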
class PositionalEncoding(keras.layers.Layer): def __init__(self, max_steps, max_dims, dtype=tf.float32, **kwargs): super().__init__(dtype=dtype, **kwargs) if max_dims % 2 == 1: max_dims += 1 # max_dims must be even p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2)) pos_emb = np.empty((1, max_steps, max_dims)) pos_emb[0, :, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T pos_emb[0, :, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T self.positional_embedding = tf.constant(pos_emb.astype(self.dtype)) def call(self, inputs): shape = tf.shape(inputs) return inputs + self.positional_embedding[:, :shape[-2], :shape[-1]] max_steps = 201 max_dims = 512 pos_emb = PositionalEncoding(max_steps, max_dims) PE = pos_emb(np.zeros((1, max_steps, max_dims), np.float32))[0].numpy() i1, i2, crop_i = 100, 101, 150 p1, p2, p3 = 22, 60, 35 fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 5)) ax1.plot([p1, p1], [-1, 1], "k--", label="$p = {}$".format(p1)) ax1.plot([p2, p2], [-1, 1], "k--", label="$p = {}$".format(p2), alpha=0.5) ax1.plot(p3, PE[p3, i1], "bx", label="$p = {}$".format(p3)) ax1.plot(PE[:,i1], "b-", label="$i = {}$".format(i1)) ax1.plot(PE[:,i2], "r-", label="$i = {}$".format(i2)) ax1.plot([p1, p2], [PE[p1, i1], PE[p2, i1]], "bo") ax1.plot([p1, p2], [PE[p1, i2], PE[p2, i2]], "ro") ax1.legend(loc="center right", fontsize=14, framealpha=0.95) ax1.set_ylabel("$P_{(p,i)}$", rotation=0, fontsize=16) ax1.grid(True, alpha=0.3) ax1.hlines(0, 0, max_steps - 1, color="k", linewidth=1, alpha=0.3) ax1.axis([0, max_steps - 1, -1, 1]) ax2.imshow(PE.T[:crop_i], cmap="gray", interpolation="bilinear", aspect="auto") ax2.hlines(i1, 0, max_steps - 1, color="b") cheat = 2 # need to raise the red line a bit, or else it hides the blue one ax2.hlines(i2+cheat, 0, max_steps - 1, color="r") ax2.plot([p1, p1], [0, crop_i], "k--") ax2.plot([p2, p2], [0, crop_i], "k--", alpha=0.5) ax2.plot([p1, p2], [i2+cheat, i2+cheat], "ro") ax2.plot([p1, p2], [i1, i1], "bo") ax2.axis([0, max_steps - 1, 0, crop_i]) ax2.set_xlabel("$p$", fontsize=16) ax2.set_ylabel("$i$", rotation=0, fontsize=16) plt.savefig("positional_embedding_plot") plt.show() embed_size = 512; max_steps = 500; vocab_size = 10000 encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) embeddings = keras.layers.Embedding(vocab_size, embed_size) encoder_embeddings = embeddings(encoder_inputs) decoder_embeddings = embeddings(decoder_inputs) positional_encoding = PositionalEncoding(max_steps, max_dims=embed_size) encoder_in = positional_encoding(encoder_embeddings) decoder_in = positional_encoding(decoder_embeddings)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Here is a (very) simplified Transformer (the actual architecture has skip connections, layer norm, dense nets, and most importantly it uses Multi-Head Attention instead of regular Attention):
Z = encoder_in for N in range(6): Z = keras.layers.Attention(use_scale=True)([Z, Z]) encoder_outputs = Z Z = decoder_in for N in range(6): Z = keras.layers.Attention(use_scale=True, causal=True)([Z, Z]) Z = keras.layers.Attention(use_scale=True)([Z, encoder_outputs]) outputs = keras.layers.TimeDistributed( keras.layers.Dense(vocab_size, activation="softmax"))(Z)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Here's a basic implementation of the `MultiHeadAttention` layer. One will likely be added to `keras.layers` in the near future. Note that a `Conv1D` layer with `kernel_size=1` (and the default `padding="valid"` and `strides=1`) is equivalent to a `TimeDistributed(Dense(...))` layer.
K = keras.backend class MultiHeadAttention(keras.layers.Layer): def __init__(self, n_heads, causal=False, use_scale=False, **kwargs): self.n_heads = n_heads self.causal = causal self.use_scale = use_scale super().__init__(**kwargs) def build(self, batch_input_shape): self.dims = batch_input_shape[0][-1] self.q_dims, self.v_dims, self.k_dims = [self.dims // self.n_heads] * 3 # could be hyperparameters instead self.q_linear = keras.layers.Conv1D(self.n_heads * self.q_dims, kernel_size=1, use_bias=False) self.v_linear = keras.layers.Conv1D(self.n_heads * self.v_dims, kernel_size=1, use_bias=False) self.k_linear = keras.layers.Conv1D(self.n_heads * self.k_dims, kernel_size=1, use_bias=False) self.attention = keras.layers.Attention(causal=self.causal, use_scale=self.use_scale) self.out_linear = keras.layers.Conv1D(self.dims, kernel_size=1, use_bias=False) super().build(batch_input_shape) def _multi_head_linear(self, inputs, linear): shape = K.concatenate([K.shape(inputs)[:-1], [self.n_heads, -1]]) projected = K.reshape(linear(inputs), shape) perm = K.permute_dimensions(projected, [0, 2, 1, 3]) return K.reshape(perm, [shape[0] * self.n_heads, shape[1], -1]) def call(self, inputs): q = inputs[0] v = inputs[1] k = inputs[2] if len(inputs) > 2 else v shape = K.shape(q) q_proj = self._multi_head_linear(q, self.q_linear) v_proj = self._multi_head_linear(v, self.v_linear) k_proj = self._multi_head_linear(k, self.k_linear) multi_attended = self.attention([q_proj, v_proj, k_proj]) shape_attended = K.shape(multi_attended) reshaped_attended = K.reshape(multi_attended, [shape[0], self.n_heads, shape_attended[1], shape_attended[2]]) perm = K.permute_dimensions(reshaped_attended, [0, 2, 1, 3]) concat = K.reshape(perm, [shape[0], shape_attended[1], -1]) return self.out_linear(concat) Q = np.random.rand(2, 50, 512) V = np.random.rand(2, 80, 512) multi_attn = MultiHeadAttention(8) multi_attn([Q, V]).shape
WARNING:tensorflow:Layer multi_head_attention is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
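As a quick sanity check on the `Conv1D`/`TimeDistributed(Dense)` equivalence claimed above, here is a small sketch (illustrative only, not from the original notebook):
import numpy as np
from tensorflow import keras

# A Conv1D with kernel_size=1 applies the same dense transformation at every
# time step, exactly like TimeDistributed(Dense) with the same weights.
x = np.random.rand(2, 7, 16).astype(np.float32)  # [batch, time, features]
conv = keras.layers.Conv1D(4, kernel_size=1, use_bias=False)
dense = keras.layers.TimeDistributed(keras.layers.Dense(4, use_bias=False))
y_conv = conv(x)
_ = dense(x)  # build the layer so we can set its weights
dense.set_weights([conv.get_weights()[0][0]])  # [1, 16, 4] kernel -> [16, 4]
y_dense = dense(x)
print(np.allclose(y_conv.numpy(), y_dense.numpy(), atol=1e-6))  # True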
Exercise solutions

1. to 7.

See Appendix A.

8.

_Exercise: Embedded Reber grammars were used by Hochreiter and Schmidhuber in [their paper](https://homl.info/93) about LSTMs. They are artificial grammars that produce strings such as "BPBTSXXVPSEPE." Check out Jenny Orr's [nice introduction](https://homl.info/108) to this topic. Choose a particular embedded Reber grammar (such as the one represented on Jenny Orr's page), then train an RNN to identify whether a string respects that grammar or not. You will first need to write a function capable of generating a training batch containing about 50% strings that respect the grammar, and 50% that don't._

First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
default_reber_grammar = [ [("B", 1)], # (state 0) =B=>(state 1) [("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3) [("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4) [("T", 3), ("V", 5)], # and so on... [("X", 3), ("S", 6)], [("P", 4), ("V", 6)], [("E", None)]] # (state 6) =E=>(terminal state) embedded_reber_grammar = [ [("B", 1)], [("T", 2), ("P", 3)], [(default_reber_grammar, 4)], [(default_reber_grammar, 5)], [("T", 6)], [("P", 6)], [("E", None)]] def generate_string(grammar): state = 0 output = [] while state is not None: index = np.random.randint(len(grammar[state])) production, state = grammar[state][index] if isinstance(production, list): production = generate_string(grammar=production) output.append(production) return "".join(output)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's generate a few strings based on the default Reber grammar:
np.random.seed(42) for _ in range(25): print(generate_string(default_reber_grammar), end=" ")
BTXXTTVPXTVPXTTVPSE BPVPSE BTXSE BPVVE BPVVE BTSXSE BPTVPXTTTVVE BPVVE BTXSE BTXXVPSE BPTTTTTTTTVVE BTXSE BPVPSE BTXSE BPTVPSE BTXXTVPSE BPVVE BPVVE BPVVE BPTTVVE BPVVE BPVVE BTXXVVE BTXXVVE BTXXVPXVVE
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Looks good. Now let's generate a few strings based on the embedded Reber grammar:
np.random.seed(42) for _ in range(25): print(generate_string(embedded_reber_grammar), end=" ")
BTBPTTTVPXTVPXTTVPSETE BPBPTVPSEPE BPBPVVEPE BPBPVPXVVEPE BPBTXXTTTTVVEPE BPBPVPSEPE BPBTXXVPSEPE BPBTSSSSSSSXSEPE BTBPVVETE BPBTXXVVEPE BPBTXXVPSEPE BTBTXXVVETE BPBPVVEPE BPBPVVEPE BPBTSXSEPE BPBPVVEPE BPBPTVPSEPE BPBTXXVVEPE BTBPTVPXVVETE BTBPVVETE BTBTSSSSSSSXXVVETE BPBTSSSXXTTTTVPSEPE BTBPTTVVETE BPBTXXTVVEPE BTBTXSETE
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:
POSSIBLE_CHARS = "BEPSTVX" def generate_corrupted_string(grammar, chars=POSSIBLE_CHARS): good_string = generate_string(grammar) index = np.random.randint(len(good_string)) good_char = good_string[index] bad_char = np.random.choice(sorted(set(chars) - set(good_char))) return good_string[:index] + bad_char + good_string[index + 1:]
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's look at a few corrupted strings:
np.random.seed(42) for _ in range(25): print(generate_corrupted_string(embedded_reber_grammar), end=" ")
BTBPTTTPPXTVPXTTVPSETE BPBTXEEPE BPBPTVVVEPE BPBTSSSSXSETE BPTTXSEPE BTBPVPXTTTTTTEVETE BPBTXXSVEPE BSBPTTVPSETE BPBXVVEPE BEBTXSETE BPBPVPSXPE BTBPVVVETE BPBTSXSETE BPBPTTTPTTTTTVPSEPE BTBTXXTTSTVPSETE BBBTXSETE BPBTPXSEPE BPBPVPXTTTTVPXTVPXVPXTTTVVEVE BTBXXXTVPSETE BEBTSSSSSXXVPXTVVETE BTBXTTVVETE BPBTXSTPE BTBTXXTTTVPSBTE BTBTXSETX BTBTSXSSTE
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
We cannot feed strings directly to an RNN, so we need to encode them somehow. One option would be to one-hot encode each character. Another option is to use embeddings. Let's go for the second option (but since there are just a handful of characters, one-hot encoding would probably be a good option as well; a short sketch of that alternative appears after the next cell). For embeddings to work, we need to convert each string into a sequence of character IDs. Let's write a function for that, using each character's index in the string of possible characters "BEPSTVX":
def string_to_ids(s, chars=POSSIBLE_CHARS): return [POSSIBLE_CHARS.index(c) for c in s] string_to_ids("BTTTXXVVETE")
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
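For comparison, here is a sketch of the one-hot alternative mentioned above (illustrative only; the notebook sticks with embeddings, and `string_to_one_hot` is a made-up helper reusing `POSSIBLE_CHARS` from earlier):
import tensorflow as tf

# One-hot encode a Reber string: one row per character, one column per
# possible character in POSSIBLE_CHARS.
def string_to_one_hot(s, chars=POSSIBLE_CHARS):
    ids = [chars.index(c) for c in s]
    return tf.one_hot(ids, depth=len(chars))

string_to_one_hot("BTXSE").shape  # TensorShape([5, 7])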
We can now generate the dataset, with 50% good strings, and 50% bad strings:
def generate_dataset(size): good_strings = [string_to_ids(generate_string(embedded_reber_grammar)) for _ in range(size // 2)] bad_strings = [string_to_ids(generate_corrupted_string(embedded_reber_grammar)) for _ in range(size - size // 2)] all_strings = good_strings + bad_strings X = tf.ragged.constant(all_strings, ragged_rank=1) y = np.array([[1.] for _ in range(len(good_strings))] + [[0.] for _ in range(len(bad_strings))]) return X, y np.random.seed(42) X_train, y_train = generate_dataset(10000) X_valid, y_valid = generate_dataset(2000)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's take a look at the first training sequence:
X_train[0]
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
What classes does it belong to?
y_train[0]
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Perfect! We are ready to create the RNN to identify good strings. We build a simple sequence binary classifier:
np.random.seed(42) tf.random.set_seed(42) embedding_size = 5 model = keras.models.Sequential([ keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True), keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS), output_dim=embedding_size), keras.layers.GRU(30), keras.layers.Dense(1, activation="sigmoid") ]) optimizer = keras.optimizers.SGD(lr=0.02, momentum = 0.95, nesterov=True) model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"]) history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
Train on 10000 samples, validate on 2000 samples Epoch 1/20
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).
test_strings = ["BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE", "BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE"] X_test = tf.ragged.constant([string_to_ids(s) for s in test_strings], ragged_rank=1) y_proba = model.predict(X_test) print() print("Estimated probability that these are Reber strings:") for index, string in enumerate(test_strings): print("{}: {:.2f}%".format(string, 100 * y_proba[index][0]))
Estimated probability that these are Reber strings: BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE: 0.40% BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE: 99.96%
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Ta-da! It worked fine. The RNN found the correct answers with very high confidence. :)

9.

_Exercise: Train an Encoder–Decoder model that can convert a date string from one format to another (e.g., from "April 22, 2019" to "2019-04-22")._

Let's start by creating the dataset. We will use random days between 1000-01-01 and 9999-12-31:
from datetime import date # cannot use strftime()'s %B format since it depends on the locale MONTHS = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] def random_dates(n_dates): min_date = date(1000, 1, 1).toordinal() max_date = date(9999, 12, 31).toordinal() ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date dates = [date.fromordinal(ordinal) for ordinal in ordinals] x = [MONTHS[dt.month - 1] + " " + dt.strftime("%d, %Y") for dt in dates] y = [dt.isoformat() for dt in dates] return x, y
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Here are a few random dates, displayed in both the input format and the target format:
np.random.seed(42) n_dates = 3 x_example, y_example = random_dates(n_dates) print("{:25s}{:25s}".format("Input", "Target")) print("-" * 50) for idx in range(n_dates): print("{:25s}{:25s}".format(x_example[idx], y_example[idx]))
Input Target -------------------------------------------------- September 20, 7075 7075-09-20 May 15, 8579 8579-05-15 January 11, 7103 7103-01-11
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's get the list of all possible characters in the inputs:
INPUT_CHARS = "".join(sorted(set("".join(MONTHS)))) + "0123456789, " INPUT_CHARS
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
And here's the list of possible characters in the outputs:
OUTPUT_CHARS = "0123456789-"
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's write a function to convert a string to a list of character IDs, as we did in the previous exercise:
def date_str_to_ids(date_str, chars=INPUT_CHARS): return [chars.index(c) for c in date_str] date_str_to_ids(x_example[0], INPUT_CHARS) date_str_to_ids(y_example[0], OUTPUT_CHARS) def prepare_date_strs(date_strs, chars=INPUT_CHARS): X_ids = [date_str_to_ids(dt, chars) for dt in date_strs] X = tf.ragged.constant(X_ids, ragged_rank=1) return (X + 1).to_tensor() # using 0 as the padding token ID def create_dataset(n_dates): x, y = random_dates(n_dates) return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS) np.random.seed(42) X_train, Y_train = create_dataset(10000) X_valid, Y_valid = create_dataset(2000) X_test, Y_test = create_dataset(2000) Y_train[0]
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
First version: a very basic seq2seq model

Let's first try the simplest possible model: we feed in the input sequence, which first goes through the encoder (an embedding layer followed by a single LSTM layer), which outputs a vector, then it goes through a decoder (a single LSTM layer, followed by a dense output layer), which outputs a sequence of vectors, each representing the estimated probabilities for all possible output characters.

Since the decoder expects a sequence as input, we repeat the vector (which is output by the encoder) as many times as the longest possible output sequence.
embedding_size = 32 max_output_length = Y_train.shape[1] np.random.seed(42) tf.random.set_seed(42) encoder = keras.models.Sequential([ keras.layers.Embedding(input_dim=len(INPUT_CHARS) + 1, output_dim=embedding_size, input_shape=[None]), keras.layers.LSTM(128) ]) decoder = keras.models.Sequential([ keras.layers.LSTM(128, return_sequences=True), keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation="softmax") ]) model = keras.models.Sequential([ encoder, keras.layers.RepeatVector(max_output_length), decoder ]) optimizer = keras.optimizers.Nadam() model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid))
Epoch 1/20 313/313 [==============================] - 6s 18ms/step - loss: 1.8111 - accuracy: 0.3533 - val_loss: 1.3581 - val_accuracy: 0.4965 Epoch 2/20 313/313 [==============================] - 5s 15ms/step - loss: 1.3518 - accuracy: 0.5103 - val_loss: 1.1915 - val_accuracy: 0.5694 Epoch 3/20 313/313 [==============================] - 5s 15ms/step - loss: 1.1706 - accuracy: 0.5908 - val_loss: 0.9983 - val_accuracy: 0.6398 Epoch 4/20 313/313 [==============================] - 5s 15ms/step - loss: 0.9158 - accuracy: 0.6686 - val_loss: 0.8012 - val_accuracy: 0.6987 Epoch 5/20 313/313 [==============================] - 5s 15ms/step - loss: 0.7058 - accuracy: 0.7308 - val_loss: 0.6224 - val_accuracy: 0.7599 Epoch 6/20 313/313 [==============================] - 5s 15ms/step - loss: 0.7756 - accuracy: 0.7203 - val_loss: 0.6541 - val_accuracy: 0.7599 Epoch 7/20 313/313 [==============================] - 5s 16ms/step - loss: 0.5379 - accuracy: 0.8034 - val_loss: 0.4174 - val_accuracy: 0.8440 Epoch 8/20 313/313 [==============================] - 5s 15ms/step - loss: 0.4867 - accuracy: 0.8262 - val_loss: 0.4188 - val_accuracy: 0.8480 Epoch 9/20 313/313 [==============================] - 5s 15ms/step - loss: 0.2979 - accuracy: 0.8951 - val_loss: 0.2549 - val_accuracy: 0.9126 Epoch 10/20 313/313 [==============================] - 5s 14ms/step - loss: 0.1785 - accuracy: 0.9479 - val_loss: 0.1461 - val_accuracy: 0.9594 Epoch 11/20 313/313 [==============================] - 5s 15ms/step - loss: 0.1830 - accuracy: 0.9557 - val_loss: 0.1644 - val_accuracy: 0.9550 Epoch 12/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0775 - accuracy: 0.9857 - val_loss: 0.0595 - val_accuracy: 0.9901 Epoch 13/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0400 - accuracy: 0.9953 - val_loss: 0.0342 - val_accuracy: 0.9957 Epoch 14/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0248 - accuracy: 0.9979 - val_loss: 0.0231 - val_accuracy: 0.9983 Epoch 15/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0161 - accuracy: 0.9991 - val_loss: 0.0149 - val_accuracy: 0.9995 Epoch 16/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0108 - accuracy: 0.9997 - val_loss: 0.0106 - val_accuracy: 0.9996 Epoch 17/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0074 - accuracy: 0.9999 - val_loss: 0.0077 - val_accuracy: 0.9999 Epoch 18/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0053 - accuracy: 1.0000 - val_loss: 0.0054 - val_accuracy: 0.9999 Epoch 19/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.0041 - val_accuracy: 1.0000 Epoch 20/20 313/313 [==============================] - 5s 15ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.0032 - val_accuracy: 1.0000
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Looks great, we reach 100% validation accuracy! Let's use the model to make some predictions. We will need to be able to convert a sequence of character IDs to a readable string:
def ids_to_date_strs(ids, chars=OUTPUT_CHARS): return ["".join([("?" + chars)[index] for index in sequence]) for sequence in ids]
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Now we can use the model to convert some dates
X_new = prepare_date_strs(["September 17, 2009", "July 14, 1789"]) ids = model.predict_classes(X_new) for date_str in ids_to_date_strs(ids): print(date_str)
WARNING:tensorflow:From <ipython-input-15-472ea7c41409>:1: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01. Instructions for updating: Please use instead:* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation). 2009-09-17 1789-07-14
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
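As the deprecation warning above suggests, the same predictions can be obtained without `predict_classes()`; a small equivalent sketch:
import numpy as np

# Non-deprecated equivalent of predict_classes(): take the argmax over the
# output probabilities at each time step.
ids = np.argmax(model.predict(X_new), axis=-1)
for date_str in ids_to_date_strs(ids):
    print(date_str)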
Perfect! :) However, since the model was only trained on input strings of length 18 (which is the length of the longest date), it does not perform well if we try to use it to make predictions on shorter sequences:
X_new = prepare_date_strs(["May 02, 2020", "July 14, 1789"]) ids = model.predict_classes(X_new) for date_str in ids_to_date_strs(ids): print(date_str)
2020-01-02 1789-02-14
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Oops! We need to ensure that we always pass sequences of the same length as during training, using padding if necessary. Let's write a little helper function for that:
max_input_length = X_train.shape[1] def prepare_date_strs_padded(date_strs): X = prepare_date_strs(date_strs) if X.shape[1] < max_input_length: X = tf.pad(X, [[0, 0], [0, max_input_length - X.shape[1]]]) return X def convert_date_strs(date_strs): X = prepare_date_strs_padded(date_strs) ids = model.predict_classes(X) return ids_to_date_strs(ids) convert_date_strs(["May 02, 2020", "July 14, 1789"])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Cool! Granted, there are certainly much easier ways to write a date conversion tool (e.g., using regular expressions or even basic string manipulation), but you have to admit that using neural networks is way cooler. ;-) However, real-life sequence-to-sequence problems will usually be harder, so for the sake of completeness, let's build a more powerful model.

Second version: feeding the shifted targets to the decoder (teacher forcing)

Instead of feeding the decoder a simple repetition of the encoder's output vector, we can feed it the target sequence, shifted by one time step to the right. This way, at each time step the decoder will know what the previous target character was. This should help us tackle more complex sequence-to-sequence problems.

Since the first output character of each target sequence has no previous character, we will need a new token to represent the start-of-sequence (sos).

During inference, we won't know the target, so what will we feed the decoder? We can just predict one character at a time, starting with an sos token, then feeding the decoder all the characters that were predicted so far (we will look at this in more detail later in this notebook).

But if the decoder's LSTM expects to get the previous target as input at each step, how shall we pass it the vector output by the encoder? Well, one option is to ignore the output vector and instead use the encoder's LSTM state as the initial state of the decoder's LSTM (which requires that the encoder's LSTM have the same number of units as the decoder's LSTM).

Now let's create the decoder's inputs (for training, validation and testing). The sos token will be represented using the last possible output character's ID + 1.
sos_id = len(OUTPUT_CHARS) + 1 def shifted_output_sequences(Y): sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id) return tf.concat([sos_tokens, Y[:, :-1]], axis=1) X_train_decoder = shifted_output_sequences(Y_train) X_valid_decoder = shifted_output_sequences(Y_valid) X_test_decoder = shifted_output_sequences(Y_test)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's take a look at the decoder's training inputs:
X_train_decoder
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Now let's build the model. It's not a simple sequential model anymore, so let's use the functional API:
encoder_embedding_size = 32 decoder_embedding_size = 32 lstm_units = 128 np.random.seed(42) tf.random.set_seed(42) encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32) encoder_embedding = keras.layers.Embedding( input_dim=len(INPUT_CHARS) + 1, output_dim=encoder_embedding_size)(encoder_input) _, encoder_state_h, encoder_state_c = keras.layers.LSTM( lstm_units, return_state=True)(encoder_embedding) encoder_state = [encoder_state_h, encoder_state_c] decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32) decoder_embedding = keras.layers.Embedding( input_dim=len(OUTPUT_CHARS) + 2, output_dim=decoder_embedding_size)(decoder_input) decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)( decoder_embedding, initial_state=encoder_state) decoder_output = keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation="softmax")(decoder_lstm_output) model = keras.models.Model(inputs=[encoder_input, decoder_input], outputs=[decoder_output]) optimizer = keras.optimizers.Nadam() model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) history = model.fit([X_train, X_train_decoder], Y_train, epochs=10, validation_data=([X_valid, X_valid_decoder], Y_valid))
Epoch 1/10 313/313 [==============================] - 5s 17ms/step - loss: 1.6898 - accuracy: 0.3714 - val_loss: 1.4141 - val_accuracy: 0.4603 Epoch 2/10 313/313 [==============================] - 5s 15ms/step - loss: 1.2118 - accuracy: 0.5541 - val_loss: 0.9360 - val_accuracy: 0.6653 Epoch 3/10 313/313 [==============================] - 5s 15ms/step - loss: 0.6399 - accuracy: 0.7766 - val_loss: 0.4054 - val_accuracy: 0.8631 Epoch 4/10 313/313 [==============================] - 5s 15ms/step - loss: 0.2207 - accuracy: 0.9463 - val_loss: 0.1069 - val_accuracy: 0.9869 Epoch 5/10 313/313 [==============================] - 5s 15ms/step - loss: 0.0805 - accuracy: 0.9910 - val_loss: 0.0445 - val_accuracy: 0.9976 Epoch 6/10 313/313 [==============================] - 5s 15ms/step - loss: 0.0297 - accuracy: 0.9993 - val_loss: 0.0237 - val_accuracy: 0.9992 Epoch 7/10 313/313 [==============================] - 5s 15ms/step - loss: 0.0743 - accuracy: 0.9857 - val_loss: 0.0702 - val_accuracy: 0.9889 Epoch 8/10 313/313 [==============================] - 5s 15ms/step - loss: 0.0187 - accuracy: 0.9995 - val_loss: 0.0112 - val_accuracy: 0.9999 Epoch 9/10 313/313 [==============================] - 5s 15ms/step - loss: 0.0084 - accuracy: 1.0000 - val_loss: 0.0072 - val_accuracy: 1.0000 Epoch 10/10 313/313 [==============================] - 5s 15ms/step - loss: 0.0057 - accuracy: 1.0000 - val_loss: 0.0053 - val_accuracy: 1.0000
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
This model also reaches 100% validation accuracy, but it does so even faster. Let's once again use the model to make some predictions. This time we need to predict characters one by one.
sos_id = len(OUTPUT_CHARS) + 1 def predict_date_strs(date_strs): X = prepare_date_strs_padded(date_strs) Y_pred = tf.fill(dims=(len(X), 1), value=sos_id) for index in range(max_output_length): pad_size = max_output_length - Y_pred.shape[1] X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]]) Y_probas_next = model.predict([X, X_decoder])[:, index:index+1] Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32) Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1) return ids_to_date_strs(Y_pred[:, 1:]) predict_date_strs(["July 14, 1789", "May 01, 2020"])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Works fine! :)

Third version: using TF-Addons's seq2seq implementation

Let's build exactly the same model, but using TF-Addons's seq2seq API. The implementation below is very similar to the TFA example earlier in this notebook, except that it omits the model input specifying the output sequence length, for simplicity (you can easily add it back in if your projects need it, e.g., when the output sequences have very different lengths; see the sketch after the next cell).
import tensorflow_addons as tfa np.random.seed(42) tf.random.set_seed(42) encoder_embedding_size = 32 decoder_embedding_size = 32 units = 128 encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) encoder_embeddings = keras.layers.Embedding( len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs) decoder_embedding_layer = keras.layers.Embedding( len(INPUT_CHARS) + 2, decoder_embedding_size) decoder_embeddings = decoder_embedding_layer(decoder_inputs) encoder = keras.layers.LSTM(units, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] sampler = tfa.seq2seq.sampler.TrainingSampler() decoder_cell = keras.layers.LSTMCell(units) output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1) decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer) final_outputs, final_state, final_sequence_lengths = decoder( decoder_embeddings, initial_state=encoder_state) Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output) model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=[Y_proba]) optimizer = keras.optimizers.Nadam() model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) history = model.fit([X_train, X_train_decoder], Y_train, epochs=15, validation_data=([X_valid, X_valid_decoder], Y_valid))
Epoch 1/15 313/313 [==============================] - 5s 17ms/step - loss: 1.6757 - accuracy: 0.3683 - val_loss: 1.4602 - val_accuracy: 0.4214 Epoch 2/15 313/313 [==============================] - 5s 15ms/step - loss: 1.3873 - accuracy: 0.4566 - val_loss: 1.2904 - val_accuracy: 0.4957 Epoch 3/15 313/313 [==============================] - 5s 15ms/step - loss: 1.0471 - accuracy: 0.6109 - val_loss: 0.7737 - val_accuracy: 0.7276 Epoch 4/15 313/313 [==============================] - 5s 15ms/step - loss: 0.5056 - accuracy: 0.8296 - val_loss: 0.2695 - val_accuracy: 0.9305 Epoch 5/15 313/313 [==============================] - 5s 15ms/step - loss: 0.1677 - accuracy: 0.9657 - val_loss: 0.0870 - val_accuracy: 0.9912 Epoch 6/15 313/313 [==============================] - 5s 15ms/step - loss: 0.1007 - accuracy: 0.9850 - val_loss: 0.0492 - val_accuracy: 0.9975 Epoch 7/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0308 - accuracy: 0.9993 - val_loss: 0.0228 - val_accuracy: 0.9996 Epoch 8/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0168 - accuracy: 0.9999 - val_loss: 0.0144 - val_accuracy: 0.9999 Epoch 9/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0107 - accuracy: 1.0000 - val_loss: 0.0095 - val_accuracy: 0.9999 Epoch 10/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0074 - accuracy: 1.0000 - val_loss: 0.0066 - val_accuracy: 0.9999 Epoch 11/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0053 - accuracy: 1.0000 - val_loss: 0.0051 - val_accuracy: 0.9999 Epoch 12/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.0037 - val_accuracy: 1.0000 Epoch 13/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.0030 - val_accuracy: 1.0000 Epoch 14/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.0022 - val_accuracy: 1.0000 Epoch 15/15 313/313 [==============================] - 5s 15ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.0018 - val_accuracy: 1.0000
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
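For reference, a sketch of how the sequence-length input could be added back, mirroring the earlier TFA example (this assumes the same `decoder`, `decoder_embeddings`, `encoder_state` and `sequence_lengths` variables as above):
# Sketch: feed explicit target lengths to the decoder, as in the earlier
# TFA example, instead of relying on the full padded length.
final_outputs, final_state, final_sequence_lengths = decoder(
    decoder_embeddings, initial_state=encoder_state,
    sequence_length=sequence_lengths)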
And once again, 100% validation accuracy! To use the model, we can just reuse the `predict_date_strs()` function:
predict_date_strs(["July 14, 1789", "May 01, 2020"])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
However, there's a much more efficient way to perform inference. Until now, during inference, we've run the model once for each new character. Instead, we can create a new decoder, based on the previously trained layers, but using a `GreedyEmbeddingSampler` instead of a `TrainingSampler`.At each time step, the `GreedyEmbeddingSampler` will compute the argmax of the decoder's outputs, and run the resulting token IDs through the decoder's embedding layer. Then it will feed the resulting embeddings to the decoder's LSTM cell at the next time step. This way, we only need to run the decoder once to get the full prediction.
inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler( embedding_fn=decoder_embedding_layer) inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder( decoder_cell, inference_sampler, output_layer=output_layer, maximum_iterations=max_output_length) batch_size = tf.shape(encoder_inputs)[:1] start_tokens = tf.fill(dims=batch_size, value=sos_id) final_outputs, final_state, final_sequence_lengths = inference_decoder( start_tokens, initial_state=encoder_state, start_tokens=start_tokens, end_token=0) inference_model = keras.models.Model(inputs=[encoder_inputs], outputs=[final_outputs.sample_id])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
A few notes:

* The `GreedyEmbeddingSampler` needs the `start_tokens` (a vector containing the start-of-sequence ID for each decoder sequence), and the `end_token` (the decoder will stop decoding a sequence once the model outputs this token).
* We must set `maximum_iterations` when creating the `BasicDecoder`, or else it may run into an infinite loop (if the model never outputs the end token for at least one of the sequences). This would force you to restart the Jupyter kernel.
* The decoder inputs are not needed anymore, since all the decoder inputs are generated dynamically based on the outputs from the previous time step.
* The model's outputs are `final_outputs.sample_id` instead of the softmax of `final_outputs.rnn_output`. This allows us to directly get the argmax of the model's outputs. If you prefer to have access to the logits, you can replace `final_outputs.sample_id` with `final_outputs.rnn_output`.

Now we can write a simple function that uses the model to perform the date format conversion:
def fast_predict_date_strs(date_strs): X = prepare_date_strs_padded(date_strs) Y_pred = inference_model.predict(X) return ids_to_date_strs(Y_pred) fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Let's check that it really is faster:
%timeit predict_date_strs(["July 14, 1789", "May 01, 2020"]) %timeit fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
18.3 ms ± 366 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
That's more than a 10x speedup! And it would be even more if we were handling longer sequences.

Fourth version: using TF-Addons's seq2seq implementation with a scheduled sampler

**Warning**: due to a TF bug, this version only works using TensorFlow 2.2.

When we trained the previous model, at each time step _t_ we gave the model the target token for time step _t_ - 1. However, at inference time, the model did not get the previous target at each time step. Instead, it got the previous prediction. So there is a discrepancy between training and inference, which may lead to disappointing performance. To alleviate this, we can gradually replace the targets with the predictions during training. For this, we just need to replace the `TrainingSampler` with a `ScheduledEmbeddingTrainingSampler`, and use a Keras callback to gradually increase the `sampling_probability` (i.e., the probability that the decoder will use the prediction from the previous time step rather than the target for the previous time step).
import tensorflow_addons as tfa np.random.seed(42) tf.random.set_seed(42) n_epochs = 20 encoder_embedding_size = 32 decoder_embedding_size = 32 units = 128 encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) encoder_embeddings = keras.layers.Embedding( len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs) decoder_embedding_layer = keras.layers.Embedding( len(INPUT_CHARS) + 2, decoder_embedding_size) decoder_embeddings = decoder_embedding_layer(decoder_inputs) encoder = keras.layers.LSTM(units, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler( sampling_probability=0., embedding_fn=decoder_embedding_layer) # we must set the sampling_probability after creating the sampler # (see https://github.com/tensorflow/addons/pull/1714) sampler.sampling_probability = tf.Variable(0.) decoder_cell = keras.layers.LSTMCell(units) output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1) decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer) final_outputs, final_state, final_sequence_lengths = decoder( decoder_embeddings, initial_state=encoder_state) Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output) model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=[Y_proba]) optimizer = keras.optimizers.Nadam() model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) def update_sampling_probability(epoch, logs): proba = min(1.0, epoch / (n_epochs - 10)) sampler.sampling_probability.assign(proba) sampling_probability_cb = keras.callbacks.LambdaCallback( on_epoch_begin=update_sampling_probability) history = model.fit([X_train, X_train_decoder], Y_train, epochs=n_epochs, validation_data=([X_valid, X_valid_decoder], Y_valid), callbacks=[sampling_probability_cb])
Epoch 1/20
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Not quite 100% validation accuracy, but close enough! For inference, we could do the exact same thing as earlier, using a `GreedyEmbeddingSampler`. However, just for the sake of completeness, let's use a `SampleEmbeddingSampler` instead. It's almost the same thing, except that instead of using the argmax of the model's output to find the token ID, it treats the outputs as logits and uses them to sample a token ID randomly. This can be useful when you want to generate text. The `softmax_temperature` argument serves the same purpose as when we generated Shakespeare-like text (the higher this argument, the more random the generated text will be).
softmax_temperature = tf.Variable(1.) inference_sampler = tfa.seq2seq.sampler.SampleEmbeddingSampler( embedding_fn=decoder_embedding_layer, softmax_temperature=softmax_temperature) inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder( decoder_cell, inference_sampler, output_layer=output_layer, maximum_iterations=max_output_length) batch_size = tf.shape(encoder_inputs)[:1] start_tokens = tf.fill(dims=batch_size, value=sos_id) final_outputs, final_state, final_sequence_lengths = inference_decoder( start_tokens, initial_state=encoder_state, start_tokens=start_tokens, end_token=0) inference_model = keras.models.Model(inputs=[encoder_inputs], outputs=[final_outputs.sample_id]) def creative_predict_date_strs(date_strs, temperature=1.0): softmax_temperature.assign(temperature) X = prepare_date_strs_padded(date_strs) Y_pred = inference_model.predict(X) return ids_to_date_strs(Y_pred) tf.random.set_seed(42) creative_predict_date_strs(["July 14, 1789", "May 01, 2020"])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
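Temperature scaling simply divides the logits before the softmax/sampling step, so higher temperatures flatten the distribution over tokens; a toy illustration of the assumed behavior (not from the notebook):
import tensorflow as tf

# As T grows, the softmax distribution flattens, making sampling more random.
logits = tf.constant([2.0, 1.0, 0.1])
for T in (0.5, 1.0, 5.0):
    print(T, tf.nn.softmax(logits / T).numpy())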
Dates look good at room temperature. Now let's heat things up a bit:
tf.random.set_seed(42) creative_predict_date_strs(["July 14, 1789", "May 01, 2020"], temperature=5.)
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Oops, the dates are overcooked now. Let's call them "creative" dates.

Fifth version: using TFA seq2seq, the Keras subclassing API and attention mechanisms

The sequences in this problem are pretty short, but if we wanted to tackle longer sequences, we would probably have to use attention mechanisms. While it's possible to code our own implementation, it's simpler and more efficient to use TF-Addons's implementation instead. Let's do that now, this time using Keras' subclassing API.

**Warning**: due to a TensorFlow bug (see [this issue](https://github.com/tensorflow/addons/issues/1153) for details), the `get_initial_state()` method fails in eager mode, so for now we have to use the subclassing API, as Keras automatically calls `tf.function()` on the `call()` method (so it runs in graph mode).

In this implementation, we've reverted to using the `TrainingSampler`, for simplicity (but you can easily tweak it to use a `ScheduledEmbeddingTrainingSampler` instead). We also use a `GreedyEmbeddingSampler` during inference, so this class is pretty easy to use:
class DateTranslation(keras.models.Model):
    def __init__(self, units=128, encoder_embedding_size=32,
                 decoder_embedding_size=32, **kwargs):
        super().__init__(**kwargs)
        self.encoder_embedding = keras.layers.Embedding(
            input_dim=len(INPUT_CHARS) + 1,
            output_dim=encoder_embedding_size)
        # return_sequences=True: the attention mechanism needs every
        # encoder output, not just the last one
        self.encoder = keras.layers.LSTM(units,
                                         return_sequences=True,
                                         return_state=True)
        self.decoder_embedding = keras.layers.Embedding(
            input_dim=len(OUTPUT_CHARS) + 2,
            output_dim=decoder_embedding_size)
        # wrap the decoder cell so it attends to the encoder outputs
        self.attention = tfa.seq2seq.LuongAttention(units)
        decoder_inner_cell = keras.layers.LSTMCell(units)
        self.decoder_cell = tfa.seq2seq.AttentionWrapper(
            cell=decoder_inner_cell,
            attention_mechanism=self.attention)
        output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
        # teacher forcing during training...
        self.decoder = tfa.seq2seq.BasicDecoder(
            cell=self.decoder_cell,
            sampler=tfa.seq2seq.sampler.TrainingSampler(),
            output_layer=output_layer)
        # ...and greedy decoding (argmax + embedding lookup) at inference
        self.inference_decoder = tfa.seq2seq.BasicDecoder(
            cell=self.decoder_cell,
            sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(
                embedding_fn=self.decoder_embedding),
            output_layer=output_layer,
            maximum_iterations=max_output_length)

    def call(self, inputs, training=None):
        encoder_input, decoder_input = inputs
        encoder_embeddings = self.encoder_embedding(encoder_input)
        encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(
            encoder_embeddings,
            training=training)
        encoder_state = [encoder_state_h, encoder_state_c]
        # give the attention mechanism access to all the encoder outputs
        self.attention(encoder_outputs,
                       setup_memory=True)
        decoder_embeddings = self.decoder_embedding(decoder_input)
        # start decoding from the encoder's final state
        decoder_initial_state = self.decoder_cell.get_initial_state(
            decoder_embeddings)
        decoder_initial_state = decoder_initial_state.clone(
            cell_state=encoder_state)
        if training:
            decoder_outputs, _, _ = self.decoder(
                decoder_embeddings,
                initial_state=decoder_initial_state,
                training=training)
        else:
            start_tokens = tf.zeros_like(encoder_input[:, 0]) + sos_id
            decoder_outputs, _, _ = self.inference_decoder(
                decoder_embeddings,
                initial_state=decoder_initial_state,
                start_tokens=start_tokens,
                end_token=0)
        return tf.nn.softmax(decoder_outputs.rnn_output)

np.random.seed(42)
tf.random.set_seed(42)

model = DateTranslation()
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,
                    validation_data=([X_valid, X_valid_decoder], Y_valid))
Epoch 1/25
313/313 [==============================] - 7s 21ms/step - loss: 2.1549 - accuracy: 0.2295 - val_loss: 2.1450 - val_accuracy: 0.2239
Epoch 2/25
313/313 [==============================] - 6s 19ms/step - loss: 1.8147 - accuracy: 0.3492 - val_loss: 1.4931 - val_accuracy: 0.4476
Epoch 3/25
313/313 [==============================] - 6s 18ms/step - loss: 1.3585 - accuracy: 0.4909 - val_loss: 1.3168 - val_accuracy: 0.5100
Epoch 4/25
313/313 [==============================] - 6s 18ms/step - loss: 1.2787 - accuracy: 0.5293 - val_loss: 1.1767 - val_accuracy: 0.5624
Epoch 5/25
313/313 [==============================] - 6s 18ms/step - loss: 1.1236 - accuracy: 0.5776 - val_loss: 1.0769 - val_accuracy: 0.5907
Epoch 6/25
313/313 [==============================] - 6s 18ms/step - loss: 1.0369 - accuracy: 0.6073 - val_loss: 1.0159 - val_accuracy: 0.6199
Epoch 7/25
313/313 [==============================] - 6s 18ms/step - loss: 0.9752 - accuracy: 0.6295 - val_loss: 0.9723 - val_accuracy: 0.6346
Epoch 8/25
313/313 [==============================] - 6s 18ms/step - loss: 0.9794 - accuracy: 0.6315 - val_loss: 0.9444 - val_accuracy: 0.6371
Epoch 9/25
313/313 [==============================] - 6s 18ms/step - loss: 0.9338 - accuracy: 0.6415 - val_loss: 0.9296 - val_accuracy: 0.6381
Epoch 10/25
313/313 [==============================] - 6s 19ms/step - loss: 0.9439 - accuracy: 0.6418 - val_loss: 0.9028 - val_accuracy: 0.6574
Epoch 11/25
313/313 [==============================] - 6s 19ms/step - loss: 0.8807 - accuracy: 0.6637 - val_loss: 0.9835 - val_accuracy: 0.6369
Epoch 12/25
313/313 [==============================] - 6s 19ms/step - loss: 0.7307 - accuracy: 0.6953 - val_loss: 0.8942 - val_accuracy: 0.6873
Epoch 13/25
313/313 [==============================] - 6s 19ms/step - loss: 0.5833 - accuracy: 0.7327 - val_loss: 0.6944 - val_accuracy: 0.7391
Epoch 14/25
313/313 [==============================] - 6s 19ms/step - loss: 0.4664 - accuracy: 0.7940 - val_loss: 0.6228 - val_accuracy: 0.7885
Epoch 15/25
313/313 [==============================] - 6s 19ms/step - loss: 0.3205 - accuracy: 0.8740 - val_loss: 0.4825 - val_accuracy: 0.8780
Epoch 16/25
313/313 [==============================] - 6s 19ms/step - loss: 0.2329 - accuracy: 0.9216 - val_loss: 0.3851 - val_accuracy: 0.9118
Epoch 17/25
313/313 [==============================] - 7s 21ms/step - loss: 0.2480 - accuracy: 0.9372 - val_loss: 0.2785 - val_accuracy: 0.9111
Epoch 18/25
313/313 [==============================] - 7s 22ms/step - loss: 0.1182 - accuracy: 0.9801 - val_loss: 0.1372 - val_accuracy: 0.9786
Epoch 19/25
313/313 [==============================] - 7s 22ms/step - loss: 0.0643 - accuracy: 0.9937 - val_loss: 0.0681 - val_accuracy: 0.9909
Epoch 20/25
313/313 [==============================] - 6s 18ms/step - loss: 0.0446 - accuracy: 0.9952 - val_loss: 0.0487 - val_accuracy: 0.9934
Epoch 21/25
313/313 [==============================] - 6s 18ms/step - loss: 0.0247 - accuracy: 0.9987 - val_loss: 0.0228 - val_accuracy: 0.9987
Epoch 22/25
313/313 [==============================] - 6s 18ms/step - loss: 0.0456 - accuracy: 0.9918 - val_loss: 0.0207 - val_accuracy: 0.9985
Epoch 23/25
313/313 [==============================] - 6s 18ms/step - loss: 0.0131 - accuracy: 0.9997 - val_loss: 0.0127 - val_accuracy: 0.9993
Epoch 24/25
313/313 [==============================] - 6s 19ms/step - loss: 0.0360 - accuracy: 0.9933 - val_loss: 0.0146 - val_accuracy: 0.9990
Epoch 25/25
313/313 [==============================] - 6s 19ms/step - loss: 0.0092 - accuracy: 0.9998 - val_loss: 0.0089 - val_accuracy: 0.9992
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Not quite 100% validation accuracy, but close. It took a bit longer to converge this time, but there were also more parameters and more computations per iteration. And we did not use a scheduled sampler.

To use the model, we can write yet another little function:
def fast_predict_date_strs_v2(date_strs):
    X = prepare_date_strs_padded(date_strs)
    # the model expects two inputs, but at inference time the
    # GreedyEmbeddingSampler ignores the decoder inputs (they only
    # determine the batch shape), so we can simply feed zeros
    X_decoder = tf.zeros(shape=(len(X), max_output_length), dtype=tf.int32)
    Y_probas = model.predict([X, X_decoder])
    Y_pred = tf.argmax(Y_probas, axis=-1)
    return ids_to_date_strs(Y_pred)

fast_predict_date_strs_v2(["July 14, 1789", "May 01, 2020"])
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
There are still a few interesting features from TF-Addons that you may want to look at:
* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputting the character with the highest probability, this decoder keeps track of several candidates at each step and keeps only the most likely candidate sequences (see chapter 16 in the book for more details).
* Setting masks or specifying `sequence_length` if the input or target sequences may have very different lengths.
* Using a `ScheduledOutputTrainingSampler`, which gives you more flexibility than the `ScheduledEmbeddingTrainingSampler` to decide how to feed the output at time _t_ to the cell at time _t_+1. By default it feeds the outputs directly to the cell, without computing the argmax ID and passing it through an embedding layer. Alternatively, you can specify a `next_inputs_fn` function that will be used to convert the cell outputs to inputs at the next step.

10.
_Exercise: Go through TensorFlow's [Neural Machine Translation with Attention tutorial](https://homl.info/nmttuto)._

Simply open the Colab and follow its instructions. Alternatively, if you want a simpler example of using TF-Addons's seq2seq implementation for Neural Machine Translation (NMT), look at the solution to the previous question: its last model shows how to build an NMT model with attention mechanisms using TF-Addons.

11.
_Exercise: Use one of the recent language models (e.g., GPT) to generate more convincing Shakespearean text._

The simplest way to use recent language models is to use the excellent [transformers library](https://huggingface.co/transformers/), open sourced by Hugging Face. It provides many modern neural net architectures (including BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet and more) for Natural Language Processing (NLP), including many pretrained models. It relies on either TensorFlow or PyTorch. Best of all: it's amazingly simple to use. First, let's load a pretrained model. In this example, we will use OpenAI's GPT model, with an additional Language Model on top (just a linear layer with weights tied to the input embeddings). Let's import it and load the pretrained weights (this will download about 445MB of data to `~/.cache/torch/transformers`):
from transformers import TFOpenAIGPTLMHeadModel model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
_____no_output_____
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
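To actually generate text once the model is loaded, you also need the matching tokenizer. The following is only a hedged sketch: it assumes the transformers `OpenAIGPTTokenizer` and the `generate()` method of the TF models, whose exact arguments vary across library versions.

# Hedged sketch (argument names may vary across transformers versions):
# tokenize a prompt, then sample continuations with generate().
from transformers import OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
prompt_text = "This royal throne of kings, this sceptred isle"
encoded_prompt = tokenizer.encode(prompt_text,
                                  add_special_tokens=False,
                                  return_tensors="tf")
generated = model.generate(input_ids=encoded_prompt,
                           do_sample=True,    # sample rather than take the argmax
                           max_length=40,
                           temperature=1.0,   # same role as softmax_temperature above
                           top_p=0.9)         # nucleus sampling
print(tokenizer.decode(generated[0].numpy().tolist()))

Nucleus sampling (`top_p`) tends to produce more coherent text than raising the temperature alone, since it truncates the unlikely tail of the distribution before sampling.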