st207000
I am running into the following error when building keras-2.7.0:

Traceback (most recent call last):
  File "/tmp/spackvixapidb/c19c5e115f6adb500190afacc8c6c4f6/execroot/org_keras/bazel-out/k8-opt-exec-2B5CBBC6/bin/keras/api/create_keras_api_1_keras_python_api_gen.runfiles/org_keras/keras/api/create_python_api_wrapper.py", line 26, in <module>
    import keras  # pylint: disable=unused-import
  File "/tmp/spackvixapidb/c19c5e115f6adb500190afacc8c6c4f6/execroot/org_keras/bazel-out/k8-opt-exec-2B5CBBC6/bin/keras/api/create_keras_api_1_keras_python_api_gen.runfiles/org_keras/keras/__init__.py", line 21, in <module>
    from tensorflow.python import tf2
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 41, in <module>
    from tensorflow.python.eager import context
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/python/eager/context.py", line 33, in <module>
    from tensorflow.core.framework import function_pb2
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/core/framework/function_pb2.py", line 14, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 14, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/core/framework/tensor_pb2.py", line 14, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/core/framework/resource_handle_pb2.py", line 14, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "/opt/packages/gpjohnsn/prefix/opt/apps/x86_64/gcc-9.3.0/py-tensorflow-2.7.0-pfpeahr/lib/python3.8/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 21, in <module>
    create_key=_descriptor._internal_create_key,
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'

Searching for that error suggests a protobuf mismatch, but I have ensured that the protobuf library and the Python protobuf package are at the same version, 3.17.3. I have tried different versions of protobuf as well but keep hitting the same error. Could it be a mismatch of some sort between keras and tensorflow, since it is failing on a tensorflow module? The same versions of protobuf are being used for both tensorflow and keras. Thanks.
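For reference, a minimal way to check which protobuf runtime the failing step actually resolves (a sketch, assuming you can run the same Python interpreter that the Bazel action invokes; the paths above suggest the Spack-built tensorflow install is on its path):

```python
import google.protobuf
from google.protobuf import descriptor

# _internal_create_key only exists in sufficiently new protobuf runtimes
# (roughly 3.12+), so if this prints False, an older or stale protobuf copy
# is being picked up ahead of the 3.17.3 one on sys.path.
print("protobuf version:", google.protobuf.__version__)
print("loaded from:", google.protobuf.__file__)
print("has _internal_create_key:", hasattr(descriptor, "_internal_create_key"))
```

If the version printed here differs from 3.17.3, the Bazel sandbox is resolving a different protobuf installation than the one that was pinned.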
st207001
Hi! I am using a tflite model on an edge device without a lot of RAM. My program reads continuous sensor data and the input layer takes in 150 datapoints. I want to invoke the model in intervals of 50 datapoints so that there is overlap between the data in each invoke. Currently I save the data to a buffer and copy it to the input when I want to invoke the model, but this requires too much RAM. Is there a way to keep the data in the input layer after invoking, so that I don’t have to hold the data in 2 separate places?
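One way to avoid holding a second full window, sketched with the Python tf.lite.Interpreter (the 150/50 sizes are taken from the question, the model path is hypothetical, and whether the input buffer actually survives invoke() depends on the interpreter's memory planning, so this needs to be verified on the target runtime):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical path
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]

def push_and_invoke(new_samples):
    """Shift the 150-point window left by 50 in place and append 50 new points."""
    window = interpreter.tensor(input_index)()  # numpy view into the input tensor
    window[..., :-50] = window[..., 50:]        # keep the 100 overlapping points
    window[..., -50:] = new_samples             # write the 50 freshest points
    del window                                  # release the view before invoking
    interpreter.invoke()
    return interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

On a microcontroller with TFLite Micro the same idea would be a memmove on the input tensor's data pointer, but note that some memory planners reuse the input buffer as scratch space during inference, in which case a small 50-sample staging buffer is still needed.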
st207002
RUN the below mentioned code for waymo datasets pre-processing import argparse import multiprocessing import os import cv2 import numpy as np import tensorflow as tf from tqdm import tqdm roadgraph_features = { "roadgraph_samples/dir": tf.io.FixedLenFeature( [20000, 3], tf.float32, default_value=None ), "roadgraph_samples/id": tf.io.FixedLenFeature( [20000, 1], tf.int64, default_value=None ), "roadgraph_samples/type": tf.io.FixedLenFeature( [20000, 1], tf.int64, default_value=None ), "roadgraph_samples/valid": tf.io.FixedLenFeature( [20000, 1], tf.int64, default_value=None ), "roadgraph_samples/xyz": tf.io.FixedLenFeature( [20000, 3], tf.float32, default_value=None ), } # Features of other agents. state_features = { "state/id": tf.io.FixedLenFeature([128], tf.float32, default_value=None), "state/type": tf.io.FixedLenFeature([128], tf.float32, default_value=None), "state/is_sdc": tf.io.FixedLenFeature([128], tf.int64, default_value=None), "state/tracks_to_predict": tf.io.FixedLenFeature( [128], tf.int64, default_value=None ), "state/current/bbox_yaw": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/height": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/length": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/timestamp_micros": tf.io.FixedLenFeature( [128, 1], tf.int64, default_value=None ), "state/current/valid": tf.io.FixedLenFeature( [128, 1], tf.int64, default_value=None ), "state/current/vel_yaw": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/velocity_x": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/velocity_y": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/speed": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/width": tf.io.FixedLenFeature( [128, 1], tf.float32, default_value=None ), "state/current/x": tf.io.FixedLenFeature([128, 1], tf.float32, default_value=None), "state/current/y": tf.io.FixedLenFeature([128, 1], tf.float32, default_value=None), "state/current/z": tf.io.FixedLenFeature([128, 1], tf.float32, default_value=None), "state/future/bbox_yaw": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/height": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/length": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/timestamp_micros": tf.io.FixedLenFeature( [128, 80], tf.int64, default_value=None ), "state/future/valid": tf.io.FixedLenFeature( [128, 80], tf.int64, default_value=None ), "state/future/vel_yaw": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/velocity_x": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/velocity_y": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/width": tf.io.FixedLenFeature( [128, 80], tf.float32, default_value=None ), "state/future/x": tf.io.FixedLenFeature([128, 80], tf.float32, default_value=None), "state/future/y": tf.io.FixedLenFeature([128, 80], tf.float32, default_value=None), "state/future/z": tf.io.FixedLenFeature([128, 80], tf.float32, default_value=None), "state/past/bbox_yaw": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/height": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/length": tf.io.FixedLenFeature( [128, 10], tf.float32, 
default_value=None ), "state/past/timestamp_micros": tf.io.FixedLenFeature( [128, 10], tf.int64, default_value=None ), "state/past/valid": tf.io.FixedLenFeature([128, 10], tf.int64, default_value=None), "state/past/vel_yaw": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/velocity_x": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/velocity_y": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/speed": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/width": tf.io.FixedLenFeature( [128, 10], tf.float32, default_value=None ), "state/past/x": tf.io.FixedLenFeature([128, 10], tf.float32, default_value=None), "state/past/y": tf.io.FixedLenFeature([128, 10], tf.float32, default_value=None), "state/past/z": tf.io.FixedLenFeature([128, 10], tf.float32, default_value=None), "scenario/id": tf.io.FixedLenFeature([1], tf.string, default_value=None), } traffic_light_features = { "traffic_light_state/current/state": tf.io.FixedLenFeature( [1, 16], tf.int64, default_value=None ), "traffic_light_state/current/valid": tf.io.FixedLenFeature( [1, 16], tf.int64, default_value=None ), "traffic_light_state/current/id": tf.io.FixedLenFeature( [1, 16], tf.int64, default_value=None ), "traffic_light_state/current/x": tf.io.FixedLenFeature( [1, 16], tf.float32, default_value=None ), "traffic_light_state/current/y": tf.io.FixedLenFeature( [1, 16], tf.float32, default_value=None ), "traffic_light_state/current/z": tf.io.FixedLenFeature( [1, 16], tf.float32, default_value=None ), "traffic_light_state/past/state": tf.io.FixedLenFeature( [10, 16], tf.int64, default_value=None ), "traffic_light_state/past/valid": tf.io.FixedLenFeature( [10, 16], tf.int64, default_value=None ), # "traffic_light_state/past/id": # tf.io.FixedLenFeature([1, 16], tf.int64, default_value=None), "traffic_light_state/past/x": tf.io.FixedLenFeature( [10, 16], tf.float32, default_value=None ), "traffic_light_state/past/y": tf.io.FixedLenFeature( [10, 16], tf.float32, default_value=None ), "traffic_light_state/past/z": tf.io.FixedLenFeature( [10, 16], tf.float32, default_value=None ), } features_description = {} features_description.update(roadgraph_features) features_description.update(state_features) features_description.update(traffic_light_features) MAX_PIXEL_VALUE = 255 N_ROADS = 21 road_colors = [int(x) for x in np.linspace(1, MAX_PIXEL_VALUE, N_ROADS).astype("uint8")] idx2type = ["unset", "vehicle", "pedestrian", "cyclist", "other"] def parse_arguments(): parser = argparse.ArgumentParser() parser.add_argument("--data", type=str, required=True, help="Path to raw data") parser.add_argument("--out", type=str, required=True, help="Path to save data") parser.add_argument( "--no-valid", action="store_true", help="Use data with flag `valid = 0`" ) parser.add_argument( "--use-vectorize", action="store_true", help="Generate vector data" ) parser.add_argument( "--n-jobs", type=int, default=20, required=False, help="Number of threads" ) parser.add_argument( "--n-shards", type=int, default=8, required=False, help="Use `1/n_shards` of full dataset", ) parser.add_argument( "--each", type=int, default=0, required=False, help="Take `each` sample in shard", ) args = parser.parse_args() return args def rasterize( tracks_to_predict, past_x, past_y, current_x, current_y, current_yaw, past_yaw, past_valid, current_valid, agent_type, roadlines_coords, roadlines_types, roadlines_valid, roadlines_ids, widths, lengths, agents_ids, tl_states, 
tl_ids, tl_valids, future_x, future_y, future_valid, scenario_id, validate, crop_size=512, raster_size=224, shift=2 ** 9, magic_const=3, n_channels=11, ): GRES = [] displacement = np.array([[raster_size // 4, raster_size // 2]]) * shift tl_dict = {"green": set(), "yellow": set(), "red": set()} # Unknown = 0, Arrow_Stop = 1, Arrow_Caution = 2, Arrow_Go = 3, Stop = 4, # Caution = 5, Go = 6, Flashing_Stop = 7, Flashing_Caution = 8 for tl_state, tl_id, tl_valid in zip( tl_states.flatten(), tl_ids.flatten(), tl_valids.flatten() ): if tl_valid == 0: continue if tl_state in [1, 4, 7]: tl_dict["red"].add(tl_id) if tl_state in [2, 5, 8]: tl_dict["yellow"].add(tl_id) if tl_state in [3, 6]: tl_dict["green"].add(tl_id) XY = np.concatenate( ( np.expand_dims(np.concatenate((past_x, current_x), axis=1), axis=-1), np.expand_dims(np.concatenate((past_y, current_y), axis=1), axis=-1), ), axis=-1, ) GT_XY = np.concatenate( (np.expand_dims(future_x, axis=-1), np.expand_dims(future_y, axis=-1)), axis=-1 ) YAWS = np.concatenate((past_yaw, current_yaw), axis=1) agents_valid = np.concatenate((past_valid, current_valid), axis=1) roadlines_valid = roadlines_valid.reshape(-1) roadlines_coords = ( roadlines_coords[:, :2][roadlines_valid > 0] * shift * magic_const * raster_size / crop_size ) roadlines_types = roadlines_types[roadlines_valid > 0] roadlines_ids = roadlines_ids.reshape(-1)[roadlines_valid > 0] for _, ( xy, current_val, val, _, yaw, agent_id, gt_xy, future_val, predict, ) in enumerate( zip( XY, current_valid, agents_valid, agent_type, current_yaw.flatten(), agents_ids, GT_XY, future_valid, tracks_to_predict.flatten(), ) ): if (not validate and future_val.sum() == 0) or (validate and predict == 0): continue if current_val == 0: continue RES_ROADMAP = ( np.ones((raster_size, raster_size, 3), dtype=np.uint8) * MAX_PIXEL_VALUE ) RES_EGO = [ np.zeros((raster_size, raster_size, 1), dtype=np.uint8) for _ in range(n_channels) ] RES_OTHER = [ np.zeros((raster_size, raster_size, 1), dtype=np.uint8) for _ in range(n_channels) ] xy_val = xy[val > 0] if len(xy_val) == 0: continue unscaled_center_xy = xy_val[-1].reshape(1, -1) center_xy = unscaled_center_xy * shift * magic_const * raster_size / crop_size rot_matrix = np.array( [ [np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)], ] ) centered_roadlines = (roadlines_coords - center_xy) @ rot_matrix + displacement centered_others = ( XY.reshape(-1, 2) * shift * magic_const * raster_size / crop_size - center_xy ) @ rot_matrix + displacement centered_others = centered_others.reshape(128, n_channels, 2) centered_gt = (gt_xy - unscaled_center_xy) @ rot_matrix unique_road_ids = np.unique(roadlines_ids) for road_id in unique_road_ids: if road_id >= 0: roadline = centered_roadlines[roadlines_ids == road_id] road_type = roadlines_types[roadlines_ids == road_id].flatten()[0] road_color = road_colors[road_type] for c, rgb in zip( ["green", "yellow", "red"], [ (0, MAX_PIXEL_VALUE, 0), (MAX_PIXEL_VALUE, 211, 0), (MAX_PIXEL_VALUE, 0, 0), ], ): if road_id in tl_dict[c]: road_color = rgb RES_ROADMAP = cv2.polylines( RES_ROADMAP, [roadline.astype(int)], False, road_color, shift=9, ) unique_agent_ids = np.unique(agents_ids) is_ego = False self_type = 0 _tmp = 0 for other_agent_id in unique_agent_ids: other_agent_id = int(other_agent_id) if other_agent_id < 1: continue if other_agent_id == agent_id: is_ego = True self_type = agent_type[agents_ids == other_agent_id] else: is_ego = False _tmp += 1 agent_lane = centered_others[agents_ids == other_agent_id][0] agent_valid = 
agents_valid[agents_ids == other_agent_id] agent_yaw = YAWS[agents_ids == other_agent_id] agent_l = lengths[agents_ids == other_agent_id] agent_w = widths[agents_ids == other_agent_id] for timestamp, (coord, valid_coordinate, past_yaw,) in enumerate( zip( agent_lane, agent_valid.flatten(), agent_yaw.flatten(), ) ): if valid_coordinate == 0: continue box_points = ( np.array( [ -agent_l, -agent_w, agent_l, -agent_w, agent_l, agent_w, -agent_l, agent_w, ] ) .reshape(4, 2) .astype(np.float32) * shift * magic_const / 2 * raster_size / crop_size ) box_points = ( box_points @ np.array( ( (np.cos(yaw - past_yaw), -np.sin(yaw - past_yaw)), (np.sin(yaw - past_yaw), np.cos(yaw - past_yaw)), ) ).reshape(2, 2) ) _coord = np.array([coord]) box_points = box_points + _coord box_points = box_points.reshape(1, -1, 2).astype(np.int32) if is_ego: cv2.fillPoly( RES_EGO[timestamp], box_points, color=MAX_PIXEL_VALUE, shift=9, ) else: cv2.fillPoly( RES_OTHER[timestamp], box_points, color=MAX_PIXEL_VALUE, shift=9, ) raster = np.concatenate([RES_ROADMAP] + RES_EGO + RES_OTHER, axis=2) raster_dict = { "object_id": agent_id, "raster": raster, "yaw": yaw, "shift": unscaled_center_xy, "_gt_marginal": gt_xy, "gt_marginal": centered_gt, "future_val_marginal": future_val, "gt_joint": GT_XY[tracks_to_predict.flatten() > 0], "future_val_joint": future_valid[tracks_to_predict.flatten() > 0], "scenario_id": scenario_id, "self_type": self_type, } GRES.append(raster_dict) return GRES F2I = { "x": 0, "y": 1, "s": 2, "vel_yaw": 3, "bbox_yaw": 4, "l": 5, "w": 6, "agent_type_range": [7, 12], "lane_range": [13, 33], "lt_range": [34, 43], "global_idx": 44, } def ohe(N, n, zero): n = int(n) N = int(N) M = np.eye(N) diff = 0 if zero: M = np.concatenate((np.zeros((1, N)), M), axis=0) diff = 1 return M[n + diff] def make_2d(arraylist): n = len(arraylist) k = arraylist[0].shape[0] a2d = np.zeros((n, k)) for i in range(n): a2d[i] = arraylist[i] return a2d def vectorize( past_x, current_x, past_y, current_y, past_valid, current_valid, past_speed, current_speed, past_velocity_yaw, current_velocity_yaw, past_bbox_yaw, current_bbox_yaw, Agent_id, Agent_type, Roadline_id, Roadline_type, Roadline_valid, Roadline_xy, Tl_rl_id, Tl_state, Tl_valid, W, L, tracks_to_predict, future_valid, validate, n_channels=11, ): XY = np.concatenate( ( np.expand_dims(np.concatenate((past_x, current_x), axis=1), axis=-1), np.expand_dims(np.concatenate((past_y, current_y), axis=1), axis=-1), ), axis=-1, ) Roadline_valid = Roadline_valid.flatten() RoadXY = Roadline_xy[:, :2][Roadline_valid > 0] Roadline_type = Roadline_type[Roadline_valid > 0].flatten() Roadline_id = Roadline_id[Roadline_valid > 0].flatten() tl_state = [[-1] for _ in range(9)] for lane_id, state, valid in zip( Tl_rl_id.flatten(), Tl_state.flatten(), Tl_valid.flatten() ): if valid == 0: continue tl_state[int(state)].append(lane_id) VALID = np.concatenate((past_valid, current_valid), axis=1) Speed = np.concatenate((past_speed, current_speed), axis=1) Vyaw = np.concatenate((past_velocity_yaw, current_velocity_yaw), axis=1) Bbox_yaw = np.concatenate((past_bbox_yaw, current_bbox_yaw), axis=1) GRES = [] ROADLINES_STATE = [] GLOBAL_IDX = -1 unique_road_ids = np.unique(Roadline_id) for road_id in unique_road_ids: GLOBAL_IDX += 1 roadline_coords = RoadXY[Roadline_id == road_id] roadline_type = Roadline_type[Roadline_id == road_id][0] for i, (x, y) in enumerate(roadline_coords): if i > 0 and i < len(roadline_coords) - 1 and i % 3 > 0: continue tmp = np.zeros(48) tmp[0] = x tmp[1] = y tmp[13:33] = ohe(20, 
roadline_type, True) tmp[44] = GLOBAL_IDX ROADLINES_STATE.append(tmp) ROADLINES_STATE = make_2d(ROADLINES_STATE) for ( agent_id, xy, current_val, valid, _, bbox_yaw, _, _, _, future_val, predict, ) in zip( Agent_id, XY, current_valid, VALID, Speed, Bbox_yaw, Vyaw, W, L, future_valid, tracks_to_predict.flatten(), ): if (not validate and future_val.sum() == 0) or (validate and predict == 0): continue if current_val == 0: continue GLOBAL_IDX = -1 RES = [] xy_val = xy[valid > 0] if len(xy_val) == 0: continue centered_xy = xy_val[-1].copy().reshape(-1, 2) ANGLE = bbox_yaw[-1] rot_matrix = np.array( [ [np.cos(ANGLE), -np.sin(ANGLE)], [np.sin(ANGLE), np.cos(ANGLE)], ] ).reshape(2, 2) local_roadlines_state = ROADLINES_STATE.copy() local_roadlines_state[:, :2] = ( local_roadlines_state[:, :2] - centered_xy ) @ rot_matrix.astype(np.float64) local_XY = ((XY - centered_xy).reshape(-1, 2) @ rot_matrix).reshape( 128, n_channels, 2 ) for ( other_agent_id, other_agent_type, other_xy, other_valids, other_speeds, other_bbox_yaws, other_v_yaws, other_w, other_l, other_predict, ) in zip( Agent_id, Agent_type, local_XY, VALID, Speed, Bbox_yaw, Vyaw, W.flatten(), L.flatten(), tracks_to_predict.flatten(), ): if other_valids.sum() == 0: continue GLOBAL_IDX += 1 for timestamp, ( (x, y), v, other_speed, other_v_yaw, other_bbox_yaw, ) in enumerate( zip(other_xy, other_valids, other_speeds, other_v_yaws, other_bbox_yaws) ): if v == 0: continue tmp = np.zeros(48) tmp[0] = x tmp[1] = y tmp[2] = other_speed tmp[3] = other_v_yaw - ANGLE tmp[4] = other_bbox_yaw - ANGLE tmp[5] = float(other_l) tmp[6] = float(other_w) tmp[7:12] = ohe(5, other_agent_type, True) tmp[43] = timestamp tmp[44] = GLOBAL_IDX tmp[45] = 1 if other_agent_id == agent_id else 0 tmp[46] = other_predict tmp[47] = other_agent_id RES.append(tmp) local_roadlines_state[:, 44] = local_roadlines_state[:, 44] + GLOBAL_IDX + 1 RES = np.concatenate((make_2d(RES), local_roadlines_state), axis=0) GRES.append(RES) return GRES def merge( data, proc_id, validate, out_dir, use_vectorize=False, max_rand_int=10000000000 ): parsed = tf.io.parse_single_example(data, features_description) raster_data = rasterize( parsed["state/tracks_to_predict"].numpy(), parsed["state/past/x"].numpy(), parsed["state/past/y"].numpy(), parsed["state/current/x"].numpy(), parsed["state/current/y"].numpy(), parsed["state/current/bbox_yaw"].numpy(), parsed["state/past/bbox_yaw"].numpy(), parsed["state/past/valid"].numpy(), parsed["state/current/valid"].numpy(), parsed["state/type"].numpy(), parsed["roadgraph_samples/xyz"].numpy(), parsed["roadgraph_samples/type"].numpy(), parsed["roadgraph_samples/valid"].numpy(), parsed["roadgraph_samples/id"].numpy(), parsed["state/current/width"].numpy(), parsed["state/current/length"].numpy(), parsed["state/id"].numpy(), parsed["traffic_light_state/current/state"].numpy(), parsed["traffic_light_state/current/id"].numpy(), parsed["traffic_light_state/current/valid"].numpy(), parsed["state/future/x"].numpy(), parsed["state/future/y"].numpy(), parsed["state/future/valid"].numpy(), parsed["scenario/id"].numpy()[0].decode("utf-8"), validate=validate, ) if use_vectorize: vector_data = vectorize( parsed["state/past/x"].numpy(), parsed["state/current/x"].numpy(), parsed["state/past/y"].numpy(), parsed["state/current/y"].numpy(), parsed["state/past/valid"].numpy(), parsed["state/current/valid"].numpy(), parsed["state/past/speed"].numpy(), parsed["state/current/speed"].numpy(), parsed["state/past/vel_yaw"].numpy(), parsed["state/current/vel_yaw"].numpy(), 
parsed["state/past/bbox_yaw"].numpy(), parsed["state/current/bbox_yaw"].numpy(), parsed["state/id"].numpy(), parsed["state/type"].numpy(), parsed["roadgraph_samples/id"].numpy(), parsed["roadgraph_samples/type"].numpy(), parsed["roadgraph_samples/valid"].numpy(), parsed["roadgraph_samples/xyz"].numpy(), parsed["traffic_light_state/current/id"].numpy(), parsed["traffic_light_state/current/state"].numpy(), parsed["traffic_light_state/current/valid"].numpy(), parsed["state/current/width"].numpy(), parsed["state/current/length"].numpy(), parsed["state/tracks_to_predict"].numpy(), parsed["state/future/valid"].numpy(), validate=validate, ) for i in range(len(raster_data)): if use_vectorize: raster_data[i]["vector_data"] = vector_data[i].astype(np.float16) r = np.random.randint(max_rand_int) filename = f"{idx2type[int(raster_data[i]['self_type'])]}_{proc_id}_{str(i).zfill(5)}_{r}.npz" np.savez_compressed(os.path.join(out_dir, filename), **raster_data[i]) def main(): args = parse_arguments() print(args) if not os.path.exists(args.out): os.mkdir(args.out) files = os.listdir(args.data) dataset = tf.data.TFRecordDataset( [os.path.join(args.data, f) for f in files], num_parallel_reads=1 ) if args.n_shards > 1: dataset = dataset.shard(args.n_shards, args.each) # p = multiprocessing.Pool(args.n_jobs) proc_id = 0 # res = [] for data in tqdm(dataset.as_numpy_iterator()): # print(data) proc_id += 1 kwds=dict(data=data, proc_id=proc_id, validate=not args.no_valid, out_dir=args.out, use_vectorize=args.use_vectorize) # print(kwds) merge(**kwds) # for r in tqdm(res): # r.get() if __name__ == "__main__": main() if __name__ == "__main__": main() ERROR OUTPUT root@6f4483fc53c1:/app/waymo-adas-main/waymo-motion-prediction-2021# python3 prerender1.py --data /app/waymo-adas-main/waymo-dataset/original/train/ --out /app/waymo-adas-main/data/train1 False Namespace(data='/app/waymo-adas-main/waymo-dataset/original/train/', each=0, n_jobs=1, n_shards=8, no_valid=False, out='/app/waymo-adas-main/data/train1', use_vectorize=False) <ShardDataset shapes: (), types: tf.string> 0it [00:00, ?it/s] Traceback (most recent call last): File "prerender1.py", line 829, in <module> main() File "prerender1.py", line 822, in main merge(**kwds) File "prerender1.py", line 731, in merge parsed = tf.io.parse_single_example(data, features_description) File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler raise e.with_traceback(filtered_tb) from None File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 58, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 40: invalid start byte
st207003
Hi All, just wanted to call attention to our meeting being moved to 12/9 this month. This will be our first backlog grooming meeting, so I'm looking forward to your attendance!
st207004
Unfortunately I have a late conflict, and the calendar slot should be moved to 12/16. I sincerely apologize.
st207005
SIG Build’s next meeting will be tomorrow, Tuesday, December 7, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes, and feel free to suggest new agenda items. One of the big discussion topics will be the Docker containers, which are discussed here: Adopting Open-Source Dockerfiles for Official tf-nightly CI
st207006
Thanks to everyone who attended yesterday. Here’s a summary of the biggest topics:
- Next month’s meeting will be January 11th. I have not moved the calendar slot yet.
- We discussed the Docker containers. Notable points: @angerson will work on a public roadmap; there are some permissions-related tests that should not exist; GPU passthrough isn’t working right now; the resulting wheels do seem to be the same; there are many known issues with cache misses, but they are low priority.
- DevInfra is aware of NumPy’s Python 3.7 deprecation plans and needs to discuss with the TF product team what we’re going to do.
st207007
One more point (thanks @mihaimaruseac): the branch cut for TF 2.8 is on Dec 16th. The final release will occur near the end of January and will be accompanied by patch releases to TF 2.5, 2.6, and 2.7 around a similar date. We will have a clearer view of the final release dates at the next SIG Build meeting.
st207008
Hi there, I am curious about TFG’s designed representation of tensors. From the code, I found that a tensor is represented as an attribute of the tfg op:

// tensorflow/core/ir/importexport/convert_tensor.h
// Converts an TensorFlow tensor proto into an MLIR elements attribute.
tensorflow::StatusOr<ElementsAttr> ConvertTensorProto(
    const tensorflow::TensorProto& input_tensor, Builder builder,
    TFGraphDialect* tfgDialect);

So a TensorFlow tensor is converted to an mlir::ElementsAttr for a specific op, i.e. it becomes an attribute of the op node. However, the MLIR documentation on attributes says that “attributes are compile-time known constant values”. From my understanding, tfg therefore does not allow modifying the tensor’s data, because the tensor attribute should be a constant value. In that case, are the following understandings correct?
1. tfg by design does not allow optimizations like constant folding, which would modify a tensor’s data (or it has to recreate a new node and replace the old one).
2. Why not create a type instead of an attribute in TFG MLIR? A type would allow the mutable part described in the docs. A similar design appears on ShapeAttr: it also does not have a setShape method, meaning it by design cannot be modified.
3. tfg is designed to replace grappler, and grappler could easily change these things in a NodeDef. How can tfg do the same? Should tfg always create a new node and replace the old one?
st207009
Solved by Mehdi_AMINI in post #8.
st207010
Stonepia: tfg by design does not allow optimizations like constant_folding, which would modify tensor’s data. (Or have to recreate a new node and replace the old one). The attribute is immutable, but the node has a mutable dictionary of attribute (this isn’t specific to TFG, this is standard MLIR): so you can just update the attribute on a node. That said, when you do constant folding, in general you replace some nodes with a constant node so you have to recreate a new node and replace the old one regardless. Stonepia: Why not create a type instead of attribute in TFG MLIR? As type would allow mutable part listed in the link. Types aren’t mutable, the exception you’re pointing at is only intended to be able to implement recursive types (you need to create the Type and then mutate it to add a reference to itself). Regardless, I’m not sure what is the problem you’re trying to solve with this? Stonepia: As tfg is designed to replace grappler, in grappler, it could easily change those things in NodeDef, how could tfg do these things? Should tfg always create a new node and replace the old one? As mentioned above, you can’t change the content of an attribute in-place, but you swap individual attributes on an operation. Similarly you can swap the type of individual results of an operation in place. Doing so you don’t modify a type itself, you create a new type and swap it from the old type. In general MLIR is much more efficient than the proto representation used by Grappler. If you haven’t seen it yet, I invite you to look into the MLIR tutorial (slides - recording - online step-by-step) as well as this doc that explains the in-memory representation of the IR and in particular the def-use chains.
st207011
Thank you very much for this detailed reply!

Mehdi_AMINI: Regardless, I’m not sure what is the problem you’re trying to solve with this?

The problem I am trying to solve is to implement a graph optimization pass based on TFG, similar to what GSPMD does for XLA. This optimization pass would add additional information to TFG’s nodes, and this information would be changed during the optimization process. The info could be something like “estimated split strategy for the current op”; let’s call it split_strategy for simplicity. I would like this split_strategy to flow through the whole graph, so that every op node has a split_strategy. To do that, I was looking at defining a dialect called toy, based on tfg, which wraps mlir::TensorType in a toy::TensorType. This toy::TensorType would have an additional field split_strategy. The split_strategy would change frequently while searching for an optimal one, so recreating the node every time would be too expensive.

Mehdi_AMINI: Types aren’t mutable, the exception you’re pointing at is only intended to be able to implement recursive types

Mehdi_AMINI: As mentioned above, you can’t change the content of an attribute in-place, but you swap individual attributes on an operation.

As you mentioned, if types are not mutable in general, could you tell me a little more about how to bind mutable information to the tfg tensor as an attribute? I mean, could you give a little more of a hint on how to swap individual attributes? By defining the attribute as a pointer? I thought converting an mlir::ElementsAttr (the tfg-converted tensor) to a toy::TensorType (in a toy dialect derived from tfg) and doing whatever I want on that type was a good choice. If mutable types are not intended, then what should I refer to if I wish to frequently update information bound to a tensor attribute? Thank you again for your help!
st207012
If you’d like to carry information on the nodes, you can just add attributes to the nodes themselves freely: that’s quite cheap to do. If you’d rather model this on the types themselves, then you’d just re-create a new type every time you want to modify the “split_strategy” and set it on the actual “edge” in the graph (a Value in MLIR terminology). The actual mechanism depends on how this “split_strategy” is represented: is it an enum or a more complex data structure?
st207013
Thanks for the reply!

Mehdi_AMINI: The actual mechanism depends on how this “split_strategy” is represented: is it an enum or a more complex data structure?

It should be a complex data structure containing something like a 1D array representing shapes, a 2D array representing devices, etc. So it is a struct.

Mehdi_AMINI: If you’d rather model this on the types themselves, then you’d just re-create a new type every time you want to modify the “split_strategy” and set it on the actual “edge” in the graph (a Value in MLIR terminology).

In that case, it seems that carrying the information on Types is expensive, since I need to recreate the type every time it changes. So I have to attach a pointer-like attribute and mutate the underlying information, is this correct? You saved my day! Thank you very much for your help!
st207014
Stonepia: So I have to attach a pointer-like attribute, and mutate that underlying information, is this correct? You really can’t mutate Type and Attribute safely: they are stored in a map and hashed. Every operation (node…) using a given type will reuse the same instance. When you “recreate” a Type what happens is actually hashing and lookup to see if it already exists, in which case it gets returned. In general, if you need to just compute transient information in a transformation, you may not store them directly by mutating the IR, you may keep a map on the side from operations to “split_strategy” and use this as your temporary storage. It does not prevent you from materializing the “split_strategy” as type/attribute annotation when you’re done, but that shouldn’t be too heavy any more at this point.
st207015
Hi everyone! I was converting a custom data generator to a tf.data.Dataset to use my dataset in a memory-efficient way. However, I am encountering this error: ValueError: Expect x to be a non-empty array or dataset. To keep things clean here, I have put all the heavy information in a GitHub thread. If anyone requires additional information or help with reproduction, please do not hesitate to ping me! As I have mentioned in my issue, using torch.random.randn((...)) for a dummy dataset seems the fastest way. Dissecting the error message, it seems that in training.py (keras/engine) the Dataset it gets is empty (even though it’s not), which yields no updates to the model parameters. Since no updates are made, no logs are created, logs stays at its original value None, and it hits the raise statement. If anyone has any idea, please do help me out!
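One quick sanity check before calling fit (a sketch, assuming train_dataset is the tf.data.Dataset being handed to AutoKeras/Keras) is to look at its cardinality and pull a concrete batch:

```python
import tensorflow as tf

# from_generator pipelines often report UNKNOWN cardinality (-2), so the more
# telling check is whether one element can actually be materialized.
print("cardinality:", tf.data.experimental.cardinality(train_dataset).numpy())
for batch in train_dataset.take(1):
    print("first batch:", tf.nest.map_structure(lambda t: t.shape, batch))
```

If the loop body never runs, the pipeline really is empty by the time fit sees it; if it prints a batch, the problem is more likely in how the dataset is re-iterated or split downstream.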
st207016
I made a little progress, but help is really appreciated as I’m kinda confused. The issue seems to be directly at autokeras/auto_model.py at 78223dfc63056414e32e202b158dbc981be48dc9 · keras-team/autokeras · GitHub: just when I pass the tf.Dataset, it’s simply empty. I have checked numerous times and can confirm I am not passing an empty Dataset. Strange, but I will surely double-check. Again, if anyone can expedite this process for me, that would be really appreciated!
st207017
When I learned how to use generators with Keras, this thread helped me: python - How to make a generator callable? - Stack Overflow
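For reference, the usual pattern from that thread is to give tf.data.Dataset.from_generator a callable (e.g. a lambda) plus an explicit output signature; a minimal sketch with made-up shapes:

```python
import numpy as np
import tensorflow as tf

def my_generator():
    # Hypothetical generator yielding (features, label) pairs one at a time.
    for _ in range(1000):
        yield np.random.rand(128, 128, 3).astype("float32"), np.int32(0)

train_dataset = tf.data.Dataset.from_generator(
    lambda: my_generator(),  # a callable that returns a fresh generator
    output_signature=(
        tf.TensorSpec(shape=(128, 128, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(), dtype=tf.int32),
    ),
).batch(32)
```

A common cause of a dataset that looks empty is passing an already-created generator object instead of a callable, since the object is exhausted after a single pass.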
st207018
Oh, it’s definitely callable - I’ve already put a lambda in the definition of train_dataset, but that doesn’t seem to be the source of the error at all. The actual problem is fairly simple, but I can’t seem to get my head around it at all: https://www.toptal.com/developers/hastebin/uwanewivul.py Very simply, in my own script, train_dataset seems to change its value automatically after a few lines of code and comments, none of which actually reference it directly.
st207019
Hi, I would like to collect a little feedback about the idea of standardizing the new candidate-contributor experience across SIGs, as we move toward a multi-repository, SIG-oriented ecosystem (e.g. see the new TF-Micro and Keras standalone repos, or the “TF core as product” RFC). With this I hope that we can agree on a minimal set of common content for the two main GitHub community health files. I have currently submitted two draft PRs to collect feedback and comments: README.md and CONTRIBUTING.md. At the end of the process, if we can find consensus on the minimal set of info to maintain, every SIG could also add extra sections in the footers, or links to other Markdown files available in its own repository. I hope that we can lower the cognitive overhead for a candidate contributor navigating the ecosystem. For general comments we can use this thread to discuss the topic. Thanks /cc @thea @Joana @ewilderj @yarri-oss
st207020
Thanks for starting this thread. I’d like to make sure we give special attention to the community Maintainers. We have a mix of very active vs. more passive maintainers across the SIGs – both types are important and to be encouraged! I would like to propose adding a CALL_FOR_MAINTAINERS.md file to your list above.
st207021
It is true that one never thinks about the file format; yes, the TensorFlow team should look into this problem.
st207022
The Keras team is trying to do something more in its own subsystem at github.com/keras-team/governance (“project setup best practices”, a PR opened by haifeng-jin on Dec 6, 2021). It is an open discussion on how to set up the open-source projects under keras-team/keras, but some of its points have a scope similar to this thread. The goal of that document is to:
- Improve the overall quality of the projects. The fact that projects all follow the same standard for the dev process, which may evolve over time, will ensure quality in all aspects.
- Unify the external contributing experience. External open-source contributors may contribute to multiple Keras projects by submitting issues or pull requests; they don’t need to learn from different contributing guides.
- Save time for the project leads. They save time by copying and pasting the same setup and by avoiding the listed caveats.
/cc @haifeng
st207023
Hi everyone, I am using the CRFModelWrapper method, following the tutorial at addons/layers_crf.ipynb at add_crf_tutorial · howl-anderson/addons · GitHub, to implement a Bi-LSTM-CRF neural network for a multi-class NER problem. The model I built (code is shown below) can be trained with multiple GPUs, and it can be saved and loaded with the tf.keras model save/load functions. However, when I saved the model, it printed the warning shown below, which I am not sure really matters. After that, when I loaded the trained model, it printed many warnings about inconsistent output shapes (also posted below), and when I want to train it again using the fit function, it fails with the error at the end of this post.

Saving warning:
WARNING:absl:Found untraced functions such as embedding_layer_call_and_return_conditional_losses, embedding_layer_call_fn, embedding_1_layer_call_and_return_conditional_losses, embedding_1_layer_call_fn, multi_head_attention_layer_call_and_return_conditional_losses while saving (showing 5 of 65). These functions will not be directly callable after loading.

Loading warnings (the same two messages repeat dozens of times with different timestamps; the last instance is shown below):
2021-11-21 00:28:34.222763: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:34.374067: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:46.266813: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
Despite these warnings, the model can still be saved, loaded, and used for prediction, but the loaded model cannot be re-trained. The error message is posted below; it seems that the loss function inside the CRFModelWrapper cannot be called again, so the gradient calculation cannot be done. I am wondering whether the CRFModelWrapper does not support this serialization (save -> load -> train), or whether I have made some mistake. If so, is there any workaround to retrain the model? Thank you very much.
Error message when re-training the trained model:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py", line 1184, in fit
    tmp_logs = self.train_function(iterator)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 885, in __call__
    result = self._call(*args, **kwds)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 933, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 759, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\function.py", line 3066, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\function.py", line 3463, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\function.py", line 3298, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1007, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 668, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\framework\func_graph.py", line 994, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:853 train_function  *
        return step_function(self, iterator)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:842 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1286 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2849 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3632 _call_for_each_replica
        return fn(*args, **kwargs)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:835 run_step  **
        outputs = model.train_step(data)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:791 train_step
        self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:522 minimize
        return self.apply_gradients(grads_and_vars, name=name)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:622 apply_gradients
        grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
    C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\optimizer_v2\utils.py:72 filter_empty_gradients
        raise ValueError("No gradients provided for any variable: %s." %

    ValueError: No gradients provided for any variable: ['chain_kernel:0', 'left_boundary:0', 'right_boundary:0', 'crf_model_wrapper/crf/dense/kernel:0', 'crf_model_wrapper/crf/dense/bias:0', 'embedding/embeddings:0', 'bidirectional/forward_bilstm/lstm_cell_1/kernel:0', 'bidirectional/forward_bilstm/lstm_cell_1/recurrent_kernel:0', 'bidirectional/forward_bilstm/lstm_cell_1/bias:0', 'bidirectional/backward_bilstm/lstm_cell_2/kernel:0', 'bidirectional/backward_bilstm/lstm_cell_2/recurrent_kernel:0', 'bidirectional/backward_bilstm/lstm_cell_2/bias:0', 'time_distributed/kernel:0', 'time_distributed/bias:0'].

My code, except for the data preprocessing, is shown below.

#%% Build the base model_1
def build_bilstm_crf_model(lstm_unit, fc_unit) -> tf.keras.Model:
    x = tf.keras.layers.Input(shape=(None,), dtype=tf.float32, name="inn")
    y = tf.keras.layers.Embedding(1, 1, mask_zero=True)(x)
    y = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(lstm_unit, return_sequences=True, name="bilstm")
    )(y)
    y = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(fc_unit, name="fc")
    )(y)
    return tf.keras.Model(inputs=x, outputs=y)


# CRF wrapper model
class CRFModelWrapper(tf.keras.Model):
    def __init__(
        self,
        model: tf.keras.Model,
        units: int,
        chain_initializer="orthogonal",
        use_boundary: bool = True,
        boundary_initializer="zeros",
        use_kernel: bool = True,
        **kwargs
    ):
        super().__init__()
        self.crf_layer = tfa.layers.CRF(
            units=units,
            chain_initializer=chain_initializer,
            use_boundary=use_boundary,
            boundary_initializer=boundary_initializer,
            use_kernel=use_kernel,
            **kwargs
        )
        self.base_model = model

    def unpack_training_data(self, data):
        # override me, if this is not suited to your task
        if len(data) == 3:
            x, y, sample_weight = data
        else:
            x, y = data
            sample_weight = None
        return x, y, sample_weight

    def call(self, inputs, training=None, mask=None, return_crf_internal=False):
        base_model_outputs = self.base_model(inputs, training, mask)

        # change next line, if your model has more outputs
        crf_input = base_model_outputs

        decode_sequence, potentials, sequence_length, kernel = self.crf_layer(crf_input)
        # potentials = predicted y during training

        # change next line, if your base model has more outputs
        # Always keep `(potentials, sequence_length, kernel), decode_sequence, `
        # as first two outputs of model.
        # current `self.train_step()` expects such settings
        outputs = (potentials, sequence_length, kernel), decode_sequence

        if return_crf_internal:
            return outputs
        else:
            # outputs[0] is the crf internal, skip it
            output_without_crf_internal = outputs[1:]
            # it is nicer to return a tensor instead of a one-tensor list
            if len(output_without_crf_internal) == 1:
                return output_without_crf_internal[0]
            else:
                return output_without_crf_internal

    def compute_crf_loss(self, potentials, sequence_length, kernel, y, sample_weight=None):
        # Added to reshape the labels (y)
        shape = y.shape
        if len(shape) > 2:
            y_1 = tf.argmax(y, -1, output_type=tf.int32)

        crf_likelihood, _ = tfa.text.crf_log_likelihood(
            potentials, y_1, sequence_length, kernel
        )
        # convert likelihood to loss
        flat_crf_loss = -1 * crf_likelihood
        if sample_weight is not None:
            flat_crf_loss = flat_crf_loss * sample_weight
        crf_loss = tf.reduce_mean(flat_crf_loss)
        return crf_loss

    def train_step(self, data):
        x, y, sample_weight = self.unpack_training_data(data)
        with tf.GradientTape() as tape:
            (potentials, sequence_length, kernel), decoded_sequence, *_ = self(
                x, training=True, return_crf_internal=True
            )
            crf_loss = self.compute_crf_loss(
                potentials, sequence_length, kernel, y, sample_weight
            )
            loss = crf_loss + tf.reduce_sum(self.losses)
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, decoded_sequence)
        # Return a dict mapping metric names to current value
        results = {m.name: m.result() for m in self.metrics}
        results.update({"loss": loss, "crf_loss": crf_loss})  # append loss
        return results

    def test_step(self, data):
        x, y, sample_weight = self.unpack_training_data(data)
        (potentials, sequence_length, kernel), decode_sequence, *_ = self(
            x, training=False, return_crf_internal=True
        )
        crf_loss = self.compute_crf_loss(
            potentials, sequence_length, kernel, y, sample_weight
        )
        loss = crf_loss + tf.reduce_sum(self.losses)

        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, decode_sequence)
        # Return a dict mapping metric names to current value
        results = {m.name: m.result() for m in self.metrics}
        results.update({"loss": loss, "crf_loss": crf_loss})  # append loss
        return results

When training the model:

strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
with strategy.scope():
    base_model = build_bilstm_crf_model(units_lstm, TAG_SIZE)
    model = CRFModelWrapper(base_model, TAG_SIZE)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr))

num_epochs = 10
name = 'BLD_CRF_Lut{}_lr{}_{}'.format(units_lstm, lrr, int(time.time()))
name_csv = 'BLD_CRF_Lut{}_lr{}'.format(units_lstm, lrr)
mc = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(name, '{epoch:02d}'), verbose=1, save_best_only=False, save_weights_only=False)
csv_log = tf.keras.callbacks.CSVLogger('{}.csv'.format(name_csv), append=True)
model.fit(tr_gen, epochs=num_epochs, validation_data=val_gen, verbose=2, batch_size=bt_sz, callbacks=[mc, csv_log])

When re-training the model:

#%%
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
with strategy.scope():
    # base_model = build_bilstm_crf_model(units_lstm,TAG_SIZE)
    # model = CRFModelWrapper(base_model, TAG_SIZE)
    # model.compile(optimizer=tf.keras.optimizers.Adam(lr))
    model = load_model(FILE_PATH4)

num_epochs = 10
name = 'BLD_CRF_Lut{}_lr{}_{}'.format(units_lstm, lrr, int(time.time()))
name_csv = 'BLD_CRF_Lut{}_lr{}'.format(units_lstm, lrr)
mc = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(name, '{epoch:02d}'), verbose=1, save_best_only=False, save_weights_only=False)
csv_log = tf.keras.callbacks.CSVLogger('{}.csv'.format(name_csv), append=True)
model.fit(tr_gen, epochs=num_epochs, validation_data=val_gen, verbose=2, batch_size=bt_sz, callbacks=[mc, csv_log])
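In the meantime, one workaround I am considering (a sketch only, reusing the names from the code above; WEIGHTS_PATH is a hypothetical path to a weights-only checkpoint, e.g. one written with save_weights_only=True) is to rebuild the wrapper and restore just the weights instead of calling load_model on the full SavedModel:

with strategy.scope():
    base_model = build_bilstm_crf_model(units_lstm, TAG_SIZE)
    model = CRFModelWrapper(base_model, TAG_SIZE)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr))

# Run one batch through the model so that all variables are created,
# then restore the previously trained weights.
x_batch, _ = next(iter(tr_gen))  # assumes tr_gen yields (x, y) batches
_ = model(x_batch)
model.load_weights(WEIGHTS_PATH)

model.fit(tr_gen, epochs=num_epochs, validation_data=val_gen,
          verbose=2, callbacks=[mc, csv_log])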
st207024
Hi @dada_Lai, thank you for your bug report. I am trying to reproduce this bug on my computer; if I find the root cause or need your help, I will let you know.
st207025
Hi @dada_Lai, I have tried to reproduce this bug on Colab, but it works fine in my notebook. You can find my notebook at Google Colab, and you can reload the model in a separate notebook at Google Colab. To run the second notebook, you need to copy the model produced by the first notebook to the workspace of the second notebook via mounting Google Drive. If you have any questions, please let me know.
st207026
Hi @XiaoquanKong, just in case you didn’t notice the response I posted in another related question, I re-post the results I got below. I ran the same code with the same testing data provided above on Google Colab, and it only shows a few minor warnings when saving:
WARNING:absl:Found untraced functions such as dense_1_layer_call_and_return_conditional_losses, dense_1_layer_call_fn, dense_1_layer_call_fn, dense_1_layer_call_and_return_conditional_losses, dense_1_layer_call_and_return_conditional_losses while saving (showing 5 of 15). These functions will not be directly callable after loading.
I am wondering whether you ever tested the code with the GPU version of TensorFlow (i.e. tensorflow-gpu=2.6.0), because this is the main difference between my tests running on Google Colab (tensorflow=2.6.0) and on my PC (tensorflow-gpu=2.6.0). So far I couldn’t run the code with the GPU version of TensorFlow on Google Colab and will let you know once I fix it. Thank you very much.
st207027
Hi @Bhack, thank you for following up. My question is that the issue of re-training the model with the CRFModelWrapper API still exists when I use the tensorflow-gpu package. In the beginning, I thought the example provided by @XiaoquanKong was only for tensorflow, not for tensorflow-gpu, but it seems there is no difference between them, as the linked clarification explains. So I am somewhat confused and wondering whether the CRFModelWrapper API can work properly in a tensorflow-gpu environment. Please let me know if anything is unclear. Thanks.
st207028
tensorflow-gpu is only for older, now end-of-life versions: TensorFlow GPU support  |  TensorFlow
st207029
Hi everyone,
We are trying to build a trainable Quantum Convolutional Neural Network. To this end we are trying to create a new subclass QuantumConvolutionalLayer of the class keras.Layer. How we are trying this now:
In the initialization we use the add_weight function to add the trainable_parameters as a weight. Then we create (still in the initialization) a quantum circuit (on 4 qubits) with pennylane that takes 4 classical inputs and the trainable weights that will be called upon.
In the call function (for the forward pass): a 2x2 grid is moved over the entire image (MNIST in our case) and every time the 2x2 grid is processed using the quantum circuit defined in the initialization. The outcome of processing such a 2x2 grid with this quantum circuit [measurement_results] is stored in a TensorFlow tensor object, and we aim to then store all these in one big tensor [out]. To do this we use tensor_scatter_nd_update. Unfortunately, when we look at the resulting out tensor it has all zero values, even though when we print the measurement_results of the small grids we get non-zero values.
Any ideas on how this can be solved? Many thanks in advance for your help!
Odiel
PS: Below you can find our code so far:

# Library installation
!pip install qiskit
!pip install pennylane

# General imports
import pennylane as qml
import numpy as np
from numpy import pi, sqrt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend
from tensorflow.keras.layers import Layer
import matplotlib.pyplot as plt
from qiskit import QuantumCircuit, Aer, assemble, visualization

# Dataset
from keras.datasets import mnist

# Embedding imports
from pennylane.templates import QAOAEmbedding


# Build function to load train and test dataset
def load_dataset():
    # load dataset
    (trainX, trainY), (testX, testY) = mnist.load_data()
    # reshape dataset to have a single channel
    trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
    testX = testX.reshape((testX.shape[0], 28, 28, 1))
    # one hot encode target values
    trainY = tf.keras.utils.to_categorical(trainY)
    testY = tf.keras.utils.to_categorical(testY)
    return trainX, trainY, testX, testY


# Load train and test dataset
X_train, Y_train, X_test, Y_test = load_dataset()
num_images = 10  # set to -1 in order to keep all images
X_train = X_train[0:num_images]
X_test = X_test[0:num_images]
Y_train = Y_train[0:num_images]
Y_test = Y_test[0:num_images]


# Build a class for the trainable quantum convolutional layer that is a subclass of the keras.Layer class
class QuantumConvolutionalLayer(Layer):
    def __init__(self, device="default.qubit", stride=2, wires=4, layers=1, n_measurements=4):
        # Inherits the initialization of the keras.Layer class
        super(QuantumConvolutionalLayer, self).__init__()

        # Initialize the device
        self.wires = wires
        self.dev = qml.device(device, wires=self.wires)

        # Initialize the quantum circuit
        self.layers = layers
        self.stride = stride
        self.n_measurements = n_measurements
        self.trainable_parameters = self.add_weight(
            "trainable_parameters",
            shape=QAOAEmbedding.shape(n_layers=layers, n_wires=wires),
            initializer=tf.keras.initializers.RandomNormal())

        # To this end, build the quantum circuit (for 1 square of stride x stride)
        @qml.qnode(device=self.dev, interface="tf")
        def quantum_circuit(inputs, trainable_parameters=self.trainable_parameters):
            QAOAEmbedding(features=inputs, weights=trainable_parameters, wires=range(wires))
            return [qml.expval(qml.PauliZ(j)) for j in range(n_measurements)]

        # weight_shapes = {"trainable_parameters": QAOAEmbedding.shape(n_layers=self.layers, n_wires=self.wires)}
        # self.quantum_circuit = qml.qnn.KerasLayer(quantum_circuit, weight_shapes=weight_shapes, output_dim=self.n_measurements)
        self.quantum_circuit = quantum_circuit

        dtype = tf.float32 if tf.keras.backend.floatx() == tf.float32 else tf.float64
        if self.quantum_circuit.diff_method != "backprop" or self.quantum_circuit.diff_method_change:
            self.quantum_circuit.to_tf(dtype=dtype)

    def build(self, input_shape):
        super().build(input_shape)

    def call(self, inputs):
        # define forward pass
        num_images = inputs.shape[0]
        h_in, w_in, ch_in = inputs.shape[1:]  # inputs.shape (28, 28, 1) for MNIST
        h_out, w_out, ch_out = h_in // self.stride, w_in // self.stride, ch_in * self.n_measurements  # (14, 14, 4) for MNIST and our quantum circuit filter

        out = tf.zeros((num_images, h_out, w_out, ch_out))
        for img_idx in range(num_images):
            # print(tf.rank(out))
            for j in range(0, h_in, self.stride):
                for k in range(0, w_in, self.stride):
                    grid = [inputs[img_idx, j, k, 0], inputs[img_idx, j, k + 1, 0],
                            inputs[img_idx, j + 1, k, 0], inputs[img_idx, j + 1, k + 1, 0]]
                    measurement_results = self.quantum_circuit(inputs=grid, trainable_parameters=self.trainable_parameters)
                    print(measurement_results)
                    for ch in range(self.n_measurements):
                        tf.tensor_scatter_nd_update(out[img_idx], tf.constant([[j//2, k//2, ch]]), [measurement_results[ch]])
        return out


quanv = QuantumConvolutionalLayer()
quanv(np.array([X_train[0]]))
st207030
Dear all,
We have found the issue. The tf.tensor_scatter_nd_update function returns a new tensor with the entries at the given coordinates replaced by the right values; it doesn’t store them in place, so you still have to assign the result yourself. Replacing the following line in the code:
tf.tensor_scatter_nd_update(out[img_idx], tf.constant([[j//2, k//2, ch]]), [measurement_results[ch]])
with this line solves the problem:
out = tf.tensor_scatter_nd_update(out, tf.constant([[img_idx, j//2, k//2, ch]]), [measurement_results[ch]])
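A minimal, self-contained illustration of the point (toy shapes and values, nothing to do with the quantum circuit itself):

import tensorflow as tf

out = tf.zeros((2, 2))
# tensor_scatter_nd_update returns a NEW tensor; `out` itself is unchanged.
updated = tf.tensor_scatter_nd_update(out, indices=[[0, 1]], updates=[5.0])

print(out.numpy())      # [[0. 0.] [0. 0.]]  -- still all zeros
print(updated.numpy())  # [[0. 5.] [0. 0.]]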
st207031
Odiel_Hooybergs: “…store them automatically, so you still have to assign it yourself.”
Yes. tf.Tensor objects are immutable.
for img_idx in range(num_images):
    # print(tf.rank(out))
    for j in range(0, h_in, self.stride):
        for k in range(0, w_in, self.stride):
            ...
            for ch in range(...):
                out = tf.tensor_scatter_nd_update(out, ...)
Ouch, a quadruple loop. Remember TensorFlow is way more efficient if you can express things in vectorized terms. If there’s any way to make the self.quantum_circuit run on a batch of items, you should consider using tf.image.extract_patches and then run this once over the batch of patches. tf.map_fn or tf.vectorized_map may do it.
Similarly, out = tf.tensor_scatter_nd_update(out, ...) makes a complete copy of the image, for each color channel of each pixel. Ooof. tf.map_fn or tf.vectorized_map would address this too, as it collects all the results and then stacks them.
tf.map_fn: transforms elems by applying fn to each element unstacked on axis 0.
tf.vectorized_map: parallel map on the list of tensors unpacked from elems on dimension 0.
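If it helps, here is a rough sketch of that idea. It is not a drop-in replacement for your layer: `circuit_on_patch` is a stand-in for a function that maps one flat 2x2 patch to `n_measurements` values (e.g. a batched wrapper around the QNode), and the shapes assume 28x28x1 MNIST images.

import tensorflow as tf

def quanv_vectorized(images, circuit_on_patch, stride=2, n_measurements=4):
    """images: (batch, 28, 28, 1) float tensor."""
    # Cut every image into non-overlapping stride x stride patches.
    patches = tf.image.extract_patches(
        images,
        sizes=[1, stride, stride, 1],
        strides=[1, stride, stride, 1],
        rates=[1, 1, 1, 1],
        padding="VALID",
    )  # -> (batch, 14, 14, stride * stride)

    batch = tf.shape(images)[0]
    h_out = tf.shape(patches)[1]
    w_out = tf.shape(patches)[2]

    # Flatten to one long list of patches and apply the circuit to each one.
    flat_patches = tf.reshape(patches, (-1, stride * stride))
    flat_outputs = tf.map_fn(
        circuit_on_patch,
        flat_patches,
        fn_output_signature=tf.TensorSpec((n_measurements,), tf.float32),
    )

    # Reassemble into the output feature map: (batch, 14, 14, n_measurements).
    return tf.reshape(flat_outputs, (batch, h_out, w_out, n_measurements))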
st207032
Thanks a lot markdaoust! Thanks to your comment we managed to streamline our code a lot in this vectorized way. I’m sorry for the late reply, but still wanted to let you know that your comment helped us out quite a lot. Kind regards Odiel
st207033
Hello, I have a model that has multiple independent inputs/outputs. I want to execute just 1 of them and read the corresponding output. But TensorflowJS complains that I have to fill all of the inputs: Uncaught Error: Cannot compute the outputs [output_classification1/output_node, output_classification2/output_node, output_classification3/output_node]. Missing the following inputs: [input_classification2/input_node, input_classification3/input_node] Is it not possible to have multiple independent inputs in the model and execute with only 1 filled to get the corresponding output? Note: The inputs/outputs are independent. Thanks!
st207034
I’m running TensorFlow on NVLink’ed GPUs passed through individually to VMs. I’m only sometimes running them in this configuration, but I do want to solve an issue that keeps cropping up with it, for use with other frameworks as well. It seems that, when the 2nd GPU of an NVLink’ed pair (passed through individually, at the moment) is initialized (when trying torch, at the torch.cuda() call), there are 2 ioctls to /dev/nvidia, which each time out at 30 seconds (one after the other). I noticed that installing a more recent version of TensorFlow fixed this symptom, so I’m curious:
What might’ve changed in terms of NVIDIA: libraries used, drivers loaded, or timeouts configured, between versions?
Is this likely addressed in TensorFlow? Or in an upstream dependency: nvcc arguments or defaults, CuDNN use or options used at compile time, or something else?
Slow, before:
ubuntu@host:~$ python -c 'from pprint import pprint as print; import tensorflow as tf; print([(k,v) for k,v in tf.version.__dict__.items() if "VERSION" in k])'
[('COMPILER_VERSION', '9.3.0'), ('GIT_VERSION', 'unknown'), ('GRAPH_DEF_VERSION', 808), ('GRAPH_DEF_VERSION_MIN_CONSUMER', 0), ('GRAPH_DEF_VERSION_MIN_PRODUCER', 0), ('VERSION', '2.6.0')]
Fast, after:
(tf2) ubuntu@host:~$ python -c 'from pprint import pprint as print; import tensorflow as tf; print([(k,v) for k,v in tf.version.__dict__.items() if "VERSION" in k])'
[('COMPILER_VERSION', '7.3.1 20180303'), ('GIT_VERSION', 'v2.7.0-rc1-69-gc256c071bb2'), ('GRAPH_DEF_VERSION', 898), ('GRAPH_DEF_VERSION_MIN_CONSUMER', 0), ('GRAPH_DEF_VERSION_MIN_PRODUCER', 0), ('VERSION', '2.7.0')]
st207035
I’m trying to subclass ModelCheckpoint and I’m curious about an implementation detail on distributed environments. When saving the checkpoint, the callback will call: self._write_filepath = distributed_file_utils.write_filepath(file_path, self.model.distribute_strategy) which tests if the model is being executed in a distributed environment, and, returns the original file_path for the “chief worker”, and a temporary file path for all other workers. The callback will then call: distributed_file_utils.remove_temp_dir_with_filepath(self._write_filepath, self.model.distribute_strategy) which deletes the temporary file_path for the non-chief workers, and leave the original file_path for the original worker. My question is: why this roundabout way of doing things? Why not simply check whether the worker is the chief one, and save the checkpoint only if that is true?
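For context, the "simple" check I have in mind would look something like the sketch below. The `_is_chief` helper is my own (following the convention from the multi-worker training tutorial, where worker 0 acts as chief when no explicit chief task is configured), and `model` stands for whatever model is being checkpointed. My current guess is that the temp-dir approach exists because saving can involve collective ops under MultiWorkerMirroredStrategy, so every worker has to go through the save path to avoid a deadlock, but I would like confirmation.

import tensorflow as tf

def _is_chief(task_type, task_id):
    # Worker 0 acts as chief when the cluster has no explicit "chief" task.
    return (task_type is None
            or task_type == "chief"
            or (task_type == "worker" and task_id == 0))

strategy = tf.distribute.MultiWorkerMirroredStrategy()
resolver = strategy.cluster_resolver

if resolver is None or _is_chief(resolver.task_type, resolver.task_id):
    model.save("/checkpoints/epoch-001")  # hypothetical path: only the chief writes it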
st207036
Hi everyone, I am trying to use the CRFModelWrapper method, following the tutorial at addons/layers_crf.ipynb at add_crf_tutorial · howl-anderson/addons · GitHub, to implement a Bi-LSTM-CRF neural network for a multi-class time-series NER problem, and it works in TF 2.7 on my PC. However, when I run the same code and the same data in a TF 2.6 environment, it raises some errors (shown below) regarding the tfa CRF layer. So, could someone please let me know whether the addons CRF layer is only compatible with TF 2.7, or whether this is because I made some mistakes? Thank you very much.

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py", line 1134, in fit
    data_handler = data_adapter.get_data_handler(
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 1383, in get_data_handler
    return DataHandler(*args, **kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 1138, in __init__
    self._adapter = adapter_cls(
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 917, in __init__
    super(KerasSequenceAdapter, self).__init__(
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 801, in __init__
    model.distribute_strategy.run(
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 1286, in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 2849, in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 3632, in _call_for_each_replica
    return fn(*args, **kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 597, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 802, in <lambda>
    lambda x: model(x, training=False), args=(concrete_x,))
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\base_layer.py", line 1037, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "<input>", line 47, in call
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\base_layer.py", line 1037, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow_addons\layers\crf.py", line 131, in call
    raise NotImplementedError(
NotImplementedError: Currently, CRF layer do not support left padding
st207037
I found this might be due to the incorrect data type I assigned to the input layer of the base model (dtype=int32 → dtype=float32, as shown below). After the modification, it works in both TF 2.6 and TF 2.7. (I still don’t know why the results in TF 2.6 and TF 2.7 are different when the input data type is wrongly assigned.)

# Build the model
def build_embedding_bilstm_crf_model(
    vocab_size: int, embed_dims: int, lstm_unit: int
) -> tf.keras.Model:
    x = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name="x")
    y = tf.keras.layers.Embedding(vocab_size, embed_dims, mask_zero=True)(x)
    y = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(lstm_unit, return_sequences=True)
    )(y)
    return tf.keras.Model(inputs=x, outputs=y)

base_model = build_embedding_bilstm_crf_model(VOCAB_SIZE, 32, 64)
st207038
Hi @dada_Lai, can you provide me with some code and data to reproduce this behavior on my computer? Or can you repeat this experiment (use both TF2.6 and TF2.7) a few times on your own computer using the old code? I guess there may be randomness in this behavior.
st207039
Hi @XiaoquanKong, I ran the old code (with dtype = tf.int32) several times on my PC and got the same error messages every time. I tried to run it on Google Colab with and without the GPU, and they all give the same errors in TF 2.6.0. (But I couldn’t run the GPU version properly on Google Colab, even when I set dtype = tf.float32, because it cannot find the cudnn module.) Please find my code and the testing data through the links, and let me know if they don’t work properly.
About the other related question, I ran the same code with the same testing data provided above on Google Colab, and it only shows a few minor warnings when saving:
WARNING:absl:Found untraced functions such as dense_1_layer_call_and_return_conditional_losses, dense_1_layer_call_fn, dense_1_layer_call_fn, dense_1_layer_call_and_return_conditional_losses, dense_1_layer_call_and_return_conditional_losses while saving (showing 5 of 15). These functions will not be directly callable after loading.
I am wondering whether you ever tested the code with the GPU version of TensorFlow (i.e. tensorflow-gpu=2.6.0), because this is the main difference between my tests running on Google Colab (tensorflow=2.6.0) and on my PC (tensorflow-gpu=2.6.0). So far I couldn’t run the code with the GPU version of TensorFlow on Google Colab and will let you know once I fix it. Thank you very much.
st207040
Hi there, I’m trying to quickly load a model to make predictions in a REST API. The tf.keras.models.load_model method takes ~1s, so it’s too slow for what I’m trying to do. What is the fastest way to load a model for inference only? I know there is a TFX serving server that does exactly this efficiently, but I already have a REST API for doing other things. Setting up a specialised server just for predictions feels like overkill. How does the TFX server handle this? Thanks in advance, Joan
st207041
Solved by joanfihu in post #8 Hey Robert, thanks for your reply. Yes, I ended up loading the model in memory on server initialisation. It’s a small model so works well this way.
st207042
Setting up a specialised server just for predictions feels like an overkill.
I don’t think that it is overkill, and the alternatives are probably not much simpler than TensorFlow Serving with Docker. E.g. see: Medium, 8 Feb 21, “ML model deployment option with concurrency (Flask + uWSGI)”: a beginner’s guide to ML solution deployment maintaining concurrency, plus load testing, and a discussion of the Data Scientist’s responsibilities.
st207043
Yes there might be no other way. However, I’m not sure if the TFX server loads a model from disk in every request? What I’m trying to achieve is to either find a very quick way to load a model from disk or keep the model in memory somehow so it doesn’t need to be loaded in every request. I also tried caching but pickle deserialisation is very expensive and adds ~1.2s. I suspect the built-in load model does some sort of serialisation too, which seems to be the killer.
st207044
I think that you want to use TensorFlow Serving. If your model is small enough to keep in memory, it will. You can also do SavedModel Warmup.
st207045
Hey Robert, thanks for your reply. Yes, I ended up loading the model in memory on server initialisation. It’s a small model so works well this way.
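For anyone finding this thread later, the pattern is roughly the following (a Flask sketch; the model path, endpoint name, and JSON payload shape are made up for the example and should be adapted to your own API):

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the model once, when the server process starts,
# instead of inside the request handler.
MODEL = tf.keras.models.load_model("path/to/saved_model")

@app.route("/predict", methods=["POST"])
def predict():
    features = np.asarray(request.get_json()["instances"], dtype=np.float32)
    predictions = MODEL.predict(features)
    return jsonify({"predictions": predictions.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)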
st207046
Hi, after running a hyperparameter search with PyTorch and visualizing the results in TensorBoard, I want to read out the hyperparameters from TensorBoard programmatically. My folder structure is:

all_runs
  run1
    Loss_trainingloss
      1634168941.9091413 (or some other timestamp)
        events.out.tfevents.1634168941
    events.out.tfevents.1634135651
  run2
  run3

and I can easily read TensorBoard time series, e.g. the loss curve, with code like this:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def read_eventfile(filepath, tag):
    event_accumulator = EventAccumulator(filepath)
    event_accumulator.Reload()
    events = event_accumulator.Scalars(tag)
    y = [x.value for x in events]
    return y

train_loss_curve = read_eventfile(path_to_event_folder, "Loss_trainingloss")

However, I struggle to find a way to access the hparams of each run, which are correctly displayed in TensorBoard. Does anybody know how this can be done? Thanks!
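Update: one direction that looks promising (not fully verified; the proto field names below are based on my reading of the TensorBoard hparams plugin, so treat this as a sketch) is to pull the hparams plugin content out of the event files and parse it with the plugin's protobuf definitions:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
from tensorboard.plugins.hparams import plugin_data_pb2

def read_hparams(run_dir):
    acc = EventAccumulator(run_dir)
    acc.Reload()
    hparams = {}
    # The hparams plugin stores serialized protos in the summary metadata,
    # keyed by tag (e.g. "_hparams_/session_start_info").
    # Note: PluginTagToContent raises KeyError if the run has no hparams data.
    for tag, content in acc.PluginTagToContent("hparams").items():
        plugin_data = plugin_data_pb2.HParamsPluginData.FromString(content)
        if plugin_data.HasField("session_start_info"):
            for name, value in plugin_data.session_start_info.hparams.items():
                # Each value is a google.protobuf.Value; pick the populated field.
                if value.HasField("number_value"):
                    hparams[name] = value.number_value
                elif value.HasField("string_value"):
                    hparams[name] = value.string_value
                elif value.HasField("bool_value"):
                    hparams[name] = value.bool_value
    return hparams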
st207047
Hello, I am trying to print the protobuf of a SavedModel as MLIR code without inlining the weights, just like tf-mlir-translate mygraph.pbtxt --graphdef-to-splatted-mlir does with (untrained) graphdefs. I assume that tf-mlir-translate [mydir] --savedmodel-objectgraph-to-mlir can do that. Am I right? But when I use tf-mlir-translate [mydir] --savedmodel-objectgraph-to-mlir --tf-savedmodel-exported-names=[name], I get a precondition error:
tensorflow/compiler/mlir/tensorflow/translate/tf_mlir_translate.cc:167] SavedModel import failed: FAILED_PRECONDITION: Could not restore saved variable: Adam/dense/kernel/m
On the other hand, tf-mlir-translate [mydir] --savedmodel-signaturedefs-to-mlir works well (but prints the MLIR code with the weights already inlined). Do you know what I am doing wrong? Regards
st207048
I followed the TensorFlow Recommenders movie ranking tutorial and built the model. Now I would like to get top-k recommendations using the model. This is what I tried:

layer = tfrs.layers.factorized_top_k.Streaming(model.ranking_model)
layer.index(movies.map(model.ranking_model.movie_embeddings), movies)

tracks = layer.query_with_exclusions(
    queries=np.array([["42", "52"]]),
    exclusions=np.array([[]])
)

But it throws the error “iterating over tf.Tensor is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.” How do I invoke the query_with_exclusions() function correctly?
st207049
Also interested in the answer to this, and how one could use this function to remove all previously interacted with items for each user.
st207050
Hi there, I am really interested in TensorFlow MLIR, and hope to create custom dialects from TensorFlow’s existing dialects. I found that there is now a TensorFlow Graph IR, which seems quite exciting. May I ask what its development status is? Does it support a basic runtime? I mean, could I actually use it to run models now? I am planning to use it for optimizing ResNet training; is it ready for that? Besides, is there any dev discussion group at the moment, like PyTorch’s developer group in Slack?
st207051
Hi, the Graph IR is fairly recent, and has been enabled by default in Grappler for a bit more than a month now. We’re ramping up on it this quarter to provide more natural extension points and helpers for transformations.
st207052
Hi there, I am trying to convert a PyTorch model which uses a 3D convolution to be used with TensorFlow.js for inference. I saved my PyTorch model as ONNX (with torch.onnx.export) and then converted it to a TF SavedModel with the following Python code:

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load('path/to/my/onnx/model')
tf_model = prepare(onnx_model)
tf_model.export_graph('model')

The above procedure generates a ‘model’ directory which contains the .pb file and other stuff. Then I run the conversion tool:

tensorflowjs_converter --input_format=tf_saved_model model model_tfjs

and I get the following error:

[... omitted stack trace ...]
ValueError: Unsupported Ops in the model before optimization
Conv3DBackpropInputV2

Is there any chance of having the Conv3D backprop op enabled (or ignored) in the tfjs converter? Please note that I don’t need to backprop at the web client, but just use the model for inference. Thank you so much, m.
st207053
I notice that the tensorflow repo doesn’t have a .clang-tidy file anywhere in its tree. Does anyone have a .clang-tidy file that conforms or nearly conforms to the TF coding style? I specifically need it for the mlir-hlo sub-tree (tensorflow/compiler/mlir/).
st207054
Note that the mlir-hlo subtree follows the LLVM coding style in theory (in practice it is a bit of a mix). I don’t know about .clang-tidy, but for .clang-format TF just uses the Google style otherwise .
st207055
Please check the semi-disclosure but old thread in: github.com/tensorflow/addons [CI] Add cpplint as sanity check tensorflow:master ← Squadrick:cpplint opened May 10, 2020 Squadrick +34 -13 Make the required changes to ensure clean cpplint run. Command: ``` $ cpplint… --filter=-build/header_guard --recursive tensorflow_addons/custom_ops ``` /cc @mihaimaruseac @angerson
st207056
Thanks - I went through the thread, but I still couldn’t find a reference .clang-tidy to reuse.
Mehdi_AMINI: “I don’t know about .clang-tidy, but for .clang-format TF just uses the Google style otherwise.”
Right. I’m aware of the .clang-format for mlir-hlo. But clang-tidy has a different role to play, e.g. in-editor warnings on variable naming style deviations via clangd. (clangd will read a .clang-tidy if there is one.)
st207057
The specific comment was in the tensorflow/addons PR “[CI] Add cpplint as sanity check” (Squadrick:cpplint, opened May 10, 2020): “Make the required changes to ensure clean cpplint run. Command: $ cpplint… --filter=-build/header_guard --recursive tensorflow_addons/custom_ops”
Also you can see @Mehdi_AMINI’s commit in tensorflow/mlir-hlo: “Fix clang-tidy warning about C-style cast” (committed Sep 14, 2021, PiperOrigin-RevId: 396489741).
If you read both links, they have a clang-tidy pass in the internal repository, but it was not exposed in the open-source one. I have no access to the internal repo, but I suppose that for mlir-hlo they are using something like llvm/llvm-project/blob/main/.clang-tidy:

Checks: '-*,clang-diagnostic-*,llvm-*,misc-*,-misc-unused-parameters,-misc-non-private-member-variables-in-classes,-misc-no-recursion,readability-identifier-naming'
CheckOptions:
  - key:   readability-identifier-naming.ClassCase
    value: CamelCase
  - key:   readability-identifier-naming.EnumCase
    value: CamelCase
  - key:   readability-identifier-naming.FunctionCase
    value: camelBack
  - key:   readability-identifier-naming.MemberCase
    value: CamelCase
  - key:   readability-identifier-naming.ParameterCase
    value: CamelCase
  - key:   readability-identifier-naming.UnionCase
    value: CamelCase
  - key:   readability-identifier-naming.VariableCase
    value: CamelCase
  - key:   readability-identifier-naming.IgnoreMainLikeFunctions
    value: 1
st207058
I have on my roadmap to add clang-tidy OSS for TF, but no ETA in sight, not high priority at the moment
st207059
We can track this probably in: github.com/tensorflow/build compile_commands.json opened May 27, 2020 bhack It could be nice if we could generate/distribute `compile_commands.json` to bett…er interact with some IDE and our c++ code. I.e. In Bazel and also in Vscode the the official Bazel team vscode plugin we don't have in tree support to generate `compile_commands`: https://github.com/bazelbuild/bazel/issues/258 https://github.com/bazelbuild/vscode-bazel/issues/179 I've tried some quite popular community workaround like: https://github.com/grailbio/bazel-compilation-database But the problem is that we are using `--action_env` for `build` when we generate our `.bazelrc` with `python configure.py` and so these arguments not supported in other command like e.g. `bazel query` https://github.com/bazelbuild/bazel/issues/10226. This is going to invalidate the official bazel vscode plugin that need to execute `query command` but also the `bazel-compilation-database` workaround cause we have problem to retrieve our env variable.
st207060
@petewarden I use the dual-core 160-pin microcontroller Arduino Portenta to make TensorFlowLite/Micro models. Recently I found out about an extra 8 MB of SDRAM on the board. I have managed to use the SDRAM to load the 320x320 grayscale camera buffer when using EdgeImpulse.com models (draft code here).
I would like to put the entire TensorFlow Micro model_tflite array into the 8 MB SDRAM; the board normally only has 1 MB of RAM for each core, so using SDRAM would be a fair improvement. When using SDRAM for the camera frame buffer I was just changing one pointer for another, so it was not that hard.

#include <SDRAM.h>
SDRAMClass mySDRAM;
uint8_t *sdram_frame_buffer;

// in the setup
mySDRAM.begin(SDRAM_START_ADDRESS);
// for camera 320x320
sdram_frame_buffer = (uint8_t *)mySDRAM.malloc(320 * 320 * sizeof(uint8_t));

// in the main loop
int myCamResult = myCam.grab(sdram_frame_buffer);  // myCamResult should be zero

But for TFLITE I need to work with an array, and that is a bit different when calling the array. Anyone got any suggestions to get me started? Here is the basic TFLITE code.

const unsigned char model_tflite[] = { ... };
unsigned int model_tflite_len = 3048;

// which is then called using
model = tflite::GetModel(model_tflite);
// I have tried passing the array as a pointer (which works in some situations)
// but doesn't seem to work here
// Not really sure what the GetModel method does with the array.

I'm using the TFLITE tutorial here. More info posted on the Arduino Forum: Portenta - Usage of SDRAM.h library - #3 by jerteach - Portenta - Arduino Forum
st207061
Just as a quick followup, I chatted to Jeremy elsewhere and think we reached a solution: SDRAM Examples · Issue #38 · arduino/ArduinoCore-mbed · GitHub
st207062
Yes, thanks @petewarden the basic SDRAM concept is solved, but when I try to use it with the arduino TensorflowLite HelloWorld (sine wave) it compiles on the arduino Portenta but then gives me the red LED flash of death. A starting point, I did get the Portenta working using regular TensorflowLite HelloWorld without and SDRAM saving of the Model. Just the arduino_output_handler.cpp file has analogWrite() to the builtin LED which kills the program, since the Portenta Analog pins strangely can’t do PWM. Changing analogWrite(led, brightness); to digitalWrite(led, brightness); delay(3); then works fine (note: it is really fast so the delay helps see the sine wave better). Then: The code I was running to test SDRAM is here in the main “hello_world.ino” file /* Copyright 2020 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ==============================================================================*/ #include <TensorFlowLite.h> #include "main_functions.h" #include "tensorflow/lite/micro/all_ops_resolver.h" #include "constants.h" #include "model.h" #include "output_handler.h" #include "tensorflow/lite/micro/micro_error_reporter.h" #include "tensorflow/lite/micro/micro_interpreter.h" #include "tensorflow/lite/schema/schema_generated.h" #include "tensorflow/lite/version.h" #include <SDRAM.h> #define ALIGN_PTR(p,a) ((p & (a-1)) ?(((uintptr_t)p + a) & ~(uintptr_t)(a-1)) : p) SDRAMClass mySDRAM; // define SDRAM pointer alignas(8) unsigned char *sdram_mem; alignas(8) unsigned char *sdram_tflite; // 32-byte aligned // Globals, used for compatibility with Arduino-style sketches. namespace { tflite::ErrorReporter* error_reporter = nullptr; const tflite::Model* model = nullptr; tflite::MicroInterpreter* interpreter = nullptr; TfLiteTensor* input = nullptr; TfLiteTensor* output = nullptr; int inference_count = 0; constexpr int kTensorArenaSize = 2000; uint8_t tensor_arena[kTensorArenaSize]; } // namespace // The name of this function is important for Arduino compatibility. 
void setup() { mySDRAM.begin(SDRAM_START_ADDRESS); // setup SDRAM memory block sdram_mem = (alignas(8) unsigned char *) SDRAM.malloc(4 + 32 /*alignment*/); sdram_tflite = (alignas(8) unsigned char *)ALIGN_PTR((uintptr_t)sdram_mem, 32); const int model_tflite_len = 2640; const int g_model_len = 2640; //const int g_model_len = 2488; memcpy(sdram_tflite, "0x18/x00/x00/x00/x54/x46/x4c/x33/x00/x00/x0e/x000x18/x00/x04/x00/x08/x00/x0c/x00/x10/x00/x14/x000x0e/x00/x00/x00/x03/x00/x00/x00/x10/x0a/x00/x000xb8/x05/x00/x00/xa0/x05/x00/x00/x04/x00/x00/x000x0b/x00/x00/x00/x90/x05/x00/x00/x7c/x05/x00/x000x24/x05/x00/x00/xd4/x04/x00/x00/xc4/x00/x00/x000x74/x00/x00/x00/x24/x00/x00/x00/x1c/x00/x00/x000x14/x00/x00/x00/x0c/x00/x00/x00/x04/x00/x00/x000x54/xf6/xff/xff/x58/xf6/xff/xff/x5c/xf6/xff/xff0x60/xf6/xff/xff/xc2/xfa/xff/xff/x04/x00/x00/x000x40/x00/x00/x00/xd0/x37/xed/xbd/xb6/x38/xf9/x3e0xc6/x9d/x00/x3f/x1a/xb0/x05/xbe/xc6/x49/xd9/xbe0x55/x62/x8b/xbe/xfa/xa9/x86/x3e/xc4/x4d/x9c/x3e0x85/x7c/x8d/xbe/x44/xf3/x2c/xbe/xb5/x07/x56/x3e0xb3/x84/x49/x3e/x60/x35/xa8/xbc/x60/xbc/xcf/xbe0x23/x52/xff/x3e/xc6/xaf/x39/xbe/x0e/xfb/xff/xff0x04/x00/x00/x00/x40/x00/x00/x00/x00/x00/x00/x000x36/xec/xe2/x3d/x87/xea/x53/xbf/x00/x00/x00/x000x00/x00/x00/x00/x00/x00/x00/x00/x17/xef/xfb/x3e0x5a/x6f/x2b/xbe/x00/x00/x00/x00/x00/x00/x00/x000x1b/x1f/x12/xbf/xb4/xef/xec/xbd/x00/x00/x00/x000x00/x00/x00/x00/x2e/x32/x8f/xbf/x00/x00/x00/x000x5a/xfb/xff/xff/x04/x00/x00/x00/x00/x04/x00/x000xdd/xec/x8d/xbe/x42/xef/x55/x3d/xdd/xdd/x18/x3e0x6b/x54/xaa/x3e/xa5/xa5/xa6/x3e/xf4/x12/xe7/xbd0x06/x6a/x4b/xbe/xf7/x32/x14/x3e/xa4/xe2/xb0/xbe0x0c/x83/xe1/x3d/x88/xf6/xf1/x3e/x6b/x71/x0e/x3c0xde/xed/x77/x3e/x92/x91/x23/x3e/x08/xb4/x1b/xbe0x9d/x9b/xa8/xbe/x77/x35/x9e/xbe/x42/x29/x20/xbf0x07/x20/x45/x3e/x3e/x92/xc3/xbe/x36/xca/x10/xbe0xef/x96/xc0/xbe/x8c/xa8/xcf/x3d/xe5/xbe/xfc/x3d0x44/x5a/xf8/x3d/x92/x68/xd4/xbe/x3d/x18/x8f/x3e0xae/x9a/x45/x3d/x1b/x8b/xb8/xbe/x40/x3d/x8c/xbd0x2f/x90/x2d/xbf/x8a/x81/x1a/x3e/x04/x8f/x5c/xbe0x4c/xb8/xe1/xbd/x62/x66/x59/xbe/xe6/xb4/xb4/xbe0xe6/x74/x3f/x3e/xfc/x40/xaf/x3d/x25/x72/x50/x3e0x47/xec/xcc/x3e/x86/x9e/x70/x3e/x2a/x3b/x67/x3e0xad/xf5/xcf/xbc/x30/x4d/x0d/x3e/xfc/xb9/xd0/x3d0xcf/xa8/xc0/xbe/x7c/x8e/xe8/x3e/x18/x76/x14/xbe0x76/x3f/x84/xbe/x4c/xd6/x46/x3d/x49/x07/xca/x3e0x90/x87/xa0/xbd/xa5/x7d/xdd/xbe/xe5/x4a/xa5/x3e0x0d/x6e/xff/xbd/xbc/x53/xb3/xbe/x57/x58/x92/x3e0x29/x9e/x91/x3e/x24/x25/xd5/x3e/xe5/x06/x07/xbe0x2a/xd4/xb8/xbe/xcc/xe4/x02/xbe/x68/x7b/x95/x3e0xc1/x45/xcc/xbe/x4b/xb3/x82/x3e/x25/xae/x00/xbf0x29/xdc/xb4/xbe/xfe/x09/x60/x3e/x0f/x43/xc7/x3e0xc4/xf8/xcd/xbd/x1d/x74/xc1/xbe/xf4/xc8/x5c/x3e0xe6/xaf/x0b/x3e/x98/x01/xa6/xbd/x96/xb4/x90/xbe0x78/x41/xc3/xbe/xfd/x30/xc1/x3e/x15/x7f/xcb/x3e0xb3/x97/x0a/x3e/x97/x4d/xa9/x3e/x2f/x97/xa5/x3e0x24/x1c/xea/x3d/x6a/x7e/x39/x3e/x83/x3b/x61/xbe0xd8/x55/x6d/x3d/xe1/x22/xd9/x3e/xc0/x09/x88/xbe0x42/x55/xdb/xbe/xfa/x71/x73/x3e/x0d/x35/x88/xbe0xf1/x67/xc9/x3e/xa0/x1f/x76/xbc/x81/xe0/xa0/x3e0xc0/x11/x1c/x3d/x04/x30/x6a/xbe/x2a/x9c/x4c/x3e0x1f/x99/xad/x3e/x3f/xe9/x5a/x3d/xee/xaa/xce/x3e0xd1/x44/xc6/x3e/x9b/xaa/x57/xbe/x43/x1f/x8f/xbe0x34/x8a/x9a/xbd/xbd/x92/x6e/x3e/xc1/xc1/x3c/xbe0x1f/x30/x43/xbe/x45/xe6/xb5/xbc/xce/xa5/x93/xbe0x56/xea/x1a/xbe/x47/x05/xab/x3e/xc0/xd0/xcd/x3e0x25/x60/x8d/x3e/xb6/xf2/x50/x3e/x93/x4c/x09/x3d0xca/x36/x9d/x3e/xcf/xd3/xa0/xbe/xfa/x86/x00/x3e0xe0/xfb/x8c/xbd/xb2/x8a/xbf/x3e/xc3/x70/xa9/xbe0x21/x43/x84/x3e/x5a/x16/x16/x3e/xc4/x0b/x28/x3d0xb9/x09/x51/xbe/xda/xcc/x04/xbe/xd9/x4b/x8a/x3e0x8f/x22/x93/x3d/x72/xfe/x2d/xbe/x8e/x0d/x13/xbe0xa1/x47/xe4/xbe/x03/x52/xe0/x3e
/xd2/xdc/x5c/xbe0x63/xab/x83/x3e/x14/x07/x92/x3d/xe8/xa4/x7e/x3d0x16/xc6/x92/x3e/xd4/x76/x95/xbe/xf0/xdc/xc4/xbd0xec/x5e/x74/x3e/x0c/x08/xc7/x3e/x66/xc0/x9c/xbe0x6c/xf6/xdd/xbd/x77/x7e/x74/x3e/xc0/xfb/xdd/x3b0x7e/x00/x08/xbe/xa0/x2e/x9f/xbc/xe0/xc5/xbf/x3c0x22/xcc/x95/xbe/x00/x90/x80/x3a/x48/x39/x5c/xbd0x02/x1c/x06/xbe/x60/x32/xdd/xbc/x9c/xfb/x99/xbe0xb0/xc2/x0d/xbd/x28/x33/x42/xbd/x90/x1f/x82/xbd0x09/xd4/xd9/x3e/xa4/x79/xf0/x3d/xe6/x5b/x21/xbe0x6d/xa2/xbe/xbe/xae/x26/xc9/xbe/x76/x50/xeb/x3d0x5e/x01/x86/x3e/x67/xbd/xdc/x3e/x78/x38/x8a/xbe0x74/xc3/xec/x3d/x70/xc9/xd0/x3e/xc3/xfe/x5a/xbd0xd8/x69/x31/xbd/x3b/xdb/x7f/xbe/x12/x35/x58/xbe0xb0/x3a/x81/x3e/xcc/xa0/x17/xbe/x32/xad/x48/x3e0x1f/x35/x8d/x3e/xec/xfe/xd4/x3d/xef/x33/x7b/xbe0xce/x81/x9e/xbe/x02/x7a/x5a/x3e/xa7/xb6/xd5/xbe0xae/x04/x0c/x3e/x95/x40/xd1/x3e/x87/xfe/x1b/x3e0x61/xf3/x82/xbe/x00/xcd/x3c/xbd/x03/x7d/x8b/x3e0x20/xfd/xcb/x3c/x80/x33/x99/xbc/x09/xad/x68/xbe0x7a/xff/x75/x3e/x11/xba/x6d/xbe/x30/xe3/x31/x3d0x00/xfa/xc1/xbd/x9f/x38/xa3/xbe/x97/x88/x28/xbe0x37/x72/x3c/xbe/x58/xc6/x86/xbd/x62/xdf/x70/x3e0xb5/xef/xb8/x3c/x16/x67/x87/x3e/xce/x77/xb4/xbe0x20/x52/x64/x3d/xa8/xcc/xbc/xbd/x17/x62/x49/xbb0x42/x13/x49/x3e/xa6/x0a/x14/x3e/x95/xc9/x37/xbe0x8c/x10/xb2/xbe/xe1/x11/x6f/xbe/x46/x45/x73/x3e0xf0/x26/xbc/xbe/xc0/x4f/x88/xbb/x39/x0a/xbb/x3e0x0f/x32/x88/x3e/xf0/x81/x99/x3e/x3f/xd5/x8c/xbe0x10/x59/x4f/xbd/x63/x85/xb6/x3e/x6a/xe4/x98/xbf0x6b/x1a/xa8/xbe/xc8/x77/xb5/xbe/x80/xea/x3f/xbd0x6d/x91/xac/xbe/xd2/xdd/x63/xbe/x41/xcb/x59/xbe0xb6/x0f/x94/xbe/x45/x12/x89/x3e/x4f/x6f/xb8/x3e0x08/xa5/x02/xbd/x12/x74/xa8/xbe/x56/x2f/xa6/xbe0x9e/xd8/x11/xbd/x3e/xdd/x67/x3e/x3e/xa8/x1a/x3e0x81/x8c/x9c/x3e/x9f/x5b/x96/x3d/x9c/x65/xca/x3d0xb4/x3c/xd2/xbe/x91/x2a/x45/x3e/x63/x53/x9c/x3e0x99/x85/x42/xbe/xfc/xaf/x04/xbe/x12/xd7/x88/xbe0xfa/xc2/xc5/xbe/x71/x2f/x96/x3e/xe5/x0b/x93/x3e0xb8/x85/xf4/x3d/x37/x48/xcd/xbe/xa1/xc8/x63/xbe0x5a/xa0/x3c/x3e/x4d/x84/x1e/x3b/x79/x40/x50/x3e0xcb/xb1/xd0/x3e/xe1/x10/xcd/x3e/x7f/x3c/xcd/x3e0xb8/xda/x37/xbe/x66/xff/xff/xff/x04/x00/x00/x000x40/x00/x00/x00/x9c/x4f/xd6/xbe/x7d/xdb/xf3/x3e0x6b/x2b/xaa/xbe/xc1/x26/x26/xbf/xee/x03/x1f/x3f0x00/x00/x00/x00/x37/xab/xa7/xbe/xd0/x10/x5b/x3e0xe5/xbc/xca/xbe/x00/x00/x00/x00/x94/x4a/x48/x3e0xc3/x7d/x2f/xbd/x27/xef/x9b/xbd/x32/x58/x48/x3f0xfc/x3b/xda/xbb/x18/xfd/x60/xbd/xb2/xff/xff/xff0x04/x00/x00/x00/x40/x00/x00/x00/xa8/xde/x92/x390x4a/x26/x8b/xbf/xb9/x79/x8c/x3e/xc1/xb8/x1e/x400xde/x0b/x9d/xbf/x0e/xe2/x61/xbe/xdc/x1b/xb1/x3e0x47/x15/x60/xbe/x8c/x3c/x06/x3f/xfc/x69/xcf/x3e0xd0/x07/x15/xbf/xa7/x92/x25/x3e/xf6/xf8/xbc/x3e0x31/xe1/xd4/x3f/x3f/xf6/x02/x3f/x78/xf2/x02/x3d0x00/x00/x06/x00/x08/x00/x04/x00/x06/x00/x00/x000x04/x00/x00/x00/x04/x00/x00/x00/xaf/x96/x93/xbe0xb8/xfb/xff/xff/x0f/x00/x00/x00/x54/x4f/x43/x4f0x20/x43/x6f/x6e/x76/x65/x72/x74/x65/x64/x2e/x000x01/x00/x00/x00/x10/x00/x00/x00/x0c/x00/x14/x000x04/x00/x08/x00/x0c/x00/x10/x00/x0c/x00/x00/x000xf0/x00/x00/x00/xe4/x00/x00/x00/xd8/x00/x00/x000x04/x00/x00/x00/x03/x00/x00/x00/x90/x00/x00/x000x48/x00/x00/x00/x04/x00/x00/x00/xce/xff/xff/xff0x00/x00/x00/x08/x18/x00/x00/x00/x0c/x00/x00/x000x04/x00/x00/x00/x1c/xfc/xff/xff/x01/x00/x00/x000x00/x00/x00/x00/x03/x00/x00/x00/x07/x00/x00/x000x08/x00/x00/x00/x09/x00/x00/x00/x00/x00/x0e/x000x14/x00/x00/x00/x08/x00/x0c/x00/x07/x00/x10/x000x0e/x00/x00/x00/x00/x00/x00/x08/x1c/x00/x00/x000x10/x00/x00/x00/x04/x00/x00/x00/xba/xff/xff/xff0x00/x00/x00/x01/x01/x00/x00/x00/x07/x00/x00/x000x03/x00/x00/x00/x04/x00/x00/x00/x05/x00/x00/x000x06/x00/x00/x00/x00/x00/x0e/x00/x16/x00/x00/x000x08/x00/x0c/x00/x07/x00/x10/x00/x0
e/x00/x00/x000x00/x00/x00/x08/x24/x00/x00/x00/x18/x00/x00/x000x0c/x00/x00/x00/x00/x00/x06/x00/x08/x00/x07/x000x06/x00/x00/x00/x00/x00/x00/x01/x01/x00/x00/x000x04/x00/x00/x00/x03/x00/x00/x00/x01/x00/x00/x000x02/x00/x00/x00/x03/x00/x00/x00/x01/x00/x00/x000x00/x00/x00/x00/x01/x00/x00/x00/x01/x00/x00/x000x0a/x00/x00/x00/x10/x03/x00/x00/xa4/x02/x00/x000x40/x02/x00/x00/xf4/x01/x00/x00/xac/x01/x00/x000x48/x01/x00/x00/xfc/x00/x00/x00/xb4/x00/x00/x000x50/x00/x00/x00/x04/x00/x00/x00/x26/xfd/xff/xff0x3c/x00/x00/x00/x01/x00/x00/x00/x0c/x00/x00/x000x04/x00/x00/x00/x18/xfd/xff/xff/x20/x00/x00/x000x73/x65/x71/x75/x65/x6e/x74/x69/x61/x6c/x5f/x310x2f/x64/x65/x6e/x73/x65/x5f/x34/x2f/x4d/x61/x740x4d/x75/x6c/x5f/x62/x69/x61/x73/x00/x00/x00/x000x01/x00/x00/x00/x01/x00/x00/x00/x6e/xfd/xff/xff0x50/x00/x00/x00/x02/x00/x00/x00/x0c/x00/x00/x000x04/x00/x00/x00/x60/xfd/xff/xff/x34/x00/x00/x000x73/x65/x71/x75/x65/x6e/x74/x69/x61/x6c/x5f/x310x2f/x64/x65/x6e/x73/x65/x5f/x34/x2f/x4d/x61/x740x4d/x75/x6c/x2f/x52/x65/x61/x64/x56/x61/x72/x690x61/x62/x6c/x65/x4f/x70/x2f/x74/x72/x61/x6e/x730x70/x6f/x73/x65/x00/x00/x00/x00/x02/x00/x00/x000x01/x00/x00/x00/x10/x00/x00/x00/xce/xfd/xff/xff0x34/x00/x00/x00/x08/x00/x00/x00/x0c/x00/x00/x000x04/x00/x00/x00/xc0/xfd/xff/xff/x19/x00/x00/x000x73/x65/x71/x75/x65/x6e/x74/x69/x61/x6c/x5f/x310x2f/x64/x65/x6e/x73/x65/x5f/x33/x2f/x52/x65/x6c0x75/x00/x00/x00/x02/x00/x00/x00/x01/x00/x00/x000x10/x00/x00/x00/x12/xfe/xff/xff/x3c/x00/x00/x000x03/x00/x00/x00/x0c/x00/x00/x00/x04/x00/x00/x000x04/xfe/xff/xff/x20/x00/x00/x00/x73/x65/x71/x750x65/x6e/x74/x69/x61/x6c/x5f/x31/x2f/x64/x65/x6e0x73/x65/x5f/x33/x2f/x4d/x61/x74/x4d/x75/x6c/x5f0x62/x69/x61/x73/x00/x00/x00/x00/x01/x00/x00/x000x10/x00/x00/x00/x5a/xfe/xff/xff/x50/x00/x00/x000x04/x00/x00/x00/x0c/x00/x00/x00/x04/x00/x00/x000x4c/xfe/xff/xff/x34/x00/x00/x00/x73/x65/x71/x750x65/x6e/x74/x69/x61/x6c/x5f/x31/x2f/x64/x65/x6e0x73/x65/x5f/x33/x2f/x4d/x61/x74/x4d/x75/x6c/x2f0x52/x65/x61/x64/x56/x61/x72/x69/x61/x62/x6c/x650x4f/x70/x2f/x74/x72/x61/x6e/x73/x70/x6f/x73/x650x00/x00/x00/x00/x02/x00/x00/x00/x10/x00/x00/x000x10/x00/x00/x00/xba/xfe/xff/xff/x34/x00/x00/x000x0a/x00/x00/x00/x0c/x00/x00/x00/x04/x00/x00/x000xac/xfe/xff/xff/x19/x00/x00/x00/x73/x65/x71/x750x65/x6e/x74/x69/x61/x6c/x5f/x31/x2f/x64/x65/x6e0x73/x65/x5f/x32/x2f/x52/x65/x6c/x75/x00/x00/x000x02/x00/x00/x00/x01/x00/x00/x00/x10/x00/x00/x000xfe/xfe/xff/xff/x3c/x00/x00/x00/x05/x00/x00/x000x0c/x00/x00/x00/x04/x00/x00/x00/xf0/xfe/xff/xff0x20/x00/x00/x00/x73/x65/x71/x75/x65/x6e/x74/x690x61/x6c/x5f/x31/x2f/x64/x65/x6e/x73/x65/x5f/x320x2f/x4d/x61/x74/x4d/x75/x6c/x5f/x62/x69/x61/x730x00/x00/x00/x00/x01/x00/x00/x00/x10/x00/x00/x000x46/xff/xff/xff/x50/x00/x00/x00/x06/x00/x00/x000x0c/x00/x00/x00/x04/x00/x00/x00/x38/xff/xff/xff0x34/x00/x00/x00/x73/x65/x71/x75/x65/x6e/x74/x690x61/x6c/x5f/x31/x2f/x64/x65/x6e/x73/x65/x5f/x320x2f/x4d/x61/x74/x4d/x75/x6c/x2f/x52/x65/x61/x640x56/x61/x72/x69/x61/x62/x6c/x65/x4f/x70/x2f/x740x72/x61/x6e/x73/x70/x6f/x73/x65/x00/x00/x00/x000x02/x00/x00/x00/x10/x00/x00/x00/x01/x00/x00/x000xa6/xff/xff/xff/x48/x00/x00/x00/x09/x00/x00/x000x2c/x00/x00/x00/x0c/x00/x00/x00/x08/x00/x0c/x000x04/x00/x08/x00/x08/x00/x00/x00/x10/x00/x00/x000x04/x00/x00/x00/x01/x00/x00/x00/x00/x00/x7f/x430x01/x00/x00/x00/x00/x00/x00/x00/x0d/x00/x00/x000x64/x65/x6e/x73/x65/x5f/x32/x5f/x69/x6e/x70/x750x74/x00/x00/x00/x02/x00/x00/x00/x01/x00/x00/x000x01/x00/x00/x00/x00/x00/x0e/x00/x14/x00/x04/x000x00/x00/x08/x00/x0c/x00/x10/x00/x0e/x00/x00/x000x28/x00/x00/x00/x07/x00/x00/x00/x10/x00/x00/x000x08/x00/x00/x00/x04/x00/x04/x00/x04/x
00/x00/x000x08/x00/x00/x00/x49/x64/x65/x6e/x74/x69/x74/x790x00/x00/x00/x00/x02/x00/x00/x00/x01/x00/x00/x000x01/x00/x00/x00/x01/x00/x00/x00/x10/x00/x00/x000x00/x00/x0a/x00/x0c/x00/x07/x00/x00/x00/x08/x000x0a/x00/x00/x00/x00/x00/x00/x09/x03/x00/x00/x00", model_tflite_len); // Set up logging. Google style is to avoid globals or statics because of // lifetime uncertainty, but since this has a trivial destructor it's okay. // NOLINTNEXTLINE(runtime-global-variables) static tflite::MicroErrorReporter micro_error_reporter; error_reporter = &micro_error_reporter; // Map the model into a usable data structure. This doesn't involve any // copying or parsing, it's a very lightweight operation. // model = tflite::GetModel(g_model); model = tflite::GetModel(sdram_tflite); if (model->version() != TFLITE_SCHEMA_VERSION) { TF_LITE_REPORT_ERROR(error_reporter, "Model provided is schema version %d not equal " "to supported version %d.", model->version(), TFLITE_SCHEMA_VERSION); return; } // This pulls in all the operation implementations we need. // NOLINTNEXTLINE(runtime-global-variables) static tflite::AllOpsResolver resolver; // Build an interpreter to run the model with. static tflite::MicroInterpreter static_interpreter( model, resolver, tensor_arena, kTensorArenaSize, error_reporter); interpreter = &static_interpreter; // Allocate memory from the tensor_arena for the model's tensors. TfLiteStatus allocate_status = interpreter->AllocateTensors(); if (allocate_status != kTfLiteOk) { TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed"); return; } // Obtain pointers to the model's input and output tensors. input = interpreter->input(0); output = interpreter->output(0); // Keep track of how many inferences we have performed. inference_count = 0; } // The name of this function is important for Arduino compatibility. void loop() { // Calculate an x value to feed into the model. We compare the current // inference_count to the number of inferences per cycle to determine // our position within the range of possible x values the model was // trained on, and use this to calculate a value. float position = static_cast<float>(inference_count) / static_cast<float>(kInferencesPerCycle); float x = position * kXrange; // Quantize the input from floating-point to integer int8_t x_quantized = x / input->params.scale + input->params.zero_point; // Place the quantized input in the model's input tensor input->data.int8[0] = x_quantized; // Run inference, and report any error TfLiteStatus invoke_status = interpreter->Invoke(); if (invoke_status != kTfLiteOk) { TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed on x: %f\n", static_cast<double>(x)); return; } // Obtain the quantized output from model's output tensor int8_t y_quantized = output->data.int8[0]; // Dequantize the output from integer to floating-point float y = (y_quantized - output->params.zero_point) * output->params.scale; // Output the results. A custom HandleOutput function can be implemented // for each supported hardware target. HandleOutput(error_reporter, x, y); // Increment the inference_counter, and reset it if we have reached // the total number per cycle inference_count += 1; if (inference_count >= kInferencesPerCycle) inference_count = 0; } This code compiles, but crashes on startup. Any suggestion to try would be appreciated, but not a huge deal as I have lots of other ML projects on the go.
st207063
I have a dataframe in my ML project, and it is initially in the following format:

            Feature 1 | Feature 2 | Feature 3 | Feature 4 | etc.
Time 1
Time 2
Time 3
Etc.

I am trying to change this dataframe to be 3D, where each value in the dataframe has another dimension "into the screen", containing the same value for the same feature, but at the previous 192 timesteps. Here I am trying to use the built-in function keras.preprocessing.timeseries_dataset_from_array(), but it returns the opposite of what I’m trying to achieve. I expect it to return:

            Feature 1 | Feature 2 | Feature 3 | Feature 4 | etc.
Time 192    | [1-192]  | [1-192]  | [1-192]  |
Time 193    |          |          |          |
Time 194    |          |          |          |
Time End    |          |          |          |

Here it instead returns:

              Feature 1 | Feature 2 | Feature 3 | Feature 4 | etc.
Time 1        | [192-1]  | [192-1]  | [192-1]  |
Time 2        |          |          |          |
Time 3        |          |          |          |
Time End-192  |          |          |          |

Basically every sample contains the future 192 values instead of the previous 192 values of the dataset. Therefore it ends 192 samples before it should, and starts 192 samples too early. My code is the following:

# Past is defined as 192
# x_val is the 2-d dataframe
# y_val is one of the columns in the dataframe.
dataset_historic_train = keras.preprocessing.timeseries_dataset_from_array(
    x_val,
    y_val,
    sequence_length=past,
    batch_size=len(x_val),
)

Where x_val is the entirety of my 2-d dataframe indexed from first to last time of sample, and y_val is my target feature, which is Feature 1 in this case.
st207064
You pass x and y of equal length to the dataset constructor. When it transforms x using sliding window of size 192, the x becomes shorter, because the first 192 rows of your original DataFrame do not have enough previous values. So it drops the last 192 values of y to pair it with x. To make it work as expected you should pass x and y[192:]. Then it will drop the first 192 values of y.
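Concretely, with the names from your snippet, something along these lines should give the "previous 192 steps" behaviour (past = 192):

from tensorflow import keras

past = 192

dataset_historic_train = keras.preprocessing.timeseries_dataset_from_array(
    data=x_val,              # full 2-d frame of features
    targets=y_val[past:],    # drop the first 192 targets
    sequence_length=past,
    batch_size=len(x_val),
)
# Now the window covering rows [i, i + 192) of x_val is paired with
# y_val[i + 192], i.e. the target right after that window.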
st207065
Greetings TF.js Community, We are looking forward to catching up with you all today Tuesday November 2nd from 5:00-6:00 P.M. PST. We will discuss the first RFC to the new sig-tfjs repo “WebNN Delegate for TensorFlow Lite Web” (https://github.com/tensorflow/sig-tfjs/pull/2 2), and also discuss Coral Node support and TFJS Debugger tool. Please find the agenda and the Gmeet link below. Feel free to add any questions or topics for discussion to the agenda. docs.google.com [Public] SIG TF.js Meeting Notes 1 [Public] SIG TF.js Meeting Notes Meeting: 2021-11-02 Tuesday, November 2nd, 2021, 5:00 – 6:00 pm Pacific Time Meeting Recording - TODO Please join this link: meet.google.com/dac-ngjq-okc (US) +1 617-675-4444‬ PIN: ‪298 636 519 4797‬# Shared Drive... Cheers, and talk soon! Masoud on behalf of the TF.js team
st207066
Hi everyone! If you didn’t have the chance to join today’s meeting, here’s the link to rewatch it. See you next time!
st207067
I have trained a model using TensorFlow Model Maker:

model = image_classifier.create(
    train_data,
    train_whole_model=False,
    epoch=5
)

and exported it:

model.export(export_dir='./tfjs-model', export_format=ExportFormat.TFJS)

Then with TensorFlow.js I load it:

model = await tf.loadLayersModel("./tfjs-model/model.json");

but it throws the following error:

Unknown layer: HubKerasLayerV1V2. This may be due to one of the following reasons: 1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code. 2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
    at deserializeKerasObject (generic_utils.js:227)
    at deserialize (serialization.js:25)
    at fromConfig (models.js:861)
    at deserializeKerasObject (generic_utils.js:258)
    at deserialize (serialization.js:25)
    at loadLayersModelFromIOHandler (models.js:222)

I have also tried to export the model with the TensorFlow.js converter, but I get the same error:

tensorflowjs_converter --input_format=tf_saved_model --output_node_names='efficientnet-b0/model/blocks_1/tpu_match_normalization' --saved_model_tags=./model ./tfjs-model2
st207068
SIG Build’s next meeting will be tomorrow, Tuesday, November 2, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 4, and feel free to suggest new agenda items. The time might be different for attendees in Europe because the USA starts daylight savings time on November 7, but other countries may do it differently.
st207069
Hello, this is my first time posting here. Let me know whether this question is suited to this forum or not. I have been trying to translate TensorFlow models into MLIR-HLO. I successfully used tf-mlir-translate and tf-opt with very simple TF models like Conv2D, MatMul, MaxPool…, achieving the expected results. However, if I try to do the same with the official tf.Variable example, using the three steps below:

tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false target.pbtxt -tf-prune-unused-nodes -tf-control-output-arrays=Variable/Assign,AssignAdd -o target.mlir
tf-opt -tf-executor-to-functional-conversion target.mlir -o target-func.mlir
tf-opt --tf-to-hlo-pipeline target-func.mlir -o target-mhlo.mlir

I get the following error message at the 3rd step:

target-func.mlir:2:3: error: The following operations cannot be legalized: tf.Assign (count: 1); tf.AssignAdd (count: 1); tf.VariableV2 (count: 1). These legalization failure(s) may be due to missing TF to HLO lowerings and/or unsupported attributes, etc.
  func @main() attributes {tf.entry_function = {control_outputs = "Variable/Assign,AssignAdd", inputs = "", outputs = ""}} {
  ^
target-func.mlir:2:3: error: Emitting more detail about one op that failed to legalize...
  func @main() attributes {tf.entry_function = {control_outputs = "Variable/Assign,AssignAdd", inputs = "", outputs = ""}} {
  ^
target-func.mlir:7:10: error: 'tf.Assign' op is not legalizable
  %2 = "tf.Assign"(%0, %cst_0) {_class = ["loc:@Variable"], device = "", use_locking = true, validate_shape = true} : (tensor<!tf.f32ref>, tensor<f32>) -> tensor<*x!tf.f32ref>
  ^
target-func.mlir:7:10: note: see current operation: %4 = "tf.Assign"(%2, %0) {_class = ["loc:@Variable"], device = "", use_locking = true, validate_shape = true} : (tensor<!tf.f32ref>, tensor<f32>) -> tensor<!tf.f32ref>

As of right now, is there a way to lower these tf operations into mhlo-hlo?
st207070
“HLO ops are intended for the numeric computation side and are mostly pure, with state management handled outside them. For TF this is done via TF ops, and the state is hoisted/sunk outside the stateless parts.”
How do I continue the lowering for TF? Does TF lower the resource variables to a dialect other than mhlo?
st207071
In general (for models that support it), the resource variables will be hoisted outside of the function intended to be compiled, to make it "pure". See for example this pass pipeline, which is our flow to target TPUs: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tensorflow/transforms/bridge.cc#L136
st207072
Hi GipsonLeo, have you found any solution for converting tf.Assign to the MHLO dialect? I am experiencing the same issue. Thanks
st207073
Hi there, I am almost entirely new to TF or any ML. I am a software developer and build some web applications. As an improvement to one project, it would be great if it were possible to detect and track first a passport and later a face on the user's webcam or smartphone camera. This is part of an identification process and will help the system take pictures at the right moment with optimal framing.

I already played a little bit with the Teachable Machine demo and got the impression that this kind of approach, even in the browser, is real-time capable on a normal end-user computer or smartphone. I know that Teachable Machine does image classification instead of tracking, and I am not sure what the best approach would be. Basically I want a person to show the front side of their passport to the webcam - take a photo - show the rear side - take a photo. Then show their face - take a photo - and take another photo of the face with some parallax, to separate a photo of a face from a real 3D object.

Regarding this idea/vision, I have several questions which I would really like to ask somebody who knows TF.js:

1. What method should I go for: object tracking or image classification? (What kind of training data would be required?)
2. What footprint in terms of end-user file size could I expect to end up with?
3. What technology, processes, and tools do I have to learn to realize this?

Thanks, Simon
st207074
Hello everyone, when I read the example code of tflite-micro, I saw that different examples use different values of kTensorArenaSize. What is the basis for determining this value? For example, in hello_world the value of kTensorArenaSize is 2000, while in magic_wand it is 64*1024.
st207075
The comments contain lines like the following, so the only way is to experiment repeatedly to find the minimal value that works for your model:

// Create an area of memory to use for input, output, and intermediate arrays.
// The size of this will depend on the model you're using, and may need to be
// determined by experimentation.
st207076
Using typeof() on a tf tensor only returns "object". Using isinstanceof object, or isinstanceof tf.tensor, generates a syntax error: missing ) after argument list. So, how can I verify that an object is a tf tensor on which we can apply tensor-related operations?

import * as tf from "@tensorflow/tfjs";

const a = tf.tensor([[1, 2], [3, 4]]);
console.log('type:', typeof(a)); // returns "object"
st207077
You can simply print the variable to the console, and it will print Tensor if the object is a tf object. For example:

import * as tf from "@tensorflow/tfjs";

console.log(tf.tensor([1, 2, 3]))

Output: [screenshot of the printed Tensor object in the console]
st207078
Thanks for the suggestion, @Aseem_Mangla. However, I wonder if there is a direct way to get a response saying that a given object is a "tensorflow tensor" in particular, or not. E.g. we get <class 'torch.Tensor'> when we ask for the type of a PyTorch tensor, as shown below:

import torch
import numpy as np

data = [[1, 2], [3, 4], [5, 6]]
t1 = torch.tensor(data)
print(type(t1))
# output is:
# <class 'torch.Tensor'>
st207079
Allow me to answer my own question. This question is actually about how to get an instance object's class name. The solutions are shown in the code below:

import * as tf from "@tensorflow/tfjs";

const a = tf.randomNormal([3, 4, 5]);
a.print();

console.log('type of a \t\t:', typeof(a));                      // returns a string "object"
console.log('a is an instance of \t:', a instanceof tf.Tensor); // returns a boolean "true"
console.log('constructor of a \t:', a.constructor);             // returns a function "Tensor(){}"
console.log('constructor name of a \t:', a.constructor.name);   // returns a string "Tensor"

// reference link:
// https://stackoverflow.com/questions/1249531/how-to-get-a-javascript-objects-class?rq=1
function getNativeClass(obj) {
  if (typeof obj === "undefined") return "undefined";
  if (obj === null) return "null";
  return Object.prototype.toString.call(obj).match(/^\[object\s(.*)\]$/)[1];
}

function getAnyClass(obj) {
  if (typeof obj === "undefined") return "undefined";
  if (obj === null) return "null";
  return obj.constructor.name;
}

console.log('getAnyClass of a \t:', getAnyClass(a));       // returns a string "Tensor"
console.log('getNativeClass of a \t:', getNativeClass(a)); // returns a string "object"

[codeSandBox link] [stackoverflow reference link]
st207080
I think you might be interested in this presentation: Getting Started with Google's TensorFlow on Spresense
st207081
I am building a simple 1D convolutional neural network in Keras. Here is the model:

def build_model():
    model = models.Sequential()
    model.add(layers.SeparableConv1D(64, kernel_size=2, activation="relu", input_shape=(64, 20)))
    model.add(layers.SeparableConv1D(64, kernel_size=2, activation="relu"))
    model.add(layers.MaxPooling1D(4))
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.1))
    model.add(layers.Dense(1, activation="sigmoid"))

    model.compile(
        optimizer='rmsprop',
        loss='binary_crossentropy',
        metrics=[
            keras.metrics.BinaryAccuracy(),
        ],
    )
    #model.summary()
    return model

When I train the model on roughly 1500 samples, the training and validation accuracy always overlap almost completely, as reflected in the graph below. This makes me think something fishy is going on in my code or in Keras/TensorFlow, since the loss increases dramatically and you would expect the accuracy to be affected at least somewhat by this. It looks like it is massively overfitting and yet only reporting the accuracy values for the training set, or something along those lines. When I then test on a held-out test set, the accuracy is nowhere near the 85 to 90 percent reported in the graph, but rather ~70%. Any help is greatly appreciated; I have been stuck on this for the longest time. Below is the training code.

[plot: training/validation accuracy and loss over 100 epochs]

#Define the number of folds... this will give us an 80/20 split
k = 5
epochs = 100
num_val_samples = len(x_train) // k

scores_binacc = []
scores_precision = []
scores_recall = []
histories = []

#Train the dense model in k iterations
for i in range(k):
    print('Processing fold #', i)
    val_data = x_train[i * num_val_samples : (i + 1) * num_val_samples]
    val_targets = y_train[i * num_val_samples : (i + 1) * num_val_samples]
    print('Validation partition = ', i * num_val_samples, (i + 1) * num_val_samples)
    print('Training partition 1 = ', 0, i * num_val_samples)
    print('Training partition 2 = ', (i + 1) * num_val_samples, len(x_train))

    partial_train_data = np.concatenate(
        [
            x_train[:i * num_val_samples],
            x_train[(i + 1) * num_val_samples:]
        ],
        axis=0
    )
    partial_train_targets = np.concatenate(
        [
            y_train[:i * num_val_samples],
            y_train[(i + 1) * num_val_samples:]
        ],
        axis=0
    )

    model = build_model()
    h = model.fit(
        partial_train_data,
        partial_train_targets,
        validation_data=(val_data, val_targets),
        epochs=epochs,
        verbose=1
    )

    val_loss, val_binacc = model.evaluate(val_data, val_targets, verbose=0)
    scores_binacc.append(val_binacc)
    #scores_precision.append(val_precision)
    #scores_recall.append(val_recall)
    histories.append(h)
st207082
Maybe you're overfitting, but the underlying relationships are simple, so your validation set still has decent accuracy even though its loss is higher. I also suspect the difference in accuracy could be caused by shuffling. Are you shuffling your data during training but not the test data? Does order matter for your problem?
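If shuffling turns out to be the issue, here is a minimal sketch of shuffling the training features and labels together (with a single permutation, so each sample stays paired with its label) before the k-fold split; it assumes x_train and y_train are NumPy arrays as in the post above:

import numpy as np

# One permutation for both arrays keeps every sample matched with its label.
rng = np.random.default_rng(seed=42)
perm = rng.permutation(len(x_train))
x_train, y_train = x_train[perm], y_train[perm]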
st207083
Your dataset is very small, which makes your model prone to overfitting. You should try the following options:

1. Augment your data.
2. Reduce the learning rate or use a learning-rate schedule (see the sketch after this list).
3. Study this paper: "A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay" https://arxiv.org/abs/1803.09820
4. Take a look at the model predictions and compare them with the ground truth.
5. Change the metric to the F1 score.

Hope the above helps!
Supachan
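To illustrate the learning-rate point, here is a minimal sketch using Keras's built-in ReduceLROnPlateau callback, reusing the variable names from the training loop in the question; the factor, patience, and min_lr values are only illustrative starting points, not tuned values:

import tensorflow as tf

# Halve the learning rate whenever validation loss has not improved for 5 epochs.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.5,
    patience=5,
    min_lr=1e-6,
)

h = model.fit(
    partial_train_data,
    partial_train_targets,
    validation_data=(val_data, val_targets),
    epochs=epochs,
    callbacks=[reduce_lr],
    verbose=1,
)

A tf.keras.callbacks.LearningRateScheduler with a custom schedule function works the same way if you prefer an explicit schedule over a plateau-based one.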
st207084
Hello, I would like to implement some ideas from the paper [1603.06432] Beyond Sharing Weights for Deep Domain Adaptation.

The paper is about domain adaptation, where two networks are trained simultaneously and their weights should stay related. This is achieved with a weight regularizer: using some kind of distance metric, it ensures that the weights of both networks remain close. So the function takes as input the weights of two nets.

How can I implement this in Keras? Should I create a custom tf.keras.regularizers.Regularizer? Any help and ideas are highly appreciated.
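One possible direction, not necessarily the only one: a built-in tf.keras.regularizers.Regularizer only ever receives a single weight tensor, so a penalty that couples two networks is easier to express as an extra term added to the loss in a custom training step. Below is a rough sketch under the assumption that both networks have identical architectures (so their trainable weights can be zipped pairwise); model_a, model_b, loss_fn, optimizer, and the batch contents are placeholders, and the plain squared-L2 distance is only a stand-in for whatever distance the paper actually proposes:

import tensorflow as tf

def weight_distance_penalty(model_a, model_b, strength=1e-3):
    # Sum of squared L2 distances between corresponding trainable weights.
    penalty = 0.0
    for wa, wb in zip(model_a.trainable_weights, model_b.trainable_weights):
        penalty += tf.reduce_sum(tf.square(wa - wb))
    return strength * penalty

def train_step(model_a, model_b, loss_fn, optimizer, batch):
    x_source, y_source, x_target, y_target = batch
    with tf.GradientTape() as tape:
        loss_a = loss_fn(y_source, model_a(x_source, training=True))
        loss_b = loss_fn(y_target, model_b(x_target, training=True))
        total_loss = loss_a + loss_b + weight_distance_penalty(model_a, model_b)
    variables = model_a.trainable_weights + model_b.trainable_weights
    grads = tape.gradient(total_loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return total_loss

If you prefer to stay within model.fit, you could wrap both networks in a single keras.Model subclass and add the same penalty inside its overridden train_step.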
st207085
Sorry about the last-minute notice, but SIG-JVM is holding its next meeting today, October 22nd, at 9am PDT. The meeting call-in details and agenda are here; feel free to add additional topics, including the status of the next 0.4.0 release.
st207086
Can anyone help me with my FYP project? I am building a text-to-speech model using my own dataset. I have text lines as input of shape (868, 70), where 70 is the maximum number of words in each record, and audio features as output of shape (868, 82688); each record holds Mel-spectrogram features. I want to know which activation and loss function to use in my layers to get 82688 continuous values in the output, for example [1.6796985, 0.4172823, 0.0011608523, ..., 0.000051741976]. I know it is a regression problem. Please help.
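Since the targets are continuous Mel-spectrogram values, a common starting point for a regression output like this is a linear (identity) activation on the final layer together with a mean-squared-error or mean-absolute-error loss. Here is a toy sketch just to show the output activation and loss; the hidden layer is a placeholder and this is not a recommended TTS architecture:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(70,)),               # 70 word tokens per record
    layers.Dense(512, activation="relu"),      # placeholder hidden layer
    layers.Dense(82688, activation="linear"),  # one continuous value per spectrogram bin
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()

If the spectrogram values span a wide range, normalizing the targets (and undoing the normalization at inference time) usually makes MSE training noticeably more stable.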
st207087
Greetings TF.js Community,

We are looking forward to catching up with you all tomorrow, Tuesday October 19th, from 5:00-6:00 PM PST, after a gap of a couple of months now! Tomorrow we will share a draft of the RFC process for TF.js contributions. The RFCs will be assigned a sponsor from the TensorFlow team who will help your project be successful. With a more formal contribution to the new sig-tfjs repo, our TensorFlow group can help you connect with partners to collaborate through working groups, and we can offer regular support on working group projects through the SIG. In addition, we will help promote your work, and there will be swag involved!

Please find the agenda and the Gmeet link below. Feel free to add any questions or topics for discussion to the agenda.

[Public] SIG TF.js Meeting Notes (docs.google.com)
Tuesday, October 19, 2021, 5:00 - 6:00 pm Pacific Time
Please join this link: meet.google.com/dac-ngjq-okc
(US) +1 617-675-4444 PIN: 298 636 519 4797#

Cheers, and talk soon!
Masoud, on behalf of the TF.js team
st207088
If you missed the SIG TF.js meeting, please see the recording here. We look forward to seeing you at the next meeting!
st207089
We are starting to see this error in some of the unit tests, within recent docker containers we are creating. 2021-07-14 02:13:34.052676: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers see this in the tip of develop branch (which is a fork of the upstream/master branch) and in the tip of our r2.6 branch (which is a fork of the upstream/r2.6 branch) see this in about 10+ of the unit tests…these ones 21134://tensorflow/python/compiler/xla:xla_test_gpu FAILED in 3 out of 3 in 3.6s 21139://tensorflow/python/keras/benchmarks:eager_microbenchmarks_test_gpu FAILED in 3 out of 3 in 3.6s 21144://tensorflow/python/keras/benchmarks:model_components_benchmarks_test_gpu FAILED in 3 out of 3 in 3.5s 21149://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:antirectifier_benchmark_test_gpu FAILED in 3 out of 3 in 3.4s 21154://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:bidirectional_lstm_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s 21159://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:cifar10_cnn_benchmark_test_gpu FAILED in 3 out of 3 in 3.6s 21164://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_conv_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s 21169://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_conv_custom_training_benchmark_test_gpu FAILED in 3 out of 3 in 3.7s 21174://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_hierarchical_rnn_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s 21179://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_irnn_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s 21184://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:reuters_mlp_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s 21189://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:text_classification_transformer_benchmark_test_gpu FAILED in 3 out of 3 in 3.7s 21194://tensorflow/python/ops/numpy_ops:np_interop_test_gpu FAILED in 3 out of 3 in 29.1s Anyone else running into this? Any insight as to what might be causing this?
st207090
The Docker containers we build use this script to install all the pip packages:

https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/blob/develop-upstream/tensorflow/tools/ci_build/install/install_pip_packages.sh

which in turn seems to install the keras-nightly package:

https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/blob/develop-upstream/tensorflow/tools/ci_build/install/install_pip_packages.sh#L98

# TensorFlow Serving integration tests require the following:
pip3 install grpcio
# Eager-to-graph execution needs astor, gast and termcolor:
pip3 install --upgrade astor
pip3 install --upgrade gast
pip3 install --upgrade termcolor
# Keras
pip3 install keras-nightly --no-deps
pip3 install keras_preprocessing==1.1.0 --no-deps
pip3 install --upgrade h5py==3.1.0
# Estimator
pip3 install tf-estimator-nightly --no-deps
# Tensorboard
pip3 install tb-nightly --no-deps

Is that still the correct thing to do? And is it correlated with the error we are getting? Thanks.
st207091
On the 2.6 branch it is:

https://github.com/tensorflow/tensorflow/blob/v2.6.0-rc1/tensorflow/tools/ci_build/release/requirements_common.txt#L27

wheel ~= 0.36.2
wrapt ~= 1.12.1
# We need to pin the gast dependency exactly
gast == 0.4.0
# Finally, install tensorboard and estimator and keras
# Note that here we want the latest version that matches TF major.minor version
# Note that we must use nightly here as these are used in nightly jobs
# For release jobs, we will pin these on the release branch
keras-nightly ~= 2.6.0.dev
tb-nightly ~= 2.6.0.a
tf-estimator-nightly ~= 2.6.0.dev
# Test dependencies
grpcio ~= 1.38.0
portpicker ~= 1.4.0
scipy ~= 1.5.4  # NOTE: not the latest version due to py3.6

Or not?
st207092
Hello Bhack, I saw your reply to Deven_Desai. I'm having a similar problem. I have installed TensorFlow on an Nvidia Jetson Nano. The installation went well, but when I load a TensorFlow model stored in an HDF5 file, I'm getting this error:

2021-10-10 16:39:06.985798: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers
Traceback (most recent call last):
  File "benchmarkObjectDetection.py", line 25, in <module>
    nn_loaded_model = tf.keras.models.load_model(filepath_for_model, compile=False)  # IMPORTANT! the compile=False input is crucial for loading the model, as the custom loss function is unknown in the compilation phase; we can only use the model for prediction this way.
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__
    module = self._load()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load
    module = importlib.import_module(self.__name__)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  [several frozen importlib bootstrap frames omitted]
  File "/usr/local/lib/python3.6/dist-packages/keras/__init__.py", line 25, in <module>
    from keras import models
  File "/usr/local/lib/python3.6/dist-packages/keras/models.py", line 20, in <module>
    from keras import metrics as metrics_module
  File "/usr/local/lib/python3.6/dist-packages/keras/metrics.py", line 26, in <module>
    from keras import activations
  File "/usr/local/lib/python3.6/dist-packages/keras/activations.py", line 20, in <module>
    from keras.layers import advanced_activations
  File "/usr/local/lib/python3.6/dist-packages/keras/layers/__init__.py", line 23, in <module>
    from keras.engine.input_layer import Input
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/input_layer.py", line 21, in <module>
    from keras.engine import base_layer
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py", line 43, in <module>
    from keras.mixed_precision import loss_scale_optimizer
  File "/usr/local/lib/python3.6/dist-packages/keras/mixed_precision/loss_scale_optimizer.py", line 18, in <module>
    from keras import optimizers
  File "/usr/local/lib/python3.6/dist-packages/keras/optimizers.py", line 26, in <module>
    from keras.optimizer_v2 import adadelta as adadelta_v2
  File "/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/adadelta.py", line 22, in <module>
    from keras.optimizer_v2 import optimizer_v2
  File "/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/optimizer_v2.py", line 37, in <module>
    "/tensorflow/api/keras/optimizers", "keras optimizer usage", "method")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/monitoring.py", line 361, in __init__
    len(labels), name, description, *labels)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/monitoring.py", line 135, in __init__
    self._metric = self._metric_methods[self._label_length].create(*args)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.

I would very much appreciate any help with this problem. Thanks!
st207093
Thank you very much Bhack! This is what I get:

Name: tensorflow
Version: 2.6.0+nv21.9

Name: keras
Version: 2.7.0rc0

Thank you!
Avner
st207094
Can you try to install and test it in a venv? See: Install TensorFlow with pip (tensorflow.org)
st207095
Hi, can you please explain what you think the problem is and why installing within a venv should help? I installed TensorFlow following the instructions on the Nvidia site for Jetsons, and I'm not sure how installing in a venv will affect performance. BTW, when I run a simple TensorFlow calculation, the code runs with no issue. Thanks.
st207096
Hi, I followed the instructions in the link you sent me. After setting up the venv I'm trying to install TensorFlow using pip, but I'm getting the following message:

avner@avner-desktop:~$ source ./venv/bin/activate
(venv) avner@avner-desktop:~$ pip install tensorflow
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow

I also tried specifying versions of tensorflow but was not successful. Any suggestions? I'm running on a Jetson with Ubuntu 18.04. Thanks.
st207097
Yes, but for the venv you need to follow the Jetson steps under "Set up the Virtual Environment": https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html#install_multiple_versions_tensorflow
st207098
Thanks Bhack. The virtual environment doesn't help. However, I reinstalled TensorFlow with a different version, 2.5.0 (instead of 2.6.0), from the Nvidia TensorFlow container release 21.07, and it works great!

Instead of using this (which doesn't work for me; by default it installs TF 2.6.0 with no container version):

sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow

I used this (runs excellently):

sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow==2.5.0+nv21.07

Thanks a lot for your help!
st207099
That's because you always need to check the NVIDIA TensorFlow container versions; e.g., if you check for TF 2.6.0, the minimum required release is 21.09: https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/rel_21-09.html#rel_21-09