code (string, lengths 67 to 466k) | docstring (string, lengths 1 to 13.2k)
public static boolean checkpointsMatch( Collection<CompletedCheckpoint> first, Collection<CompletedCheckpoint> second) { if (first.size() != second.size()) { return false; } List<Tuple2<Long, JobID>> firstInterestingFields = new ArrayList<>(first.size()); for (CompletedCheckpoint checkpoint : first) { firstInterestingFields.add( new Tuple2<>(checkpoint.getCheckpointID(), checkpoint.getJobId())); } List<Tuple2<Long, JobID>> secondInterestingFields = new ArrayList<>(second.size()); for (CompletedCheckpoint checkpoint : second) { secondInterestingFields.add( new Tuple2<>(checkpoint.getCheckpointID(), checkpoint.getJobId())); } return firstInterestingFields.equals(secondInterestingFields); }
------------------------------------------------------------------------
private static int indexOfName(List<UnresolvedReferenceExpression> inputFieldReferences, String targetName) { int i; for (i = 0; i < inputFieldReferences.size(); ++i) { if (inputFieldReferences.get(i).getName().equals(targetName)) { break; } } return i == inputFieldReferences.size() ? -1 : i; }
Find the index of targetName in the list. Return -1 if not found.
private static boolean checkBegin( BinaryString pattern, MemorySegment[] segments, int start, int len) { int lenSub = pattern.getSizeInBytes(); return len >= lenSub && SegmentsUtil.equals(pattern.getSegments(), 0, segments, start, lenSub); }
Matches the beginning of each string to a pattern.
private static int indexMiddle( BinaryString pattern, MemorySegment[] segments, int start, int len) { return SegmentsUtil.find( segments, start, len, pattern.getSegments(), pattern.getOffset(), pattern.getSizeInBytes()); }
Matches the middle of each string to its pattern. @return The absolute offset of the match.
public <C extends RpcGateway> C getSelfGateway(Class<C> selfGatewayType) { if (selfGatewayType.isInstance(rpcServer)) { @SuppressWarnings("unchecked") C selfGateway = ((C) rpcServer); return selfGateway; } else { throw new RuntimeException("RpcEndpoint does not implement the RpcGateway interface of type " + selfGatewayType + '.'); } }
Returns a self gateway of the specified type which can be used to issue asynchronous calls against the RpcEndpoint. <p>IMPORTANT: The self gateway type must be implemented by the RpcEndpoint. Otherwise the method will fail. @param selfGatewayType class of the self gateway type @param <C> type of the self gateway to create @return Self gateway of the specified type which can be used to issue asynchronous rpcs
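A minimal usage sketch follows; HeartbeatGateway and requestHeartbeat() are hypothetical names for a gateway interface (extending RpcGateway) that the surrounding endpoint is assumed to implement, and are not part of the snippet above.
// inside a hypothetical RpcEndpoint subclass that also implements HeartbeatGateway
HeartbeatGateway self = getSelfGateway(HeartbeatGateway.class);
self.requestHeartbeat(); // the call is dispatched asynchronously to this endpoint's main thread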
protected void scheduleRunAsync(Runnable runnable, Time delay) { scheduleRunAsync(runnable, delay.getSize(), delay.getUnit()); }
Execute the runnable in the main thread of the underlying RPC endpoint, with the given delay. @param runnable Runnable to be executed @param delay The delay after which the runnable will be executed
protected void scheduleRunAsync(Runnable runnable, long delay, TimeUnit unit) { rpcServer.scheduleRunAsync(runnable, unit.toMillis(delay)); }
Execute the runnable in the main thread of the underlying RPC endpoint, with the given delay. @param runnable Runnable to be executed @param delay The delay after which the runnable will be executed @param unit The time unit of the delay
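A brief usage sketch of the delayed variant, assuming a hypothetical cleanupIdleSessions() method on the surrounding endpoint:
// schedule a cleanup task to run on the endpoint's main thread after 30 seconds
scheduleRunAsync(this::cleanupIdleSessions, 30_000L, TimeUnit.MILLISECONDS);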
protected <V> CompletableFuture<V> callAsync(Callable<V> callable, Time timeout) { return rpcServer.callAsync(callable, timeout); }
Execute the callable in the main thread of the underlying RPC service, returning a future for the result of the callable. If the callable is not completed within the given timeout, then the future will be failed with a {@link TimeoutException}. @param callable Callable to be executed in the main thread of the underlying rpc server @param timeout Timeout for the callable to be completed @param <V> Return type of the callable @return Future for the result of the callable.
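A hedged usage sketch; collectStatus() is a hypothetical method of the endpoint, and logging is assumed to go through the endpoint's own log field:
CompletableFuture<String> statusFuture = callAsync(this::collectStatus, Time.seconds(10));
statusFuture.whenComplete((status, failure) -> {
    if (failure != null) {
        // a TimeoutException lands here if the callable did not complete within 10 seconds
        log.warn("Status query failed.", failure);
    } else {
        log.info("Current status: {}", status);
    }
});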
@Internal public static void initializeSafetyNetForThread() { SafetyNetCloseableRegistry oldRegistry = REGISTRIES.get(); checkState(null == oldRegistry, "Found an existing FileSystem safety net for this thread: %s " + "This may indicate an accidental repeated initialization, or a leak of the " + "(Inheritable)ThreadLocal through a ThreadPool.", oldRegistry); SafetyNetCloseableRegistry newRegistry = new SafetyNetCloseableRegistry(); REGISTRIES.set(newRegistry); }
Activates the safety net for a thread. {@link FileSystem} instances obtained by the thread that called this method will be guarded, meaning that their created streams are tracked and can be closed via the safety net closing hook. <p>This method should be called at the beginning of a thread that should be guarded. @throws IllegalStateException Thrown, if a safety net was already registered for the thread.
@Internal public static void closeSafetyNetAndGuardedResourcesForThread() { SafetyNetCloseableRegistry registry = REGISTRIES.get(); if (null != registry) { REGISTRIES.remove(); IOUtils.closeQuietly(registry); } }
Closes the safety net for a thread. This closes all remaining unclosed streams that were opened by safety-net-guarded file systems. After this method was called, no streams can be opened any more from any FileSystem instance that was obtained while the thread was guarded by the safety net. <p>This method should be called at the very end of a guarded thread.
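A minimal lifecycle sketch, assuming the two static methods above live on a holder class such as Flink's FileSystemSafetyNet; runJobLogic() is a hypothetical workload:
Thread guardedWorker = new Thread(() -> {
    FileSystemSafetyNet.initializeSafetyNetForThread();
    try {
        // FileSystem streams opened from here on are tracked by the safety net
        runJobLogic();
    } finally {
        // closes any streams the workload forgot to close
        FileSystemSafetyNet.closeSafetyNetAndGuardedResourcesForThread();
    }
});
guardedWorker.start();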
static FileSystem wrapWithSafetyNetWhenActivated(FileSystem fs) { SafetyNetCloseableRegistry reg = REGISTRIES.get(); return reg != null ? new SafetyNetWrapperFileSystem(fs, reg) : fs; }
------------------------------------------------------------------------
@SafeVarargs @SuppressWarnings("unchecked") public final PythonDataStream union(PythonDataStream... streams) { ArrayList<DataStream<PyObject>> dsList = new ArrayList<>(); for (PythonDataStream ps : streams) { dsList.add(ps.stream); } DataStream<PyObject>[] dsArray = new DataStream[dsList.size()]; return new PythonDataStream(stream.union(dsList.toArray(dsArray))); }
A thin wrapper layer over {@link DataStream#union(DataStream[])}. @param streams The Python DataStreams to union output with. @return The {@link PythonDataStream}.
public PythonSplitStream split(OutputSelector<PyObject> output_selector) throws IOException { return new PythonSplitStream(this.stream.split(new PythonOutputSelector(output_selector))); }
A thin wrapper layer over {@link DataStream#split(OutputSelector)}. @param output_selector The user defined {@link OutputSelector} for directing the tuples. @return The {@link PythonSplitStream}
public PythonSingleOutputStreamOperator filter(FilterFunction<PyObject> filter) throws IOException { return new PythonSingleOutputStreamOperator(stream.filter(new PythonFilterFunction(filter))); }
A thin wrapper layer over {@link DataStream#filter(FilterFunction)}. @param filter The FilterFunction that is called for each element of the DataStream. @return The filtered {@link PythonDataStream}.
public PythonDataStream<SingleOutputStreamOperator<PyObject>> map( MapFunction<PyObject, PyObject> mapper) throws IOException { return new PythonSingleOutputStreamOperator(stream.map(new PythonMapFunction(mapper))); }
A thin wrapper layer over {@link DataStream#map(MapFunction)}. @param mapper The MapFunction that is called for each element of the DataStream. @return The transformed {@link PythonDataStream}.
public PythonDataStream<SingleOutputStreamOperator<PyObject>> flat_map( FlatMapFunction<PyObject, Object> flat_mapper) throws IOException { return new PythonSingleOutputStreamOperator(stream.flatMap(new PythonFlatMapFunction(flat_mapper))); }
A thin wrapper layer over {@link DataStream#flatMap(FlatMapFunction)}. @param flat_mapper The FlatMapFunction that is called for each element of the DataStream @return The transformed {@link PythonDataStream}.
public PythonKeyedStream key_by(KeySelector<PyObject, PyKey> selector) throws IOException { return new PythonKeyedStream(stream.keyBy(new PythonKeySelector(selector))); }
A thin wrapper layer over {@link DataStream#keyBy(KeySelector)}. @param selector The KeySelector to be used for extracting the key for partitioning @return The {@link PythonDataStream} with partitioned state (i.e. {@link PythonKeyedStream})
@PublicEvolving public void write_as_text(String path, WriteMode mode) { stream.writeAsText(path, mode); }
A thin wrapper layer over {@link DataStream#writeAsText(java.lang.String, WriteMode)}. @param path The path pointing to the location the text file is written to @param mode Controls the behavior for existing files. Options are NO_OVERWRITE and OVERWRITE.
@PublicEvolving public void write_to_socket(String host, Integer port, SerializationSchema<PyObject> schema) throws IOException { stream.writeToSocket(host, port, new PythonSerializationSchema(schema)); }
A thin wrapper layer over {@link DataStream#writeToSocket(String, int, org.apache.flink.api.common.serialization.SerializationSchema)}. @param host host of the socket @param port port of the socket @param schema schema for serialization
@PublicEvolving public void add_sink(SinkFunction<PyObject> sink_func) throws IOException { stream.addSink(new PythonSinkFunction(sink_func)); }
A thin wrapper layer over {@link DataStream#addSink(SinkFunction)}. @param sink_func The object containing the sink's invoke function.
public void addHeuristicNetworkCost(double cost) { if (cost <= 0) { throw new IllegalArgumentException("Heuristic costs must be positive."); } this.heuristicNetworkCost += cost; // check for overflow if (this.heuristicNetworkCost < 0) { this.heuristicNetworkCost = Double.MAX_VALUE; } }
Adds the heuristic costs for network to the current heuristic network costs for this Costs object. @param cost The heuristic network cost to add.
public void addHeuristicDiskCost(double cost) { if (cost <= 0) { throw new IllegalArgumentException("Heuristic costs must be positive."); } this.heuristicDiskCost += cost; // check for overflow if (this.heuristicDiskCost < 0) { this.heuristicDiskCost = Double.MAX_VALUE; } }
Adds the heuristic costs for disk to the current heuristic disk costs for this Costs object. @param cost The heuristic disk cost to add.
public void addHeuristicCpuCost(double cost) { if (cost <= 0) { throw new IllegalArgumentException("Heuristic costs must be positive."); } this.heuristicCpuCost += cost; // check for overflow if (this.heuristicCpuCost < 0) { this.heuristicCpuCost = Double.MAX_VALUE; } }
Adds the given heuristic CPU cost to the current heuristic CPU cost for this Costs object. @param cost The heuristic CPU cost to add.
public void addCosts(Costs other) { // ---------- quantifiable costs ---------- if (this.networkCost == UNKNOWN || other.networkCost == UNKNOWN) { this.networkCost = UNKNOWN; } else { this.networkCost += other.networkCost; } if (this.diskCost == UNKNOWN || other.diskCost == UNKNOWN) { this.diskCost = UNKNOWN; } else { this.diskCost += other.diskCost; } if (this.cpuCost == UNKNOWN || other.cpuCost == UNKNOWN) { this.cpuCost = UNKNOWN; } else { this.cpuCost += other.cpuCost; } // ---------- heuristic costs ---------- this.heuristicNetworkCost += other.heuristicNetworkCost; this.heuristicDiskCost += other.heuristicDiskCost; this.heuristicCpuCost += other.heuristicCpuCost; }
Adds the given costs to these costs. If for one of the cost components (network, disk, CPU) the costs are unknown, the resulting costs for that component will be unknown. @param other The costs to add.
public void subtractCosts(Costs other) { if (this.networkCost != UNKNOWN && other.networkCost != UNKNOWN) { this.networkCost -= other.networkCost; if (this.networkCost < 0) { throw new IllegalArgumentException("Cannot subtract more cost than there is."); } } if (this.diskCost != UNKNOWN && other.diskCost != UNKNOWN) { this.diskCost -= other.diskCost; if (this.diskCost < 0) { throw new IllegalArgumentException("Cannot subtract more cost than there is."); } } if (this.cpuCost != UNKNOWN && other.cpuCost != UNKNOWN) { this.cpuCost -= other.cpuCost; if (this.cpuCost < 0) { throw new IllegalArgumentException("Cannot subtract more cost than there is."); } } // ---------- relative costs ---------- this.heuristicNetworkCost -= other.heuristicNetworkCost; if (this.heuristicNetworkCost < 0) { throw new IllegalArgumentException("Cannot subtract more cost than there is."); } this.heuristicDiskCost -= other.heuristicDiskCost; if (this.heuristicDiskCost < 0) { throw new IllegalArgumentException("Cannot subtract more cost than there is."); } this.heuristicCpuCost -= other.heuristicCpuCost; if (this.heuristicCpuCost < 0) { throw new IllegalArgumentException("Cannot subtract more cost than there is."); } }
Subtracts the given costs from these costs. If the given costs are unknown, then these costs remain unchanged. @param other The costs to subtract.
@Override public int compareTo(Costs o) { // check the network cost. if we have actual costs on both, use them, otherwise use the heuristic costs. if (this.networkCost != UNKNOWN && o.networkCost != UNKNOWN) { if (this.networkCost != o.networkCost) { return this.networkCost < o.networkCost ? -1 : 1; } } else if (this.heuristicNetworkCost < o.heuristicNetworkCost) { return -1; } else if (this.heuristicNetworkCost > o.heuristicNetworkCost) { return 1; } // next, check the disk cost. again, if we have actual costs on both, use them, otherwise use the heuristic costs. if (this.diskCost != UNKNOWN && o.diskCost != UNKNOWN) { if (this.diskCost != o.diskCost) { return this.diskCost < o.diskCost ? -1 : 1; } } else if (this.heuristicDiskCost < o.heuristicDiskCost) { return -1; } else if (this.heuristicDiskCost > o.heuristicDiskCost) { return 1; } // next, check the CPU cost. again, if we have actual costs on both, use them, otherwise use the heuristic costs. if (this.cpuCost != UNKNOWN && o.cpuCost != UNKNOWN) { return this.cpuCost < o.cpuCost ? -1 : this.cpuCost > o.cpuCost ? 1 : 0; } else if (this.heuristicCpuCost < o.heuristicCpuCost) { return -1; } else if (this.heuristicCpuCost > o.heuristicCpuCost) { return 1; } else { return 0; } }
The order of comparison is: network first, then disk, then CPU. For each component, the quantifiable costs are compared when they are known on both sides; otherwise the comparison falls back to the heuristic costs. @see java.lang.Comparable#compareTo(java.lang.Object)
@Override public boolean addAll(final int index, final Collection<? extends V> c) { return this.list.addAll(index, c); }
(non-Javadoc) @see java.util.List#addAll(int, java.util.Collection)
@Override public V set(final int index, final V element) { return this.list.set(index, element); }
(non-Javadoc) @see java.util.List#set(int, java.lang.Object)
@Override public List<V> subList(final int fromIndex, final int toIndex) { return this.list.subList(fromIndex, toIndex); }
(non-Javadoc) @see java.util.List#subList(int, int)
public static <T> CompletableFuture<T> getFailedFuture(Throwable throwable) { CompletableFuture<T> failedAttempt = new CompletableFuture<>(); failedAttempt.completeExceptionally(throwable); return failedAttempt; }
Returns a {@link CompletableFuture} that has failed with the exception provided as argument. @param throwable the exception to fail the future with. @return The failed future.
public JoinOperator<I1, I2, OUT> withPartitioner(Partitioner<?> partitioner) { if (partitioner != null) { keys1.validateCustomPartitioner(partitioner, null); keys2.validateCustomPartitioner(partitioner, null); } this.customPartitioner = getInput1().clean(partitioner); return this; }
Sets a custom partitioner for this join. The partitioner will be called on the join keys to determine the partition a key should be assigned to. The partitioner is evaluated on both join inputs in the same way. <p>NOTE: A custom partitioner can only be used with single-field join keys, not with composite join keys. @param partitioner The custom partitioner to be used. @return This join operator, to allow for function chaining.
public BinaryRow append(LookupInfo info, BinaryRow value) throws IOException { try { if (numElements >= growthThreshold) { growAndRehash(); //update info's bucketSegmentIndex and bucketOffset lookup(info.key); } BinaryRow toAppend = hashSetMode ? reusedValue : value; long pointerToAppended = recordArea.appendRecord(info.key, toAppend); bucketSegments.get(info.bucketSegmentIndex).putLong(info.bucketOffset, pointerToAppended); bucketSegments.get(info.bucketSegmentIndex).putInt( info.bucketOffset + ELEMENT_POINT_LENGTH, info.keyHashCode); numElements++; recordArea.setReadPosition(pointerToAppended); recordArea.skipKey(); return recordArea.readValue(reusedValue); } catch (EOFException e) { numSpillFiles++; spillInBytes += recordArea.segments.size() * ((long) segmentSize); throw e; } }
Appends a value to the hash map's record area. @return A BinaryRow mapping to the memory segments in the map's record area that belong to the newly appended value. @throws EOFException if the map cannot allocate more memory.
public void reset() { int numBuckets = bucketSegments.size() * numBucketsPerSegment; this.log2NumBuckets = MathUtils.log2strict(numBuckets); this.numBucketsMask = (1 << MathUtils.log2strict(numBuckets)) - 1; this.numBucketsMask2 = (1 << MathUtils.log2strict(numBuckets >> 1)) - 1; this.growthThreshold = (int) (numBuckets * LOAD_FACTOR); //reset the record segments. recordArea.reset(); resetBucketSegments(bucketSegments); numElements = 0; destructiveIterator = null; LOG.info("reset BytesHashMap with record memory segments {}, {} in bytes, init allocating {} for bucket area.", freeMemorySegments.size(), freeMemorySegments.size() * segmentSize, bucketSegments.size()); }
Resets the map's record and bucket areas' memory segments for reuse.
public static int calculateHeapSize(int memory, org.apache.flink.configuration.Configuration conf) { float memoryCutoffRatio = conf.getFloat(ResourceManagerOptions.CONTAINERIZED_HEAP_CUTOFF_RATIO); int minCutoff = conf.getInteger(ResourceManagerOptions.CONTAINERIZED_HEAP_CUTOFF_MIN); if (memoryCutoffRatio > 1 || memoryCutoffRatio < 0) { throw new IllegalArgumentException("The configuration value '" + ResourceManagerOptions.CONTAINERIZED_HEAP_CUTOFF_RATIO.key() + "' must be between 0 and 1. Value given=" + memoryCutoffRatio); } if (minCutoff > memory) { throw new IllegalArgumentException("The configuration value '" + ResourceManagerOptions.CONTAINERIZED_HEAP_CUTOFF_MIN.key() + "' is higher (" + minCutoff + ") than the requested amount of memory " + memory); } int heapLimit = (int) ((float) memory * memoryCutoffRatio); if (heapLimit < minCutoff) { heapLimit = minCutoff; } return memory - heapLimit; }
Calculates the JVM heap size for a container by subtracting the containerized heap cutoff from the requested container memory. The cutoff is the configured cutoff ratio applied to the container memory, bounded below by the configured minimum cutoff. @param memory The requested container memory in MB. @param conf The configuration to read the cutoff ratio and minimum cutoff from. @return The heap size in MB.
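A worked example of the cutoff arithmetic, with illustrative values only:
int memory = 2048;          // requested container memory in MB
float cutoffRatio = 0.25f;  // containerized.heap-cutoff-ratio
int minCutoff = 600;        // containerized.heap-cutoff-min

int heapLimit = (int) (memory * cutoffRatio); // 512 MB
if (heapLimit < minCutoff) {
    heapLimit = minCutoff;                    // the minimum cutoff wins: 600 MB
}
int heapSize = memory - heapLimit;            // 2048 - 600 = 1448 MB of JVM heap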
static Tuple2<Path, LocalResource> setupLocalResource( FileSystem fs, String appId, Path localSrcPath, Path homedir, String relativeTargetPath) throws IOException { File localFile = new File(localSrcPath.toUri().getPath()); if (localFile.isDirectory()) { throw new IllegalArgumentException("File to copy must not be a directory: " + localSrcPath); } // copy resource to HDFS String suffix = ".flink/" + appId + (relativeTargetPath.isEmpty() ? "" : "/" + relativeTargetPath) + "/" + localSrcPath.getName(); Path dst = new Path(homedir, suffix); LOG.debug("Copying from {} to {}", localSrcPath, dst); fs.copyFromLocalFile(false, true, localSrcPath, dst); // Note: If we used registerLocalResource(FileSystem, Path) here, we would access the remote // file once again which has problems with eventually consistent read-after-write file // systems. Instead, we decide to preserve the modification time at the remote // location because this and the size of the resource will be checked by YARN based on // the values we provide to #registerLocalResource() below. fs.setTimes(dst, localFile.lastModified(), -1); // now create the resource instance LocalResource resource = registerLocalResource(dst, localFile.length(), localFile.lastModified()); return Tuple2.of(dst, resource); }
Copy a local file to a remote file system. @param fs remote filesystem @param appId application ID @param localSrcPath path to the local file @param homedir remote home directory base (will be extended) @param relativeTargetPath relative target path of the file (will be prefixed by the full home directory we set up) @return Tuple of the path to the remote file (usually HDFS) and the registered {@link LocalResource}
public static void deleteApplicationFiles(final Map<String, String> env) { final String applicationFilesDir = env.get(YarnConfigKeys.FLINK_YARN_FILES); if (!StringUtils.isNullOrWhitespaceOnly(applicationFilesDir)) { final org.apache.flink.core.fs.Path path = new org.apache.flink.core.fs.Path(applicationFilesDir); try { final org.apache.flink.core.fs.FileSystem fileSystem = path.getFileSystem(); if (!fileSystem.delete(path, true)) { LOG.error("Deleting yarn application files under {} was unsuccessful.", applicationFilesDir); } } catch (final IOException e) { LOG.error("Could not properly delete yarn application files directory {}.", applicationFilesDir, e); } } else { LOG.debug("No yarn application files directory set. Therefore, cannot clean up the data."); } }
Deletes the YARN application files, e.g., Flink binaries, libraries, etc., from the remote filesystem. @param env The environment variables.
private static LocalResource registerLocalResource( Path remoteRsrcPath, long resourceSize, long resourceModificationTime) { LocalResource localResource = Records.newRecord(LocalResource.class); localResource.setResource(ConverterUtils.getYarnUrlFromURI(remoteRsrcPath.toUri())); localResource.setSize(resourceSize); localResource.setTimestamp(resourceModificationTime); localResource.setType(LocalResourceType.FILE); localResource.setVisibility(LocalResourceVisibility.APPLICATION); return localResource; }
Creates a YARN resource for the remote object at the given location. @param remoteRsrcPath remote location of the resource @param resourceSize size of the resource @param resourceModificationTime last modification time of the resource @return YARN resource
private static void obtainTokenForHBase(Credentials credentials, Configuration conf) throws IOException { if (UserGroupInformation.isSecurityEnabled()) { LOG.info("Attempting to obtain Kerberos security token for HBase"); try { // ---- // Intended call: HBaseConfiguration.addHbaseResources(conf); Class .forName("org.apache.hadoop.hbase.HBaseConfiguration") .getMethod("addHbaseResources", Configuration.class) .invoke(null, conf); // ---- LOG.info("HBase security setting: {}", conf.get("hbase.security.authentication")); if (!"kerberos".equals(conf.get("hbase.security.authentication"))) { LOG.info("HBase has not been configured to use Kerberos."); return; } LOG.info("Obtaining Kerberos security token for HBase"); // ---- // Intended call: Token<AuthenticationTokenIdentifier> token = TokenUtil.obtainToken(conf); Token<?> token = (Token<?>) Class .forName("org.apache.hadoop.hbase.security.token.TokenUtil") .getMethod("obtainToken", Configuration.class) .invoke(null, conf); // ---- if (token == null) { LOG.error("No Kerberos security token for HBase available"); return; } credentials.addToken(token.getService(), token); LOG.info("Added HBase Kerberos security token to credentials."); } catch (ClassNotFoundException | NoSuchMethodException | IllegalAccessException | InvocationTargetException e) { LOG.info("HBase is not available (not packaged with this application): {} : \"{}\".", e.getClass().getSimpleName(), e.getMessage()); } } }
Obtain Kerberos security token for HBase.
public static void addToEnvironment(Map<String, String> environment, String variable, String value) { String val = environment.get(variable); if (val == null) { val = value; } else { val = val + File.pathSeparator + value; } environment.put(StringInterner.weakIntern(variable), StringInterner.weakIntern(val)); }
Copied method from org.apache.hadoop.yarn.util.Apps. It was broken by YARN-1824 (2.4.0) and fixed for 2.4.1 by https://issues.apache.org/jira/browse/YARN-1931
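A hedged usage sketch; repeated calls append to an existing variable using the platform path separator:
Map<String, String> env = new HashMap<>();
addToEnvironment(env, "CLASSPATH", "lib/flink-dist.jar");
addToEnvironment(env, "CLASSPATH", "lib/log4j.jar");
// on Linux, env.get("CLASSPATH") is now "lib/flink-dist.jar:lib/log4j.jar"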
public static Map<String, String> getEnvironmentVariables(String envPrefix, org.apache.flink.configuration.Configuration flinkConfiguration) { Map<String, String> result = new HashMap<>(); for (Map.Entry<String, String> entry: flinkConfiguration.toMap().entrySet()) { if (entry.getKey().startsWith(envPrefix) && entry.getKey().length() > envPrefix.length()) { // remove prefix String key = entry.getKey().substring(envPrefix.length()); result.put(key, entry.getValue()); } } return result; }
Method to extract environment variables from the flinkConfiguration based on the given prefix String. @param envPrefix Prefix for the environment variables key @param flinkConfiguration The Flink config to get the environment variable definition from @return Map of environment variable names to their values
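A hedged usage sketch, assuming the conventional "containerized.taskmanager.env." prefix is used for TaskManager environment variables:
org.apache.flink.configuration.Configuration flinkConf = new org.apache.flink.configuration.Configuration();
flinkConf.setString("containerized.taskmanager.env.JAVA_HOME", "/opt/jdk8");
flinkConf.setString("containerized.taskmanager.env.HADOOP_CONF_DIR", "/etc/hadoop/conf");

Map<String, String> tmEnv = getEnvironmentVariables("containerized.taskmanager.env.", flinkConf);
// tmEnv now contains {JAVA_HOME=/opt/jdk8, HADOOP_CONF_DIR=/etc/hadoop/conf}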
static ContainerLaunchContext createTaskExecutorContext( org.apache.flink.configuration.Configuration flinkConfig, YarnConfiguration yarnConfig, Map<String, String> env, ContaineredTaskManagerParameters tmParams, org.apache.flink.configuration.Configuration taskManagerConfig, String workingDirectory, Class<?> taskManagerMainClass, Logger log) throws Exception { // get and validate all relevant variables String remoteFlinkJarPath = env.get(YarnConfigKeys.FLINK_JAR_PATH); require(remoteFlinkJarPath != null, "Environment variable %s not set", YarnConfigKeys.FLINK_JAR_PATH); String appId = env.get(YarnConfigKeys.ENV_APP_ID); require(appId != null, "Environment variable %s not set", YarnConfigKeys.ENV_APP_ID); String clientHomeDir = env.get(YarnConfigKeys.ENV_CLIENT_HOME_DIR); require(clientHomeDir != null, "Environment variable %s not set", YarnConfigKeys.ENV_CLIENT_HOME_DIR); String shipListString = env.get(YarnConfigKeys.ENV_CLIENT_SHIP_FILES); require(shipListString != null, "Environment variable %s not set", YarnConfigKeys.ENV_CLIENT_SHIP_FILES); String yarnClientUsername = env.get(YarnConfigKeys.ENV_HADOOP_USER_NAME); require(yarnClientUsername != null, "Environment variable %s not set", YarnConfigKeys.ENV_HADOOP_USER_NAME); final String remoteKeytabPath = env.get(YarnConfigKeys.KEYTAB_PATH); final String remoteKeytabPrincipal = env.get(YarnConfigKeys.KEYTAB_PRINCIPAL); final String remoteYarnConfPath = env.get(YarnConfigKeys.ENV_YARN_SITE_XML_PATH); final String remoteKrb5Path = env.get(YarnConfigKeys.ENV_KRB5_PATH); if (log.isDebugEnabled()) { log.debug("TM:remote keytab path obtained {}", remoteKeytabPath); log.debug("TM:remote keytab principal obtained {}", remoteKeytabPrincipal); log.debug("TM:remote yarn conf path obtained {}", remoteYarnConfPath); log.debug("TM:remote krb5 path obtained {}", remoteKrb5Path); } String classPathString = env.get(ENV_FLINK_CLASSPATH); require(classPathString != null, "Environment variable %s not set", YarnConfigKeys.ENV_FLINK_CLASSPATH); //register keytab LocalResource keytabResource = null; if (remoteKeytabPath != null) { log.info("Adding keytab {} to the AM container local resource bucket", remoteKeytabPath); Path keytabPath = new Path(remoteKeytabPath); FileSystem fs = keytabPath.getFileSystem(yarnConfig); keytabResource = registerLocalResource(fs, keytabPath); } //To support Yarn Secure Integration Test Scenario LocalResource yarnConfResource = null; LocalResource krb5ConfResource = null; boolean hasKrb5 = false; if (remoteYarnConfPath != null && remoteKrb5Path != null) { log.info("TM:Adding remoteYarnConfPath {} to the container local resource bucket", remoteYarnConfPath); Path yarnConfPath = new Path(remoteYarnConfPath); FileSystem fs = yarnConfPath.getFileSystem(yarnConfig); yarnConfResource = registerLocalResource(fs, yarnConfPath); log.info("TM:Adding remoteKrb5Path {} to the container local resource bucket", remoteKrb5Path); Path krb5ConfPath = new Path(remoteKrb5Path); fs = krb5ConfPath.getFileSystem(yarnConfig); krb5ConfResource = registerLocalResource(fs, krb5ConfPath); hasKrb5 = true; } // register Flink Jar with remote HDFS final LocalResource flinkJar; { Path remoteJarPath = new Path(remoteFlinkJarPath); FileSystem fs = remoteJarPath.getFileSystem(yarnConfig); flinkJar = registerLocalResource(fs, remoteJarPath); } // register conf with local fs final LocalResource flinkConf; { // write the TaskManager configuration to a local file final File taskManagerConfigFile = new File(workingDirectory, UUID.randomUUID() + 
"-taskmanager-conf.yaml"); log.debug("Writing TaskManager configuration to {}", taskManagerConfigFile.getAbsolutePath()); BootstrapTools.writeConfiguration(taskManagerConfig, taskManagerConfigFile); try { Path homeDirPath = new Path(clientHomeDir); FileSystem fs = homeDirPath.getFileSystem(yarnConfig); flinkConf = setupLocalResource( fs, appId, new Path(taskManagerConfigFile.toURI()), homeDirPath, "").f1; log.debug("Prepared local resource for modified yaml: {}", flinkConf); } finally { try { FileUtils.deleteFileOrDirectory(taskManagerConfigFile); } catch (IOException e) { log.info("Could not delete temporary configuration file " + taskManagerConfigFile.getAbsolutePath() + '.', e); } } } Map<String, LocalResource> taskManagerLocalResources = new HashMap<>(); taskManagerLocalResources.put("flink.jar", flinkJar); taskManagerLocalResources.put("flink-conf.yaml", flinkConf); //To support Yarn Secure Integration Test Scenario if (yarnConfResource != null && krb5ConfResource != null) { taskManagerLocalResources.put(YARN_SITE_FILE_NAME, yarnConfResource); taskManagerLocalResources.put(KRB5_FILE_NAME, krb5ConfResource); } if (keytabResource != null) { taskManagerLocalResources.put(KEYTAB_FILE_NAME, keytabResource); } // prepare additional files to be shipped for (String pathStr : shipListString.split(",")) { if (!pathStr.isEmpty()) { String[] keyAndPath = pathStr.split("="); require(keyAndPath.length == 2, "Invalid entry in ship file list: %s", pathStr); Path path = new Path(keyAndPath[1]); LocalResource resource = registerLocalResource(path.getFileSystem(yarnConfig), path); taskManagerLocalResources.put(keyAndPath[0], resource); } } // now that all resources are prepared, we can create the launch context log.info("Creating container launch context for TaskManagers"); boolean hasLogback = new File(workingDirectory, "logback.xml").exists(); boolean hasLog4j = new File(workingDirectory, "log4j.properties").exists(); String launchCommand = BootstrapTools.getTaskManagerShellCommand( flinkConfig, tmParams, ".", ApplicationConstants.LOG_DIR_EXPANSION_VAR, hasLogback, hasLog4j, hasKrb5, taskManagerMainClass); if (log.isDebugEnabled()) { log.debug("Starting TaskManagers with command: " + launchCommand); } else { log.info("Starting TaskManagers"); } ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class); ctx.setCommands(Collections.singletonList(launchCommand)); ctx.setLocalResources(taskManagerLocalResources); Map<String, String> containerEnv = new HashMap<>(); containerEnv.putAll(tmParams.taskManagerEnv()); // add YARN classpath, etc to the container environment containerEnv.put(ENV_FLINK_CLASSPATH, classPathString); setupYarnClassPath(yarnConfig, containerEnv); containerEnv.put(YarnConfigKeys.ENV_HADOOP_USER_NAME, UserGroupInformation.getCurrentUser().getUserName()); if (remoteKeytabPath != null && remoteKeytabPrincipal != null) { containerEnv.put(YarnConfigKeys.KEYTAB_PATH, remoteKeytabPath); containerEnv.put(YarnConfigKeys.KEYTAB_PRINCIPAL, remoteKeytabPrincipal); } ctx.setEnvironment(containerEnv); // For TaskManager YARN container context, read the tokens from the jobmanager yarn container local file. // NOTE: must read the tokens from the local file, not from the UGI context, because if UGI is login // using Kerberos keytabs, there is no HDFS delegation token in the UGI context. 
final String fileLocation = System.getenv(UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION); if (fileLocation != null) { log.debug("Adding security tokens to TaskExecutor's container launch context."); try (DataOutputBuffer dob = new DataOutputBuffer()) { Method readTokenStorageFileMethod = Credentials.class.getMethod( "readTokenStorageFile", File.class, org.apache.hadoop.conf.Configuration.class); Credentials cred = (Credentials) readTokenStorageFileMethod.invoke( null, new File(fileLocation), HadoopUtils.getHadoopConfiguration(flinkConfig)); cred.writeTokenStorageToStream(dob); ByteBuffer securityTokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength()); ctx.setTokens(securityTokens); } catch (Throwable t) { log.error("Failed to add Hadoop's security tokens.", t); } } else { log.info("Could not set security tokens because Hadoop's token file location is unknown."); } return ctx; }
Creates the launch context, which describes how to bring up a TaskExecutor / TaskManager process in an allocated YARN container. <p>This code is extremely YARN specific and registers all the resources that the TaskExecutor needs (such as JAR file, config file, ...) and all environment variables in a YARN container launch context. The launch context then ensures that those resources will be copied into the containers transient working directory. @param flinkConfig The Flink configuration object. @param yarnConfig The YARN configuration object. @param env The environment variables. @param tmParams The TaskExecutor container memory parameters. @param taskManagerConfig The configuration for the TaskExecutors. @param workingDirectory The current application master container's working directory. @param taskManagerMainClass The class with the main method. @param log The logger. @return The launch context for the TaskManager processes. @throws Exception Thrown if the launch context could not be created, for example if the resources could not be copied.
static void require(boolean condition, String message, Object... values) { if (!condition) { throw new RuntimeException(String.format(message, values)); } }
Validates a condition, throwing a RuntimeException if the condition is violated. @param condition The condition. @param message The message for the runtime exception, with format variables as defined by {@link String#format(String, Object...)}. @param values The format arguments.
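A one-line usage sketch; slots stands for a hypothetical local variable:
require(slots > 0, "Number of task slots must be positive, but was %s", slots);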
private static DataSet<Centroid> getCentroidDataSet(ParameterTool params, ExecutionEnvironment env) { DataSet<Centroid> centroids; if (params.has("centroids")) { centroids = env.readCsvFile(params.get("centroids")) .fieldDelimiter(" ") .pojoType(Centroid.class, "id", "x", "y"); } else { System.out.println("Executing K-Means example with default centroid data set."); System.out.println("Use --centroids to specify file input."); centroids = KMeansData.getDefaultCentroidDataSet(env); } return centroids; }
*************************************************************************
String getLogicalScope(CharacterFilter filter, char delimiter, int reporterIndex) { if (logicalScopeStrings.length == 0 || (reporterIndex < 0 || reporterIndex >= logicalScopeStrings.length)) { return createLogicalScope(filter, delimiter); } else { if (logicalScopeStrings[reporterIndex] == null) { logicalScopeStrings[reporterIndex] = createLogicalScope(filter, delimiter); } return logicalScopeStrings[reporterIndex]; } }
Returns the logical scope of this group, for example {@code "taskmanager.job.task"}. @param filter character filter which is applied to the scope components @param delimiter delimiter to use for concatenating scope components @param reporterIndex index of the reporter @return logical scope
public QueryScopeInfo getQueryServiceMetricInfo(CharacterFilter filter) { if (queryServiceScopeInfo == null) { queryServiceScopeInfo = createQueryServiceMetricInfo(filter); } return queryServiceScopeInfo; }
Returns the metric query service scope for this group. @param filter character filter @return query service scope
public String getMetricIdentifier(String metricName, CharacterFilter filter, int reporterIndex) { if (scopeStrings.length == 0 || (reporterIndex < 0 || reporterIndex >= scopeStrings.length)) { char delimiter = registry.getDelimiter(); String newScopeString; if (filter != null) { newScopeString = ScopeFormat.concat(filter, delimiter, scopeComponents); metricName = filter.filterCharacters(metricName); } else { newScopeString = ScopeFormat.concat(delimiter, scopeComponents); } return newScopeString + delimiter + metricName; } else { char delimiter = registry.getDelimiter(reporterIndex); if (scopeStrings[reporterIndex] == null) { if (filter != null) { scopeStrings[reporterIndex] = ScopeFormat.concat(filter, delimiter, scopeComponents); } else { scopeStrings[reporterIndex] = ScopeFormat.concat(delimiter, scopeComponents); } } if (filter != null) { metricName = filter.filterCharacters(metricName); } return scopeStrings[reporterIndex] + delimiter + metricName; } }
Returns the fully qualified metric name using the configured delimiter for the reporter with the given index, for example {@code "host-7.taskmanager-2.window_word_count.my-mapper.metricName"}. @param metricName metric name @param filter character filter which is applied to the scope components if not null. @param reporterIndex index of the reporter whose delimiter should be used @return fully qualified metric name
public void close() { synchronized (this) { if (!closed) { closed = true; // close all subgroups for (AbstractMetricGroup group : groups.values()) { group.close(); } groups.clear(); // un-register all directly contained metrics for (Map.Entry<String, Metric> metric : metrics.entrySet()) { registry.unregister(metric.getValue(), metric.getKey(), this); } metrics.clear(); } } }
------------------------------------------------------------------------
protected void addMetric(String name, Metric metric) { if (metric == null) { LOG.warn("Ignoring attempted registration of a metric due to being null for name {}.", name); return; } // add the metric only if the group is still open synchronized (this) { if (!closed) { // immediately put without a 'contains' check to optimize the common case (no collision) // collisions are resolved later Metric prior = metrics.put(name, metric); // check for collisions with other metric names if (prior == null) { // no other metric with this name yet if (groups.containsKey(name)) { // we warn here, rather than failing, because metrics are tools that should not fail the // program when used incorrectly LOG.warn("Name collision: Adding a metric with the same name as a metric subgroup: '" + name + "'. Metric might not get properly reported. " + Arrays.toString(scopeComponents)); } registry.register(metric, name, this); } else { // we had a collision. put back the original value metrics.put(name, prior); // we warn here, rather than failing, because metrics are tools that should not fail the // program when used incorrectly LOG.warn("Name collision: Group already contains a Metric with the name '" + name + "'. Metric will not be reported." + Arrays.toString(scopeComponents)); } } } }
Adds the given metric to the group and registers it at the registry, if the group is not yet closed, and if no metric with the same name has been registered before. @param name the name to register the metric under @param metric the metric to register
@Override public MetricGroup addGroup(int name) { return addGroup(String.valueOf(name), ChildType.GENERIC); }
------------------------------------------------------------------------
public final void read(InputStream inputStream) throws IOException { byte[] tmp = new byte[VERSIONED_IDENTIFIER.length]; inputStream.read(tmp); if (Arrays.equals(tmp, VERSIONED_IDENTIFIER)) { DataInputView inputView = new DataInputViewStreamWrapper(inputStream); super.read(inputView); read(inputView, true); } else { PushbackInputStream resetStream = new PushbackInputStream(inputStream, VERSIONED_IDENTIFIER.length); resetStream.unread(tmp); read(new DataInputViewStreamWrapper(resetStream), false); } }
This read attempts to first identify if the input view contains the special {@link #VERSIONED_IDENTIFIER} by reading and buffering the first few bytes. If identified to be versioned, the usual version resolution read path in {@link VersionedIOReadableWritable#read(DataInputView)} is invoked. Otherwise, we "reset" the input stream by pushing back the read buffered bytes into the stream.
private static Calendar valueAsCalendar(Object value) { Date date = (Date) value; Calendar cal = Calendar.getInstance(); cal.setTime(date); return cal; }
Convert a Date value to a Calendar. Calcite's fromCalendarField functions use the Calendar.get methods, so the raw values of the individual fields are preserved when converted to the String formats. @return the Calendar value
@SuppressWarnings("unchecked") public static <V> Optional<V> extractValue(Expression expr, TypeInformation<V> type) { if (expr instanceof ValueLiteralExpression) { final ValueLiteralExpression valueLiteral = (ValueLiteralExpression) expr; if (valueLiteral.getType().equals(type)) { return Optional.of((V) valueLiteral.getValue()); } } return Optional.empty(); }
Extracts value of given type from expression assuming it is a {@link ValueLiteralExpression}. @param expr literal to extract the value from @param type expected type to extract from the literal @param <V> type of extracted value @return extracted value or empty if could not extract value of given type
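A hedged usage sketch; limitExpr stands for a hypothetical Expression taken from a parsed predicate:
Optional<Long> limit = extractValue(limitExpr, Types.LONG);
long effectiveLimit = limit.orElse(Long.MAX_VALUE); // fall back if the literal is absent or of another type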
public static boolean isFunctionOfType(Expression expr, FunctionDefinition.Type type) { return expr instanceof CallExpression && ((CallExpression) expr).getFunctionDefinition().getType() == type; }
Checks if the expression is a function call of given type. @param expr expression to check @param type expected type of function @return true if the expression is function call of given type, false otherwise
void setOffsetsToCommit( Map<TopicPartition, OffsetAndMetadata> offsetsToCommit, @Nonnull KafkaCommitCallback commitCallback) { // record the work to be committed by the main consumer thread and make sure the consumer notices that if (nextOffsetsToCommit.getAndSet(Tuple2.of(offsetsToCommit, commitCallback)) != null) { log.warn("Committing offsets to Kafka takes longer than the checkpoint interval. " + "Skipping commit of previous offsets because newer complete checkpoint offsets are available. " + "This does not compromise Flink's checkpoint integrity."); } // if the consumer is blocked in a poll() or handover operation, wake it up to commit soon handover.wakeupProducer(); synchronized (consumerReassignmentLock) { if (consumer != null) { consumer.wakeup(); } else { // the consumer is currently isolated for partition reassignment; // set this flag so that the wakeup state is restored once the reassignment is complete hasBufferedWakeup = true; } } }
Tells this thread to commit a set of offsets. This method does not block, the committing operation will happen asynchronously. <p>Only one commit operation may be pending at any time. If the committing takes longer than the frequency with which this method is called, then some commits may be skipped due to being superseded by newer ones. @param offsetsToCommit The offsets to commit @param commitCallback callback when Kafka commit completes
@VisibleForTesting protected ConsumerRecords<byte[], byte[]> getRecordsFromKafka() { ConsumerRecords<byte[], byte[]> records = consumer.poll(pollTimeout); if (rateLimiter != null) { int bytesRead = getRecordBatchSize(records); rateLimiter.acquire(bytesRead); } return records; }
Get records from Kafka. If the rate-limiting feature is turned on, this method is called at a rate specified by the {@link #rateLimiter}. @return ConsumerRecords
private static List<TopicPartition> convertKafkaPartitions(List<KafkaTopicPartitionState<TopicPartition>> partitions) { ArrayList<TopicPartition> result = new ArrayList<>(partitions.size()); for (KafkaTopicPartitionState<TopicPartition> p : partitions) { result.add(p.getKafkaPartitionHandle()); } return result; }
------------------------------------------------------------------------
private static String stripHostname(final String originalHostname) { // Check if the hostname contains the domain separator character final int index = originalHostname.indexOf(DOMAIN_SEPARATOR); if (index == -1) { return originalHostname; } // Make sure we are not stripping an IPv4 address final Matcher matcher = IPV4_PATTERN.matcher(originalHostname); if (matcher.matches()) { return originalHostname; } if (index == 0) { throw new IllegalStateException("Hostname " + originalHostname + " starts with a " + DOMAIN_SEPARATOR); } return originalHostname.substring(0, index); }
Looks for a domain suffix in a FQDN and strips it if present. @param originalHostname the original hostname, possibly an FQDN @return the stripped hostname without the domain suffix
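Illustrative inputs and outputs (hedged; the method is private, the expected results are shown as comments):
String a = stripHostname("worker-3.prod.example.com"); // "worker-3"        (domain suffix removed)
String b = stripHostname("worker-3");                  // "worker-3"        (no separator, unchanged)
String c = stripHostname("192.168.10.42");             // "192.168.10.42"   (IPv4 addresses are left intact)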
public StreamStateHandle closeAndGetSecondaryHandle() throws IOException { if (secondaryStreamException == null) { flushInternalBuffer(); return secondaryOutputStream.closeAndGetHandle(); } else { throw new IOException("Secondary stream previously failed exceptionally", secondaryStreamException); } }
Returns the state handle from the {@link #secondaryOutputStream}. Also reports suppressed exceptions from earlier interactions with that stream.
public synchronized TaskManagerMetricStore getTaskManagerMetricStore(String tmID) { return tmID == null ? null : TaskManagerMetricStore.unmodifiable(taskManagers.get(tmID)); }
Returns the {@link TaskManagerMetricStore} for the given taskmanager ID. @param tmID taskmanager ID @return TaskManagerMetricStore for the given ID, or null if no store for the given argument exists
public synchronized ComponentMetricStore getJobMetricStore(String jobID) { return jobID == null ? null : ComponentMetricStore.unmodifiable(jobs.get(jobID)); }
Returns the {@link ComponentMetricStore} for the given job ID. @param jobID job ID @return ComponentMetricStore for the given ID, or null if no store for the given argument exists
public synchronized TaskMetricStore getTaskMetricStore(String jobID, String taskID) { JobMetricStore job = jobID == null ? null : jobs.get(jobID); if (job == null || taskID == null) { return null; } return TaskMetricStore.unmodifiable(job.getTaskMetricStore(taskID)); }
Returns the {@link ComponentMetricStore} for the given job/task ID. @param jobID job ID @param taskID task ID @return ComponentMetricStore for given IDs, or null if no store for the given arguments exists
public synchronized ComponentMetricStore getSubtaskMetricStore(String jobID, String taskID, int subtaskIndex) { JobMetricStore job = jobID == null ? null : jobs.get(jobID); if (job == null) { return null; } TaskMetricStore task = job.getTaskMetricStore(taskID); if (task == null) { return null; } return ComponentMetricStore.unmodifiable(task.getSubtaskMetricStore(subtaskIndex)); }
Returns the {@link ComponentMetricStore} for the given job/task ID and subtask index. @param jobID job ID @param taskID task ID @param subtaskIndex subtask index @return SubtaskMetricStore for the given IDs and index, or null if no store for the given arguments exists
@Override public BufferOrEvent getNextNonBlocked() throws Exception { while (true) { // process buffered BufferOrEvents before grabbing new ones Optional<BufferOrEvent> next; if (currentBuffered == null) { next = inputGate.getNextBufferOrEvent(); } else { next = Optional.ofNullable(currentBuffered.getNext()); if (!next.isPresent()) { completeBufferedSequence(); return getNextNonBlocked(); } } if (!next.isPresent()) { if (!endOfStream) { // end of input stream. stream continues with the buffered data endOfStream = true; releaseBlocksAndResetBarriers(); return getNextNonBlocked(); } else { // final end of both input and buffered data return null; } } BufferOrEvent bufferOrEvent = next.get(); if (isBlocked(bufferOrEvent.getChannelIndex())) { // if the channel is blocked, we just store the BufferOrEvent bufferBlocker.add(bufferOrEvent); checkSizeLimit(); } else if (bufferOrEvent.isBuffer()) { return bufferOrEvent; } else if (bufferOrEvent.getEvent().getClass() == CheckpointBarrier.class) { if (!endOfStream) { // process barriers only if there is a chance of the checkpoint completing processBarrier((CheckpointBarrier) bufferOrEvent.getEvent(), bufferOrEvent.getChannelIndex()); } } else if (bufferOrEvent.getEvent().getClass() == CancelCheckpointMarker.class) { processCancellationBarrier((CancelCheckpointMarker) bufferOrEvent.getEvent()); } else { if (bufferOrEvent.getEvent().getClass() == EndOfPartitionEvent.class) { processEndOfPartition(); } return bufferOrEvent; } } }
------------------------------------------------------------------------
private void onBarrier(int channelIndex) throws IOException { if (!blockedChannels[channelIndex]) { blockedChannels[channelIndex] = true; numBarriersReceived++; if (LOG.isDebugEnabled()) { LOG.debug("{}: Received barrier from channel {}.", inputGate.getOwningTaskName(), channelIndex); } } else { throw new IOException("Stream corrupt: Repeated barrier for same checkpoint on input " + channelIndex); } }
Blocks the given channel index, from which a barrier has been received. @param channelIndex The channel index to block.
private void releaseBlocksAndResetBarriers() throws IOException { LOG.debug("{}: End of stream alignment, feeding buffered data back.", inputGate.getOwningTaskName()); for (int i = 0; i < blockedChannels.length; i++) { blockedChannels[i] = false; } if (currentBuffered == null) { // common case: no more buffered data currentBuffered = bufferBlocker.rollOverReusingResources(); if (currentBuffered != null) { currentBuffered.open(); } } else { // uncommon case: buffered data pending // push back the pending data, if we have any LOG.debug("{}: Checkpoint skipped via buffered data: " + "Pushing back current alignment buffers and feeding back new alignment data first.", inputGate.getOwningTaskName()); // since we did not fully drain the previous sequence, we need to allocate a new buffer for this one BufferOrEventSequence bufferedNow = bufferBlocker.rollOverWithoutReusingResources(); if (bufferedNow != null) { bufferedNow.open(); queuedBuffered.addFirst(currentBuffered); numQueuedBytes += currentBuffered.size(); currentBuffered = bufferedNow; } } if (LOG.isDebugEnabled()) { LOG.debug("{}: Size of buffered data: {} bytes", inputGate.getOwningTaskName(), currentBuffered == null ? 0L : currentBuffered.size()); } // the next barrier that comes must assume it is the first numBarriersReceived = 0; if (startOfAlignmentTimestamp > 0) { latestAlignmentDurationNanos = System.nanoTime() - startOfAlignmentTimestamp; startOfAlignmentTimestamp = 0; } }
Releases the blocks on all channels and resets the barrier count. Makes sure the just written data is the next to be consumed.
@Override public void processElement1(StreamRecord<T1> record) throws Exception { processElement(record, leftBuffer, rightBuffer, lowerBound, upperBound, true); }
Process a {@link StreamRecord} from the left stream. Whenever an {@link StreamRecord} arrives at the left stream, it will get added to the left buffer. Possible join candidates for that element will be looked up from the right buffer and if the pair lies within the user defined boundaries, it gets passed to the {@link ProcessJoinFunction}. @param record An incoming record to be joined @throws Exception Can throw an Exception during state access
@Override public void processElement2(StreamRecord<T2> record) throws Exception { processElement(record, rightBuffer, leftBuffer, -upperBound, -lowerBound, false); }
Process a {@link StreamRecord} from the right stream. Whenever a {@link StreamRecord} arrives at the right stream, it will get added to the right buffer. Possible join candidates for that element will be looked up from the left buffer and if the pair lies within the user defined boundaries, it gets passed to the {@link ProcessJoinFunction}. @param record An incoming record to be joined @throws Exception Can throw an exception during state access
public static void initDefaultsFromConfiguration(Configuration configuration) { final boolean overwrite = configuration.getBoolean(CoreOptions.FILESYTEM_DEFAULT_OVERRIDE); DEFAULT_WRITE_MODE = overwrite ? WriteMode.OVERWRITE : WriteMode.NO_OVERWRITE; final boolean alwaysCreateDirectory = configuration.getBoolean(CoreOptions.FILESYSTEM_OUTPUT_ALWAYS_CREATE_DIRECTORY); DEFAULT_OUTPUT_DIRECTORY_MODE = alwaysCreateDirectory ? OutputDirectoryMode.ALWAYS : OutputDirectoryMode.PARONLY; }
Initialize defaults for output format. Needs to be a static method because it is configured for local cluster execution. @param configuration The configuration to load defaults from
@Override public void configure(Configuration parameters) { // get the output file path, if it was not yet set if (this.outputFilePath == null) { // get the file parameter String filePath = parameters.getString(FILE_PARAMETER_KEY, null); if (filePath == null) { throw new IllegalArgumentException("The output path has been specified neither via constructor/setters" + ", nor via the Configuration."); } try { this.outputFilePath = new Path(filePath); } catch (RuntimeException rex) { throw new RuntimeException("Could not create a valid URI from the given file path name: " + rex.getMessage()); } } // check if have not been set and use the defaults in that case if (this.writeMode == null) { this.writeMode = DEFAULT_WRITE_MODE; } if (this.outputDirectoryMode == null) { this.outputDirectoryMode = DEFAULT_OUTPUT_DIRECTORY_MODE; } }
----------------------------------------------------------------
@Override public void initializeGlobal(int parallelism) throws IOException { final Path path = getOutputFilePath(); final FileSystem fs = path.getFileSystem(); // only distributed file systems can be initialized at start-up time. if (fs.isDistributedFS()) { final WriteMode writeMode = getWriteMode(); final OutputDirectoryMode outDirMode = getOutputDirectoryMode(); if (parallelism == 1 && outDirMode == OutputDirectoryMode.PARONLY) { // output is not written in parallel and should be written to a single file. // prepare distributed output path if(!fs.initOutPathDistFS(path, writeMode, false)) { // output preparation failed! Cancel task. throw new IOException("Output path could not be initialized."); } } else { // output should be written to a directory // only distributed file systems can be initialized at start-up time. if(!fs.initOutPathDistFS(path, writeMode, true)) { throw new IOException("Output directory could not be created."); } } } }
Initialization of the distributed file system if it is used. @param parallelism The task parallelism.
public static ByteBuffer toSerializedEvent(AbstractEvent event) throws IOException { final Class<?> eventClass = event.getClass(); if (eventClass == EndOfPartitionEvent.class) { return ByteBuffer.wrap(new byte[] { 0, 0, 0, END_OF_PARTITION_EVENT }); } else if (eventClass == CheckpointBarrier.class) { return serializeCheckpointBarrier((CheckpointBarrier) event); } else if (eventClass == EndOfSuperstepEvent.class) { return ByteBuffer.wrap(new byte[] { 0, 0, 0, END_OF_SUPERSTEP_EVENT }); } else if (eventClass == CancelCheckpointMarker.class) { CancelCheckpointMarker marker = (CancelCheckpointMarker) event; ByteBuffer buf = ByteBuffer.allocate(12); buf.putInt(0, CANCEL_CHECKPOINT_MARKER_EVENT); buf.putLong(4, marker.getCheckpointId()); return buf; } else { try { final DataOutputSerializer serializer = new DataOutputSerializer(128); serializer.writeInt(OTHER_EVENT); serializer.writeUTF(event.getClass().getName()); event.write(serializer); return serializer.wrapAsByteBuffer(); } catch (IOException e) { throw new IOException("Error while serializing event.", e); } } }
------------------------------------------------------------------------
private static boolean isEvent(ByteBuffer buffer, Class<?> eventClass) throws IOException { if (buffer.remaining() < 4) { throw new IOException("Incomplete event"); } final int bufferPos = buffer.position(); final ByteOrder bufferOrder = buffer.order(); buffer.order(ByteOrder.BIG_ENDIAN); try { int type = buffer.getInt(); if (eventClass.equals(EndOfPartitionEvent.class)) { return type == END_OF_PARTITION_EVENT; } else if (eventClass.equals(CheckpointBarrier.class)) { return type == CHECKPOINT_BARRIER_EVENT; } else if (eventClass.equals(EndOfSuperstepEvent.class)) { return type == END_OF_SUPERSTEP_EVENT; } else if (eventClass.equals(CancelCheckpointMarker.class)) { return type == CANCEL_CHECKPOINT_MARKER_EVENT; } else { throw new UnsupportedOperationException("Unsupported eventClass = " + eventClass); } } finally { buffer.order(bufferOrder); // restore the original position in the buffer (recall: we only peek into it!) buffer.position(bufferPos); } }
Identifies whether the given buffer encodes the given event. Custom events are not supported. <p><strong>Pre-condition</strong>: This buffer must encode some event!</p> @param buffer the buffer to peek into @param eventClass the expected class of the event type @return whether the event class of the <tt>buffer</tt> matches the given <tt>eventClass</tt>
public static Buffer toBuffer(AbstractEvent event) throws IOException { final ByteBuffer serializedEvent = EventSerializer.toSerializedEvent(event); MemorySegment data = MemorySegmentFactory.wrap(serializedEvent.array()); final Buffer buffer = new NetworkBuffer(data, FreeingBufferRecycler.INSTANCE, false); buffer.setSize(serializedEvent.remaining()); return buffer; }
------------------------------------------------------------------------
public static boolean isEvent(Buffer buffer, Class<?> eventClass) throws IOException { return !buffer.isBuffer() && isEvent(buffer.getNioBufferReadable(), eventClass); }
Identifies whether the given buffer encodes the given event. Custom events are not supported. @param buffer the buffer to peek into @param eventClass the expected class of the event type @return whether the event class of the <tt>buffer</tt> matches the given <tt>eventClass</tt>
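For illustration, a minimal sketch of how toBuffer and isEvent might be used together. It assumes the EventSerializer methods above are visible to the caller and that EndOfPartitionEvent exposes a singleton INSTANCE, as is common for stateless events.

// Sketch: round-trip an event through the serializer and check its type.
Buffer buffer = EventSerializer.toBuffer(EndOfPartitionEvent.INSTANCE);

// The buffer is an event buffer (not a data buffer), so isEvent may inspect it.
boolean isEndOfPartition = EventSerializer.isEvent(buffer, EndOfPartitionEvent.class);   // true
boolean isCheckpointBarrier = EventSerializer.isEvent(buffer, CheckpointBarrier.class);  // false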
public static String stringifyException(final Throwable e) { if (e == null) { return STRINGIFIED_NULL_EXCEPTION; } try { StringWriter stm = new StringWriter(); PrintWriter wrt = new PrintWriter(stm); e.printStackTrace(wrt); wrt.close(); return stm.toString(); } catch (Throwable t) { return e.getClass().getName() + " (error while printing stack trace)"; } }
Makes a string representation of the exception's stack trace, or "(null)", if the exception is null. <p>This method makes a best effort and never fails. @param e The exception to stringify. @return A string with exception name and call stack.
public static <T extends Throwable> T firstOrSuppressed(T newException, @Nullable T previous) { checkNotNull(newException, "newException"); if (previous == null) { return newException; } else { previous.addSuppressed(newException); return previous; } }
Adds a new exception as a {@link Throwable#addSuppressed(Throwable) suppressed exception} to a prior exception, or returns the new exception, if no prior exception exists. <pre>{@code public void closeAllThings() throws Exception { Exception ex = null; try { component.shutdown(); } catch (Exception e) { ex = firstOrSuppressed(e, ex); } try { anotherComponent.stop(); } catch (Exception e) { ex = firstOrSuppressed(e, ex); } try { lastComponent.shutdown(); } catch (Exception e) { ex = firstOrSuppressed(e, ex); } if (ex != null) { throw ex; } } }</pre> @param newException The newly occurred exception @param previous The previously occurred exception, possibly null. @return The new exception, if no previous exception exists, or the previous exception with the new exception in the list of suppressed exceptions.
public static void rethrow(Throwable t, String parentMessage) { if (t instanceof Error) { throw (Error) t; } else if (t instanceof RuntimeException) { throw (RuntimeException) t; } else { throw new RuntimeException(parentMessage, t); } }
Throws the given {@code Throwable} in scenarios where the signatures do not allow you to throw an arbitrary Throwable. Errors and RuntimeExceptions are thrown directly, other exceptions are packed into a parent RuntimeException. @param t The throwable to be thrown. @param parentMessage The message for the parent RuntimeException, if one is needed.
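A small usage sketch: the doWork call is hypothetical, and the utility class is assumed here to be called ExceptionUtils. It shows how rethrow lets a checked exception escape a Runnable, whose run() signature declares no checked exceptions.

// Sketch: propagate a checked exception out of a Runnable.
Runnable task = () -> {
    try {
        doWork(); // hypothetical method declared to throw a checked exception
    } catch (Exception e) {
        ExceptionUtils.rethrow(e, "Background task failed");
    }
};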
public static void rethrowException(Throwable t, String parentMessage) throws Exception { if (t instanceof Error) { throw (Error) t; } else if (t instanceof Exception) { throw (Exception) t; } else { throw new Exception(parentMessage, t); } }
Throws the given {@code Throwable} in scenarios where the signatures allow throwing an Exception. Errors and Exceptions are thrown directly, other "exotic" subclasses of Throwable are wrapped in an Exception. @param t The throwable to be thrown. @param parentMessage The message for the parent Exception, if one is needed.
public static void rethrowException(Throwable t) throws Exception { if (t instanceof Error) { throw (Error) t; } else if (t instanceof Exception) { throw (Exception) t; } else { throw new Exception(t.getMessage(), t); } }
Throws the given {@code Throwable} in scenarios where the signatures allow throwing an Exception. Errors and Exceptions are thrown directly, other "exotic" subclasses of Throwable are wrapped in an Exception. @param t The throwable to be thrown.
public static void tryRethrowIOException(Throwable t) throws IOException { if (t instanceof IOException) { throw (IOException) t; } else if (t instanceof RuntimeException) { throw (RuntimeException) t; } else if (t instanceof Error) { throw (Error) t; } }
Tries to throw the given {@code Throwable} in scenarios where the signatures allow only IOExceptions (and RuntimeException and Error). Throws this exception directly, if it is an IOException, a RuntimeException, or an Error. Otherwise does nothing. @param t The Throwable to be thrown.
public static void rethrowIOException(Throwable t) throws IOException { if (t instanceof IOException) { throw (IOException) t; } else if (t instanceof RuntimeException) { throw (RuntimeException) t; } else if (t instanceof Error) { throw (Error) t; } else { throw new IOException(t.getMessage(), t); } }
Re-throws the given {@code Throwable} in scenarios where the signatures allow only IOExceptions (and RuntimeException and Error). <p>Throws this exception directly, if it is an IOException, a RuntimeException, or an Error. Otherwise it wraps it in an IOException and throws it. @param t The Throwable to be thrown.
public static <T extends Throwable> Optional<T> findThrowable(Throwable throwable, Class<T> searchType) { if (throwable == null || searchType == null) { return Optional.empty(); } Throwable t = throwable; while (t != null) { if (searchType.isAssignableFrom(t.getClass())) { return Optional.of(searchType.cast(t)); } else { t = t.getCause(); } } return Optional.empty(); }
Checks whether a throwable chain contains a specific type of exception and returns it. @param throwable the throwable chain to check. @param searchType the type of exception to search for in the chain. @return Optional throwable of the requested type if available, otherwise empty
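A short usage sketch, again assuming the utility class is called ExceptionUtils:

// Sketch: look for an IOException anywhere in a wrapped cause chain.
Throwable failure = new RuntimeException("job failed",
        new IllegalStateException(new java.io.IOException("disk full")));

java.util.Optional<java.io.IOException> ioCause =
        ExceptionUtils.findThrowable(failure, java.io.IOException.class);

ioCause.ifPresent(e -> System.out.println("I/O cause: " + e.getMessage())); // prints "disk full"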
public static Optional<Throwable> findThrowable(Throwable throwable, Predicate<Throwable> predicate) { if (throwable == null || predicate == null) { return Optional.empty(); } Throwable t = throwable; while (t != null) { if (predicate.test(t)) { return Optional.of(t); } else { t = t.getCause(); } } return Optional.empty(); }
Checks whether a throwable chain contains an exception matching a predicate and returns it. @param throwable the throwable chain to check. @param predicate the predicate of the exception to search for in the chain. @return Optional throwable of the requested type if available, otherwise empty
public static Optional<Throwable> findThrowableWithMessage(Throwable throwable, String searchMessage) { if (throwable == null || searchMessage == null) { return Optional.empty(); } Throwable t = throwable; while (t != null) { if (t.getMessage() != null && t.getMessage().contains(searchMessage)) { return Optional.of(t); } else { t = t.getCause(); } } return Optional.empty(); }
Checks whether a throwable chain contains a specific error message and returns the corresponding throwable. @param throwable the throwable chain to check. @param searchMessage the error message to search for in the chain. @return Optional throwable containing the search message if available, otherwise empty
public static Throwable stripException(Throwable throwableToStrip, Class<? extends Throwable> typeToStrip) { while (typeToStrip.isAssignableFrom(throwableToStrip.getClass()) && throwableToStrip.getCause() != null) { throwableToStrip = throwableToStrip.getCause(); } return throwableToStrip; }
Unpacks the specified exception type and returns its cause; otherwise the given {@link Throwable} is returned. @param throwableToStrip to strip @param typeToStrip type to strip @return Unpacked cause or given Throwable if not packed
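A usage sketch, assuming the utility class is called ExceptionUtils:

// Sketch: peel off wrapping ExecutionExceptions to reach the root failure.
Throwable wrapped = new java.util.concurrent.ExecutionException(
        new java.util.concurrent.ExecutionException(
                new IllegalArgumentException("bad input")));

Throwable root = ExceptionUtils.stripException(
        wrapped, java.util.concurrent.ExecutionException.class);

// root is now the IllegalArgumentException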
public static void tryDeserializeAndThrow(Throwable throwable, ClassLoader classLoader) throws Throwable { Throwable current = throwable; while (!(current instanceof SerializedThrowable) && current.getCause() != null) { current = current.getCause(); } if (current instanceof SerializedThrowable) { throw ((SerializedThrowable) current).deserializeError(classLoader); } else { throw throwable; } }
Tries to find a {@link SerializedThrowable} as the cause of the given throwable and throws its deserialized value. If there is no such throwable, then the original throwable is thrown. @param throwable to check for a SerializedThrowable @param classLoader to be used for the deserialization of the SerializedThrowable @throws Throwable either the deserialized throwable or the given throwable
public static void suppressExceptions(RunnableWithException action) { try { action.run(); } catch (InterruptedException e) { // restore interrupted state Thread.currentThread().interrupt(); } catch (Throwable t) { if (isJvmFatalError(t)) { rethrow(t); } } }
------------------------------------------------------------------------
private static S3Recoverable castToS3Recoverable(CommitRecoverable recoverable) { if (recoverable instanceof S3Recoverable) { return (S3Recoverable) recoverable; } throw new IllegalArgumentException( "S3 File System cannot recover recoverable for other file system: " + recoverable); }
--------------------------- Utils ---------------------------
public static S3RecoverableWriter writer( final FileSystem fs, final FunctionWithException<File, RefCountedFile, IOException> tempFileCreator, final S3AccessHelper s3AccessHelper, final Executor uploadThreadPool, final long userDefinedMinPartSize, final int maxConcurrentUploadsPerStream) { checkArgument(userDefinedMinPartSize >= S3_MULTIPART_MIN_PART_SIZE); final S3RecoverableMultipartUploadFactory uploadFactory = new S3RecoverableMultipartUploadFactory( fs, s3AccessHelper, maxConcurrentUploadsPerStream, uploadThreadPool, tempFileCreator); return new S3RecoverableWriter(s3AccessHelper, uploadFactory, tempFileCreator, userDefinedMinPartSize); }
--------------------------- Static Constructor ---------------------------
public static int optimalNumOfBits(long inputEntries, double fpp) { int numBits = (int) (-inputEntries * Math.log(fpp) / (Math.log(2) * Math.log(2))); return numBits; }
Computes the optimal number of bits for the given number of input entries and the expected false positive probability. @param inputEntries expected number of entries to be inserted @param fpp expected false positive probability @return optimal number of bits
public static double estimateFalsePositiveProbability(long inputEntries, int bitSize) { int numFunction = optimalNumOfHashFunctions(inputEntries, bitSize); double p = Math.pow(Math.E, -(double) numFunction * inputEntries / bitSize); double estimatedFPP = Math.pow(1 - p, numFunction); return estimatedFPP; }
Computes the expected false positive probability for the given number of input entries and bit size. Note: this is only the mathematically expected value; the false positive probability observed in a real case is not guaranteed to stay below the returned value. @param inputEntries expected number of entries to be inserted @param bitSize size of the bloom filter in bits @return expected false positive probability
static int optimalNumOfHashFunctions(long expectEntries, long bitSize) { return Math.max(1, (int) Math.round((double) bitSize / expectEntries * Math.log(2))); }
Computes the optimal number of hash functions for the given number of input entries and bit size, i.e. the number that minimizes the false positive probability. @param expectEntries expected number of entries to be inserted @param bitSize size of the bloom filter in bits @return optimal number of hash functions
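A worked sizing sketch tying the three helpers above together; the numbers in the comments are approximate and purely illustrative.

// Sketch: size a Bloom filter for ~1,000,000 entries at a 3% target FPP.
long entries = 1_000_000L;
double fpp = 0.03;

int bits = optimalNumOfBits(entries, fpp);                          // ~7.3 million bits (~0.9 MB)
int hashes = optimalNumOfHashFunctions(entries, bits);              // ~5 hash functions
double estimate = estimateFalsePositiveProbability(entries, bits);  // ~0.03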
@Override protected List<IN> executeOnCollections(List<IN> inputData, RuntimeContext runtimeContext, ExecutionConfig executionConfig) { return inputData; }
--------------------------------------------------------------------------------------------
public static long getTimestampMillis(Binary timestampBinary) { if (timestampBinary.length() != 12) { throw new IllegalArgumentException("Parquet timestamp must be 12 bytes, actual " + timestampBinary.length()); } byte[] bytes = timestampBinary.getBytes(); // little endian encoding - need to invert byte order long timeOfDayNanos = ByteBuffer.wrap(new byte[] {bytes[7], bytes[6], bytes[5], bytes[4], bytes[3], bytes[2], bytes[1], bytes[0]}).getLong(); int julianDay = ByteBuffer.wrap(new byte[] {bytes[11], bytes[10], bytes[9], bytes[8]}).getInt(); return julianDayToMillis(julianDay) + (timeOfDayNanos / NANOS_PER_MILLISECOND); }
Returns the GMT timestamp from a binary-encoded Parquet timestamp (12 bytes: Julian day + time-of-day nanos). @param timestampBinary INT96 parquet timestamp @return timestamp in millis, GMT timezone
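The julianDayToMillis helper is not shown above; a plausible sketch, assuming the conventional epoch offset in which Julian day 2440588 corresponds to 1970-01-01:

// Sketch: convert a Julian day number to milliseconds since the Unix epoch.
// 2440588 is assumed to be the Julian day of 1970-01-01 (see lead-in above).
private static final long JULIAN_EPOCH_OFFSET_DAYS = 2_440_588L;
private static final long MILLIS_IN_DAY = 86_400_000L;

private static long julianDayToMillis(int julianDay) {
    return (julianDay - JULIAN_EPOCH_OFFSET_DAYS) * MILLIS_IN_DAY;
}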
public final UnresolvedReferenceExpression[] operands() { int operandCount = operandCount(); Preconditions.checkState(operandCount >= 0, "inputCount must be greater than or equal to 0."); UnresolvedReferenceExpression[] ret = new UnresolvedReferenceExpression[operandCount]; for (int i = 0; i < operandCount; i++) { String name = String.valueOf(i); validateOperandName(name); ret[i] = new UnresolvedReferenceExpression(name); } return ret; }
Args of accumulate and retract, the input values (usually obtained from newly arrived data).
public final UnresolvedReferenceExpression operand(int i) { String name = String.valueOf(i); if (getAggBufferNames().contains(name)) { throw new IllegalStateException( String.format("Agg buffer name(%s) should not same to operands.", name)); } return new UnresolvedReferenceExpression(name); }
Arg of accumulate and retract, the input value (usually obtained from newly arrived data).
public final UnresolvedReferenceExpression mergeOperand(UnresolvedReferenceExpression aggBuffer) { String name = String.valueOf(Arrays.asList(aggBufferAttributes()).indexOf(aggBuffer)); validateOperandName(name); return new UnresolvedReferenceExpression(name); }
Merge input of {@link #mergeExpressions()}; the input is an agg buffer generated by the user definition.
public final UnresolvedReferenceExpression[] mergeOperands() { UnresolvedReferenceExpression[] aggBuffers = aggBufferAttributes(); UnresolvedReferenceExpression[] ret = new UnresolvedReferenceExpression[aggBuffers.length]; for (int i = 0; i < aggBuffers.length; i++) { String name = String.valueOf(i); validateOperandName(name); ret[i] = new UnresolvedReferenceExpression(name); } return ret; }
Merge inputs of {@link #mergeExpressions()}; these inputs are agg buffers generated by the user definition.
@Override
public void collect(T record) {
    if (record != null) {
        this.delegate.setInstance(record);
        try {
            for (RecordWriter<SerializationDelegate<T>> writer : writers) {
                writer.emit(this.delegate);
            }
        } catch (IOException e) {
            throw new RuntimeException("Emitting the record caused an I/O exception: " + e.getMessage(), e);
        } catch (InterruptedException e) {
            throw new RuntimeException("Emitting the record was interrupted: " + e.getMessage(), e);
        }
    } else {
        throw new NullPointerException("The system does not support records that are null. "
            + "Null values are only supported as fields inside other objects.");
    }
}
Collects a record and emits it to all writers.
@SuppressWarnings("unchecked") public List<RecordWriter<SerializationDelegate<T>>> getWriters() { return Collections.unmodifiableList(Arrays.asList(this.writers)); }
List of writers that are associated with this output collector. @return list of writers