@Internal public static <L, R> Right<L, R> obtainRight(Either<L, R> input, TypeSerializer<R> rightSerializer) { if (input.isRight()) { return (Right<L, R>) input; } else { Left<L, R> left = (Left<L, R>) input; if (left.right == null) { left.right = Right.of(rightSerializer.createInstance()); left.right.left = left; } return left.right; } }
Utility function for {@link EitherSerializer} to support object reuse. To support object reuse, both subclasses of Either contain a reference to an instance of the other type. This method provides access to, and initializes, that cross-reference. @param input container for the Left or Right value @param rightSerializer serializer for creating an instance of the right type @param <L> the type of Left @param <R> the type of Right @return input if it is a Right; otherwise the Right reference cached on the Left
public Throwable unwrap() { Throwable cause = getCause(); return (cause instanceof WrappingRuntimeException) ? ((WrappingRuntimeException) cause).unwrap() : cause; }
Recursively unwraps this WrappingRuntimeException and its causes, returning the first non-wrapping exception. @return The first cause that is not a wrapping exception.
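As a rough usage sketch (the surrounding call is hypothetical), unwrap() lets a caller recover the original failure from an arbitrarily deep chain of wrappers:

    try {
        runPipeline(); // hypothetical call that may throw a WrappingRuntimeException
    } catch (WrappingRuntimeException e) {
        Throwable original = e.unwrap(); // first cause that is not itself a wrapper
        System.err.println("Failed with: " + original);
    }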
private Kryo getKryoInstance() { try { // check if ScalaKryoInstantiator is in class path (coming from Twitter's Chill library). // This will be true if Flink's Scala API is used. Class<?> chillInstantiatorClazz = Class.forName("org.apache.flink.runtime.types.FlinkScalaKryoInstantiator"); Object chillInstantiator = chillInstantiatorClazz.newInstance(); // obtain a Kryo instance through Twitter Chill Method m = chillInstantiatorClazz.getMethod("newKryo"); return (Kryo) m.invoke(chillInstantiator); } catch (ClassNotFoundException | InstantiationException | NoSuchMethodException | IllegalAccessException | InvocationTargetException e) { LOG.warn("Falling back to default Kryo serializer because Chill serializer couldn't be found.", e); Kryo.DefaultInstantiatorStrategy initStrategy = new Kryo.DefaultInstantiatorStrategy(); initStrategy.setFallbackInstantiatorStrategy(new StdInstantiatorStrategy()); Kryo kryo = new Kryo(); kryo.setInstantiatorStrategy(initStrategy); return kryo; } }
Returns the Chill Kryo Serializer which is implicitly added to the classpath via flink-runtime. Falls back to the default Kryo serializer if it can't be found. @return The Kryo serializer instance.
private static LinkedHashMap<String, KryoRegistration> buildKryoRegistrations( Class<?> serializedType, LinkedHashSet<Class<?>> registeredTypes, LinkedHashMap<Class<?>, Class<? extends Serializer<?>>> registeredTypesWithSerializerClasses, LinkedHashMap<Class<?>, ExecutionConfig.SerializableSerializer<?>> registeredTypesWithSerializers) { final LinkedHashMap<String, KryoRegistration> kryoRegistrations = new LinkedHashMap<>(); kryoRegistrations.put(serializedType.getName(), new KryoRegistration(serializedType)); for (Class<?> registeredType : checkNotNull(registeredTypes)) { kryoRegistrations.put(registeredType.getName(), new KryoRegistration(registeredType)); } for (Map.Entry<Class<?>, Class<? extends Serializer<?>>> registeredTypeWithSerializerClassEntry : checkNotNull(registeredTypesWithSerializerClasses).entrySet()) { kryoRegistrations.put( registeredTypeWithSerializerClassEntry.getKey().getName(), new KryoRegistration( registeredTypeWithSerializerClassEntry.getKey(), registeredTypeWithSerializerClassEntry.getValue())); } for (Map.Entry<Class<?>, ExecutionConfig.SerializableSerializer<?>> registeredTypeWithSerializerEntry : checkNotNull(registeredTypesWithSerializers).entrySet()) { kryoRegistrations.put( registeredTypeWithSerializerEntry.getKey().getName(), new KryoRegistration( registeredTypeWithSerializerEntry.getKey(), registeredTypeWithSerializerEntry.getValue())); } // add Avro support if flink-avro is available; a dummy otherwise AvroUtils.getAvroUtils().addAvroGenericDataArrayRegistration(kryoRegistrations); return kryoRegistrations; }
Utility method that takes the lists of registered types and their serializers and resolves them into a single map of registrations, such that the result resembles the final registration order in Kryo.
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException { in.defaultReadObject(); // kryoRegistrations may be null if this Kryo serializer is deserialized from an old version if (kryoRegistrations == null) { this.kryoRegistrations = buildKryoRegistrations( type, registeredTypes, registeredTypesWithSerializerClasses, registeredTypesWithSerializers); } }
--------------------------------------------------------------------------------------------
protected AmazonKinesis createKinesisClient(Properties configProps) { ClientConfiguration awsClientConfig = new ClientConfigurationFactory().getConfig(); AWSUtil.setAwsClientConfigProperties(awsClientConfig, configProps); return AWSUtil.createKinesisClient(configProps, awsClientConfig); }
Creates the Kinesis client, using the provided configuration properties and the default {@link ClientConfiguration}. Derived classes can override this method to customize the client configuration. @param configProps the configuration properties used to set up the client @return a Kinesis client
@Override public GetRecordsResult getRecords(String shardIterator, int maxRecordsToGet) throws InterruptedException { final GetRecordsRequest getRecordsRequest = new GetRecordsRequest(); getRecordsRequest.setShardIterator(shardIterator); getRecordsRequest.setLimit(maxRecordsToGet); GetRecordsResult getRecordsResult = null; int retryCount = 0; while (retryCount <= getRecordsMaxRetries && getRecordsResult == null) { try { getRecordsResult = kinesisClient.getRecords(getRecordsRequest); } catch (SdkClientException ex) { if (isRecoverableSdkClientException(ex)) { long backoffMillis = fullJitterBackoff( getRecordsBaseBackoffMillis, getRecordsMaxBackoffMillis, getRecordsExpConstant, retryCount++); LOG.warn("Got recoverable SdkClientException. Backing off for " + backoffMillis + " millis (" + ex.getClass().getName() + ": " + ex.getMessage() + ")"); Thread.sleep(backoffMillis); } else { throw ex; } } } if (getRecordsResult == null) { throw new RuntimeException("Retries exceeded for getRecords operation - all " + getRecordsMaxRetries + " retry attempts failed."); } return getRecordsResult; }
{@inheritDoc}
@Override public GetShardListResult getShardList(Map<String, String> streamNamesWithLastSeenShardIds) throws InterruptedException { GetShardListResult result = new GetShardListResult(); for (Map.Entry<String, String> streamNameWithLastSeenShardId : streamNamesWithLastSeenShardIds.entrySet()) { String stream = streamNameWithLastSeenShardId.getKey(); String lastSeenShardId = streamNameWithLastSeenShardId.getValue(); result.addRetrievedShardsToStream(stream, getShardsOfStream(stream, lastSeenShardId)); } return result; }
{@inheritDoc}
@Override public String getShardIterator(StreamShardHandle shard, String shardIteratorType, @Nullable Object startingMarker) throws InterruptedException { GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest() .withStreamName(shard.getStreamName()) .withShardId(shard.getShard().getShardId()) .withShardIteratorType(shardIteratorType); switch (ShardIteratorType.fromValue(shardIteratorType)) { case TRIM_HORIZON: case LATEST: break; case AT_TIMESTAMP: if (startingMarker instanceof Date) { getShardIteratorRequest.setTimestamp((Date) startingMarker); } else { throw new IllegalArgumentException("Invalid object given for GetShardIteratorRequest() when ShardIteratorType is AT_TIMESTAMP. Must be a Date object."); } break; case AT_SEQUENCE_NUMBER: case AFTER_SEQUENCE_NUMBER: if (startingMarker instanceof String) { getShardIteratorRequest.setStartingSequenceNumber((String) startingMarker); } else { throw new IllegalArgumentException("Invalid object given for GetShardIteratorRequest() when ShardIteratorType is AT_SEQUENCE_NUMBER or AFTER_SEQUENCE_NUMBER. Must be a String."); } } return getShardIterator(getShardIteratorRequest); }
{@inheritDoc}
protected boolean isRecoverableSdkClientException(SdkClientException ex) { if (ex instanceof AmazonServiceException) { return KinesisProxy.isRecoverableException((AmazonServiceException) ex); } // customizations may decide to retry other errors, such as read timeouts return false; }
Determines whether the exception can be recovered from by retrying with exponential backoff. @param ex Exception to inspect @return <code>true</code> if the exception can be recovered from, else <code>false</code>
protected static boolean isRecoverableException(AmazonServiceException ex) { if (ex.getErrorType() == null) { return false; } switch (ex.getErrorType()) { case Client: return ex instanceof ProvisionedThroughputExceededException; case Service: case Unknown: return true; default: return false; } }
Determines whether the exception can be recovered from by retrying with exponential backoff. @param ex Exception to inspect @return <code>true</code> if the exception can be recovered from, else <code>false</code>
private ListShardsResult listShards(String streamName, @Nullable String startShardId, @Nullable String startNextToken) throws InterruptedException { final ListShardsRequest listShardsRequest = new ListShardsRequest(); if (startNextToken == null) { listShardsRequest.setExclusiveStartShardId(startShardId); listShardsRequest.setStreamName(streamName); } else { // Note the nextToken returned by AWS expires within 300 sec. listShardsRequest.setNextToken(startNextToken); } ListShardsResult listShardsResults = null; // Call ListShards, with full-jitter backoff (if we get LimitExceededException). int retryCount = 0; // List Shards returns just the first 1000 shard entries. Make sure that all entries // are taken up. while (retryCount <= listShardsMaxRetries && listShardsResults == null) { // retry until we get a result try { listShardsResults = kinesisClient.listShards(listShardsRequest); } catch (LimitExceededException le) { long backoffMillis = fullJitterBackoff( listShardsBaseBackoffMillis, listShardsMaxBackoffMillis, listShardsExpConstant, retryCount++); LOG.warn("Got LimitExceededException when listing shards from stream " + streamName + ". Backing off for " + backoffMillis + " millis."); Thread.sleep(backoffMillis); } catch (ResourceInUseException reInUse) { // List Shards will throw an exception if the stream is not in ACTIVE state. Return and re-use the previously available state. LOG.warn("The stream is currently not in active state. Reusing the older state " + "for the time being"); break; } catch (ResourceNotFoundException reNotFound) { throw new RuntimeException("Stream not found. Error while getting shard list.", reNotFound); } catch (InvalidArgumentException inArg) { throw new RuntimeException("Invalid Arguments to listShards.", inArg); } catch (ExpiredNextTokenException expiredToken) { LOG.warn("List Shards has an expired token. Reusing the previous state."); break; } catch (SdkClientException ex) { if (retryCount < listShardsMaxRetries && isRecoverableSdkClientException(ex)) { long backoffMillis = fullJitterBackoff( listShardsBaseBackoffMillis, listShardsMaxBackoffMillis, listShardsExpConstant, retryCount++); LOG.warn("Got SdkClientException when listing shards from stream {}. Backing off for {} millis.", streamName, backoffMillis); Thread.sleep(backoffMillis); } else { // propagate if retries exceeded or not recoverable // (otherwise would return null result and keep trying forever) throw ex; } } } // Kinesalite (mock implementation of Kinesis) does not correctly exclude shards before // the exclusive start shard id in the returned shards list; check if we need to remove // these erroneously returned shards. // Related issues: // https://github.com/mhart/kinesalite/pull/77 // https://github.com/lyft/kinesalite/pull/4 if (startShardId != null && listShardsResults != null) { List<Shard> shards = listShardsResults.getShards(); Iterator<Shard> shardItr = shards.iterator(); while (shardItr.hasNext()) { if (StreamShardHandle.compareShardIds(shardItr.next().getShardId(), startShardId) <= 0) { shardItr.remove(); } } } return listShardsResults; }
Gets metainfo for a Kinesis stream, which contains information about which shards this Kinesis stream possesses. <p>This method uses the "full jitter" approach described in AWS's article <a href="https://www.awsarchitectureblog.com/2015/03/backoff.html">"Exponential Backoff and Jitter"</a>. This is necessary because concurrent calls will be made by the fetchers of all parallel subtasks; the jitter backoff helps distribute the calls across the fetchers over time. @param streamName the stream whose shards to list @param startShardId which shard to start with for this list operation (info on earlier shards will not appear in the result) @return the result of the list shards operation
protected DescribeStreamResult describeStream(String streamName, @Nullable String startShardId) throws InterruptedException { final DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest(); describeStreamRequest.setStreamName(streamName); describeStreamRequest.setExclusiveStartShardId(startShardId); DescribeStreamResult describeStreamResult = null; // Call DescribeStream, with full-jitter backoff (if we get LimitExceededException). int attemptCount = 0; while (describeStreamResult == null) { // retry until we get a result try { describeStreamResult = kinesisClient.describeStream(describeStreamRequest); } catch (LimitExceededException le) { long backoffMillis = fullJitterBackoff( describeStreamBaseBackoffMillis, describeStreamMaxBackoffMillis, describeStreamExpConstant, attemptCount++); LOG.warn(String.format("Got LimitExceededException when describing stream %s. " + "Backing off for %d millis.", streamName, backoffMillis)); Thread.sleep(backoffMillis); } catch (ResourceNotFoundException re) { throw new RuntimeException("Error while getting stream details", re); } } String streamStatus = describeStreamResult.getStreamDescription().getStreamStatus(); if (!(streamStatus.equals(StreamStatus.ACTIVE.toString()) || streamStatus.equals(StreamStatus.UPDATING.toString()))) { if (LOG.isWarnEnabled()) { LOG.warn(String.format("The status of stream %s is %s ; result of the current " + "describeStream operation will not contain any shard information.", streamName, streamStatus)); } } return describeStreamResult; }
Gets metainfo for a Kinesis stream, which contains information about which shards this Kinesis stream possesses. <p>This method uses the "full jitter" approach described in AWS's article <a href="https://www.awsarchitectureblog.com/2015/03/backoff.html">"Exponential Backoff and Jitter"</a>. This is necessary because concurrent calls will be made by the fetchers of all parallel subtasks; the jitter backoff helps distribute the calls across the fetchers over time. @param streamName the stream to describe @param startShardId which shard to start with for this describe operation @return the result of the describe stream operation
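The fullJitterBackoff helper used in the retry loops above is not shown in this excerpt; a minimal sketch of the full-jitter formula, assuming the (base, max, power, attempt) parameter order seen at the call sites, could look like this:

    // Picks a random backoff in [0, min(maxMillis, baseMillis * power^attempt)).
    // Requires: import java.util.Random;
    private static final Random seed = new Random();

    private static long fullJitterBackoff(long baseMillis, long maxMillis, double power, int attempt) {
        long exponentialBackoff = (long) Math.min(maxMillis, baseMillis * Math.pow(power, attempt));
        return (long) (seed.nextDouble() * exponentialBackoff); // jitter over the full range
    }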
@SuppressWarnings("unchecked") public <K, VV, EV> Graph<K, VV, EV> types(Class<K> vertexKey, Class<VV> vertexValue, Class<EV> edgeValue) { if (edgeReader == null) { throw new RuntimeException("The edge input file cannot be null!"); } DataSet<Tuple3<K, K, EV>> edges = edgeReader.types(vertexKey, vertexKey, edgeValue); // the vertex value can be provided by an input file or a user-defined mapper if (vertexReader != null) { DataSet<Tuple2<K, VV>> vertices = vertexReader .types(vertexKey, vertexValue) .name(GraphCsvReader.class.getName()); return Graph.fromTupleDataSet(vertices, edges, executionContext); } else if (mapper != null) { return Graph.fromTupleDataSet(edges, (MapFunction<K, VV>) mapper, executionContext); } else { throw new RuntimeException("Vertex values have to be specified through a vertices input file" + "or a user-defined map function."); } }
Creates a Graph from CSV input with vertex values and edge values. The vertex values are specified through a vertices input file or a user-defined map function. @param vertexKey the type of the vertex IDs @param vertexValue the type of the vertex values @param edgeValue the type of the edge values @return a Graph with vertex and edge values.
public <K, EV> Graph<K, NullValue, EV> edgeTypes(Class<K> vertexKey, Class<EV> edgeValue) { if (edgeReader == null) { throw new RuntimeException("The edge input file cannot be null!"); } DataSet<Tuple3<K, K, EV>> edges = edgeReader .types(vertexKey, vertexKey, edgeValue) .name(GraphCsvReader.class.getName()); return Graph.fromTupleDataSet(edges, executionContext); }
Creates a Graph from CSV input with edge values, but without vertex values. @param vertexKey the type of the vertex IDs @param edgeValue the type of the edge values @return a Graph where the edges are read from an edges CSV file (with values).
public <K> Graph<K, NullValue, NullValue> keyType(Class<K> vertexKey) { if (edgeReader == null) { throw new RuntimeException("The edge input file cannot be null!"); } DataSet<Edge<K, NullValue>> edges = edgeReader .types(vertexKey, vertexKey) .name(GraphCsvReader.class.getName()) .map(new Tuple2ToEdgeMap<>()) .name("Type conversion"); return Graph.fromDataSet(edges, executionContext); }
Creates a Graph from CSV input without vertex values or edge values. @param vertexKey the type of the vertex IDs @return a Graph where the vertex IDs are read from the edges input file.
@SuppressWarnings({ "serial", "unchecked" }) public <K, VV> Graph<K, VV, NullValue> vertexTypes(Class<K> vertexKey, Class<VV> vertexValue) { if (edgeReader == null) { throw new RuntimeException("The edge input file cannot be null!"); } DataSet<Edge<K, NullValue>> edges = edgeReader .types(vertexKey, vertexKey) .name(GraphCsvReader.class.getName()) .map(new Tuple2ToEdgeMap<>()) .name("To Edge"); // the vertex value can be provided by an input file or a user-defined mapper if (vertexReader != null) { DataSet<Vertex<K, VV>> vertices = vertexReader .types(vertexKey, vertexValue) .name(GraphCsvReader.class.getName()) .map(new Tuple2ToVertexMap<>()) .name("Type conversion"); return Graph.fromDataSet(vertices, edges, executionContext); } else if (mapper != null) { return Graph.fromDataSet(edges, (MapFunction<K, VV>) mapper, executionContext); } else { throw new RuntimeException("Vertex values have to be specified through a vertices input file" + "or a user-defined map function."); } }
Creates a Graph from CSV input without edge values. The vertex values are specified through a vertices input file or a user-defined map function. If no vertices input file is provided, the vertex IDs are automatically created from the edges input file. @param vertexKey the type of the vertex IDs @param vertexValue the type of the vertex values @return a Graph with vertex values but without edge values.
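A usage sketch for this reader API, assuming the Gelly Graph.fromCsvReader entry points and hypothetical CSV file paths:

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // vertices.csv: "id,value"; edges.csv: "srcId,trgId,weight" (hypothetical files)
    Graph<Long, Double, Double> weighted = Graph
        .fromCsvReader("/data/vertices.csv", "/data/edges.csv", env)
        .types(Long.class, Double.class, Double.class);

    // edges only, no vertex or edge values
    Graph<Long, NullValue, NullValue> plain = Graph
        .fromCsvReader("/data/edges.csv", env)
        .keyType(Long.class);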
public static void mergeHadoopConf(JobConf jobConf) { // we have to load the global configuration here, because the HadoopInputFormatBase does not // have access to a Flink configuration object org.apache.flink.configuration.Configuration flinkConfiguration = GlobalConfiguration.loadConfiguration(); Configuration hadoopConf = getHadoopConfiguration(flinkConfiguration); for (Map.Entry<String, String> e : hadoopConf) { if (jobConf.get(e.getKey()) == null) { jobConf.set(e.getKey(), e.getValue()); } } }
Merges the Hadoop configuration into the given JobConf. This is necessary for the HDFS configuration to be picked up.
public static Configuration getHadoopConfiguration(org.apache.flink.configuration.Configuration flinkConfiguration) { Configuration retConf = new Configuration(); // We need to load both core-site.xml and hdfs-site.xml to determine the default fs path and // the hdfs configuration // Try to load HDFS configuration from Hadoop's own configuration files // 1. approach: Flink configuration final String hdfsDefaultPath = flinkConfiguration.getString(ConfigConstants .HDFS_DEFAULT_CONFIG, null); if (hdfsDefaultPath != null) { retConf.addResource(new org.apache.hadoop.fs.Path(hdfsDefaultPath)); } else { LOG.debug("Cannot find hdfs-default configuration file"); } final String hdfsSitePath = flinkConfiguration.getString(ConfigConstants.HDFS_SITE_CONFIG, null); if (hdfsSitePath != null) { retConf.addResource(new org.apache.hadoop.fs.Path(hdfsSitePath)); } else { LOG.debug("Cannot find hdfs-site configuration file"); } // 2. Approach environment variables String[] possibleHadoopConfPaths = new String[4]; possibleHadoopConfPaths[0] = flinkConfiguration.getString(ConfigConstants.PATH_HADOOP_CONFIG, null); possibleHadoopConfPaths[1] = System.getenv("HADOOP_CONF_DIR"); if (System.getenv("HADOOP_HOME") != null) { possibleHadoopConfPaths[2] = System.getenv("HADOOP_HOME") + "/conf"; possibleHadoopConfPaths[3] = System.getenv("HADOOP_HOME") + "/etc/hadoop"; // hadoop 2.2 } for (String possibleHadoopConfPath : possibleHadoopConfPaths) { if (possibleHadoopConfPath != null) { if (new File(possibleHadoopConfPath).exists()) { if (new File(possibleHadoopConfPath + "/core-site.xml").exists()) { retConf.addResource(new org.apache.hadoop.fs.Path(possibleHadoopConfPath + "/core-site.xml")); if (LOG.isDebugEnabled()) { LOG.debug("Adding " + possibleHadoopConfPath + "/core-site.xml to hadoop configuration"); } } if (new File(possibleHadoopConfPath + "/hdfs-site.xml").exists()) { retConf.addResource(new org.apache.hadoop.fs.Path(possibleHadoopConfPath + "/hdfs-site.xml")); if (LOG.isDebugEnabled()) { LOG.debug("Adding " + possibleHadoopConfPath + "/hdfs-site.xml to hadoop configuration"); } } } } } return retConf; }
Returns a new Hadoop Configuration object using the path to the Hadoop configuration set in the main configuration (flink-conf.yaml). This method is public because it is used in the HadoopDataSource. @param flinkConfiguration Flink configuration object @return A Hadoop configuration instance
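A short usage sketch, assuming these static helpers are exposed on a HadoopUtils-style class (the class name is illustrative):

    // Start from an empty JobConf and pull in core-site.xml / hdfs-site.xml
    // discovered via flink-conf.yaml or the HADOOP_CONF_DIR / HADOOP_HOME environment variables.
    JobConf jobConf = new JobConf();
    HadoopUtils.mergeHadoopConf(jobConf);

    // Or obtain the resolved Hadoop Configuration directly.
    Configuration hadoopConf =
        HadoopUtils.getHadoopConfiguration(GlobalConfiguration.loadConfiguration());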
@SuppressWarnings("unchecked") public CompletableFuture<StackTraceSample> triggerStackTraceSample( ExecutionVertex[] tasksToSample, int numSamples, Time delayBetweenSamples, int maxStackTraceDepth) { checkNotNull(tasksToSample, "Tasks to sample"); checkArgument(tasksToSample.length >= 1, "No tasks to sample"); checkArgument(numSamples >= 1, "No number of samples"); checkArgument(maxStackTraceDepth >= 0, "Negative maximum stack trace depth"); // Execution IDs of running tasks ExecutionAttemptID[] triggerIds = new ExecutionAttemptID[tasksToSample.length]; Execution[] executions = new Execution[tasksToSample.length]; // Check that all tasks are RUNNING before triggering anything. The // triggering can still fail. for (int i = 0; i < triggerIds.length; i++) { Execution execution = tasksToSample[i].getCurrentExecutionAttempt(); if (execution != null && execution.getState() == ExecutionState.RUNNING) { executions[i] = execution; triggerIds[i] = execution.getAttemptId(); } else { return FutureUtils.completedExceptionally(new IllegalStateException("Task " + tasksToSample[i] .getTaskNameWithSubtaskIndex() + " is not running.")); } } synchronized (lock) { if (isShutDown) { return FutureUtils.completedExceptionally(new IllegalStateException("Shut down")); } final int sampleId = sampleIdCounter++; LOG.debug("Triggering stack trace sample {}", sampleId); final PendingStackTraceSample pending = new PendingStackTraceSample( sampleId, triggerIds); // Discard the sample if it takes too long. We don't send cancel // messages to the task managers, but only wait for the responses // and then ignore them. long expectedDuration = numSamples * delayBetweenSamples.toMilliseconds(); Time timeout = Time.milliseconds(expectedDuration + sampleTimeout); // Add the pending sample before scheduling the discard task to // prevent races with removing it again. pendingSamples.put(sampleId, pending); // Trigger all samples for (Execution execution: executions) { final CompletableFuture<StackTraceSampleResponse> stackTraceSampleFuture = execution.requestStackTraceSample( sampleId, numSamples, delayBetweenSamples, maxStackTraceDepth, timeout); stackTraceSampleFuture.handleAsync( (StackTraceSampleResponse stackTraceSampleResponse, Throwable throwable) -> { if (stackTraceSampleResponse != null) { collectStackTraces( stackTraceSampleResponse.getSampleId(), stackTraceSampleResponse.getExecutionAttemptID(), stackTraceSampleResponse.getSamples()); } else { cancelStackTraceSample(sampleId, throwable); } return null; }, executor); } return pending.getStackTraceSampleFuture(); } }
Triggers a stack trace sample to all tasks. @param tasksToSample Tasks to sample. @param numSamples Number of stack trace samples to collect. @param delayBetweenSamples Delay between consecutive samples. @param maxStackTraceDepth Maximum depth of the stack trace. 0 indicates no maximum and keeps the complete stack trace. @return A future of the completed stack trace sample
public void cancelStackTraceSample(int sampleId, Throwable cause) { synchronized (lock) { if (isShutDown) { return; } PendingStackTraceSample sample = pendingSamples.remove(sampleId); if (sample != null) { if (cause != null) { LOG.info("Cancelling sample " + sampleId, cause); } else { LOG.info("Cancelling sample {}", sampleId); } sample.discard(cause); rememberRecentSampleId(sampleId); } } }
Cancels a pending sample. @param sampleId ID of the sample to cancel. @param cause Cause of the cancelling (can be <code>null</code>).
public void shutDown() { synchronized (lock) { if (!isShutDown) { LOG.info("Shutting down stack trace sample coordinator."); for (PendingStackTraceSample pending : pendingSamples.values()) { pending.discard(new RuntimeException("Shut down")); } pendingSamples.clear(); isShutDown = true; } } }
Shuts down the coordinator. <p>After shut down, no further operations are executed.
public void collectStackTraces( int sampleId, ExecutionAttemptID executionId, List<StackTraceElement[]> stackTraces) { synchronized (lock) { if (isShutDown) { return; } if (LOG.isDebugEnabled()) { LOG.debug("Collecting stack trace sample {} of task {}", sampleId, executionId); } PendingStackTraceSample pending = pendingSamples.get(sampleId); if (pending != null) { pending.collectStackTraces(executionId, stackTraces); // Publish the sample if (pending.isComplete()) { pendingSamples.remove(sampleId); rememberRecentSampleId(sampleId); pending.completePromiseAndDiscard(); } } else if (recentPendingSamples.contains(sampleId)) { if (LOG.isDebugEnabled()) { LOG.debug("Received late stack trace sample {} of task {}", sampleId, executionId); } } else { if (LOG.isDebugEnabled()) { LOG.debug("Unknown sample ID " + sampleId); } } } }
Collects stack traces of a task. @param sampleId ID of the sample. @param executionId ID of the sampled task. @param stackTraces Stack traces of the sampled task. @throws IllegalStateException If unknown sample ID and not recently finished or cancelled sample.
public StreamExecutionEnvironment setMaxParallelism(int maxParallelism) { Preconditions.checkArgument(maxParallelism > 0 && maxParallelism <= KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM, "maxParallelism is out of bounds 0 < maxParallelism <= " + KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM + ". Found: " + maxParallelism); config.setMaxParallelism(maxParallelism); return this; }
Sets the maximum degree of parallelism defined for the program. The upper limit (inclusive) is {@link KeyGroupRangeAssignment#UPPER_BOUND_MAX_PARALLELISM}. <p>The maximum degree of parallelism specifies the upper limit for dynamic scaling. It also defines the number of key groups used for partitioned state. @param maxParallelism Maximum degree of parallelism to be used for the program, with 0 < maxParallelism <= KeyGroupRangeAssignment.UPPER_BOUND_MAX_PARALLELISM
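For example (parallelism values here are arbitrary):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(4);       // current runtime parallelism
    env.setMaxParallelism(128);  // upper bound for rescaling; also the number of key groups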
public StreamExecutionEnvironment enableCheckpointing(long interval, CheckpointingMode mode) { checkpointCfg.setCheckpointingMode(mode); checkpointCfg.setCheckpointInterval(interval); return this; }
Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint. <p>The job draws checkpoints periodically, in the given interval. The system uses the given {@link CheckpointingMode} for the checkpointing ("exactly once" vs "at least once"). The state will be stored in the configured state backend. <p>NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. For that reason, iterative jobs will not be started if used with enabled checkpointing. To override this mechanism, use the {@link #enableCheckpointing(long, CheckpointingMode, boolean)} method. @param interval Time interval between state checkpoints in milliseconds. @param mode The checkpointing mode, selecting between "exactly once" and "at least once" guarantees.
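For example, enabling exactly-once checkpoints every 10 seconds (the interval is chosen arbitrarily):

    env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);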
@Deprecated @SuppressWarnings("deprecation") @PublicEvolving public StreamExecutionEnvironment enableCheckpointing(long interval, CheckpointingMode mode, boolean force) { checkpointCfg.setCheckpointingMode(mode); checkpointCfg.setCheckpointInterval(interval); checkpointCfg.setForceCheckpointing(force); return this; }
Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint. <p>The job draws checkpoints periodically, in the given interval. The state will be stored in the configured state backend. <p>NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. If the "force" parameter is set to true, the system will execute the job nonetheless. @param interval Time interval between state checkpoints in millis. @param mode The checkpointing mode, selecting between "exactly once" and "at least once" guarantees. @param force If true, checkpointing will be enabled for iterative jobs as well. @deprecated Use {@link #enableCheckpointing(long, CheckpointingMode)} instead. Forcing checkpoints will be removed in the future.
public <T extends Serializer<?> & Serializable>void addDefaultKryoSerializer(Class<?> type, T serializer) { config.addDefaultKryoSerializer(type, serializer); }
Adds a new Kryo default serializer to the Runtime. <p>Note that the serializer instance must be serializable (as defined by java.io.Serializable), because it may be distributed to the worker nodes by java serialization. @param type The class of the types serialized with the given serializer. @param serializer The serializer to use.
public void addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass) { config.addDefaultKryoSerializer(type, serializerClass); }
Adds a new Kryo default serializer to the Runtime. @param type The class of the types serialized with the given serializer. @param serializerClass The class of the serializer to use.
public <T extends Serializer<?> & Serializable>void registerTypeWithKryoSerializer(Class<?> type, T serializer) { config.registerTypeWithKryoSerializer(type, serializer); }
Registers the given type with a Kryo Serializer. <p>Note that the serializer instance must be serializable (as defined by java.io.Serializable), because it may be distributed to the worker nodes by java serialization. @param type The class of the types serialized with the given serializer. @param serializer The serializer to use.
public void registerType(Class<?> type) { if (type == null) { throw new NullPointerException("Cannot register null type class."); } TypeInformation<?> typeInfo = TypeExtractor.createTypeInfo(type); if (typeInfo instanceof PojoTypeInfo) { config.registerPojoType(type); } else { config.registerKryoType(type); } }
Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written. @param type The class of the type to register.
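Given a StreamExecutionEnvironment env, a usage sketch of these registration hooks; MyPojo, ThirdPartyType and ThirdPartyTypeSerializer are hypothetical placeholders for user classes and a Kryo Serializer implementation:

    // POJO types are registered with Flink's POJO serializer,
    // everything else falls back to Kryo with a registered tag.
    env.registerType(MyPojo.class);

    // Use a custom Kryo serializer class as the default serializer for a type.
    env.addDefaultKryoSerializer(ThirdPartyType.class, ThirdPartyTypeSerializer.class);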
@PublicEvolving public void setStreamTimeCharacteristic(TimeCharacteristic characteristic) { this.timeCharacteristic = Preconditions.checkNotNull(characteristic); if (characteristic == TimeCharacteristic.ProcessingTime) { getConfig().setAutoWatermarkInterval(0); } else { getConfig().setAutoWatermarkInterval(200); } }
Sets the time characteristic for all streams created from this environment, e.g., processing time, event time, or ingestion time. <p>If you set the characteristic to IngestionTime or EventTime, this will set a default watermark update interval of 200 ms. If this is not applicable for your application, you should change it using {@link ExecutionConfig#setAutoWatermarkInterval(long)}. @param characteristic The time characteristic.
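For example, switching to event time and overriding the default watermark interval (the 1 s value is arbitrary):

    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    env.getConfig().setAutoWatermarkInterval(1000); // override the 200 ms default if needed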
public DataStreamSource<Long> generateSequence(long from, long to) { if (from > to) { throw new IllegalArgumentException("Start of sequence must not be greater than the end"); } return addSource(new StatefulSequenceSource(from, to), "Sequence Source"); }
Creates a new data stream that contains a sequence of numbers. This is a parallel source. If you manually set the parallelism to {@code 1} (using {@link org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator#setParallelism(int)}), the generated sequence of elements is emitted in order. @param from The number to start at (inclusive) @param to The number to stop at (inclusive) @return A data stream containing all numbers in the [from, to] interval
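For example, producing the numbers 1 to 1000 in order on a single subtask:

    DataStreamSource<Long> numbers = env.generateSequence(1, 1000);
    numbers.setParallelism(1); // keeps the sequence ordered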
@SafeVarargs public final <OUT> DataStreamSource<OUT> fromElements(OUT... data) { if (data.length == 0) { throw new IllegalArgumentException("fromElements needs at least one element as argument"); } TypeInformation<OUT> typeInfo; try { typeInfo = TypeExtractor.getForObject(data[0]); } catch (Exception e) { throw new RuntimeException("Could not create TypeInformation for type " + data[0].getClass().getName() + "; please specify the TypeInformation manually via " + "StreamExecutionEnvironment#fromElements(Collection, TypeInformation)", e); } return fromCollection(Arrays.asList(data), typeInfo); }
Creates a new data stream that contains the given elements. The elements must all be of the same type, for example, all {@link String} or all {@link Integer}. <p>The framework will try to determine the exact type from the elements. In case of generic elements, it may be necessary to manually supply the type information via {@link #fromCollection(java.util.Collection, org.apache.flink.api.common.typeinfo.TypeInformation)}. <p>Note that this operation will result in a non-parallel data stream source, i.e. a data stream source with a degree of parallelism one. @param data The array of elements to create the data stream from. @param <OUT> The type of the returned data stream @return The data stream representing the given array of elements
@SafeVarargs public final <OUT> DataStreamSource<OUT> fromElements(Class<OUT> type, OUT... data) { if (data.length == 0) { throw new IllegalArgumentException("fromElements needs at least one element as argument"); } TypeInformation<OUT> typeInfo; try { typeInfo = TypeExtractor.getForClass(type); } catch (Exception e) { throw new RuntimeException("Could not create TypeInformation for type " + type.getName() + "; please specify the TypeInformation manually via " + "StreamExecutionEnvironment#fromElements(Collection, TypeInformation)", e); } return fromCollection(Arrays.asList(data), typeInfo); }
Creates a new data stream that contains the given elements. The framework determines the type according to the base type supplied by the user. The elements must be of the base type or a subclass of it. The sequence of elements must not be empty. Note that this operation will result in a non-parallel data stream source, i.e., a data stream source with a degree of parallelism one. @param type The base class type of the elements in the collection. @param data The array of elements to create the data stream from. @param <OUT> The type of the returned data stream @return The data stream representing the given array of elements
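For example:

    DataStream<String> words = env.fromElements("to", "be", "or", "not", "to", "be");

    // With a common base type made explicit (Number here), so mixed subclasses are allowed.
    DataStream<Number> mixed = env.fromElements(Number.class, 1, 2L, 3.0);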
public <OUT> DataStreamSource<OUT> fromCollection(Collection<OUT> data) { Preconditions.checkNotNull(data, "Collection must not be null"); if (data.isEmpty()) { throw new IllegalArgumentException("Collection must not be empty"); } OUT first = data.iterator().next(); if (first == null) { throw new IllegalArgumentException("Collection must not contain null elements"); } TypeInformation<OUT> typeInfo; try { typeInfo = TypeExtractor.getForObject(first); } catch (Exception e) { throw new RuntimeException("Could not create TypeInformation for type " + first.getClass() + "; please specify the TypeInformation manually via " + "StreamExecutionEnvironment#fromElements(Collection, TypeInformation)", e); } return fromCollection(data, typeInfo); }
Creates a data stream from the given non-empty collection. The type of the data stream is that of the elements in the collection. <p>The framework will try to determine the exact type from the collection elements. In case of generic elements, it may be necessary to manually supply the type information via {@link #fromCollection(java.util.Collection, org.apache.flink.api.common.typeinfo.TypeInformation)}. <p>Note that this operation will result in a non-parallel data stream source, i.e. a data stream source with parallelism one. @param data The collection of elements to create the data stream from. @param <OUT> The generic type of the returned data stream. @return The data stream representing the given collection
public <OUT> DataStreamSource<OUT> fromCollection(Collection<OUT> data, TypeInformation<OUT> typeInfo) { Preconditions.checkNotNull(data, "Collection must not be null"); // must not have null elements and mixed elements FromElementsFunction.checkCollection(data, typeInfo.getTypeClass()); SourceFunction<OUT> function; try { function = new FromElementsFunction<>(typeInfo.createSerializer(getConfig()), data); } catch (IOException e) { throw new RuntimeException(e.getMessage(), e); } return addSource(function, "Collection Source", typeInfo).setParallelism(1); }
Creates a data stream from the given non-empty collection. <p>Note that this operation will result in a non-parallel data stream source, i.e., a data stream source with parallelism one. @param data The collection of elements to create the data stream from @param typeInfo The TypeInformation for the produced data stream @param <OUT> The type of the returned data stream @return The data stream representing the given collection
public <OUT> DataStreamSource<OUT> fromCollection(Iterator<OUT> data, Class<OUT> type) { return fromCollection(data, TypeExtractor.getForClass(type)); }
Creates a data stream from the given iterator. <p>Because the iterator will remain unmodified until the actual execution happens, the type of data returned by the iterator must be given explicitly in the form of the type class (this is due to the fact that the Java compiler erases the generic type information). <p>Note that this operation will result in a non-parallel data stream source, i.e., a data stream source with a parallelism of one. @param data The iterator of elements to create the data stream from @param type The class of the data produced by the iterator. Must not be a generic class. @param <OUT> The type of the returned data stream @return The data stream representing the elements in the iterator @see #fromCollection(java.util.Iterator, org.apache.flink.api.common.typeinfo.TypeInformation)
public <OUT> DataStreamSource<OUT> fromCollection(Iterator<OUT> data, TypeInformation<OUT> typeInfo) { Preconditions.checkNotNull(data, "The iterator must not be null"); SourceFunction<OUT> function = new FromIteratorFunction<>(data); return addSource(function, "Collection Source", typeInfo); }
Creates a data stream from the given iterator. <p>Because the iterator will remain unmodified until the actual execution happens, the type of data returned by the iterator must be given explicitly in the form of the type information. This method is useful for cases where the type is generic. In that case, the type class (as given in {@link #fromCollection(java.util.Iterator, Class)}) does not supply all type information. <p>Note that this operation will result in a non-parallel data stream source, i.e., a data stream source with parallelism one. @param data The iterator of elements to create the data stream from @param typeInfo The TypeInformation for the produced data stream @param <OUT> The type of the returned data stream @return The data stream representing the elements in the iterator
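A short sketch of the collection-based variants; for generic element types the TypeInformation is supplied explicitly, using the TypeHint helper (assumed available from flink-core):

    List<Tuple2<String, Integer>> input = Arrays.asList(
        Tuple2.of("a", 1),
        Tuple2.of("b", 2));

    // Type extraction from the first element works here ...
    DataStream<Tuple2<String, Integer>> fromList = env.fromCollection(input);

    // ... but for generic element types it is safer to pass the TypeInformation explicitly.
    DataStream<Tuple2<String, Integer>> typed =
        env.fromCollection(input, TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {}));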
public <OUT> DataStreamSource<OUT> fromParallelCollection(SplittableIterator<OUT> iterator, Class<OUT> type) { return fromParallelCollection(iterator, TypeExtractor.getForClass(type)); }
Creates a new data stream that contains elements in the iterator. The iterator is splittable, allowing the framework to create a parallel data stream source that returns the elements in the iterator. <p>Because the iterator will remain unmodified until the actual execution happens, the type of data returned by the iterator must be given explicitly in the form of the type class (this is due to the fact that the Java compiler erases the generic type information). @param iterator The iterator that produces the elements of the data stream @param type The class of the data produced by the iterator. Must not be a generic class. @param <OUT> The type of the returned data stream @return A data stream representing the elements in the iterator
public <OUT> DataStreamSource<OUT> fromParallelCollection(SplittableIterator<OUT> iterator, TypeInformation<OUT> typeInfo) { return fromParallelCollection(iterator, typeInfo, "Parallel Collection Source"); }
Creates a new data stream that contains elements in the iterator. The iterator is splittable, allowing the framework to create a parallel data stream source that returns the elements in the iterator. <p>Because the iterator will remain unmodified until the actual execution happens, the type of data returned by the iterator must be given explicitly in the form of the type information. This method is useful for cases where the type is generic. In that case, the type class (as given in {@link #fromParallelCollection(org.apache.flink.util.SplittableIterator, Class)}) does not supply all type information. @param iterator The iterator that produces the elements of the data stream @param typeInfo The TypeInformation for the produced data stream. @param <OUT> The type of the returned data stream @return A data stream representing the elements in the iterator
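For example, using the splittable NumberSequenceIterator from flink-core (assumed available) to create a parallel source:

    DataStream<Long> parallelNumbers = env.fromParallelCollection(
        new NumberSequenceIterator(1L, 1_000_000L), Long.class);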
private <OUT> DataStreamSource<OUT> fromParallelCollection(SplittableIterator<OUT> iterator, TypeInformation<OUT> typeInfo, String operatorName) { return addSource(new FromSplittableIteratorFunction<>(iterator), operatorName, typeInfo); }
Private helper for passing different operator names.
public DataStreamSource<String> readTextFile(String filePath, String charsetName) { Preconditions.checkArgument(!StringUtils.isNullOrWhitespaceOnly(filePath), "The file path must not be null or blank."); TextInputFormat format = new TextInputFormat(new Path(filePath)); format.setFilesFilter(FilePathFilter.createDefaultFilter()); TypeInformation<String> typeInfo = BasicTypeInfo.STRING_TYPE_INFO; format.setCharsetName(charsetName); return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1, typeInfo); }
Reads the given file line-by-line and creates a data stream that contains a string with the contents of each such line. The {@link java.nio.charset.Charset} with the given name will be used to read the files. <p><b>NOTES ON CHECKPOINTING: </b> The source monitors the path, creates the {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed, forwards them to the downstream {@link ContinuousFileReaderOperator readers} to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, thus having no checkpoints after that point. @param filePath The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path") @param charsetName The name of the character set used to read the file @return The data stream that represents the data read from the given file as text lines
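For example, reading a UTF-8 encoded file once (the path is illustrative):

    DataStream<String> lines = env.readTextFile("hdfs://namenode:9000/logs/input.txt", "UTF-8");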
public <OUT> DataStreamSource<OUT> readFile(FileInputFormat<OUT> inputFormat, String filePath) { return readFile(inputFormat, filePath, FileProcessingMode.PROCESS_ONCE, -1); }
Reads the contents of the user-specified {@code filePath} based on the given {@link FileInputFormat}. <p>Since all data streams need specific information about their types, this method needs to determine the type of the data produced by the input format. It will attempt to determine the data type by reflection, unless the input format implements the {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable} interface. In the latter case, this method will invoke the {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable#getProducedType()} method to determine data type produced by the input format. <p><b>NOTES ON CHECKPOINTING: </b> The source monitors the path, creates the {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed, forwards them to the downstream {@link ContinuousFileReaderOperator readers} to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, thus having no checkpoints after that point. @param filePath The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path") @param inputFormat The input format used to create the data stream @param <OUT> The type of the returned data stream @return The data stream that represents the data read from the given file
@PublicEvolving @Deprecated public <OUT> DataStreamSource<OUT> readFile(FileInputFormat<OUT> inputFormat, String filePath, FileProcessingMode watchType, long interval, FilePathFilter filter) { inputFormat.setFilesFilter(filter); TypeInformation<OUT> typeInformation; try { typeInformation = TypeExtractor.getInputFormatTypes(inputFormat); } catch (Exception e) { throw new InvalidProgramException("The type returned by the input format could not be " + "automatically determined. Please specify the TypeInformation of the produced type " + "explicitly by using the 'createInput(InputFormat, TypeInformation)' method instead."); } return readFile(inputFormat, filePath, watchType, interval, typeInformation); }
Reads the contents of the user-specified {@code filePath} based on the given {@link FileInputFormat}, with the behavior depending on the provided {@link FileProcessingMode}. <p>See {@link #readFile(FileInputFormat, String, FileProcessingMode, long)} @param inputFormat The input format used to create the data stream @param filePath The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path") @param watchType The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit @param interval In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans @param filter The filter selecting files to be excluded from the processing @param <OUT> The type of the returned data stream @return The data stream that represents the data read from the given file @deprecated Use {@link FileInputFormat#setFilesFilter(FilePathFilter)} to set a filter and {@link StreamExecutionEnvironment#readFile(FileInputFormat, String, FileProcessingMode, long)}
@Deprecated @SuppressWarnings("deprecation") public DataStream<String> readFileStream(String filePath, long intervalMillis, FileMonitoringFunction.WatchType watchType) { DataStream<Tuple3<String, Long, Long>> source = addSource(new FileMonitoringFunction( filePath, intervalMillis, watchType), "Read File Stream source"); return source.flatMap(new FileReadFunction()); }
Creates a data stream that contains the contents of files created while the system watches the given path. The files will be read with the system's default character set. @param filePath The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path/") @param intervalMillis The interval of file watching in milliseconds @param watchType The watch type of the file stream. When watchType is {@link org.apache.flink.streaming.api.functions.source.FileMonitoringFunction.WatchType#ONLY_NEW_FILES}, the system processes only new files. {@link org.apache.flink.streaming.api.functions.source.FileMonitoringFunction.WatchType#REPROCESS_WITH_APPENDED} means that the system re-processes all contents of appended files. {@link org.apache.flink.streaming.api.functions.source.FileMonitoringFunction.WatchType#PROCESS_ONLY_APPENDED} means that the system processes only appended contents of files. @return The DataStream containing the contents of the watched files. @deprecated Use {@link #readFile(FileInputFormat, String, FileProcessingMode, long)} instead.
@PublicEvolving public <OUT> DataStreamSource<OUT> readFile(FileInputFormat<OUT> inputFormat, String filePath, FileProcessingMode watchType, long interval, TypeInformation<OUT> typeInformation) { Preconditions.checkNotNull(inputFormat, "InputFormat must not be null."); Preconditions.checkArgument(!StringUtils.isNullOrWhitespaceOnly(filePath), "The file path must not be null or blank."); inputFormat.setFilePath(filePath); return createFileInput(inputFormat, typeInformation, "Custom File Source", watchType, interval); }
Reads the contents of the user-specified {@code filePath} based on the given {@link FileInputFormat}. Depending on the provided {@link FileProcessingMode}, the source may periodically monitor (every {@code interval} ms) the path for new data ({@link FileProcessingMode#PROCESS_CONTINUOUSLY}), or process once the data currently in the path and exit ({@link FileProcessingMode#PROCESS_ONCE}). In addition, if the path contains files not to be processed, the user can specify a custom {@link FilePathFilter}. As a default implementation you can use {@link FilePathFilter#createDefaultFilter()}. <p><b>NOTES ON CHECKPOINTING: </b> If the {@code watchType} is set to {@link FileProcessingMode#PROCESS_ONCE}, the source monitors the path <b>once</b>, creates the {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed, forwards them to the downstream {@link ContinuousFileReaderOperator readers} to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, thus having no checkpoints after that point. @param inputFormat The input format used to create the data stream @param filePath The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path") @param watchType The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit @param typeInformation Information on the type of the elements in the output stream @param interval In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans @param <OUT> The type of the returned data stream @return The data stream that represents the data read from the given file
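For example, continuously monitoring a directory and re-scanning it every minute (path and interval are illustrative):

    TextInputFormat format = new TextInputFormat(new Path("file:///data/incoming"));
    DataStream<String> monitored = env.readFile(
        format,
        "file:///data/incoming",
        FileProcessingMode.PROCESS_CONTINUOUSLY,
        60_000L,                        // scan interval in milliseconds
        BasicTypeInfo.STRING_TYPE_INFO);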
@Deprecated public DataStreamSource<String> socketTextStream(String hostname, int port, char delimiter, long maxRetry) { return socketTextStream(hostname, port, String.valueOf(delimiter), maxRetry); }
Creates a new data stream that contains the strings received infinitely from a socket. Received strings are decoded by the system's default character set. On termination of the socket server connection, retries can be initiated. <p>Note that the socket itself does not report on abort; as a consequence, retries are only initiated when the socket was gracefully terminated. @param hostname The host name to which the server socket binds @param port The port to which the server socket binds. A port number of 0 means that the port number is automatically allocated. @param delimiter A character which splits received strings into records @param maxRetry The maximal retry interval in seconds while the program waits for a socket that is temporarily down. Reconnection is initiated every second. A value of 0 means that the reader is terminated immediately, while a negative value ensures retrying forever. @return A data stream containing the strings received from the socket @deprecated Use {@link #socketTextStream(String, int, String, long)} instead.
@PublicEvolving public DataStreamSource<String> socketTextStream(String hostname, int port, String delimiter, long maxRetry) { return addSource(new SocketTextStreamFunction(hostname, port, delimiter, maxRetry), "Socket Stream"); }
Creates a new data stream that contains the strings received infinitely from a socket. Received strings are decoded by the system's default character set. On termination of the socket server connection, retries can be initiated. <p>Note that the socket itself does not report on abort; as a consequence, retries are only initiated when the socket was gracefully terminated. @param hostname The host name to which the server socket binds @param port The port to which the server socket binds. A port number of 0 means that the port number is automatically allocated. @param delimiter A string which splits received strings into records @param maxRetry The maximal retry interval in seconds while the program waits for a socket that is temporarily down. Reconnection is initiated every second. A value of 0 means that the reader is terminated immediately, while a negative value ensures retrying forever. @return A data stream containing the strings received from the socket
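For example, consuming newline-delimited text from a local socket and retrying forever on disconnects (host and port are illustrative):

    DataStream<String> socketLines = env.socketTextStream("localhost", 9999, "\n", -1);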
@Deprecated @SuppressWarnings("deprecation") public DataStreamSource<String> socketTextStream(String hostname, int port, char delimiter) { return socketTextStream(hostname, port, delimiter, 0); }
Creates a new data stream that contains the strings received infinitely from a socket. Received strings are decoded by the system's default character set. The reader is terminated immediately when the socket is down. @param hostname The host name to which the server socket binds @param port The port to which the server socket binds. A port number of 0 means that the port number is automatically allocated. @param delimiter A character which splits received strings into records @return A data stream containing the strings received from the socket @deprecated Use {@link #socketTextStream(String, int, String)} instead.
@PublicEvolving public DataStreamSource<String> socketTextStream(String hostname, int port, String delimiter) { return socketTextStream(hostname, port, delimiter, 0); }
Creates a new data stream that contains the strings received infinitely from a socket. Received strings are decoded by the system's default character set. The reader is terminated immediately when the socket is down. @param hostname The host name to which the server socket binds @param port The port to which the server socket binds. A port number of 0 means that the port number is automatically allocated. @param delimiter A string which splits received strings into records @return A data stream containing the strings received from the socket
@PublicEvolving public <OUT> DataStreamSource<OUT> createInput(InputFormat<OUT, ?> inputFormat) { return createInput(inputFormat, TypeExtractor.getInputFormatTypes(inputFormat)); }
Generic method to create an input data stream with {@link org.apache.flink.api.common.io.InputFormat}. <p>Since all data streams need specific information about their types, this method needs to determine the type of the data produced by the input format. It will attempt to determine the data type by reflection, unless the input format implements the {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable} interface. In the latter case, this method will invoke the {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable#getProducedType()} method to determine data type produced by the input format. <p><b>NOTES ON CHECKPOINTING: </b> In the case of a {@link FileInputFormat}, the source (which executes the {@link ContinuousFileMonitoringFunction}) monitors the path, creates the {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed, forwards them to the downstream {@link ContinuousFileReaderOperator} to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, thus having no checkpoints. @param inputFormat The input format used to create the data stream @param <OUT> The type of the returned data stream @return The data stream that represents the data created by the input format
@PublicEvolving public <OUT> DataStreamSource<OUT> createInput(InputFormat<OUT, ?> inputFormat, TypeInformation<OUT> typeInfo) { DataStreamSource<OUT> source; if (inputFormat instanceof FileInputFormat) { @SuppressWarnings("unchecked") FileInputFormat<OUT> format = (FileInputFormat<OUT>) inputFormat; source = createFileInput(format, typeInfo, "Custom File source", FileProcessingMode.PROCESS_ONCE, -1); } else { source = createInput(inputFormat, typeInfo, "Custom Source"); } return source; }
Generic method to create an input data stream with {@link org.apache.flink.api.common.io.InputFormat}. <p>The data stream is typed to the given TypeInformation. This method is intended for input formats where the return type cannot be determined by reflection analysis, and that do not implement the {@link org.apache.flink.api.java.typeutils.ResultTypeQueryable} interface. <p><b>NOTES ON CHECKPOINTING: </b> In the case of a {@link FileInputFormat}, the source (which executes the {@link ContinuousFileMonitoringFunction}) monitors the path, creates the {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed, forwards them to the downstream {@link ContinuousFileReaderOperator} to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, thus having no checkpoints. @param inputFormat The input format used to create the data stream @param typeInfo The information about the type of the output type @param <OUT> The type of the returned data stream @return The data stream that represents the data created by the input format
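As a sketch of the typed variant (the file path is hypothetical and TextInputFormat from flink-java is assumed to be on the classpath), a text file can be read with an explicitly provided TypeInformation:

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CreateInputExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // TextInputFormat is a FileInputFormat, so this takes the PROCESS_ONCE file-source path described above
        TextInputFormat format = new TextInputFormat(new Path("/tmp/input.txt"));
        DataStream<String> text = env.createInput(format, BasicTypeInfo.STRING_TYPE_INFO);

        text.print();
        env.execute("createInput example");
    }
}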
public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function, String sourceName) { return addSource(function, sourceName, null); }
Adds a data source with custom type information, thus opening a {@link DataStream}. Only in very special cases does the user need to supply the type information explicitly. Otherwise use {@link #addSource(org.apache.flink.streaming.api.functions.source.SourceFunction)} @param function the user defined function @param sourceName Name of the data source @param <OUT> type of the returned stream @return the data stream constructed
public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function, TypeInformation<OUT> typeInfo) { return addSource(function, "Custom Source", typeInfo); }
Adds a data source with custom type information, thus opening a {@link DataStream}. Only in very special cases does the user need to supply the type information explicitly. Otherwise use {@link #addSource(org.apache.flink.streaming.api.functions.source.SourceFunction)} @param function the user defined function @param <OUT> type of the returned stream @param typeInfo the user defined type information for the stream @return the data stream constructed
@SuppressWarnings("unchecked") public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> function, String sourceName, TypeInformation<OUT> typeInfo) { if (typeInfo == null) { if (function instanceof ResultTypeQueryable) { typeInfo = ((ResultTypeQueryable<OUT>) function).getProducedType(); } else { try { typeInfo = TypeExtractor.createTypeInfo( SourceFunction.class, function.getClass(), 0, null, null); } catch (final InvalidTypesException e) { typeInfo = (TypeInformation<OUT>) new MissingTypeInfo(sourceName, e); } } } boolean isParallel = function instanceof ParallelSourceFunction; clean(function); final StreamSource<OUT, ?> sourceOperator = new StreamSource<>(function); return new DataStreamSource<>(this, typeInfo, sourceOperator, isParallel, sourceName); }
Adds a data source with custom type information, thus opening a {@link DataStream}. Only in very special cases does the user need to supply the type information explicitly. Otherwise use {@link #addSource(org.apache.flink.streaming.api.functions.source.SourceFunction)} @param function the user defined function @param sourceName Name of the data source @param <OUT> type of the returned stream @param typeInfo the user defined type information for the stream @return the data stream constructed
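A minimal sketch of a custom, non-parallel source registered via addSource with explicit type information; the emitted values and the source name are illustrative only:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class CustomSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Long> numbers = env.addSource(new SourceFunction<Long>() {
            private volatile boolean running = true;

            @Override
            public void run(SourceContext<Long> ctx) throws Exception {
                long i = 0;
                while (running) {
                    ctx.collect(i++);          // emit an ever-increasing counter
                    Thread.sleep(10);
                }
            }

            @Override
            public void cancel() {
                running = false;
            }
        }, "Counter source", Types.LONG);       // explicit type info, bypassing reflection

        numbers.print();
        env.execute("addSource example");
    }
}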
@Internal public void addOperator(StreamTransformation<?> transformation) { Preconditions.checkNotNull(transformation, "transformation must not be null."); this.transformations.add(transformation); }
Adds an operator to the list of operators that should be executed when calling {@link #execute}. <p>When calling {@link #execute()} only the operators that were previously added to the list are executed. <p>This is not meant to be used by users. The API methods that create operators must call this method.
public static StreamExecutionEnvironment getExecutionEnvironment() { if (contextEnvironmentFactory != null) { return contextEnvironmentFactory.createExecutionEnvironment(); } // because the streaming project depends on "flink-clients" (and not the other way around) // we currently need to intercept the data set environment and create a dependent stream env. // this should be fixed once we rework the project dependencies ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); if (env instanceof ContextEnvironment) { return new StreamContextEnvironment((ContextEnvironment) env); } else if (env instanceof OptimizerPlanEnvironment || env instanceof PreviewPlanEnvironment) { return new StreamPlanEnvironment(env); } else { return createLocalEnvironment(); } }
Creates an execution environment that represents the context in which the program is currently executed. If the program is invoked standalone, this method returns a local execution environment, as returned by {@link #createLocalEnvironment()}. @return The execution environment of the context in which the program is executed.
public static LocalStreamEnvironment createLocalEnvironment(int parallelism, Configuration configuration) { final LocalStreamEnvironment currentEnvironment; currentEnvironment = new LocalStreamEnvironment(configuration); currentEnvironment.setParallelism(parallelism); return currentEnvironment; }
Creates a {@link LocalStreamEnvironment}. The local execution environment will run the program in a multi-threaded fashion in the same JVM as the environment was created in. It will use the parallelism specified in the parameter. @param parallelism The parallelism for the local environment. @param configuration A custom configuration for the local (mini) cluster @return A local execution environment with the specified parallelism.
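For example (a sketch; the parallelism and the custom configuration are arbitrary):

Configuration conf = new Configuration();                      // any custom settings go here
LocalStreamEnvironment localEnv =
        StreamExecutionEnvironment.createLocalEnvironment(2, conf);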
public static StreamExecutionEnvironment createRemoteEnvironment( String host, int port, String... jarFiles) { return new RemoteStreamEnvironment(host, port, jarFiles); }
Creates a {@link RemoteStreamEnvironment}. The remote environment sends (parts of) the program to a cluster for execution. Note that all file paths used in the program must be accessible from the cluster. The execution will use no parallelism, unless the parallelism is set explicitly via {@link #setParallelism}. @param host The host name or address of the master (JobManager), where the program should be executed. @param port The port of the master (JobManager), where the program should be executed. @param jarFiles The JAR files with code that needs to be shipped to the cluster. If the program uses user-defined functions, user-defined input formats, or any libraries, those must be provided in the JAR files. @return A remote environment that executes the program on a cluster.
public static StreamExecutionEnvironment createRemoteEnvironment( String host, int port, int parallelism, String... jarFiles) { RemoteStreamEnvironment env = new RemoteStreamEnvironment(host, port, jarFiles); env.setParallelism(parallelism); return env; }
Creates a {@link RemoteStreamEnvironment}. The remote environment sends (parts of) the program to a cluster for execution. Note that all file paths used in the program must be accessible from the cluster. The execution will use the specified parallelism. @param host The host name or address of the master (JobManager), where the program should be executed. @param port The port of the master (JobManager), where the program should be executed. @param parallelism The parallelism to use during the execution. @param jarFiles The JAR files with code that needs to be shipped to the cluster. If the program uses user-defined functions, user-defined input formats, or any libraries, those must be provided in the JAR files. @return A remote environment that executes the program on a cluster.
public static StreamExecutionEnvironment createRemoteEnvironment( String host, int port, Configuration clientConfig, String... jarFiles) { return new RemoteStreamEnvironment(host, port, clientConfig, jarFiles); }
Creates a {@link RemoteStreamEnvironment}. The remote environment sends (parts of) the program to a cluster for execution. Note that all file paths used in the program must be accessible from the cluster. The execution will use the specified parallelism. @param host The host name or address of the master (JobManager), where the program should be executed. @param port The port of the master (JobManager), where the program should be executed. @param clientConfig The configuration used by the client that connects to the remote cluster. @param jarFiles The JAR files with code that needs to be shipped to the cluster. If the program uses user-defined functions, user-defined input formats, or any libraries, those must be provided in the JAR files. @return A remote environment that executes the program on a cluster.
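A sketch of submitting against a remote cluster; the host, port, parallelism, and jar path are hypothetical placeholders:

StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
        "jobmanager-host", 6123, 4, "/path/to/user-program.jar");
// define the program against 'env' as usual, then call env.execute(...)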
public StreamQueryConfig withIdleStateRetentionTime(Time minTime, Time maxTime) { if (maxTime.toMilliseconds() - minTime.toMilliseconds() < 300000 && !(maxTime.toMilliseconds() == 0 && minTime.toMilliseconds() == 0)) { throw new IllegalArgumentException( "Difference between minTime: " + minTime.toString() + " and maxTime: " + maxTime.toString() + " should be at least 5 minutes."); } minIdleStateRetentionTime = minTime.toMilliseconds(); maxIdleStateRetentionTime = maxTime.toMilliseconds(); return this; }
Specifies a minimum and a maximum time interval for how long idle state, i.e., state which was not updated, will be retained. State will be retained for at least the minimum time of inactivity and will be cleared at the latest once it has been idle for the maximum time. <p>When new data arrives for previously cleaned-up state, the new data will be handled as if it was the first data. This can result in previous results being overwritten. <p>Set to 0 (zero) to never clean up the state. <p>NOTE: Cleaning up state requires additional bookkeeping which becomes less expensive for larger differences of minTime and maxTime. The difference between minTime and maxTime must be at least 5 minutes. @param minTime The minimum time interval for which idle state is retained. Set to 0 (zero) to never clean up the state. @param maxTime The maximum time interval for which idle state is retained. Must be at least 5 minutes greater than minTime. Set to 0 (zero) to never clean up the state. @return the updated StreamQueryConfig to allow call chaining
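For example, a sketch that assumes 'qConfig' is the StreamQueryConfig used when emitting the query results:

// keep idle state for at least 12 hours and drop it after at most 24 hours of inactivity
qConfig.withIdleStateRetentionTime(Time.hours(12), Time.hours(24));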
public static <T> DataSet<LongValue> count(DataSet<T> input) { return input .map(new MapTo<>(new LongValue(1))) .returns(LONG_VALUE_TYPE_INFO) .name("Emit 1") .reduce(new AddLongValue()) .name("Sum"); }
Count the number of elements in a DataSet. @param input DataSet of elements to be counted @param <T> element type @return count
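A usage sketch, assuming the static count helper above is statically imported (its enclosing utility class is not shown here):

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Long> values = env.generateSequence(1, 1000);

DataSet<LongValue> total = count(values);   // a single LongValue record holding the element count
total.print();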
@Override public int compareTo(ValueArray<NullValue> o) { NullValueArray other = (NullValueArray) o; return Integer.compare(position, other.position); }
Compares this value array with the given value array. Ordering is defined by the number of elements stored in the array (the position field). @param o the value array to compare against @return a negative integer, zero, or a positive integer as this array stores fewer, the same number of, or more elements than the given array
public static void main(String[] args) throws Exception { // parse the parameters final ParameterTool params = ParameterTool.fromArgs(args); final long windowSize = params.getLong("windowSize", 2000); final long rate = params.getLong("rate", 3L); System.out.println("Using windowSize=" + windowSize + ", data rate=" + rate); System.out.println("To customize example, use: WindowJoin [--windowSize <window-size-in-millis>] [--rate <elements-per-second>]"); // obtain execution environment, run this example in "ingestion time" StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime); // make parameters available in the web interface env.getConfig().setGlobalJobParameters(params); // create the data sources for both grades and salaries DataStream<Tuple2<String, Integer>> grades = GradeSource.getSource(env, rate); DataStream<Tuple2<String, Integer>> salaries = SalarySource.getSource(env, rate); // run the actual window join program // for testability, this functionality is in a separate method. DataStream<Tuple3<String, Integer, Integer>> joinedStream = runWindowJoin(grades, salaries, windowSize); // print the results with a single thread, rather than in parallel joinedStream.print().setParallelism(1); // execute program env.execute("Windowed Join Example"); }
Entry point of the windowed join example. Parses the command-line parameters (--windowSize in milliseconds and --rate in elements per second), creates the grade and salary sources, joins them over the configured window, prints the joined stream with a single thread, and executes the program. @param args command-line arguments @throws Exception if the program execution fails
public static void install(SecurityConfiguration config) throws Exception { // install the security modules List<SecurityModule> modules = new ArrayList<>(); try { for (SecurityModuleFactory moduleFactory : config.getSecurityModuleFactories()) { SecurityModule module = moduleFactory.createModule(config); // can be null if a SecurityModule is not supported in the current environment if (module != null) { module.install(); modules.add(module); } } } catch (Exception ex) { throw new Exception("unable to establish the security context", ex); } installedModules = modules; // First check if we have Hadoop in the ClassPath. If not, we simply don't do anything. try { Class.forName( "org.apache.hadoop.security.UserGroupInformation", false, SecurityUtils.class.getClassLoader()); // install a security context // use the Hadoop login user as the subject of the installed security context if (!(installedContext instanceof NoOpSecurityContext)) { LOG.warn("overriding previous security context"); } UserGroupInformation loginUser = UserGroupInformation.getLoginUser(); installedContext = new HadoopSecurityContext(loginUser); } catch (ClassNotFoundException e) { LOG.info("Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath."); } catch (LinkageError e) { LOG.error("Cannot install HadoopSecurityContext.", e); } }
Installs a process-wide security configuration. <p>Applies the configuration using the available security modules (i.e. Hadoop, JAAS).
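A bootstrap sketch, assuming a SecurityConfiguration constructor that wraps the loaded Flink Configuration (hedged: the exact constructor may differ between versions):

Configuration flinkConf = GlobalConfiguration.loadConfiguration();
SecurityUtils.install(new SecurityConfiguration(flinkConf));
// subsequent components now run under the installed security context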
public static <REQ extends MessageBody> ByteBuf serializeRequest( final ByteBufAllocator alloc, final long requestId, final REQ request) { Preconditions.checkNotNull(request); return writePayload(alloc, requestId, MessageType.REQUEST, request.serialize()); }
Serializes the request sent to the {@link org.apache.flink.queryablestate.network.AbstractServerBase}. @param alloc The {@link ByteBufAllocator} used to allocate the buffer to serialize the message into. @param requestId The id of the request to which the message refers. @param request The request to be serialized. @return A {@link ByteBuf} containing the serialized message.
public static <RESP extends MessageBody> ByteBuf serializeResponse( final ByteBufAllocator alloc, final long requestId, final RESP response) { Preconditions.checkNotNull(response); return writePayload(alloc, requestId, MessageType.REQUEST_RESULT, response.serialize()); }
Serializes the response sent to the {@link org.apache.flink.queryablestate.network.Client}. @param alloc The {@link ByteBufAllocator} used to allocate the buffer to serialize the message into. @param requestId The id of the request to which the message refers. @param response The response to be serialized. @return A {@link ByteBuf} containing the serialized message.
public static ByteBuf serializeRequestFailure( final ByteBufAllocator alloc, final long requestId, final Throwable cause) throws IOException { final ByteBuf buf = alloc.ioBuffer(); // Frame length is set at the end buf.writeInt(0); writeHeader(buf, MessageType.REQUEST_FAILURE); buf.writeLong(requestId); try (ByteBufOutputStream bbos = new ByteBufOutputStream(buf); ObjectOutput out = new ObjectOutputStream(bbos)) { out.writeObject(cause); } // Set frame length int frameLength = buf.readableBytes() - Integer.BYTES; buf.setInt(0, frameLength); return buf; }
Serializes the exception containing the failure message sent to the {@link org.apache.flink.queryablestate.network.Client} in case of protocol-related errors. @param alloc The {@link ByteBufAllocator} used to allocate the buffer to serialize the message into. @param requestId The id of the request to which the message refers. @param cause The exception thrown at the server. @return A {@link ByteBuf} containing the serialized message.
public static ByteBuf serializeServerFailure( final ByteBufAllocator alloc, final Throwable cause) throws IOException { final ByteBuf buf = alloc.ioBuffer(); // Frame length is set at end buf.writeInt(0); writeHeader(buf, MessageType.SERVER_FAILURE); try (ByteBufOutputStream bbos = new ByteBufOutputStream(buf); ObjectOutput out = new ObjectOutputStream(bbos)) { out.writeObject(cause); } // Set frame length int frameLength = buf.readableBytes() - Integer.BYTES; buf.setInt(0, frameLength); return buf; }
Serializes the failure message sent to the {@link org.apache.flink.queryablestate.network.Client} in case of server related errors. @param alloc The {@link ByteBufAllocator} used to allocate the buffer to serialize the message into. @param cause The exception thrown at the server. @return The failure message.
private static void writeHeader(final ByteBuf buf, final MessageType messageType) { buf.writeInt(VERSION); buf.writeInt(messageType.ordinal()); }
Helper for serializing the header. @param buf The {@link ByteBuf} to serialize the header into. @param messageType The {@link MessageType} of the message this header refers to.
private static ByteBuf writePayload( final ByteBufAllocator alloc, final long requestId, final MessageType messageType, final byte[] payload) { final int frameLength = HEADER_LENGTH + REQUEST_ID_SIZE + payload.length; final ByteBuf buf = alloc.ioBuffer(frameLength + Integer.BYTES); buf.writeInt(frameLength); writeHeader(buf, messageType); buf.writeLong(requestId); buf.writeBytes(payload); return buf; }
Helper for serializing the messages. @param alloc The {@link ByteBufAllocator} used to allocate the buffer to serialize the message into. @param requestId The id of the request to which the message refers. @param messageType The {@link MessageType type of the message}. @param payload The serialized version of the message. @return A {@link ByteBuf} containing the serialized message.
public static MessageType deserializeHeader(final ByteBuf buf) { // checking the version int version = buf.readInt(); Preconditions.checkState(version == VERSION, "Version Mismatch: Found " + version + ", Expected: " + VERSION + '.'); // fetching the message type int msgType = buf.readInt(); MessageType[] values = MessageType.values(); Preconditions.checkState(msgType >= 0 && msgType < values.length, "Illegal message type with index " + msgType + '.'); return values[msgType]; }
De-serializes the header and returns the {@link MessageType}. <pre> <b>The buffer is expected to be at the header position.</b> </pre> @param buf The {@link ByteBuf} containing the serialized header. @return The message type. @throws IllegalStateException If unexpected message version or message type.
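Taken together, writePayload produces a length-prefixed frame; a decoding sketch for request/response messages on the receiving side (assuming the statics above are in scope and 'buf' is the received ByteBuf):

// | frameLength : int | version : int | messageType : int | requestId : long | payload bytes |
int frameLength = buf.readInt();              // length of everything following this int
MessageType type = deserializeHeader(buf);    // validates the version and returns the type
long requestId = buf.readLong();
// the remaining readable bytes are the serialized request or response body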
public REQ deserializeRequest(final ByteBuf buf) { Preconditions.checkNotNull(buf); return requestDeserializer.deserializeMessage(buf); }
De-serializes the request sent to the {@link org.apache.flink.queryablestate.network.AbstractServerBase}. <pre> <b>The buffer is expected to be at the request position.</b> </pre> @param buf The {@link ByteBuf} containing the serialized request. @return The request.
public RESP deserializeResponse(final ByteBuf buf) { Preconditions.checkNotNull(buf); return responseDeserializer.deserializeMessage(buf); }
De-serializes the response sent to the {@link org.apache.flink.queryablestate.network.Client}. <pre> <b>The buffer is expected to be at the response position.</b> </pre> @param buf The {@link ByteBuf} containing the serialized response. @return The response.
public static RequestFailure deserializeRequestFailure(final ByteBuf buf) throws IOException, ClassNotFoundException { long requestId = buf.readLong(); Throwable cause; try (ByteBufInputStream bis = new ByteBufInputStream(buf); ObjectInputStream in = new ObjectInputStream(bis)) { cause = (Throwable) in.readObject(); } return new RequestFailure(requestId, cause); }
De-serializes the {@link RequestFailure} sent to the {@link org.apache.flink.queryablestate.network.Client} in case of protocol related errors. <pre> <b>The buffer is expected to be at the correct position.</b> </pre> @param buf The {@link ByteBuf} containing the serialized failure message. @return The failure message.
public static Throwable deserializeServerFailure(final ByteBuf buf) throws IOException, ClassNotFoundException { try (ByteBufInputStream bis = new ByteBufInputStream(buf); ObjectInputStream in = new ObjectInputStream(bis)) { return (Throwable) in.readObject(); } }
De-serializes the failure message sent to the {@link org.apache.flink.queryablestate.network.Client} in case of server related errors. <pre> <b>The buffer is expected to be at the correct position.</b> </pre> @param buf The {@link ByteBuf} containing the serialized failure message. @return The failure message.
void addColumn(String family, String qualifier, Class<?> clazz) { Preconditions.checkNotNull(family, "family name"); Preconditions.checkNotNull(qualifier, "qualifier name"); Preconditions.checkNotNull(clazz, "class type"); Map<String, TypeInformation<?>> qualifierMap = this.familyMap.get(family); if (!HBaseRowInputFormat.isSupportedType(clazz)) { // throw exception throw new IllegalArgumentException("Unsupported class type found " + clazz + ". " + "Better to use byte[].class and deserialize using user defined scalar functions"); } if (qualifierMap == null) { qualifierMap = new LinkedHashMap<>(); } qualifierMap.put(qualifier, TypeExtractor.getForClass(clazz)); familyMap.put(family, qualifierMap); }
Adds a column defined by family, qualifier, and type to the table schema. @param family the family name @param qualifier the qualifier name @param clazz the data type of the qualifier
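A usage sketch; the family and qualifier names are made up, byte[].class is always accepted, and the other types are assumed to pass the isSupportedType check:

schema.addColumn("info", "name", String.class);
schema.addColumn("info", "age", Integer.class);
schema.addColumn("payload", "raw", byte[].class);   // recommended fallback type per the error message above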
byte[][] getFamilyKeys() { Charset c = Charset.forName(charset); byte[][] familyKeys = new byte[this.familyMap.size()][]; int i = 0; for (String name : this.familyMap.keySet()) { familyKeys[i++] = name.getBytes(c); } return familyKeys; }
Returns the HBase identifiers of all registered column families. @return The HBase identifiers of all registered column families.
String[] getQualifierNames(String family) { Map<String, TypeInformation<?>> qualifierMap = familyMap.get(family); if (qualifierMap == null) { throw new IllegalArgumentException("Family " + family + " does not exist in schema."); } String[] qualifierNames = new String[qualifierMap.size()]; int i = 0; for (String qualifier: qualifierMap.keySet()) { qualifierNames[i] = qualifier; i++; } return qualifierNames; }
Returns the names of all registered column qualifiers of a specific column family. @param family The name of the column family for which the column qualifier names are returned. @return The names of all registered column qualifiers of a specific column family.
byte[][] getQualifierKeys(String family) { Map<String, TypeInformation<?>> qualifierMap = familyMap.get(family); if (qualifierMap == null) { throw new IllegalArgumentException("Family " + family + " does not exist in schema."); } Charset c = Charset.forName(charset); byte[][] qualifierKeys = new byte[qualifierMap.size()][]; int i = 0; for (String name : qualifierMap.keySet()) { qualifierKeys[i++] = name.getBytes(c); } return qualifierKeys; }
Returns the HBase identifiers of all registered column qualifiers for a specific column family. @param family The name of the column family for which the column qualifier identifiers are returned. @return The HBase identifiers of all registered column qualifiers for a specific column family.
TypeInformation<?>[] getQualifierTypes(String family) { Map<String, TypeInformation<?>> qualifierMap = familyMap.get(family); if (qualifierMap == null) { throw new IllegalArgumentException("Family " + family + " does not exist in schema."); } TypeInformation<?>[] typeInformation = new TypeInformation[qualifierMap.size()]; int i = 0; for (TypeInformation<?> typeInfo : qualifierMap.values()) { typeInformation[i] = typeInfo; i++; } return typeInformation; }
Returns the types of all registered column qualifiers of a specific column family. @param family The name of the column family for which the column qualifier types are returned. @return The types of all registered column qualifiers of a specific column family.
@SuppressWarnings("unchecked") public <T> Class<T> getClass(String key, Class<? extends T> defaultValue, ClassLoader classLoader) throws ClassNotFoundException { Object o = getRawValue(key); if (o == null) { return (Class<T>) defaultValue; } if (o.getClass() == String.class) { return (Class<T>) Class.forName((String) o, true, classLoader); } LOG.warn("Configuration cannot evaluate value " + o + " as a class name"); return (Class<T>) defaultValue; }
Returns the class associated with the given key. The value stored under the key is expected to be the fully qualified class name, which is resolved with the given class loader. @param <T> The type of the class to return. @param key The key pointing to the associated value @param defaultValue The optional default value returned if no entry exists @param classLoader The class loader used to resolve the class. @return The value associated with the given key, or the default value, if no entry for the key exists. @throws ClassNotFoundException If the class name stored under the key cannot be resolved.
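For example (a sketch; the key is hypothetical and java.util.LinkedList merely stands in for a user class; getClass declares ClassNotFoundException, so the caller must handle or declare it):

Configuration config = new Configuration();
config.setString("example.class", "java.util.LinkedList");

Class<?> clazz = config.getClass(
        "example.class", java.util.ArrayList.class, Thread.currentThread().getContextClassLoader());
// clazz is java.util.LinkedList; if the key were absent, ArrayList.class would be returned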
public String getString(String key, String defaultValue) { Object o = getRawValue(key); if (o == null) { return defaultValue; } else { return o.toString(); } }
Returns the value associated with the given key as a string. @param key the key pointing to the associated value @param defaultValue the default value which is returned in case there is no value associated with the given key @return the (default) value associated with the given key
@PublicEvolving public String getString(ConfigOption<String> configOption, String overrideDefault) { Object o = getRawValueFromOption(configOption); return o == null ? overrideDefault : o.toString(); }
Returns the value associated with the given config option as a string. If no value is mapped under any key of the option, it returns the specified default instead of the option's default value. @param configOption The configuration option @param overrideDefault The value to return if no value was mapped for any key of the option @return the configured value associated with the given config option, or the overrideDefault
@PublicEvolving public void setString(ConfigOption<String> key, String value) { setValueInternal(key.key(), value); }
Adds the given value to the configuration object. The main key of the config option will be used to map the value. @param key the option specifying the key to be added @param value the value of the key/value pair to be added
public int getInteger(String key, int defaultValue) { Object o = getRawValue(key); if (o == null) { return defaultValue; } return convertToInt(o, defaultValue); }
Returns the value associated with the given key as an integer. @param key the key pointing to the associated value @param defaultValue the default value which is returned in case there is no value associated with the given key @return the (default) value associated with the given key
@PublicEvolving public int getInteger(ConfigOption<Integer> configOption) { Object o = getValueOrDefaultFromOption(configOption); return convertToInt(o, configOption.defaultValue()); }
Returns the value associated with the given config option as an integer. @param configOption The configuration option @return the (default) value associated with the given config option
@PublicEvolving public int getInteger(ConfigOption<Integer> configOption, int overrideDefault) { Object o = getRawValueFromOption(configOption); if (o == null) { return overrideDefault; } return convertToInt(o, configOption.defaultValue()); }
Returns the value associated with the given config option as an integer. If no value is mapped under any key of the option, it returns the specified default instead of the option's default value. @param configOption The configuration option @param overrideDefault The value to return if no value was mapped for any key of the option @return the configured value associated with the given config option, or the overrideDefault
@PublicEvolving public void setInteger(ConfigOption<Integer> key, int value) { setValueInternal(key.key(), value); }
Adds the given value to the configuration object. The main key of the config option will be used to map the value. @param key the option specifying the key to be added @param value the value of the key/value pair to be added
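A short sketch with a hypothetical ConfigOption, showing how the option-based setters and getters interact with the option's default value:

ConfigOption<Integer> retries = ConfigOptions
        .key("example.retries")                    // hypothetical option key
        .defaultValue(3);

Configuration config = new Configuration();
int unset = config.getInteger(retries);            // 3, the option's default
int overridden = config.getInteger(retries, 10);   // 10, since no value is mapped yet

config.setInteger(retries, 5);
int set = config.getInteger(retries);              // 5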
public long getLong(String key, long defaultValue) { Object o = getRawValue(key); if (o == null) { return defaultValue; } return convertToLong(o, defaultValue); }
Returns the value associated with the given key as a long. @param key the key pointing to the associated value @param defaultValue the default value which is returned in case there is no value associated with the given key @return the (default) value associated with the given key
@PublicEvolving public long getLong(ConfigOption<Long> configOption) { Object o = getValueOrDefaultFromOption(configOption); return convertToLong(o, configOption.defaultValue()); }
Returns the value associated with the given config option as a long integer. @param configOption The configuration option @return the (default) value associated with the given config option
@PublicEvolving public long getLong(ConfigOption<Long> configOption, long overrideDefault) { Object o = getRawValueFromOption(configOption); if (o == null) { return overrideDefault; } return convertToLong(o, configOption.defaultValue()); }
Returns the value associated with the given config option as a long integer. If no value is mapped under any key of the option, it returns the specified default instead of the option's default value. @param configOption The configuration option @param overrideDefault The value to return if no value was mapped for any key of the option @return the configured value associated with the given config option, or the overrideDefault
@PublicEvolving public void setLong(ConfigOption<Long> key, long value) { setValueInternal(key.key(), value); }
Adds the given value to the configuration object. The main key of the config option will be used to map the value. @param key the option specifying the key to be added @param value the value of the key/value pair to be added
public boolean getBoolean(String key, boolean defaultValue) { Object o = getRawValue(key); if (o == null) { return defaultValue; } return convertToBoolean(o); }
Returns the value associated with the given key as a boolean. @param key the key pointing to the associated value @param defaultValue the default value which is returned in case there is no value associated with the given key @return the (default) value associated with the given key
@PublicEvolving public boolean getBoolean(ConfigOption<Boolean> configOption) { Object o = getValueOrDefaultFromOption(configOption); return convertToBoolean(o); }
Returns the value associated with the given config option as a boolean. @param configOption The configuration option @return the (default) value associated with the given config option
@PublicEvolving public boolean getBoolean(ConfigOption<Boolean> configOption, boolean overrideDefault) { Object o = getRawValueFromOption(configOption); if (o == null) { return overrideDefault; } return convertToBoolean(o); }
Returns the value associated with the given config option as a boolean. If no value is mapped under any key of the option, it returns the specified default instead of the option's default value. @param configOption The configuration option @param overrideDefault The value to return if no value was mapped for any key of the option @return the configured value associated with the given config option, or the overrideDefault
@PublicEvolving public void setBoolean(ConfigOption<Boolean> key, boolean value) { setValueInternal(key.key(), value); }
Adds the given value to the configuration object. The main key of the config option will be used to map the value. @param key the option specifying the key to be added @param value the value of the key/value pair to be added
public float getFloat(String key, float defaultValue) { Object o = getRawValue(key); if (o == null) { return defaultValue; } return convertToFloat(o, defaultValue); }
Returns the value associated with the given key as a float. @param key the key pointing to the associated value @param defaultValue the default value which is returned in case there is no value associated with the given key @return the (default) value associated with the given key
@PublicEvolving public float getFloat(ConfigOption<Float> configOption) { Object o = getValueOrDefaultFromOption(configOption); return convertToFloat(o, configOption.defaultValue()); }
Returns the value associated with the given config option as a float. @param configOption The configuration option @return the (default) value associated with the given config option