code: string (67 to 466k characters)
docstring: string (1 to 13.2k characters)
public static boolean isNullOrWhitespaceOnly(String str) { if (str == null || str.length() == 0) { return true; } final int len = str.length(); for (int i = 0; i < len; i++) { if (!Character.isWhitespace(str.charAt(i))) { return false; } } return true; }
Checks if the string is null, empty, or contains only whitespace characters. A whitespace character is defined via {@link Character#isWhitespace(char)}. @param str The string to check @return True, if the string is null or blank, false otherwise.
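A minimal usage sketch of the check above (the enclosing utility class name StringUtils is an assumption):

StringUtils.isNullOrWhitespaceOnly(null);      // true
StringUtils.isNullOrWhitespaceOnly("");        // true
StringUtils.isNullOrWhitespaceOnly(" \t\n");   // true
StringUtils.isNullOrWhitespaceOnly(" flink "); // false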
@Nullable public static String concatenateWithAnd(@Nullable String s1, @Nullable String s2) { if (s1 != null) { return s2 == null ? s1 : s1 + " and " + s2; } else { return s2; } }
If both string arguments are non-null, this method concatenates them with ' and '. If only one of the arguments is non-null, this method returns the non-null argument. If both arguments are null, this method returns null. @param s1 The first string argument @param s2 The second string argument @return The concatenated string, or non-null argument, or null
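A short sketch of the null-handling behavior described above (same assumed enclosing class):

StringUtils.concatenateWithAnd("tom", "jerry"); // "tom and jerry"
StringUtils.concatenateWithAnd("tom", null);    // "tom"
StringUtils.concatenateWithAnd(null, "jerry");  // "jerry"
StringUtils.concatenateWithAnd(null, null);     // null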
public static String toQuotedListString(Object[] values) { return Arrays.stream(values).filter(Objects::nonNull) .map(v -> v.toString().toLowerCase()) .collect(Collectors.joining(", ", "\"", "\"")); }
Generates a string containing a comma-separated list of values in double-quotes. Uses lower-cased values returned from {@link Object#toString()} method for each element in the given array. Null values are skipped. @param values array of elements for the list @return The string with quoted list of elements
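Note that, as implemented, the double quotes wrap the joined list as a whole rather than each element. An illustrative call (enclosing class name assumed):

// Null entries are skipped, remaining values are lower-cased and joined.
String s = StringUtils.toQuotedListString(new Object[]{"FIRST", null, "Second"});
// s is: "first, second"   (a single pair of quotes around the whole list)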
@Override public void runFetchLoop() throws Exception { try { final Handover handover = this.handover; // kick off the actual Kafka consumer consumerThread.start(); while (running) { // this blocks until we get the next records // it automatically re-throws exceptions encountered in the consumer thread final ConsumerRecords<byte[], byte[]> records = handover.pollNext(); // get the records for each topic partition for (KafkaTopicPartitionState<TopicPartition> partition : subscribedPartitionStates()) { List<ConsumerRecord<byte[], byte[]>> partitionRecords = records.records(partition.getKafkaPartitionHandle()); for (ConsumerRecord<byte[], byte[]> record : partitionRecords) { final T value = deserializer.deserialize(record); if (deserializer.isEndOfStream(value)) { // end of stream signaled running = false; break; } // emit the actual record. this also updates offset state atomically // and deals with timestamps and watermark generation emitRecord(value, partition, record.offset(), record); } } } } finally { // this signals the consumer thread that no more work is to be done consumerThread.shutdown(); } // on a clean exit, wait for the runner thread try { consumerThread.join(); } catch (InterruptedException e) { // may be the result of a wake-up interruption after an exception. // we ignore this here and only restore the interruption state Thread.currentThread().interrupt(); } }
------------------------------------------------------------------------
protected void emitRecord( T record, KafkaTopicPartitionState<TopicPartition> partition, long offset, @SuppressWarnings("UnusedParameters") ConsumerRecord<?, ?> consumerRecord) throws Exception { // the 0.9 Fetcher does not try to extract a timestamp emitRecord(record, partition, offset); }
------------------------------------------------------------------------
@Override public TopicPartition createKafkaPartitionHandle(KafkaTopicPartition partition) { return new TopicPartition(partition.getTopic(), partition.getPartition()); }
------------------------------------------------------------------------
public void startThreads() { if (this.sortThread != null) { this.sortThread.start(); } if (this.spillThread != null) { this.spillThread.start(); } if (this.mergeThread != null) { this.mergeThread.start(); } }
Starts all the threads that are used by this sorter.
@Override public void close() { // check if the sorter has been closed before synchronized (this) { if (this.closed) { return; } // mark as closed this.closed = true; } // from here on, the code is in a try block, because even though errors might be thrown in this block, // we need to make sure that all the memory is released. try { // if the result iterator has not been obtained yet, set the exception synchronized (this.iteratorLock) { if (this.iteratorException == null) { this.iteratorException = new IOException("The sorter has been closed."); this.iteratorLock.notifyAll(); } } // stop all the threads if (this.sortThread != null) { try { this.sortThread.shutdown(); } catch (Throwable t) { LOG.error("Error shutting down sorter thread: " + t.getMessage(), t); } } if (this.spillThread != null) { try { this.spillThread.shutdown(); } catch (Throwable t) { LOG.error("Error shutting down spilling thread: " + t.getMessage(), t); } } if (this.mergeThread != null) { try { this.mergeThread.shutdown(); } catch (Throwable t) { LOG.error("Error shutting down merging thread: " + t.getMessage(), t); } } try { if (this.sortThread != null) { this.sortThread.join(); this.sortThread = null; } if (this.spillThread != null) { this.spillThread.join(); this.spillThread = null; } if (this.mergeThread != null) { this.mergeThread.join(); this.mergeThread = null; } } catch (InterruptedException iex) { LOG.debug("Closing of sort/merger was interrupted. " + "The reading/sorting/spilling/merging threads may still be working.", iex); } } finally { releaseSortMemory(); // Eliminate object references for MemorySegments. circularQueues = null; currWriteBuffer = null; iterator = null; merger.close(); channelManager.close(); } }
Shuts down all the threads initiated by this sorter. Also releases all previously allocated memory, if it has not yet been released by the threads, and closes and deletes all channels (removing the temporary files). <p>The threads are signaled to exit directly, but depending on their current operation, this may take a while. The sorting thread, for example, will not finish before the current batch is sorted. This method attempts to wait for the working threads to exit. If it is interrupted, however, the method returns immediately, and there is no guarantee how long the threads continue to exist and occupy resources afterwards.
private void setResultIterator(MutableObjectIterator<BinaryRow> iterator) { synchronized (this.iteratorLock) { // set the result iterator only, if no exception has occurred if (this.iteratorException == null) { this.iterator = iterator; this.iteratorLock.notifyAll(); } } }
Sets the result iterator. By setting the result iterator, all threads that are waiting for the result iterator are notified and will obtain it. @param iterator The result iterator to set.
@Override public void dispose() { IOUtils.closeQuietly(cancelStreamRegistry); if (kvStateRegistry != null) { kvStateRegistry.unregisterAll(); } lastName = null; lastState = null; keyValueStatesByName.clear(); }
Closes the state backend, releasing all internal resources, but does not delete any persistent checkpoint data.
@SuppressWarnings("unchecked") @Override public <N, S extends State> S getPartitionedState( final N namespace, final TypeSerializer<N> namespaceSerializer, final StateDescriptor<S, ?> stateDescriptor) throws Exception { checkNotNull(namespace, "Namespace"); if (lastName != null && lastName.equals(stateDescriptor.getName())) { lastState.setCurrentNamespace(namespace); return (S) lastState; } InternalKvState<K, ?, ?> previous = keyValueStatesByName.get(stateDescriptor.getName()); if (previous != null) { lastState = previous; lastState.setCurrentNamespace(namespace); lastName = stateDescriptor.getName(); return (S) previous; } final S state = getOrCreateKeyedState(namespaceSerializer, stateDescriptor); final InternalKvState<K, N, ?> kvState = (InternalKvState<K, N, ?>) state; lastName = stateDescriptor.getName(); lastState = kvState; kvState.setCurrentNamespace(namespace); return state; }
TODO: NOTE: This method does a lot of work caching / retrieving states just to update the namespace. This method should be removed for the sake of namespaces being lazily fetched from the keyed state backend, or being set on the state directly. @see KeyedStateBackend
@SuppressWarnings("unchecked") public static <T> T stripProxy(@Nullable final WrappingProxy<T> wrappingProxy) { if (wrappingProxy == null) { return null; } T delegate = wrappingProxy.getWrappedDelegate(); int numProxiesStripped = 0; while (delegate instanceof WrappingProxy) { throwIfSafetyNetExceeded(++numProxiesStripped); delegate = ((WrappingProxy<T>) delegate).getWrappedDelegate(); } return delegate; }
Expects a proxy, and returns the unproxied delegate. @param wrappingProxy The initial proxy. @param <T> The type of the delegate. Note that all proxies in the chain must be assignable to T. @return The unproxied delegate.
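A hedged sketch of unwrapping a two-level proxy chain; the utility class name WrappingProxyUtil is assumed, and the anonymous proxies below are purely illustrative. The delegate type is Object so that the inner proxy is itself assignable to it, as the docstring requires:

final WrappingProxy<Object> inner = new WrappingProxy<Object>() {
    @Override
    public Object getWrappedDelegate() {
        return "real delegate";
    }
};
WrappingProxy<Object> outer = new WrappingProxy<Object>() {
    @Override
    public Object getWrappedDelegate() {
        return inner; // delegate is itself a WrappingProxy, so stripping continues
    }
};
Object unwrapped = WrappingProxyUtil.stripProxy(outer); // "real delegate"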
public static Short min(Short a, Short b) { return a <= b ? a : b; }
Like Math.min() except for shorts.
public static Short max(Short a, Short b) { return a >= b ? a : b; }
Like Math.max() except for shorts.
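A brief illustration of the boxed-short helpers above (the enclosing class name is an assumption); note both dereference their arguments, so a null input would throw a NullPointerException:

Short a = 3, b = 7;             // int constants narrowed and boxed
Short lo = MathUtils.min(a, b); // 3
Short hi = MathUtils.max(a, b); // 7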
private <T> WatermarkGaugeExposingOutput<StreamRecord<T>> createOutputCollector( StreamTask<?, ?> containingTask, StreamConfig operatorConfig, Map<Integer, StreamConfig> chainedConfigs, ClassLoader userCodeClassloader, Map<StreamEdge, RecordWriterOutput<?>> streamOutputs, List<StreamOperator<?>> allOperators) { List<Tuple2<WatermarkGaugeExposingOutput<StreamRecord<T>>, StreamEdge>> allOutputs = new ArrayList<>(4); // create collectors for the network outputs for (StreamEdge outputEdge : operatorConfig.getNonChainedOutputs(userCodeClassloader)) { @SuppressWarnings("unchecked") RecordWriterOutput<T> output = (RecordWriterOutput<T>) streamOutputs.get(outputEdge); allOutputs.add(new Tuple2<>(output, outputEdge)); } // Create collectors for the chained outputs for (StreamEdge outputEdge : operatorConfig.getChainedOutputs(userCodeClassloader)) { int outputId = outputEdge.getTargetId(); StreamConfig chainedOpConfig = chainedConfigs.get(outputId); WatermarkGaugeExposingOutput<StreamRecord<T>> output = createChainedOperator( containingTask, chainedOpConfig, chainedConfigs, userCodeClassloader, streamOutputs, allOperators, outputEdge.getOutputTag()); allOutputs.add(new Tuple2<>(output, outputEdge)); } // if there are multiple outputs, or the outputs are directed, we need to // wrap them as one output List<OutputSelector<T>> selectors = operatorConfig.getOutputSelectors(userCodeClassloader); if (selectors == null || selectors.isEmpty()) { // simple path, no selector necessary if (allOutputs.size() == 1) { return allOutputs.get(0).f0; } else { // send to N outputs. Note that this includes the special case // of sending to zero outputs @SuppressWarnings({"unchecked", "rawtypes"}) Output<StreamRecord<T>>[] asArray = new Output[allOutputs.size()]; for (int i = 0; i < allOutputs.size(); i++) { asArray[i] = allOutputs.get(i).f0; } // This is the inverse of creating the normal ChainingOutput. // If the chaining output does not copy we need to copy in the broadcast output, // otherwise multi-chaining would not work correctly. if (containingTask.getExecutionConfig().isObjectReuseEnabled()) { return new CopyingBroadcastingOutputCollector<>(asArray, this); } else { return new BroadcastingOutputCollector<>(asArray, this); } } } else { // selector present, more complex routing necessary // This is the inverse of creating the normal ChainingOutput. // If the chaining output does not copy we need to copy in the broadcast output, // otherwise multi-chaining would not work correctly. if (containingTask.getExecutionConfig().isObjectReuseEnabled()) { return new CopyingDirectedOutput<>(selectors, allOutputs); } else { return new DirectedOutput<>(selectors, allOutputs); } } }
------------------------------------------------------------------------
@Override public void close() { IOUtils.closeQuietly(defaultColumnFamilyHandle); IOUtils.closeQuietly(nativeMetricMonitor); IOUtils.closeQuietly(db); // Making sure the already created column family options will be closed columnFamilyDescriptors.forEach((cfd) -> IOUtils.closeQuietly(cfd.getOptions())); }
Necessary cleanup, performed only if the restore operation failed.
public static void main(String[] args) throws Exception { final ParameterTool params = ParameterTool.fromArgs(args); final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime); env.getConfig().setGlobalJobParameters(params); @SuppressWarnings({"rawtypes", "serial"}) DataStream<Tuple4<Integer, Integer, Double, Long>> carData; if (params.has("input")) { carData = env.readTextFile(params.get("input")).map(new ParseCarData()); } else { System.out.println("Executing TopSpeedWindowing example with default input data set."); System.out.println("Use --input to specify file input."); carData = env.addSource(CarSource.create(2)); } int evictionSec = 10; double triggerMeters = 50; DataStream<Tuple4<Integer, Integer, Double, Long>> topSpeeds = carData .assignTimestampsAndWatermarks(new CarTimestamp()) .keyBy(0) .window(GlobalWindows.create()) .evictor(TimeEvictor.of(Time.of(evictionSec, TimeUnit.SECONDS))) .trigger(DeltaTrigger.of(triggerMeters, new DeltaFunction<Tuple4<Integer, Integer, Double, Long>>() { private static final long serialVersionUID = 1L; @Override public double getDelta( Tuple4<Integer, Integer, Double, Long> oldDataPoint, Tuple4<Integer, Integer, Double, Long> newDataPoint) { return newDataPoint.f2 - oldDataPoint.f2; } }, carData.getType().createSerializer(env.getConfig()))) .maxBy(1); if (params.has("output")) { topSpeeds.writeAsText(params.get("output")); } else { System.out.println("Printing result to stdout. Use --output to specify output path."); topSpeeds.print(); } env.execute("CarTopSpeedWindowingExample"); }
*************************************************************************
@Override public BaseRow addInput(@Nullable BaseRow previousAcc, BaseRow input) throws Exception { BaseRow currentAcc; if (previousAcc == null) { currentAcc = localAgg.createAccumulators(); } else { currentAcc = previousAcc; } localAgg.setAccumulators(currentAcc); localAgg.merge(input); return localAgg.getAccumulators(); }
The {@code previousAcc} is an accumulator, while the {@code input} is a row in the &lt;key, accumulator&gt; schema; the generated {@link #localAgg} will project the {@code input} to an accumulator in its merge method.
@Override public void writeFixed(byte[] bytes, int start, int len) throws IOException { out.write(bytes, start, len); }
--------------------------------------------------------------------------------------------
@Override public void writeString(String str) throws IOException { byte[] bytes = Utf8.getBytesFor(str); writeBytes(bytes, 0, bytes.length); }
--------------------------------------------------------------------------------------------
public static void writeVarLongCount(DataOutput out, long val) throws IOException { if (val < 0) { throw new IOException("Illegal count (must be non-negative): " + val); } while ((val & ~0x7FL) != 0) { out.write(((int) val) | 0x80); val >>>= 7; } out.write((int) val); }
--------------------------------------------------------------------------------------------
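The loop above produces a standard variable-length encoding: seven value bits per byte, with the high bit marking a continuation byte. A hypothetical trace for the count 300, relying on DataOutput.write(int) keeping only the low eight bits:

// writeVarLongCount(out, 300):
//   pass 1: 300 has bits above 0x7F, so (300 | 0x80) truncated to a byte -> 0xAC; val >>>= 7 gives 2
//   exit:   2 fits into 7 bits, so 0x02 is written as the final byte
// Bytes emitted: 0xAC 0x02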
public static boolean isInternalSSLEnabled(Configuration sslConfig) { @SuppressWarnings("deprecation") final boolean fallbackFlag = sslConfig.getBoolean(SecurityOptions.SSL_ENABLED); return sslConfig.getBoolean(SecurityOptions.SSL_INTERNAL_ENABLED, fallbackFlag); }
Checks whether SSL for internal communication (rpc, data transport, blob server) is enabled.
public static boolean isRestSSLEnabled(Configuration sslConfig) { @SuppressWarnings("deprecation") final boolean fallbackFlag = sslConfig.getBoolean(SecurityOptions.SSL_ENABLED); return sslConfig.getBoolean(SecurityOptions.SSL_REST_ENABLED, fallbackFlag); }
Checks whether SSL for the external REST endpoint is enabled.
public static boolean isRestSSLAuthenticationEnabled(Configuration sslConfig) { checkNotNull(sslConfig, "sslConfig"); return isRestSSLEnabled(sslConfig) && sslConfig.getBoolean(SecurityOptions.SSL_REST_AUTHENTICATION_ENABLED); }
Checks whether mutual SSL authentication for the external REST endpoint is enabled.
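A small sketch of the fallback behavior of the three checks above: the scope-specific options win, and the deprecated SecurityOptions.SSL_ENABLED only supplies the default. The utility class name SSLUtils is assumed here:

Configuration conf = new Configuration();
conf.setBoolean(SecurityOptions.SSL_ENABLED, true);           // deprecated global flag, acts as fallback
conf.setBoolean(SecurityOptions.SSL_INTERNAL_ENABLED, false); // explicit per-scope setting

SSLUtils.isInternalSSLEnabled(conf); // false (explicit internal flag wins)
SSLUtils.isRestSSLEnabled(conf);     // true  (falls back to the deprecated global flag)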
public static ServerSocketFactory createSSLServerSocketFactory(Configuration config) throws Exception { SSLContext sslContext = createInternalSSLContext(config); if (sslContext == null) { throw new IllegalConfigurationException("SSL is not enabled"); } String[] protocols = getEnabledProtocols(config); String[] cipherSuites = getEnabledCipherSuites(config); SSLServerSocketFactory factory = sslContext.getServerSocketFactory(); return new ConfiguringSSLServerSocketFactory(factory, protocols, cipherSuites); }
Creates a factory for SSL Server Sockets from the given configuration. SSL Server Sockets are always part of internal communication.
public static SocketFactory createSSLClientSocketFactory(Configuration config) throws Exception { SSLContext sslContext = createInternalSSLContext(config); if (sslContext == null) { throw new IllegalConfigurationException("SSL is not enabled"); } return sslContext.getSocketFactory(); }
Creates a factory for SSL Client Sockets from the given configuration. SSL Client Sockets are always part of internal communication.
public static SSLHandlerFactory createInternalServerSSLEngineFactory(final Configuration config) throws Exception { SSLContext sslContext = createInternalSSLContext(config); if (sslContext == null) { throw new IllegalConfigurationException("SSL is not enabled for internal communication."); } return new SSLHandlerFactory( sslContext, getEnabledProtocols(config), getEnabledCipherSuites(config), false, true, config.getInteger(SecurityOptions.SSL_INTERNAL_HANDSHAKE_TIMEOUT), config.getInteger(SecurityOptions.SSL_INTERNAL_CLOSE_NOTIFY_FLUSH_TIMEOUT)); }
Creates a SSLEngineFactory to be used by internal communication server endpoints.
public static SSLHandlerFactory createRestServerSSLEngineFactory(final Configuration config) throws Exception { SSLContext sslContext = createRestServerSSLContext(config); if (sslContext == null) { throw new IllegalConfigurationException("SSL is not enabled for REST endpoints."); } return new SSLHandlerFactory( sslContext, getEnabledProtocols(config), getEnabledCipherSuites(config), false, isRestSSLAuthenticationEnabled(config), -1, -1); }
Creates a {@link SSLHandlerFactory} to be used by the REST Servers. @param config The application configuration.
public static SSLHandlerFactory createRestClientSSLEngineFactory(final Configuration config) throws Exception { SSLContext sslContext = createRestClientSSLContext(config); if (sslContext == null) { throw new IllegalConfigurationException("SSL is not enabled for REST endpoints."); } return new SSLHandlerFactory( sslContext, getEnabledProtocols(config), getEnabledCipherSuites(config), true, isRestSSLAuthenticationEnabled(config), -1, -1); }
Creates a {@link SSLHandlerFactory} to be used by the REST Clients. @param config The application configuration.
@Nullable private static SSLContext createInternalSSLContext(Configuration config) throws Exception { checkNotNull(config, "config"); if (!isInternalSSLEnabled(config)) { return null; } String keystoreFilePath = getAndCheckOption( config, SecurityOptions.SSL_INTERNAL_KEYSTORE, SecurityOptions.SSL_KEYSTORE); String keystorePassword = getAndCheckOption( config, SecurityOptions.SSL_INTERNAL_KEYSTORE_PASSWORD, SecurityOptions.SSL_KEYSTORE_PASSWORD); String certPassword = getAndCheckOption( config, SecurityOptions.SSL_INTERNAL_KEY_PASSWORD, SecurityOptions.SSL_KEY_PASSWORD); String trustStoreFilePath = getAndCheckOption( config, SecurityOptions.SSL_INTERNAL_TRUSTSTORE, SecurityOptions.SSL_TRUSTSTORE); String trustStorePassword = getAndCheckOption( config, SecurityOptions.SSL_INTERNAL_TRUSTSTORE_PASSWORD, SecurityOptions.SSL_TRUSTSTORE_PASSWORD); String sslProtocolVersion = config.getString(SecurityOptions.SSL_PROTOCOL); int sessionCacheSize = config.getInteger(SecurityOptions.SSL_INTERNAL_SESSION_CACHE_SIZE); int sessionTimeoutMs = config.getInteger(SecurityOptions.SSL_INTERNAL_SESSION_TIMEOUT); KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType()); try (InputStream keyStoreFile = Files.newInputStream(new File(keystoreFilePath).toPath())) { keyStore.load(keyStoreFile, keystorePassword.toCharArray()); } KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); try (InputStream trustStoreFile = Files.newInputStream(new File(trustStoreFilePath).toPath())) { trustStore.load(trustStoreFile, trustStorePassword.toCharArray()); } KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()); kmf.init(keyStore, certPassword.toCharArray()); TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); tmf.init(trustStore); SSLContext sslContext = SSLContext.getInstance(sslProtocolVersion); sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null); if (sessionCacheSize >= 0) { sslContext.getClientSessionContext().setSessionCacheSize(sessionCacheSize); } if (sessionTimeoutMs >= 0) { sslContext.getClientSessionContext().setSessionTimeout(sessionTimeoutMs / 1000); } return sslContext; }
Creates the SSL Context for internal SSL, if internal SSL is configured. For internal SSL, the client and server side configuration are identical, because of mutual authentication.
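Given the options read above, a minimal internal SSL configuration needs a keystore and a truststore (shared between client and server because of mutual authentication). A hedged sketch; paths and passwords are placeholders:

Configuration conf = new Configuration();
conf.setBoolean(SecurityOptions.SSL_INTERNAL_ENABLED, true);
conf.setString(SecurityOptions.SSL_INTERNAL_KEYSTORE, "/path/to/internal.keystore");
conf.setString(SecurityOptions.SSL_INTERNAL_KEYSTORE_PASSWORD, "keystore-password");
conf.setString(SecurityOptions.SSL_INTERNAL_KEY_PASSWORD, "key-password");
conf.setString(SecurityOptions.SSL_INTERNAL_TRUSTSTORE, "/path/to/internal.truststore");
conf.setString(SecurityOptions.SSL_INTERNAL_TRUSTSTORE_PASSWORD, "truststore-password");
// With these set, createInternalSSLContext(conf) should return a non-null SSLContext;
// a missing option makes getAndCheckOption throw an IllegalConfigurationException.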
@Nullable private static SSLContext createRestSSLContext(Configuration config, RestSSLContextConfigMode configMode) throws Exception { checkNotNull(config, "config"); if (!isRestSSLEnabled(config)) { return null; } KeyManager[] keyManagers = null; if (configMode == RestSSLContextConfigMode.SERVER || configMode == RestSSLContextConfigMode.MUTUAL) { String keystoreFilePath = getAndCheckOption( config, SecurityOptions.SSL_REST_KEYSTORE, SecurityOptions.SSL_KEYSTORE); String keystorePassword = getAndCheckOption( config, SecurityOptions.SSL_REST_KEYSTORE_PASSWORD, SecurityOptions.SSL_KEYSTORE_PASSWORD); String certPassword = getAndCheckOption( config, SecurityOptions.SSL_REST_KEY_PASSWORD, SecurityOptions.SSL_KEY_PASSWORD); KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType()); try (InputStream keyStoreFile = Files.newInputStream(new File(keystoreFilePath).toPath())) { keyStore.load(keyStoreFile, keystorePassword.toCharArray()); } KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()); kmf.init(keyStore, certPassword.toCharArray()); keyManagers = kmf.getKeyManagers(); } TrustManager[] trustManagers = null; if (configMode == RestSSLContextConfigMode.CLIENT || configMode == RestSSLContextConfigMode.MUTUAL) { String trustStoreFilePath = getAndCheckOption( config, SecurityOptions.SSL_REST_TRUSTSTORE, SecurityOptions.SSL_TRUSTSTORE); String trustStorePassword = getAndCheckOption( config, SecurityOptions.SSL_REST_TRUSTSTORE_PASSWORD, SecurityOptions.SSL_TRUSTSTORE_PASSWORD); KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); try (InputStream trustStoreFile = Files.newInputStream(new File(trustStoreFilePath).toPath())) { trustStore.load(trustStoreFile, trustStorePassword.toCharArray()); } TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); tmf.init(trustStore); trustManagers = tmf.getTrustManagers(); } String sslProtocolVersion = config.getString(SecurityOptions.SSL_PROTOCOL); SSLContext sslContext = SSLContext.getInstance(sslProtocolVersion); sslContext.init(keyManagers, trustManagers, null); return sslContext; }
Creates an SSL context for the external REST SSL. If mutual authentication is configured the client and the server side configuration are identical.
@Nullable public static SSLContext createRestServerSSLContext(Configuration config) throws Exception { final RestSSLContextConfigMode configMode; if (isRestSSLAuthenticationEnabled(config)) { configMode = RestSSLContextConfigMode.MUTUAL; } else { configMode = RestSSLContextConfigMode.SERVER; } return createRestSSLContext(config, configMode); }
Creates an SSL context for the external REST endpoint server.
@Nullable public static SSLContext createRestClientSSLContext(Configuration config) throws Exception { final RestSSLContextConfigMode configMode; if (isRestSSLAuthenticationEnabled(config)) { configMode = RestSSLContextConfigMode.MUTUAL; } else { configMode = RestSSLContextConfigMode.CLIENT; } return createRestSSLContext(config, configMode); }
Creates an SSL context for clients against the external REST endpoint.
private static String getAndCheckOption(Configuration config, ConfigOption<String> primaryOption, ConfigOption<String> fallbackOption) { String value = config.getString(primaryOption, config.getString(fallbackOption)); if (value != null) { return value; } else { throw new IllegalConfigurationException("The config option " + primaryOption.key() + " or " + fallbackOption.key() + " is missing."); } }
------------------------------------------------------------------------
@Nonnull public final TypeSerializer<T> currentSchemaSerializer() { if (registeredSerializer != null) { checkState( !isRegisteredWithIncompatibleSerializer, "Unable to provide a serializer with the current schema, because the restored state was " + "registered with a new serializer that has incompatible schema."); return registeredSerializer; } // if we are not yet registered with a new serializer, // we can just use the restore serializer to read / write the state. return previousSchemaSerializer(); }
Gets the serializer that recognizes the current serialization schema of the state. This is the serializer that should be used for regular state serialization and deserialization after state has been restored. <p>If this provider was created from a restored state's serializer snapshot, while a new serializer (with a new schema) was not registered for the state (i.e., because the state was never accessed after it was restored), then the schema of the state remains identical. Therefore, in this case, it is guaranteed that the serializer returned by this method is the same as the one returned by {@link #previousSchemaSerializer()}. <p>If this provider was created from a serializer instance, then this always returns that same serializer instance. If later on a snapshot of the previous serializer is supplied via {@link #setPreviousSerializerSnapshotForRestoredState(TypeSerializerSnapshot)}, then the initially supplied serializer instance will be checked for compatibility. @return a serializer that reads and writes in the current schema of the state.
@Nonnull public final TypeSerializer<T> previousSchemaSerializer() { if (cachedRestoredSerializer != null) { return cachedRestoredSerializer; } if (previousSerializerSnapshot == null) { throw new UnsupportedOperationException( "This provider does not contain the state's previous serializer's snapshot. Cannot provide a serializer for the previous schema."); } this.cachedRestoredSerializer = previousSerializerSnapshot.restoreSerializer(); return cachedRestoredSerializer; }
Gets the serializer that recognizes the previous serialization schema of the state. This is the serializer that should be used for restoring the state, i.e. when the state is still in the previous serialization schema. <p>This method only returns a serializer if this provider has the previous serializer's snapshot. Otherwise, trying to access the previous schema serializer will fail with an exception. @return a serializer that reads and writes in the previous schema of the state.
@Override public void onStart() throws Exception { try { startResourceManagerServices(); } catch (Exception e) { final ResourceManagerException exception = new ResourceManagerException(String.format("Could not start the ResourceManager %s", getAddress()), e); onFatalError(exception); throw exception; } }
------------------------------------------------------------------------
@Override public CompletableFuture<RegistrationResponse> registerJobManager( final JobMasterId jobMasterId, final ResourceID jobManagerResourceId, final String jobManagerAddress, final JobID jobId, final Time timeout) { checkNotNull(jobMasterId); checkNotNull(jobManagerResourceId); checkNotNull(jobManagerAddress); checkNotNull(jobId); if (!jobLeaderIdService.containsJob(jobId)) { try { jobLeaderIdService.addJob(jobId); } catch (Exception e) { ResourceManagerException exception = new ResourceManagerException("Could not add the job " + jobId + " to the job id leader service.", e); onFatalError(exception); log.error("Could not add job {} to job leader id service.", jobId, e); return FutureUtils.completedExceptionally(exception); } } log.info("Registering job manager {}@{} for job {}.", jobMasterId, jobManagerAddress, jobId); CompletableFuture<JobMasterId> jobMasterIdFuture; try { jobMasterIdFuture = jobLeaderIdService.getLeaderId(jobId); } catch (Exception e) { // we cannot check the job leader id so let's fail // TODO: Maybe it's also ok to skip this check in case that we cannot check the leader id ResourceManagerException exception = new ResourceManagerException("Cannot obtain the " + "job leader id future to verify the correct job leader.", e); onFatalError(exception); log.debug("Could not obtain the job leader id future to verify the correct job leader."); return FutureUtils.completedExceptionally(exception); } CompletableFuture<JobMasterGateway> jobMasterGatewayFuture = getRpcService().connect(jobManagerAddress, jobMasterId, JobMasterGateway.class); CompletableFuture<RegistrationResponse> registrationResponseFuture = jobMasterGatewayFuture.thenCombineAsync( jobMasterIdFuture, (JobMasterGateway jobMasterGateway, JobMasterId leadingJobMasterId) -> { if (Objects.equals(leadingJobMasterId, jobMasterId)) { return registerJobMasterInternal( jobMasterGateway, jobId, jobManagerAddress, jobManagerResourceId); } else { final String declineMessage = String.format( "The leading JobMaster id %s did not match the received JobMaster id %s. " + "This indicates that a JobMaster leader change has happened.", leadingJobMasterId, jobMasterId); log.debug(declineMessage); return new RegistrationResponse.Decline(declineMessage); } }, getMainThreadExecutor()); // handle exceptions which might have occurred in one of the futures inputs of combine return registrationResponseFuture.handleAsync( (RegistrationResponse registrationResponse, Throwable throwable) -> { if (throwable != null) { if (log.isDebugEnabled()) { log.debug("Registration of job manager {}@{} failed.", jobMasterId, jobManagerAddress, throwable); } else { log.info("Registration of job manager {}@{} failed.", jobMasterId, jobManagerAddress); } return new RegistrationResponse.Decline(throwable.getMessage()); } else { return registrationResponse; } }, getRpcService().getExecutor()); }
------------------------------------------------------------------------
@Override public CompletableFuture<Acknowledge> deregisterApplication( final ApplicationStatus finalStatus, @Nullable final String diagnostics) { log.info("Shut down cluster because application is in {}, diagnostics {}.", finalStatus, diagnostics); try { internalDeregisterApplication(finalStatus, diagnostics); } catch (ResourceManagerException e) { log.warn("Could not properly shutdown the application.", e); } return CompletableFuture.completedFuture(Acknowledge.get()); }
Cleans up the application and shuts down the cluster. @param finalStatus of the Flink application @param diagnostics diagnostics message for the Flink application or {@code null}
private RegistrationResponse registerJobMasterInternal( final JobMasterGateway jobMasterGateway, JobID jobId, String jobManagerAddress, ResourceID jobManagerResourceId) { if (jobManagerRegistrations.containsKey(jobId)) { JobManagerRegistration oldJobManagerRegistration = jobManagerRegistrations.get(jobId); if (Objects.equals(oldJobManagerRegistration.getJobMasterId(), jobMasterGateway.getFencingToken())) { // same registration log.debug("Job manager {}@{} was already registered.", jobMasterGateway.getFencingToken(), jobManagerAddress); } else { // tell old job manager that he is no longer the job leader disconnectJobManager( oldJobManagerRegistration.getJobID(), new Exception("New job leader for job " + jobId + " found.")); JobManagerRegistration jobManagerRegistration = new JobManagerRegistration( jobId, jobManagerResourceId, jobMasterGateway); jobManagerRegistrations.put(jobId, jobManagerRegistration); jmResourceIdRegistrations.put(jobManagerResourceId, jobManagerRegistration); } } else { // new registration for the job JobManagerRegistration jobManagerRegistration = new JobManagerRegistration( jobId, jobManagerResourceId, jobMasterGateway); jobManagerRegistrations.put(jobId, jobManagerRegistration); jmResourceIdRegistrations.put(jobManagerResourceId, jobManagerRegistration); } log.info("Registered job manager {}@{} for job {}.", jobMasterGateway.getFencingToken(), jobManagerAddress, jobId); jobManagerHeartbeatManager.monitorTarget(jobManagerResourceId, new HeartbeatTarget<Void>() { @Override public void receiveHeartbeat(ResourceID resourceID, Void payload) { // the ResourceManager will always send heartbeat requests to the JobManager } @Override public void requestHeartbeat(ResourceID resourceID, Void payload) { jobMasterGateway.heartbeatFromResourceManager(resourceID); } }); return new JobMasterRegistrationSuccess( getFencingToken(), resourceId); }
Registers a new JobMaster. @param jobMasterGateway to communicate with the registering JobMaster @param jobId of the job for which the JobMaster is responsible @param jobManagerAddress address of the JobMaster @param jobManagerResourceId ResourceID of the JobMaster @return RegistrationResponse
private RegistrationResponse registerTaskExecutorInternal( TaskExecutorGateway taskExecutorGateway, String taskExecutorAddress, ResourceID taskExecutorResourceId, int dataPort, HardwareDescription hardwareDescription) { WorkerRegistration<WorkerType> oldRegistration = taskExecutors.remove(taskExecutorResourceId); if (oldRegistration != null) { // TODO :: suggest old taskExecutor to stop itself log.debug("Replacing old registration of TaskExecutor {}.", taskExecutorResourceId); // remove old task manager registration from slot manager slotManager.unregisterTaskManager(oldRegistration.getInstanceID()); } final WorkerType newWorker = workerStarted(taskExecutorResourceId); if (newWorker == null) { log.warn("Discard registration from TaskExecutor {} at ({}) because the framework did " + "not recognize it", taskExecutorResourceId, taskExecutorAddress); return new RegistrationResponse.Decline("unrecognized TaskExecutor"); } else { WorkerRegistration<WorkerType> registration = new WorkerRegistration<>(taskExecutorGateway, newWorker, dataPort, hardwareDescription); log.info("Registering TaskManager with ResourceID {} ({}) at ResourceManager", taskExecutorResourceId, taskExecutorAddress); taskExecutors.put(taskExecutorResourceId, registration); taskManagerHeartbeatManager.monitorTarget(taskExecutorResourceId, new HeartbeatTarget<Void>() { @Override public void receiveHeartbeat(ResourceID resourceID, Void payload) { // the ResourceManager will always send heartbeat requests to the // TaskManager } @Override public void requestHeartbeat(ResourceID resourceID, Void payload) { taskExecutorGateway.heartbeatFromResourceManager(resourceID); } }); return new TaskExecutorRegistrationSuccess( registration.getInstanceID(), resourceId, clusterInformation); } }
Registers a new TaskExecutor. @param taskExecutorGateway to communicate with the registering TaskExecutor @param taskExecutorAddress address of the TaskExecutor @param taskExecutorResourceId ResourceID of the TaskExecutor @param dataPort port used for data transfer @param hardwareDescription of the registering TaskExecutor @return RegistrationResponse
protected void closeJobManagerConnection(JobID jobId, Exception cause) { JobManagerRegistration jobManagerRegistration = jobManagerRegistrations.remove(jobId); if (jobManagerRegistration != null) { final ResourceID jobManagerResourceId = jobManagerRegistration.getJobManagerResourceID(); final JobMasterGateway jobMasterGateway = jobManagerRegistration.getJobManagerGateway(); final JobMasterId jobMasterId = jobManagerRegistration.getJobMasterId(); log.info("Disconnect job manager {}@{} for job {} from the resource manager.", jobMasterId, jobMasterGateway.getAddress(), jobId); jobManagerHeartbeatManager.unmonitorTarget(jobManagerResourceId); jmResourceIdRegistrations.remove(jobManagerResourceId); // tell the job manager about the disconnect jobMasterGateway.disconnectResourceManager(getFencingToken(), cause); } else { log.debug("There was no registered job manager for job {}.", jobId); } }
This method should be called by the framework once it detects that a currently registered job manager has failed. @param jobId identifying the job whose leader shall be disconnected. @param cause The exception which caused the JobManager to fail.
protected void closeTaskManagerConnection(final ResourceID resourceID, final Exception cause) { taskManagerHeartbeatManager.unmonitorTarget(resourceID); WorkerRegistration<WorkerType> workerRegistration = taskExecutors.remove(resourceID); if (workerRegistration != null) { log.info("Closing TaskExecutor connection {} because: {}", resourceID, cause.getMessage()); // TODO :: suggest failed task executor to stop itself slotManager.unregisterTaskManager(workerRegistration.getInstanceID()); workerRegistration.getTaskExecutorGateway().disconnectResourceManager(cause); } else { log.debug( "No open TaskExecutor connection {}. Ignoring close TaskExecutor connection. Closing reason was: {}", resourceID, cause.getMessage()); } }
This method should be called by the framework once it detects that a currently registered task executor has failed. @param resourceID Id of the TaskManager that has failed. @param cause The exception which caused the TaskManager to fail.
protected void onFatalError(Throwable t) { try { log.error("Fatal error occurred in ResourceManager.", t); } catch (Throwable ignored) {} // The fatal error handler implementation should make sure that this call is non-blocking fatalErrorHandler.onFatalError(t); }
Notifies the ResourceManager that a fatal error has occurred and it cannot proceed. @param t The exception describing the fatal error
@Override public void grantLeadership(final UUID newLeaderSessionID) { final CompletableFuture<Boolean> acceptLeadershipFuture = clearStateFuture .thenComposeAsync((ignored) -> tryAcceptLeadership(newLeaderSessionID), getUnfencedMainThreadExecutor()); final CompletableFuture<Void> confirmationFuture = acceptLeadershipFuture.thenAcceptAsync( (acceptLeadership) -> { if (acceptLeadership) { // confirming the leader session ID might be blocking, leaderElectionService.confirmLeaderSessionID(newLeaderSessionID); }
Callback method invoked when the current ResourceManager is granted leadership. @param newLeaderSessionID the unique leader session ID
@Override public void open(MetricConfig config) { String portsConfig = config.getString(ARG_PORT, null); if (portsConfig != null) { Iterator<Integer> ports = NetUtils.getPortRangeFromString(portsConfig); JMXServer server = new JMXServer(); while (ports.hasNext()) { int port = ports.next(); try { server.start(port); LOG.info("Started JMX server on port " + port + "."); // only set our field if the server was actually started jmxServer = server; break; } catch (IOException ioe) { //assume port conflict LOG.debug("Could not start JMX server on port " + port + ".", ioe); try { server.stop(); } catch (Exception e) { LOG.debug("Could not stop JMX server.", e); } } } if (jmxServer == null) { throw new RuntimeException("Could not start JMX server on any configured port. Ports: " + portsConfig); } } LOG.info("Configured JMXReporter with {port:{}}", portsConfig); }
------------------------------------------------------------------------
@Override public void notifyOfAddedMetric(Metric metric, String metricName, MetricGroup group) { final String domain = generateJmxDomain(metricName, group); final Hashtable<String, String> table = generateJmxTable(group.getAllVariables()); AbstractBean jmxMetric; ObjectName jmxName; try { jmxName = new ObjectName(domain, table); } catch (MalformedObjectNameException e) { /** * There is an implementation error on our side if this occurs. Either the domain was modified and no longer * conforms to the JMX domain rules or the table wasn't properly generated. */ LOG.debug("Implementation error. The domain or table does not conform to JMX rules." , e); return; } if (metric instanceof Gauge) { jmxMetric = new JmxGauge((Gauge<?>) metric); } else if (metric instanceof Counter) { jmxMetric = new JmxCounter((Counter) metric); } else if (metric instanceof Histogram) { jmxMetric = new JmxHistogram((Histogram) metric); } else if (metric instanceof Meter) { jmxMetric = new JmxMeter((Meter) metric); } else { LOG.error("Cannot add unknown metric type: {}. This indicates that the metric type " + "is not supported by this reporter.", metric.getClass().getName()); return; } try { synchronized (this) { mBeanServer.registerMBean(jmxMetric, jmxName); registeredMetrics.put(metric, jmxName); } } catch (NotCompliantMBeanException e) { // implementation error on our side LOG.debug("Metric did not comply with JMX MBean rules.", e); } catch (InstanceAlreadyExistsException e) { LOG.warn("A metric with the name " + jmxName + " was already registered.", e); } catch (Throwable t) { LOG.warn("Failed to register metric", t); } }
------------------------------------------------------------------------
static Hashtable<String, String> generateJmxTable(Map<String, String> variables) { Hashtable<String, String> ht = new Hashtable<>(variables.size()); for (Map.Entry<String, String> variable : variables.entrySet()) { ht.put(replaceInvalidChars(variable.getKey()), replaceInvalidChars(variable.getValue())); } return ht; }
------------------------------------------------------------------------
public void notifyKvStateRegistered( JobVertexID jobVertexId, KeyGroupRange keyGroupRange, String registrationName, KvStateID kvStateId, InetSocketAddress kvStateServerAddress) { KvStateLocation location = lookupTable.get(registrationName); if (location == null) { // First registration for this operator, create the location info ExecutionJobVertex vertex = jobVertices.get(jobVertexId); if (vertex != null) { int parallelism = vertex.getMaxParallelism(); location = new KvStateLocation(jobId, jobVertexId, parallelism, registrationName); lookupTable.put(registrationName, location); } else { throw new IllegalArgumentException("Unknown JobVertexID " + jobVertexId); } } // Duplicated name if vertex IDs don't match if (!location.getJobVertexId().equals(jobVertexId)) { IllegalStateException duplicate = new IllegalStateException( "Registration name clash. KvState with name '" + registrationName + "' has already been registered by another operator (" + location.getJobVertexId() + ")."); ExecutionJobVertex vertex = jobVertices.get(jobVertexId); if (vertex != null) { vertex.fail(new SuppressRestartsException(duplicate)); } throw duplicate; } location.registerKvState(keyGroupRange, kvStateId, kvStateServerAddress); }
Notifies the registry about a registered KvState instance. @param jobVertexId JobVertexID the KvState instance belongs to @param keyGroupRange Key group range the KvState instance belongs to @param registrationName Name under which the KvState has been registered @param kvStateId ID of the registered KvState instance @param kvStateServerAddress Server address where to find the KvState instance @throws IllegalArgumentException If JobVertexID does not belong to job @throws IllegalArgumentException If a state has been registered with the same name by another operator. @throws IndexOutOfBoundsException If key group index is out of bounds.
public void notifyKvStateUnregistered( JobVertexID jobVertexId, KeyGroupRange keyGroupRange, String registrationName) { KvStateLocation location = lookupTable.get(registrationName); if (location != null) { // Duplicate name if vertex IDs don't match if (!location.getJobVertexId().equals(jobVertexId)) { throw new IllegalArgumentException("Another operator (" + location.getJobVertexId() + ") registered the KvState " + "under '" + registrationName + "'."); } location.unregisterKvState(keyGroupRange); if (location.getNumRegisteredKeyGroups() == 0) { lookupTable.remove(registrationName); } } else { throw new IllegalArgumentException("Unknown registration name '" + registrationName + "'. " + "Probably registration/unregistration race."); } }
Notifies the registry about an unregistered KvState instance. @param jobVertexId JobVertexID the KvState instance belongs to @param keyGroupRange Key group range the KvState instance belongs to @param registrationName Name under which the KvState has been registered @throws IllegalArgumentException If another operator registered the state instance @throws IllegalArgumentException If the registration name is not known
private Object convert(JsonNode node, TypeInformation<?> info) { if (info == Types.VOID || node.isNull()) { return null; } else if (info == Types.BOOLEAN) { return node.asBoolean(); } else if (info == Types.STRING) { return node.asText(); } else if (info == Types.BIG_DEC) { return node.decimalValue(); } else if (info == Types.BIG_INT) { return node.bigIntegerValue(); } else if (info == Types.SQL_DATE) { return Date.valueOf(node.asText()); } else if (info == Types.SQL_TIME) { // according to RFC 3339 every full-time must have a timezone; // until we have full timezone support, we only support UTC; // users can parse their time as string as a workaround final String time = node.asText(); if (time.indexOf('Z') < 0 || time.indexOf('.') >= 0) { throw new IllegalStateException( "Invalid time format. Only a time in UTC timezone without milliseconds is supported yet. " + "Format: HH:mm:ss'Z'"); } return Time.valueOf(time.substring(0, time.length() - 1)); } else if (info == Types.SQL_TIMESTAMP) { // according to RFC 3339 every date-time must have a timezone; // until we have full timezone support, we only support UTC; // users can parse their time as string as a workaround final String timestamp = node.asText(); if (timestamp.indexOf('Z') < 0) { throw new IllegalStateException( "Invalid timestamp format. Only a timestamp in UTC timezone is supported yet. " + "Format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"); } return Timestamp.valueOf(timestamp.substring(0, timestamp.length() - 1).replace('T', ' ')); } else if (info instanceof RowTypeInfo) { return convertRow(node, (RowTypeInfo) info); } else if (info instanceof ObjectArrayTypeInfo) { return convertObjectArray(node, ((ObjectArrayTypeInfo) info).getComponentInfo()); } else if (info instanceof BasicArrayTypeInfo) { return convertObjectArray(node, ((BasicArrayTypeInfo) info).getComponentInfo()); } else if (info instanceof PrimitiveArrayTypeInfo && ((PrimitiveArrayTypeInfo) info).getComponentType() == Types.BYTE) { return convertByteArray(node); } else { // for types that were specified without JSON schema // e.g. POJOs try { return objectMapper.treeToValue(node, info.getTypeClass()); } catch (JsonProcessingException e) { throw new IllegalStateException("Unsupported type information '" + info + "' for node: " + node); } } }
--------------------------------------------------------------------------------------------
public static RefCountedBufferingFileStream openNew( final FunctionWithException<File, RefCountedFile, IOException> tmpFileProvider) throws IOException { return new RefCountedBufferingFileStream( tmpFileProvider.apply(null), BUFFER_SIZE); }
------------------------- Factory Methods -------------------------
private Tuple2<List<KeyedStateHandle>, List<KeyedStateHandle>> reAssignSubKeyedStates( OperatorState operatorState, List<KeyGroupRange> keyGroupPartitions, int subTaskIndex, int newParallelism, int oldParallelism) { List<KeyedStateHandle> subManagedKeyedState; List<KeyedStateHandle> subRawKeyedState; if (newParallelism == oldParallelism) { if (operatorState.getState(subTaskIndex) != null) { subManagedKeyedState = operatorState.getState(subTaskIndex).getManagedKeyedState().asList(); subRawKeyedState = operatorState.getState(subTaskIndex).getRawKeyedState().asList(); } else { subManagedKeyedState = Collections.emptyList(); subRawKeyedState = Collections.emptyList(); } } else { subManagedKeyedState = getManagedKeyedStateHandles(operatorState, keyGroupPartitions.get(subTaskIndex)); subRawKeyedState = getRawKeyedStateHandles(operatorState, keyGroupPartitions.get(subTaskIndex)); } if (subManagedKeyedState.isEmpty() && subRawKeyedState.isEmpty()) { return new Tuple2<>(Collections.emptyList(), Collections.emptyList()); } else { return new Tuple2<>(subManagedKeyedState, subRawKeyedState); } }
TODO rewrite based on operator id
public static List<KeyedStateHandle> getManagedKeyedStateHandles( OperatorState operatorState, KeyGroupRange subtaskKeyGroupRange) { final int parallelism = operatorState.getParallelism(); List<KeyedStateHandle> subtaskKeyedStateHandles = null; for (int i = 0; i < parallelism; i++) { if (operatorState.getState(i) != null) { Collection<KeyedStateHandle> keyedStateHandles = operatorState.getState(i).getManagedKeyedState(); if (subtaskKeyedStateHandles == null) { subtaskKeyedStateHandles = new ArrayList<>(parallelism * keyedStateHandles.size()); } extractIntersectingState( keyedStateHandles, subtaskKeyGroupRange, subtaskKeyedStateHandles); } } return subtaskKeyedStateHandles; }
Collects {@link KeyGroupsStateHandle managedKeyedStateHandles} which intersect with the given {@link KeyGroupRange} from {@link TaskState operatorState}. @param operatorState all state handles of an operator @param subtaskKeyGroupRange the KeyGroupRange of a subtask @return all managedKeyedStateHandles which intersect with the given KeyGroupRange
public static List<KeyedStateHandle> getRawKeyedStateHandles( OperatorState operatorState, KeyGroupRange subtaskKeyGroupRange) { final int parallelism = operatorState.getParallelism(); List<KeyedStateHandle> extractedKeyedStateHandles = null; for (int i = 0; i < parallelism; i++) { if (operatorState.getState(i) != null) { Collection<KeyedStateHandle> rawKeyedState = operatorState.getState(i).getRawKeyedState(); if (extractedKeyedStateHandles == null) { extractedKeyedStateHandles = new ArrayList<>(parallelism * rawKeyedState.size()); } extractIntersectingState( rawKeyedState, subtaskKeyGroupRange, extractedKeyedStateHandles); } } return extractedKeyedStateHandles; }
Collects {@link KeyGroupsStateHandle rawKeyedStateHandles} which intersect with the given {@link KeyGroupRange} from {@link TaskState operatorState}. @param operatorState all state handles of an operator @param subtaskKeyGroupRange the KeyGroupRange of a subtask @return all rawKeyedStateHandles which intersect with the given KeyGroupRange
private static void extractIntersectingState( Collection<KeyedStateHandle> originalSubtaskStateHandles, KeyGroupRange rangeToExtract, List<KeyedStateHandle> extractedStateCollector) { for (KeyedStateHandle keyedStateHandle : originalSubtaskStateHandles) { if (keyedStateHandle != null) { KeyedStateHandle intersectedKeyedStateHandle = keyedStateHandle.getIntersection(rangeToExtract); if (intersectedKeyedStateHandle != null) { extractedStateCollector.add(intersectedKeyedStateHandle); } } } }
Extracts certain key group ranges from the given state handles and adds them to the collector.
public static List<KeyGroupRange> createKeyGroupPartitions(int numberKeyGroups, int parallelism) { Preconditions.checkArgument(numberKeyGroups >= parallelism); List<KeyGroupRange> result = new ArrayList<>(parallelism); for (int i = 0; i < parallelism; ++i) { result.add(KeyGroupRangeAssignment.computeKeyGroupRangeForOperatorIndex(numberKeyGroups, parallelism, i)); } return result; }
Groups the available set of key groups into key group partitions. A key group partition is the set of key groups which is assigned to the same task. Each set of the returned list constitutes a key group partition. <p> <b>IMPORTANT</b>: The assignment of key groups to partitions has to be in sync with the KeyGroupStreamPartitioner. @param numberKeyGroups Number of available key groups (indexed from 0 to numberKeyGroups - 1) @param parallelism Parallelism to generate the key group partitioning for @return List of key group partitions
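To make the partitioning concrete, a small hedged example (the enclosing class name StateAssignmentOperation is assumed): with 10 key groups and parallelism 3, the returned ranges are contiguous and together cover every key group exactly once.

List<KeyGroupRange> parts = StateAssignmentOperation.createKeyGroupPartitions(10, 3);
// With the usual even assignment this yields roughly [0, 3], [4, 6], [7, 9].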
private static void checkParallelismPreconditions(OperatorState operatorState, ExecutionJobVertex executionJobVertex) { //----------------------------------------max parallelism preconditions------------------------------------- if (operatorState.getMaxParallelism() < executionJobVertex.getParallelism()) { throw new IllegalStateException("The state for task " + executionJobVertex.getJobVertexId() + " can not be restored. The maximum parallelism (" + operatorState.getMaxParallelism() + ") of the restored state is lower than the configured parallelism (" + executionJobVertex.getParallelism() + "). Please reduce the parallelism of the task to be lower or equal to the maximum parallelism." ); } // check that the number of key groups have not changed or if we need to override it to satisfy the restored state if (operatorState.getMaxParallelism() != executionJobVertex.getMaxParallelism()) { if (!executionJobVertex.isMaxParallelismConfigured()) { // if the max parallelism was not explicitly specified by the user, we derive it from the state LOG.debug("Overriding maximum parallelism for JobVertex {} from {} to {}", executionJobVertex.getJobVertexId(), executionJobVertex.getMaxParallelism(), operatorState.getMaxParallelism()); executionJobVertex.setMaxParallelism(operatorState.getMaxParallelism()); } else { // if the max parallelism was explicitly specified, we complain on mismatch throw new IllegalStateException("The maximum parallelism (" + operatorState.getMaxParallelism() + ") with which the latest " + "checkpoint of the execution job vertex " + executionJobVertex + " has been taken and the current maximum parallelism (" + executionJobVertex.getMaxParallelism() + ") changed. This " + "is currently not supported."); } } }
Verifies conditions with regard to parallelism and maxParallelism that must be met when restoring state. @param operatorState state to restore @param executionJobVertex task for which the state should be restored
private static void checkStateMappingCompleteness( boolean allowNonRestoredState, Map<OperatorID, OperatorState> operatorStates, Map<JobVertexID, ExecutionJobVertex> tasks) { Set<OperatorID> allOperatorIDs = new HashSet<>(); for (ExecutionJobVertex executionJobVertex : tasks.values()) { allOperatorIDs.addAll(executionJobVertex.getOperatorIDs()); } for (Map.Entry<OperatorID, OperatorState> operatorGroupStateEntry : operatorStates.entrySet()) { OperatorState operatorState = operatorGroupStateEntry.getValue(); //----------------------------------------find operator for state--------------------------------------------- if (!allOperatorIDs.contains(operatorGroupStateEntry.getKey())) { if (allowNonRestoredState) { LOG.info("Skipped checkpoint state for operator {}.", operatorState.getOperatorID()); } else { throw new IllegalStateException("There is no operator for the state " + operatorState.getOperatorID()); } } } }
Verifies that all operator states can be mapped to an execution job vertex. @param allowNonRestoredState if false, an exception is thrown if a state cannot be mapped to a task @param operatorStates operator states to map @param tasks tasks to map the states to
public static List<List<OperatorStateHandle>> applyRepartitioner( OperatorStateRepartitioner opStateRepartitioner, List<List<OperatorStateHandle>> chainOpParallelStates, int oldParallelism, int newParallelism) { if (chainOpParallelStates == null) { return Collections.emptyList(); } return opStateRepartitioner.repartitionState( chainOpParallelStates, oldParallelism, newParallelism); }
TODO rewrite based on operator id
public static List<KeyedStateHandle> getKeyedStateHandles( Collection<? extends KeyedStateHandle> keyedStateHandles, KeyGroupRange subtaskKeyGroupRange) { List<KeyedStateHandle> subtaskKeyedStateHandles = new ArrayList<>(keyedStateHandles.size()); for (KeyedStateHandle keyedStateHandle : keyedStateHandles) { KeyedStateHandle intersectedKeyedStateHandle = keyedStateHandle.getIntersection(subtaskKeyGroupRange); if (intersectedKeyedStateHandle != null) { subtaskKeyedStateHandles.add(intersectedKeyedStateHandle); } } return subtaskKeyedStateHandles; }
Determine the subset of {@link KeyGroupsStateHandle KeyGroupsStateHandles} with correct key group index for the given subtask {@link KeyGroupRange}. <p>This is publicly visible to be used in tests.
public void shutdownAndWait() { try { client.shutdown().get(); LOG.info("The Queryable State Client was shutdown successfully."); } catch (Exception e) { LOG.warn("The Queryable State Client shutdown failed: ", e); } }
Shuts down the client and waits until shutdown is completed. <p>If an exception is thrown, a warning is logged containing the exception message.
public ExecutionConfig setExecutionConfig(ExecutionConfig config) { ExecutionConfig prev = executionConfig; this.executionConfig = config; return prev; }
Replaces the existing {@link ExecutionConfig} (possibly {@code null}) with the provided one. @param config The new configuration. @return The old configuration, or {@code null} if none was specified.
@PublicEvolving public <K, S extends State, V> CompletableFuture<S> getKvState( final JobID jobId, final String queryableStateName, final K key, final TypeHint<K> keyTypeHint, final StateDescriptor<S, V> stateDescriptor) { Preconditions.checkNotNull(keyTypeHint); TypeInformation<K> keyTypeInfo = keyTypeHint.getTypeInfo(); return getKvState(jobId, queryableStateName, key, keyTypeInfo, stateDescriptor); }
Returns a future holding the request result. @param jobId JobID of the job the queryable state belongs to. @param queryableStateName Name under which the state is queryable. @param key The key we are interested in. @param keyTypeHint A {@link TypeHint} used to extract the type of the key. @param stateDescriptor The {@link StateDescriptor} of the state we want to query. @return Future holding the immutable {@link State} object containing the result.
@PublicEvolving public <K, S extends State, V> CompletableFuture<S> getKvState( final JobID jobId, final String queryableStateName, final K key, final TypeInformation<K> keyTypeInfo, final StateDescriptor<S, V> stateDescriptor) { return getKvState(jobId, queryableStateName, key, VoidNamespace.INSTANCE, keyTypeInfo, VoidNamespaceTypeInfo.INSTANCE, stateDescriptor); }
Returns a future holding the request result. @param jobId JobID of the job the queryable state belongs to. @param queryableStateName Name under which the state is queryable. @param key The key we are interested in. @param keyTypeInfo The {@link TypeInformation} of the key. @param stateDescriptor The {@link StateDescriptor} of the state we want to query. @return Future holding the immutable {@link State} object containing the result.
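A hedged usage sketch for the public overload above, combined with the shutdown call shown earlier; host, port, job id, state name, and key are placeholders to substitute for a real setup, and the client constructor declares a checked UnknownHostException.

QueryableStateClient client = new QueryableStateClient("taskmanager-host", 9069);

ValueStateDescriptor<Long> descriptor =
    new ValueStateDescriptor<>("max-temperature", BasicTypeInfo.LONG_TYPE_INFO);

CompletableFuture<ValueState<Long>> future = client.getKvState(
    JobID.fromHexString("fd72014d4c864993a2e5a9287b4a9c5d"),  // placeholder job id
    "max-temperature",                                         // name registered as queryable state
    "sensor-1",                                                // key we are interested in
    BasicTypeInfo.STRING_TYPE_INFO,                            // type information of the key
    descriptor);

future.thenAccept(state -> {
    try {
        System.out.println("Current value: " + state.value());
    } catch (Exception e) {
        e.printStackTrace();
    }
});

client.shutdownAndWait();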
private <K, N, S extends State, V> CompletableFuture<S> getKvState( final JobID jobId, final String queryableStateName, final K key, final N namespace, final TypeInformation<K> keyTypeInfo, final TypeInformation<N> namespaceTypeInfo, final StateDescriptor<S, V> stateDescriptor) { Preconditions.checkNotNull(jobId); Preconditions.checkNotNull(queryableStateName); Preconditions.checkNotNull(key); Preconditions.checkNotNull(namespace); Preconditions.checkNotNull(keyTypeInfo); Preconditions.checkNotNull(namespaceTypeInfo); Preconditions.checkNotNull(stateDescriptor); TypeSerializer<K> keySerializer = keyTypeInfo.createSerializer(executionConfig); TypeSerializer<N> namespaceSerializer = namespaceTypeInfo.createSerializer(executionConfig); stateDescriptor.initializeSerializerUnlessSet(executionConfig); final byte[] serializedKeyAndNamespace; try { serializedKeyAndNamespace = KvStateSerializer .serializeKeyAndNamespace(key, keySerializer, namespace, namespaceSerializer); } catch (IOException e) { return FutureUtils.getFailedFuture(e); } return getKvState(jobId, queryableStateName, key.hashCode(), serializedKeyAndNamespace) .thenApply(stateResponse -> createState(stateResponse, stateDescriptor)); }
Returns a future holding the request result. @param jobId JobID of the job the queryable state belongs to. @param queryableStateName Name under which the state is queryable. @param key The key that the state we request is associated with. @param namespace The namespace of the state. @param keyTypeInfo The {@link TypeInformation} of the keys. @param namespaceTypeInfo The {@link TypeInformation} of the namespace. @param stateDescriptor The {@link StateDescriptor} of the state we want to query. @return Future holding the immutable {@link State} object containing the result.
private CompletableFuture<KvStateResponse> getKvState( final JobID jobId, final String queryableStateName, final int keyHashCode, final byte[] serializedKeyAndNamespace) { LOG.debug("Sending State Request to {}.", remoteAddress); try { KvStateRequest request = new KvStateRequest(jobId, queryableStateName, keyHashCode, serializedKeyAndNamespace); return client.sendRequest(remoteAddress, request); } catch (Exception e) { LOG.error("Unable to send KVStateRequest: ", e); return FutureUtils.getFailedFuture(e); } }
Returns a future holding the serialized request result. @param jobId JobID of the job the queryable state belongs to @param queryableStateName Name under which the state is queryable @param keyHashCode Integer hash code of the key (result of a call to {@link Object#hashCode()}) @param serializedKeyAndNamespace Serialized key and namespace to query KvState instance with @return Future holding the serialized result
public TypeSerializer<T> getElementSerializer() { // call getSerializer() here to get the initialization check and proper error message final TypeSerializer<List<T>> rawSerializer = getSerializer(); if (!(rawSerializer instanceof ListSerializer)) { throw new IllegalStateException(); } return ((ListSerializer<T>) rawSerializer).getElementSerializer(); }
Gets the serializer for the elements contained in the list. @return The serializer for the elements in the list.
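A short sketch of where the element serializer comes from, assuming a ListStateDescriptor; the descriptor name and element type are arbitrary.

ListStateDescriptor<String> descriptor =
    new ListStateDescriptor<>("buffered-elements", String.class);

// The descriptor's serializer must be initialized before it can be queried.
descriptor.initializeSerializerUnlessSet(new ExecutionConfig());

// Yields the serializer for the String elements, not for the surrounding List.
TypeSerializer<String> elementSerializer = descriptor.getElementSerializer();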
public boolean decrement() { synchronized (lock) { if (isDisposed) { return false; } referenceCount--; if (referenceCount <= disposeOnReferenceCount) { isDisposed = true; } return isDisposed; } }
Decrements the reference count and returns whether the reference counter entered the disposed state. <p> If the method returns <code>true</code>, the decrement operation disposed the counter. Otherwise it returns <code>false</code>.
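The same reference-counting pattern can be illustrated with a hypothetical, trimmed-down counter (all names here are invented for illustration and do not refer to the class above): the decrement that crosses the dispose threshold flips the disposed flag exactly once, and that caller becomes responsible for cleanup.

// Hypothetical illustration of the reference-counting pattern, not the actual class.
final class RefCounter {
    private final Object lock = new Object();
    private int referenceCount = 1;   // starts with a single owner
    private boolean isDisposed;

    boolean decrement() {
        synchronized (lock) {
            if (isDisposed) {
                return false;             // already disposed by an earlier decrement
            }
            if (--referenceCount <= 0) {
                isDisposed = true;        // this caller must perform the cleanup
            }
            return isDisposed;
        }
    }
}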
@Override public void write(int b) throws IOException { if (this.position >= this.buffer.length) { resize(1); } this.buffer[this.position++] = (byte) (b & 0xff); }
----------------------------------------------------------------------------------------
public static void main(String[] args) { EnvironmentInformation.logEnvironmentInfo(LOG, "YARN TaskExecutor runner", args); SignalHandler.register(LOG); JvmShutdownSafeguard.installAsShutdownHook(LOG); run(args); }
The entry point for the YARN task executor runner. @param args The command line arguments.
private static void run(String[] args) { try { LOG.debug("All environment variables: {}", ENV); final String currDir = ENV.get(Environment.PWD.key()); LOG.info("Current working Directory: {}", currDir); final Configuration configuration = GlobalConfiguration.loadConfiguration(currDir); //TODO provide path. FileSystem.initialize(configuration, PluginUtils.createPluginManagerFromRootFolder(Optional.empty())); setupConfigurationAndInstallSecurityContext(configuration, currDir, ENV); final String containerId = ENV.get(YarnResourceManager.ENV_FLINK_CONTAINER_ID); Preconditions.checkArgument(containerId != null, "ContainerId variable %s not set", YarnResourceManager.ENV_FLINK_CONTAINER_ID); SecurityUtils.getInstalledContext().runSecured((Callable<Void>) () -> { TaskManagerRunner.runTaskManager(configuration, new ResourceID(containerId)); return null; }); } catch (Throwable t) { final Throwable strippedThrowable = ExceptionUtils.stripException(t, UndeclaredThrowableException.class); // make sure that everything whatever ends up in the log LOG.error("YARN TaskManager initialization failed.", strippedThrowable); System.exit(INIT_ERROR_EXIT_CODE); } }
The instance entry point for the YARN task executor. Obtains user group information and calls the main work method {@link TaskManagerRunner#runTaskManager(Configuration, ResourceID)} as a privileged action. @param args The command line arguments.
public static BigDecimal readBigDecimal(DataInputView source) throws IOException { final BigInteger unscaledValue = BigIntSerializer.readBigInteger(source); if (unscaledValue == null) { return null; } final int scale = source.readInt(); // fast-path for 0, 1, 10 if (scale == 0) { if (unscaledValue == BigInteger.ZERO) { return BigDecimal.ZERO; } else if (unscaledValue == BigInteger.ONE) { return BigDecimal.ONE; } else if (unscaledValue == BigInteger.TEN) { return BigDecimal.TEN; } } // default return new BigDecimal(unscaledValue, scale); }
--------------------------------------------------------------------------------------------
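A round-trip sketch for the reader above. It assumes the matching writeBigDecimal companion and the DataOutputSerializer / DataInputDeserializer views from flink-core; treat those helper names as assumptions rather than guaranteed API.

DataOutputSerializer out = new DataOutputSerializer(64);
BigDecSerializer.writeBigDecimal(new BigDecimal("123.45"), out);

DataInputDeserializer in = new DataInputDeserializer(out.getCopyOfBuffer());
BigDecimal copy = BigDecSerializer.readBigDecimal(in);   // equals new BigDecimal("123.45")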
public SimpleSlot allocateSimpleSlot() throws InstanceDiedException { synchronized (instanceLock) { if (isDead) { throw new InstanceDiedException(this); } Integer nextSlot = availableSlots.poll(); if (nextSlot == null) { return null; } else { SimpleSlot slot = new SimpleSlot(this, location, nextSlot, taskManagerGateway); allocatedSlots.add(slot); return slot; } } }
Allocates a simple slot on this TaskManager instance. This method returns {@code null}, if no slot is available at the moment. @return A simple slot that represents a task slot on this TaskManager instance, or null, if the TaskManager instance has no more slots available. @throws InstanceDiedException Thrown if the instance is no longer alive by the time the slot is allocated.
public SharedSlot allocateSharedSlot(SlotSharingGroupAssignment sharingGroupAssignment) throws InstanceDiedException { synchronized (instanceLock) { if (isDead) { throw new InstanceDiedException(this); } Integer nextSlot = availableSlots.poll(); if (nextSlot == null) { return null; } else { SharedSlot slot = new SharedSlot( this, location, nextSlot, taskManagerGateway, sharingGroupAssignment); allocatedSlots.add(slot); return slot; } } }
Allocates a shared slot on this TaskManager instance. This method returns {@code null}, if no slot is available at the moment. The shared slot will be managed by the given SlotSharingGroupAssignment. @param sharingGroupAssignment The assignment group that manages this shared slot. @return A shared slot that represents a task slot on this TaskManager instance and can hold other (shared) slots, or null, if the TaskManager instance has no more slots available. @throws InstanceDiedException Thrown if the instance is no longer alive by the time the slot is allocated.
@Override public void returnLogicalSlot(LogicalSlot logicalSlot) { checkNotNull(logicalSlot); checkArgument(logicalSlot instanceof Slot); final Slot slot = ((Slot) logicalSlot); checkArgument(!slot.isAlive(), "slot is still alive"); checkArgument(slot.getOwner() == this, "slot belongs to the wrong TaskManager."); if (slot.markReleased()) { LOG.debug("Return allocated slot {}.", slot); synchronized (instanceLock) { if (isDead) { return; } if (this.allocatedSlots.remove(slot)) { this.availableSlots.add(slot.getSlotNumber()); if (this.slotAvailabilityListener != null) { this.slotAvailabilityListener.newSlotAvailable(this); } } else { throw new IllegalArgumentException("Slot was not allocated from this TaskManager."); } } } }
Returns a slot that has been allocated from this instance. The slot needs to have been canceled prior to calling this method. <p>The method will transition the slot to the "released" state. If the slot is already in state "released", this method will do nothing.</p> @param logicalSlot The slot to return.
@Override public LocatableInputSplit getNextInputSplit(String host, int taskId) { // for a null host, we return a remote split if (host == null) { synchronized (this.remoteSplitChooser) { synchronized (this.unassigned) { LocatableInputSplitWithCount split = this.remoteSplitChooser.getNextUnassignedMinLocalCountSplit(this.unassigned); if (split != null) { // got a split to assign. Double check that it hasn't been assigned before. if (this.unassigned.remove(split)) { if (LOG.isInfoEnabled()) { LOG.info("Assigning split to null host (random assignment)."); } remoteAssignments++; return split.getSplit(); } else { throw new IllegalStateException("Chosen InputSplit has already been assigned. This should not happen!"); } } else { // all splits consumed if (LOG.isDebugEnabled()) { LOG.debug("No more unassigned input splits remaining."); } return null; } } } } host = host.toLowerCase(Locale.US); // for any non-null host, we take the list of non-null splits LocatableInputSplitChooser localSplits = this.localPerHost.get(host); // if we have no list for this host yet, create one if (localSplits == null) { localSplits = new LocatableInputSplitChooser(); // lock the list, to be sure that others have to wait for that host's local list synchronized (localSplits) { LocatableInputSplitChooser prior = this.localPerHost.putIfAbsent(host, localSplits); // if someone else beat us in the case to create this list, then we do not populate this one, but // simply work with that other list if (prior == null) { // we are the first, we populate // first, copy the remaining splits to release the lock on the set early // because that is shared among threads LocatableInputSplitWithCount[] remaining; synchronized (this.unassigned) { remaining = this.unassigned.toArray(new LocatableInputSplitWithCount[this.unassigned.size()]); } for (LocatableInputSplitWithCount isw : remaining) { if (isLocal(host, isw.getSplit().getHostnames())) { // Split is local on host. // Increment local count isw.incrementLocalCount(); // and add to local split list localSplits.addInputSplit(isw); } } } else { // someone else was faster localSplits = prior; } } } // at this point, we have a list of local splits (possibly empty) // we need to make sure no one else operates in the current list (that protects against // list creation races) and that the unassigned set is consistent // NOTE: we need to obtain the locks in this order, strictly!!! synchronized (localSplits) { synchronized (this.unassigned) { LocatableInputSplitWithCount split = localSplits.getNextUnassignedMinLocalCountSplit(this.unassigned); if (split != null) { // found a valid split. Double check that it hasn't been assigned before. if (this.unassigned.remove(split)) { if (LOG.isInfoEnabled()) { LOG.info("Assigning local split to host " + host); } localAssignments++; return split.getSplit(); } else { throw new IllegalStateException("Chosen InputSplit has already been assigned. This should not happen!"); } } } } // we did not find a local split, return a remote split synchronized (this.remoteSplitChooser) { synchronized (this.unassigned) { LocatableInputSplitWithCount split = this.remoteSplitChooser.getNextUnassignedMinLocalCountSplit(this.unassigned); if (split != null) { // found a valid split. Double check that it hasn't been assigned yet. 
if (this.unassigned.remove(split)) { if (LOG.isInfoEnabled()) { LOG.info("Assigning remote split to host " + host); } remoteAssignments++; return split.getSplit(); } else { throw new IllegalStateException("Chosen InputSplit has already been assigned. This should not happen!"); } } else { // all splits consumed if (LOG.isDebugEnabled()) { LOG.debug("No more input splits remaining."); } return null; } } } }
--------------------------------------------------------------------------------------------
public static <T extends Savepoint> void storeCheckpointMetadata( T checkpointMetadata, OutputStream out) throws IOException { DataOutputStream dos = new DataOutputStream(out); storeCheckpointMetadata(checkpointMetadata, dos); }
------------------------------------------------------------------------
public static Savepoint loadCheckpointMetadata(DataInputStream in, ClassLoader classLoader) throws IOException { checkNotNull(in, "input stream"); checkNotNull(classLoader, "classLoader"); final int magicNumber = in.readInt(); if (magicNumber == HEADER_MAGIC_NUMBER) { final int version = in.readInt(); final SavepointSerializer<?> serializer = SavepointSerializers.getSerializer(version); if (serializer != null) { return serializer.deserialize(in, classLoader); } else { throw new IOException("Unrecognized checkpoint version number: " + version); } } else { throw new IOException("Unexpected magic number. This can have multiple reasons: " + "(1) You are trying to load a Flink 1.0 savepoint, which is not supported by this " + "version of Flink. (2) The file you were pointing to is not a savepoint at all. " + "(3) The savepoint file has been corrupted."); } }
------------------------------------------------------------------------
public static void disposeSavepoint( String pointer, StateBackend stateBackend, ClassLoader classLoader) throws IOException, FlinkException { checkNotNull(pointer, "location"); checkNotNull(stateBackend, "stateBackend"); checkNotNull(classLoader, "classLoader"); final CompletedCheckpointStorageLocation checkpointLocation = stateBackend.resolveCheckpoint(pointer); final StreamStateHandle metadataHandle = checkpointLocation.getMetadataHandle(); // load the savepoint object (the metadata) to have all the state handles that we need // to dispose of all state final Savepoint savepoint; try (InputStream in = metadataHandle.openInputStream(); DataInputStream dis = new DataInputStream(in)) { savepoint = loadCheckpointMetadata(dis, classLoader); } Exception exception = null; // first dispose the savepoint metadata, so that the savepoint is not // addressable any more even if the following disposal fails try { metadataHandle.discardState(); } catch (Exception e) { exception = e; } // now dispose the savepoint data try { savepoint.dispose(); } catch (Exception e) { exception = ExceptionUtils.firstOrSuppressed(e, exception); } // now dispose the location (directory, table, whatever) try { checkpointLocation.disposeStorageLocation(); } catch (Exception e) { exception = ExceptionUtils.firstOrSuppressed(e, exception); } // forward exceptions caught in the process if (exception != null) { ExceptionUtils.rethrowIOException(exception); } }
------------------------------------------------------------------------
public <X, PP extends MessagePathParameter<X>> X getPathParameter(Class<PP> parameterClass) { @SuppressWarnings("unchecked") PP pathParameter = (PP) pathParameters.get(parameterClass); Preconditions.checkState(pathParameter != null, "No parameter could be found for the given class."); return pathParameter.getValue(); }
Returns the value of the {@link MessagePathParameter} for the given class. @param parameterClass class of the parameter @param <X> the value type that the parameter contains @param <PP> type of the path parameter @return path parameter value for the given class @throws IllegalStateException if no value is defined for the given parameter class
public <X, QP extends MessageQueryParameter<X>> List<X> getQueryParameter(Class<QP> parameterClass) { @SuppressWarnings("unchecked") QP queryParameter = (QP) queryParameters.get(parameterClass); if (queryParameter == null) { return Collections.emptyList(); } else { return queryParameter.getValue(); } }
Returns the value of the {@link MessageQueryParameter} for the given class. @param parameterClass class of the parameter @param <X> the value type that the parameter contains @param <QP> type of the query parameter @return query parameter value for the given class, or an empty list if no parameter value exists for the given class
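Inside a REST handler, typical access looks like the sketch below. JobIDPathParameter is an existing parameter class, while LimitQueryParameter is a made-up stand-in for whatever query parameter the handler actually declares.

// Path parameters are mandatory; a missing value fails with an IllegalStateException.
JobID jobId = request.getPathParameter(JobIDPathParameter.class);

// Query parameters are optional; an absent parameter yields an empty list.
List<Integer> limits = request.getQueryParameter(LimitQueryParameter.class);  // hypothetical parameter class
int limit = limits.isEmpty() ? 10 : limits.get(0);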
public static void main(String[] args) throws Exception { // ---- print some usage help ---- System.out.println("Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-invalid-transition>] [--sleep <sleep-per-record-in-ms>]"); System.out.println("Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]"); System.out.println("Options for both the above setups: "); System.out.println("\t[--backend <file|rocks>]"); System.out.println("\t[--checkpoint-dir <filepath>]"); System.out.println("\t[--async-checkpoints <true|false>]"); System.out.println("\t[--incremental-checkpoints <true|false>]"); System.out.println("\t[--output <filepath> OR null for stdout]"); System.out.println(); // ---- determine whether to use the built-in source, or read from Kafka ---- final SourceFunction<Event> source; final ParameterTool params = ParameterTool.fromArgs(args); if (params.has("kafka-topic")) { // set up the Kafka reader String kafkaTopic = params.get("kafka-topic"); String brokers = params.get("brokers", "localhost:9092"); System.out.printf("Reading from kafka topic %s @ %s\n", kafkaTopic, brokers); System.out.println(); Properties kafkaProps = new Properties(); kafkaProps.setProperty("bootstrap.servers", brokers); FlinkKafkaConsumer010<Event> kafka = new FlinkKafkaConsumer010<>(kafkaTopic, new EventDeSerializer(), kafkaProps); kafka.setStartFromLatest(); kafka.setCommitOffsetsOnCheckpoints(false); source = kafka; } else { double errorRate = params.getDouble("error-rate", 0.0); int sleep = params.getInt("sleep", 1); System.out.printf("Using standalone source with error rate %f and sleep delay %s millis\n", errorRate, sleep); System.out.println(); source = new EventsGeneratorSource(errorRate, sleep); } // ---- main program ---- // create the environment to create streams and configure execution final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); env.enableCheckpointing(2000L); final String stateBackend = params.get("backend", "memory"); if ("file".equals(stateBackend)) { final String checkpointDir = params.get("checkpoint-dir"); boolean asyncCheckpoints = params.getBoolean("async-checkpoints", false); env.setStateBackend(new FsStateBackend(checkpointDir, asyncCheckpoints)); } else if ("rocks".equals(stateBackend)) { final String checkpointDir = params.get("checkpoint-dir"); boolean incrementalCheckpoints = params.getBoolean("incremental-checkpoints", false); env.setStateBackend(new RocksDBStateBackend(checkpointDir, incrementalCheckpoints)); } final String outputFile = params.get("output"); // make parameters available in the web interface env.getConfig().setGlobalJobParameters(params); DataStream<Event> events = env.addSource(source); DataStream<Alert> alerts = events // partition on the address to make sure equal addresses // end up in the same state machine flatMap function .keyBy(Event::sourceAddress) // the function that evaluates the state machine over the sequence of events .flatMap(new StateMachineMapper()); // output the alerts to std-out if (outputFile == null) { alerts.print(); } else { alerts .writeAsText(outputFile, FileSystem.WriteMode.OVERWRITE) .setParallelism(1); } // trigger program execution env.execute("State machine job"); }
Main entry point for the program. @param args The command line arguments.
@Override public CheckpointStorage createCheckpointStorage(JobID jobId) throws IOException { return new MemoryBackendCheckpointStorage(jobId, getCheckpointPath(), getSavepointPath(), maxStateSize); }
------------------------------------------------------------------------
@Override public void open(Configuration configuration) throws Exception { if (logFailuresOnly) { callback = new Callback() { @Override public void onCompletion(RecordMetadata metadata, Exception e) { if (e != null) { LOG.error("Error while sending record to Kafka: " + e.getMessage(), e); } acknowledgeMessage(); } }; } else { callback = new Callback() { @Override public void onCompletion(RecordMetadata metadata, Exception exception) { if (exception != null && asyncException == null) { asyncException = exception; } acknowledgeMessage(); } }; } super.open(configuration); }
Initializes the connection to Kafka.
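At the user-facing level, the flag that selects between the two callbacks above is usually toggled on the producer before it is added as a sink; the broker address, topic, and surrounding DataStream are placeholders.

FlinkKafkaProducer010<String> producer = new FlinkKafkaProducer010<>(
    "localhost:9092",           // broker list (placeholder)
    "output-topic",             // target topic (placeholder)
    new SimpleStringSchema());

// Only log failed records instead of failing the sink and the job.
producer.setLogFailuresOnly(true);

stream.addSink(producer);       // 'stream' is an existing DataStream<String>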
private void flush(KafkaTransactionState transaction) throws FlinkKafka011Exception { if (transaction.producer != null) { transaction.producer.flush(); } long pendingRecordsCount = pendingRecords.get(); if (pendingRecordsCount != 0) { throw new IllegalStateException("Pending record count must be zero at this point: " + pendingRecordsCount); } // if the flushed requests has errors, we should propagate it also and fail the checkpoint checkErroneous(); }
Flushes pending records for the given transaction. @param transaction the transaction whose pending records should be flushed
private FlinkKafkaProducer<byte[], byte[]> createTransactionalProducer() throws FlinkKafka011Exception { String transactionalId = availableTransactionalIds.poll(); if (transactionalId == null) { throw new FlinkKafka011Exception( FlinkKafka011ErrorCode.PRODUCERS_POOL_EMPTY, "Too many ongoing snapshots. Increase kafka producers pool size or decrease number of concurrent checkpoints."); } FlinkKafkaProducer<byte[], byte[]> producer = initTransactionalProducer(transactionalId, true); producer.initTransactions(); return producer; }
For each checkpoint we create a new {@link FlinkKafkaProducer} so that new transactions will not clash with transactions created during previous checkpoints ({@code producer.initTransactions()} assures that we obtain new producerId and epoch counters).
public void close() throws IOException { Throwable throwable = null; try { socket.close(); sender.close(); receiver.close(); } catch (Throwable t) { throwable = t; } try { destroyProcess(process); } catch (Throwable t) { throwable = ExceptionUtils.firstOrSuppressed(t, throwable); } ShutdownHookUtil.removeShutdownHook(shutdownThread, getClass().getSimpleName(), LOG); ExceptionUtils.tryRethrowIOException(throwable); }
Closes this streamer. @throws IOException
public final void sendBroadCastVariables(Configuration config) throws IOException { try { int broadcastCount = config.getInteger(PLANBINDER_CONFIG_BCVAR_COUNT, 0); String[] names = new String[broadcastCount]; for (int x = 0; x < names.length; x++) { names[x] = config.getString(PLANBINDER_CONFIG_BCVAR_NAME_PREFIX + x, null); } out.write(new IntSerializer().serializeWithoutTypeInfo(broadcastCount)); StringSerializer stringSerializer = new StringSerializer(); for (String name : names) { Iterator<byte[]> bcv = function.getRuntimeContext().<byte[]>getBroadcastVariable(name).iterator(); out.write(stringSerializer.serializeWithoutTypeInfo(name)); while (bcv.hasNext()) { out.writeByte(1); out.write(bcv.next()); } out.writeByte(0); } } catch (SocketTimeoutException ignored) { throw new RuntimeException("External process for task " + function.getRuntimeContext().getTaskName() + " stopped responding." + msg); } }
Sends all broadcast-variables encoded in the configuration to the external process. @param config configuration object containing broadcast-variable count and names @throws IOException
@Deprecated public static Savepoint convertToOperatorStateSavepointV2( Map<JobVertexID, ExecutionJobVertex> tasks, Savepoint savepoint) { if (savepoint.getOperatorStates() != null) { return savepoint; } boolean expandedToLegacyIds = false; Map<OperatorID, OperatorState> operatorStates = new HashMap<>(savepoint.getTaskStates().size() << 1); for (TaskState taskState : savepoint.getTaskStates()) { ExecutionJobVertex jobVertex = tasks.get(taskState.getJobVertexID()); // on the first time we can not find the execution job vertex for an id, we also consider alternative ids, // for example as generated from older flink versions, to provide backwards compatibility. if (jobVertex == null && !expandedToLegacyIds) { tasks = ExecutionJobVertex.includeLegacyJobVertexIDs(tasks); jobVertex = tasks.get(taskState.getJobVertexID()); expandedToLegacyIds = true; } if (jobVertex == null) { throw new IllegalStateException( "Could not find task for state with ID " + taskState.getJobVertexID() + ". " + "When migrating a savepoint from a version < 1.3 please make sure that the topology was not " + "changed through removal of a stateful operator or modification of a chain containing a stateful " + "operator."); } List<OperatorID> operatorIDs = jobVertex.getOperatorIDs(); Preconditions.checkArgument( jobVertex.getParallelism() == taskState.getParallelism(), "Detected change in parallelism during migration for task " + jobVertex.getJobVertexId() +"." + "When migrating a savepoint from a version < 1.3 please make sure that no changes were made " + "to the parallelism of stateful operators."); Preconditions.checkArgument( operatorIDs.size() == taskState.getChainLength(), "Detected change in chain length during migration for task " + jobVertex.getJobVertexId() +". " + "When migrating a savepoint from a version < 1.3 please make sure that the topology was not " + "changed by modification of a chain containing a stateful operator."); for (int subtaskIndex = 0; subtaskIndex < jobVertex.getParallelism(); subtaskIndex++) { SubtaskState subtaskState; try { subtaskState = taskState.getState(subtaskIndex); } catch (Exception e) { throw new IllegalStateException( "Could not find subtask with index " + subtaskIndex + " for task " + jobVertex.getJobVertexId() + ". 
" + "When migrating a savepoint from a version < 1.3 please make sure that no changes were made " + "to the parallelism of stateful operators.", e); } if (subtaskState == null) { continue; } ChainedStateHandle<OperatorStateHandle> partitioneableState = subtaskState.getManagedOperatorState(); ChainedStateHandle<OperatorStateHandle> rawOperatorState = subtaskState.getRawOperatorState(); for (int chainIndex = 0; chainIndex < taskState.getChainLength(); chainIndex++) { // task consists of multiple operators so we have to break the state apart for (int operatorIndex = 0; operatorIndex < operatorIDs.size(); operatorIndex++) { OperatorID operatorID = operatorIDs.get(operatorIndex); OperatorState operatorState = operatorStates.get(operatorID); if (operatorState == null) { operatorState = new OperatorState( operatorID, jobVertex.getParallelism(), jobVertex.getMaxParallelism()); operatorStates.put(operatorID, operatorState); } KeyedStateHandle managedKeyedState = null; KeyedStateHandle rawKeyedState = null; // only the head operator retains the keyed state if (operatorIndex == operatorIDs.size() - 1) { managedKeyedState = subtaskState.getManagedKeyedState(); rawKeyedState = subtaskState.getRawKeyedState(); } OperatorSubtaskState operatorSubtaskState = new OperatorSubtaskState( partitioneableState != null ? partitioneableState.get(operatorIndex) : null, rawOperatorState != null ? rawOperatorState.get(operatorIndex) : null, managedKeyedState, rawKeyedState); operatorState.putState(subtaskIndex, operatorSubtaskState); } } } } return new SavepointV2( savepoint.getCheckpointId(), operatorStates.values(), savepoint.getMasterStates()); }
Converts the {@link Savepoint} containing {@link TaskState TaskStates} to an equivalent savepoint containing {@link OperatorState OperatorStates}. @param savepoint savepoint to convert @param tasks map of all vertices and their job vertex ids @return the converted savepoint @deprecated Only kept for backwards-compatibility with versions < 1.3. Will be removed in the future.
public static LeaderConnectionInfo retrieveLeaderConnectionInfo( LeaderRetrievalService leaderRetrievalService, Time timeout) throws LeaderRetrievalException { return retrieveLeaderConnectionInfo(leaderRetrievalService, FutureUtils.toFiniteDuration(timeout)); }
Retrieves the leader's akka URL and the current leader session ID. The values are stored in a {@link LeaderConnectionInfo} instance. @param leaderRetrievalService Leader retrieval service to retrieve the leader connection information @param timeout Timeout after which to give up looking for the leader @return LeaderConnectionInfo containing the leader's akka URL and the current leader session ID @throws LeaderRetrievalException
public static LeaderConnectionInfo retrieveLeaderConnectionInfo( LeaderRetrievalService leaderRetrievalService, FiniteDuration timeout ) throws LeaderRetrievalException { LeaderConnectionInfoListener listener = new LeaderConnectionInfoListener(); try { leaderRetrievalService.start(listener); Future<LeaderConnectionInfo> connectionInfoFuture = listener.getLeaderConnectionInfoFuture(); return Await.result(connectionInfoFuture, timeout); } catch (Exception e) { throw new LeaderRetrievalException("Could not retrieve the leader address and leader " + "session ID.", e); } finally { try { leaderRetrievalService.stop(); } catch (Exception fe) { LOG.warn("Could not stop the leader retrieval service.", fe); } } }
Retrieves the leader's akka URL and the current leader session ID. The values are stored in a {@link LeaderConnectionInfo} instance. @param leaderRetrievalService Leader retrieval service to retrieve the leader connection information @param timeout Timeout after which to give up looking for the leader @return LeaderConnectionInfo containing the leader's akka URL and the current leader session ID @throws LeaderRetrievalException
@Override public MemorySegment getNextReturnedBlock() throws IOException { try { while (true) { final MemorySegment next = returnSegments.poll(1000, TimeUnit.MILLISECONDS); if (next != null) { return next; } else { if (this.closed) { throw new IOException("The writer has been closed."); } checkErroneous(); } } } catch (InterruptedException e) { throw new IOException("Writer was interrupted while waiting for the next returning segment."); } }
Gets the next memory segment that has been written and is available again. This method blocks until such a segment is available, or until an error occurs in the writer, or the writer is closed. <p> NOTE: If this method is invoked without any segment ever returning (for example, because the {@link #writeBlock(MemorySegment)} method has not been invoked accordingly), the method may block forever. @return The next memory segment from the writer's return queue. @throws IOException Thrown, if an I/O error occurs in the writer while waiting for the request to return.
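A usage sketch of the segment-recycling contract described above, assuming the asynchronous block writer obtained from an IOManager (here IOManagerAsync and its shutdown method, as in the 1.x line); segment size and channel handling are illustrative.

IOManager ioManager = new IOManagerAsync();
BlockChannelWriter<MemorySegment> writer =
    ioManager.createBlockChannelWriter(ioManager.createChannel());

MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(32 * 1024);

// Hand the segment to the writer ...
writer.writeBlock(segment);

// ... and block until the writer has finished with it, so it can be reused.
MemorySegment reusable = writer.getNextReturnedBlock();

writer.close();
ioManager.shutdown();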
@Override public CheckpointStorageLocation initializeLocationForSavepoint( @SuppressWarnings("unused") long checkpointId, @Nullable String externalLocationPointer) throws IOException { // determine where to write the savepoint to final Path savepointBasePath; if (externalLocationPointer != null) { savepointBasePath = new Path(externalLocationPointer); } else if (defaultSavepointDirectory != null) { savepointBasePath = defaultSavepointDirectory; } else { throw new IllegalArgumentException("No savepoint location given and no default location configured."); } // generate the savepoint directory final FileSystem fs = savepointBasePath.getFileSystem(); final String prefix = "savepoint-" + jobId.toString().substring(0, 6) + '-'; Exception latestException = null; for (int attempt = 0; attempt < 10; attempt++) { final Path path = new Path(savepointBasePath, FileUtils.getRandomFilename(prefix)); try { if (fs.mkdirs(path)) { // we make the path qualified, to make it independent of default schemes and authorities final Path qp = path.makeQualified(fs); return createSavepointLocation(fs, qp); } } catch (Exception e) { latestException = e; } } throw new IOException("Failed to create savepoint directory at " + savepointBasePath, latestException); }
Creates a file system based storage location for a savepoint. <p>This method implements the logic that decides which location to use (given optional parameters for a configured location and a location passed for this specific savepoint) and how to name and initialize the savepoint directory. @param externalLocationPointer The target location pointer for the savepoint. Must be a valid URI. Null, if not supplied. @param checkpointId The checkpoint ID of the savepoint. @return The checkpoint storage location for the savepoint. @throws IOException Thrown if the target directory could not be created.
protected static Path getCheckpointDirectoryForJob(Path baseCheckpointPath, JobID jobId) { return new Path(baseCheckpointPath, jobId.toString()); }
Builds the directory into which a specific job checkpoints, meaning the directory inside which it creates the checkpoint-specific subdirectories. @param baseCheckpointPath The base directory for checkpoints. @param jobId The ID of the job. @return The job's checkpoint directory.
protected static CompletedCheckpointStorageLocation resolveCheckpointPointer(String checkpointPointer) throws IOException { checkNotNull(checkpointPointer, "checkpointPointer"); checkArgument(!checkpointPointer.isEmpty(), "empty checkpoint pointer"); // check if the pointer is in fact a valid file path final Path path; try { path = new Path(checkpointPointer); } catch (Exception e) { throw new IOException("Checkpoint/savepoint path '" + checkpointPointer + "' is not a valid file URI. " + "Either the pointer path is invalid, or the checkpoint was created by a different state backend."); } // check if the file system can be accessed final FileSystem fs; try { fs = path.getFileSystem(); } catch (IOException e) { throw new IOException("Cannot access file system for checkpoint/savepoint path '" + checkpointPointer + "'.", e); } final FileStatus status; try { status = fs.getFileStatus(path); } catch (FileNotFoundException e) { throw new FileNotFoundException("Cannot find checkpoint or savepoint " + "file/directory '" + checkpointPointer + "' on file system '" + fs.getUri().getScheme() + "'."); } // if we are here, the file / directory exists final Path checkpointDir; final FileStatus metadataFileStatus; // If this is a directory, we need to find the meta data file if (status.isDir()) { checkpointDir = status.getPath(); final Path metadataFilePath = new Path(path, METADATA_FILE_NAME); try { metadataFileStatus = fs.getFileStatus(metadataFilePath); } catch (FileNotFoundException e) { throw new FileNotFoundException("Cannot find meta data file '" + METADATA_FILE_NAME + "' in directory '" + path + "'. Please try to load the checkpoint/savepoint " + "directly from the metadata file instead of the directory."); } } else { // this points to a file and we either do no name validation, or // the name is actually correct, so we can return the path metadataFileStatus = status; checkpointDir = status.getPath().getParent(); } final FileStateHandle metaDataFileHandle = new FileStateHandle( metadataFileStatus.getPath(), metadataFileStatus.getLen()); final String pointer = checkpointDir.makeQualified(fs).toString(); return new FsCompletedCheckpointStorageLocation( fs, checkpointDir, metaDataFileHandle, pointer); }
Takes the given string (representing a pointer to a checkpoint) and resolves it to a file status for the checkpoint's metadata file. @param checkpointPointer The pointer to resolve. @return A state handle to the checkpoint's/savepoint's metadata. @throws IOException Thrown if the pointer cannot be resolved, the file system cannot be accessed, or the pointer points to a location that does not seem to be a checkpoint/savepoint.
public static CheckpointStorageLocationReference encodePathAsReference(Path path) { byte[] refBytes = path.toString().getBytes(StandardCharsets.UTF_8); byte[] bytes = new byte[REFERENCE_MAGIC_NUMBER.length + refBytes.length]; System.arraycopy(REFERENCE_MAGIC_NUMBER, 0, bytes, 0, REFERENCE_MAGIC_NUMBER.length); System.arraycopy(refBytes, 0, bytes, REFERENCE_MAGIC_NUMBER.length, refBytes.length); return new CheckpointStorageLocationReference(bytes); }
Encodes the given path as a reference in bytes. The path is encoded as a UTF-8 string and prefixed with a magic number. @param path The path to encode. @return The location reference.
public static Path decodePathFromReference(CheckpointStorageLocationReference reference) { if (reference.isDefaultReference()) { throw new IllegalArgumentException("Cannot decode default reference"); } final byte[] bytes = reference.getReferenceBytes(); final int headerLen = REFERENCE_MAGIC_NUMBER.length; if (bytes.length > headerLen) { // compare magic number for (int i = 0; i < headerLen; i++) { if (bytes[i] != REFERENCE_MAGIC_NUMBER[i]) { throw new IllegalArgumentException("Reference starts with the wrong magic number"); } } // covert to string and path try { return new Path(new String(bytes, headerLen, bytes.length - headerLen, StandardCharsets.UTF_8)); } catch (Exception e) { throw new IllegalArgumentException("Reference cannot be decoded to a path", e); } } else { throw new IllegalArgumentException("Reference too short."); } }
Decodes the given reference into a path. This method validates that the reference bytes start with the correct magic number (as written by {@link #encodePathAsReference(Path)}) and converts the remaining bytes back to a proper path. @param reference The bytes representing the reference. @return The path decoded from the reference. @throws IllegalArgumentException Thrown, if the bytes do not represent a proper reference.
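A round-trip sketch of the two helpers above; the checkpoint path is a placeholder.

Path checkpointDir = new Path("hdfs:///flink/checkpoints/job-42/chk-17");

CheckpointStorageLocationReference reference = encodePathAsReference(checkpointDir);

// Decoding validates the magic number and restores the original path.
Path decoded = decodePathFromReference(reference);   // equal to checkpointDir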
@SuppressWarnings("unchecked") public T newInstance(ClassLoader classLoader) { try { return (T) compile(classLoader).getConstructor(Object[].class) // Because Constructor.newInstance(Object... initargs), we need to load // references into a new Object[], otherwise it cannot be compiled. .newInstance(new Object[] {references}); } catch (Exception e) { throw new RuntimeException( "Could not instantiate generated class '" + className + "'", e); } }
Create a new instance of this generated class.
@Nonnull public static StateMetaInfoReader getReader(int readVersion, @Nonnull StateTypeHint stateTypeHint) { if (readVersion < 5) { // versions before 5 still had different state meta info formats between keyed / operator state switch (stateTypeHint) { case KEYED_STATE: return getLegacyKeyedStateMetaInfoReader(readVersion); case OPERATOR_STATE: return getLegacyOperatorStateMetaInfoReader(readVersion); default: throw new IllegalArgumentException("Unsupported state type hint: " + stateTypeHint + " with version " + readVersion); } } else { return getReader(readVersion); } }
Returns a reader for {@link StateMetaInfoSnapshot} with the requested state type and version number. @param readVersion the format version to read. @param stateTypeHint a hint about the expected type to read. @return the requested reader.