public <T> Graph<K, VV, EV> joinWithVertices(DataSet<Tuple2<K, T>> inputDataSet, final VertexJoinFunction<VV, T> vertexJoinFunction) { DataSet<Vertex<K, VV>> resultedVertices = this.getVertices() .coGroup(inputDataSet).where(0).equalTo(0) .with(new ApplyCoGroupToVertexValues<>(vertexJoinFunction)) .name("Join with vertices"); return new Graph<>(resultedVertices, this.edges, this.context); }
Joins the vertex DataSet of this graph with an input Tuple2 DataSet and applies a user-defined transformation on the values of the matched records. The vertex ID and the first field of the Tuple2 DataSet are used as the join keys. @param inputDataSet the Tuple2 DataSet to join with. The first field of the Tuple2 is used as the join key and the second field is passed as a parameter to the transformation function. @param vertexJoinFunction the transformation function to apply. The first parameter is the current vertex value and the second parameter is the value of the matched Tuple2 from the input DataSet. @return a new Graph, where the vertex values have been updated according to the result of the vertexJoinFunction. @param <T> the type of the second field of the input Tuple2 DataSet.
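A minimal usage sketch for joinWithVertices (the Long key/value types, the graph and env variables, and the summing logic are illustrative assumptions; the usual Gelly imports are omitted):
// Sketch: add the joined input value to each matched vertex value.
DataSet<Tuple2<Long, Long>> vertexUpdates = env.fromElements(
        new Tuple2<>(1L, 10L),
        new Tuple2<>(2L, 20L));
Graph<Long, Long, Long> updated = graph.joinWithVertices(vertexUpdates,
        new VertexJoinFunction<Long, Long>() {
            @Override
            public Long vertexJoin(Long vertexValue, Long inputValue) {
                // vertices without a matching Tuple2 keep their original value
                return vertexValue + inputValue;
            }
        });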
public <T> Graph<K, VV, EV> joinWithEdges(DataSet<Tuple3<K, K, T>> inputDataSet, final EdgeJoinFunction<EV, T> edgeJoinFunction) { DataSet<Edge<K, EV>> resultedEdges = this.getEdges() .coGroup(inputDataSet).where(0, 1).equalTo(0, 1) .with(new ApplyCoGroupToEdgeValues<>(edgeJoinFunction)) .name("Join with edges"); return new Graph<>(this.vertices, resultedEdges, this.context); }
Joins the edge DataSet with an input DataSet on the composite key of both source and target IDs and applies a user-defined transformation on the values of the matched records. The first two fields of the input DataSet are used as join keys. @param inputDataSet the DataSet to join with. The first two fields of the Tuple3 are used as the composite join key and the third field is passed as a parameter to the transformation function. @param edgeJoinFunction the transformation function to apply. The first parameter is the current edge value and the second parameter is the value of the matched Tuple3 from the input DataSet. @param <T> the type of the third field of the input Tuple3 DataSet. @return a new Graph, where the edge values have been updated according to the result of the edgeJoinFunction.
public <T> Graph<K, VV, EV> joinWithEdgesOnSource(DataSet<Tuple2<K, T>> inputDataSet, final EdgeJoinFunction<EV, T> edgeJoinFunction) { DataSet<Edge<K, EV>> resultedEdges = this.getEdges() .coGroup(inputDataSet).where(0).equalTo(0) .with(new ApplyCoGroupToEdgeValuesOnEitherSourceOrTarget<>(edgeJoinFunction)) .name("Join with edges on source"); return new Graph<>(this.vertices, resultedEdges, this.context); }
Joins the edge DataSet with an input Tuple2 DataSet and applies a user-defined transformation on the values of the matched records. The source ID of the edges input and the first field of the input DataSet are used as join keys. @param inputDataSet the DataSet to join with. The first field of the Tuple2 is used as the join key and the second field is passed as a parameter to the transformation function. @param edgeJoinFunction the transformation function to apply. The first parameter is the current edge value and the second parameter is the value of the matched Tuple2 from the input DataSet. @param <T> the type of the second field of the input Tuple2 DataSet. @return a new Graph, where the edge values have been updated according to the result of the edgeJoinFunction.
public Graph<K, VV, EV> filterOnVertices(FilterFunction<Vertex<K, VV>> vertexFilter) { DataSet<Vertex<K, VV>> filteredVertices = this.vertices.filter(vertexFilter); DataSet<Edge<K, EV>> remainingEdges = this.edges.join(filteredVertices) .where(0).equalTo(0).with(new ProjectEdge<>()) .join(filteredVertices).where(1).equalTo(0) .with(new ProjectEdge<>()).name("Filter on vertices"); return new Graph<>(filteredVertices, remainingEdges, this.context); }
Applies a filtering function to the vertices of the graph and returns the sub-graph that keeps only the vertices satisfying the predicate, together with the edges whose source and target both remain. @param vertexFilter the filter function for vertices. @return the resulting sub-graph.
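A short sketch of filterOnVertices, assuming a Graph<Long, Long, Long> named graph; the positive-value predicate is illustrative:
// Sketch: keep only vertices with a positive value; edges touching a
// filtered-out vertex are dropped by the joins inside filterOnVertices.
Graph<Long, Long, Long> positive = graph.filterOnVertices(
        new FilterFunction<Vertex<Long, Long>>() {
            @Override
            public boolean filter(Vertex<Long, Long> vertex) {
                return vertex.getValue() > 0L;
            }
        });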
public Graph<K, VV, EV> filterOnEdges(FilterFunction<Edge<K, EV>> edgeFilter) { DataSet<Edge<K, EV>> filteredEdges = this.edges.filter(edgeFilter).name("Filter on edges"); return new Graph<>(this.vertices, filteredEdges, this.context); }
Applies a filtering function to the edges of the graph and returns the sub-graph that keeps all vertices and only the edges satisfying the predicate. @param edgeFilter the filter function for edges. @return the resulting sub-graph.
public DataSet<Tuple2<K, LongValue>> outDegrees() { return vertices.coGroup(edges).where(0).equalTo(0).with(new CountNeighborsCoGroup<>()) .name("Out-degree"); }
Return the out-degree of all vertices in the graph. @return A DataSet of {@code Tuple2<vertexId, outDegree>}
public DataSet<Tuple2<K, LongValue>> getDegrees() { return outDegrees() .union(inDegrees()).name("In- and out-degree") .groupBy(0).sum(1).name("Sum"); }
Return the degree of all vertices in the graph. @return A DataSet of {@code Tuple2<vertexId, degree>}
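For illustration (the graph variable and its key type are assumptions):
// Sketch: total degree (in + out) per vertex ID.
DataSet<Tuple2<Long, LongValue>> degrees = graph.getDegrees();
degrees.print(); // print() triggers execution and may throw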
public Graph<K, VV, EV> getUndirected() { DataSet<Edge<K, EV>> undirectedEdges = edges. flatMap(new RegularAndReversedEdgesMap<>()).name("To undirected graph"); return new Graph<>(vertices, undirectedEdges, this.context); }
This operation adds all inverse-direction edges to the graph. @return the undirected graph.
public <T> DataSet<T> groupReduceOnEdges(EdgesFunctionWithVertexValue<K, VV, EV, T> edgesFunction, EdgeDirection direction) throws IllegalArgumentException { switch (direction) { case IN: return vertices.coGroup(edges).where(0).equalTo(1) .with(new ApplyCoGroupFunction<>(edgesFunction)).name("GroupReduce on in-edges"); case OUT: return vertices.coGroup(edges).where(0).equalTo(0) .with(new ApplyCoGroupFunction<>(edgesFunction)).name("GroupReduce on out-edges"); case ALL: return vertices.coGroup(edges.flatMap(new EmitOneEdgePerNode<>()) .name("Emit edge")) .where(0).equalTo(0).with(new ApplyCoGroupFunctionOnAllEdges<>(edgesFunction)) .name("GroupReduce on in- and out-edges"); default: throw new IllegalArgumentException("Illegal edge direction"); } }
Groups by vertex and computes a GroupReduce transformation over the edge values of each vertex. The edgesFunction applied on the edges has access to both the id and the value of the grouping vertex. <p>For each vertex, the edgesFunction can iterate over all edges of this vertex with the specified direction, and emit any number of output elements, including none. @param edgesFunction the group reduce function to apply to the neighboring edges of each vertex. @param direction the edge direction (in-, out-, all-). @param <T> the output type @return a DataSet containing elements of type T @throws IllegalArgumentException
public <T> DataSet<T> groupReduceOnEdges(EdgesFunction<K, EV, T> edgesFunction, EdgeDirection direction) throws IllegalArgumentException { TypeInformation<K> keyType = ((TupleTypeInfo<?>) vertices.getType()).getTypeAt(0); TypeInformation<EV> edgeValueType = ((TupleTypeInfo<?>) edges.getType()).getTypeAt(2); TypeInformation<T> returnType = TypeExtractor.createTypeInfo(EdgesFunction.class, edgesFunction.getClass(), 2, keyType, edgeValueType); return groupReduceOnEdges(edgesFunction, direction, returnType); }
Groups by vertex and computes a GroupReduce transformation over the edge values of each vertex. The edgesFunction applied on the edges only has access to the vertex id (not the vertex value) of the grouping vertex. <p>For each vertex, the edgesFunction can iterate over all edges of this vertex with the specified direction, and emit any number of output elements, including none. @param edgesFunction the group reduce function to apply to the neighboring edges of each vertex. @param direction the edge direction (in-, out-, all-). @param <T> the output type @return a DataSet containing elements of type T @throws IllegalArgumentException
public <T> DataSet<T> groupReduceOnEdges(EdgesFunction<K, EV, T> edgesFunction, EdgeDirection direction, TypeInformation<T> typeInfo) throws IllegalArgumentException { switch (direction) { case IN: return edges.map(new ProjectVertexIdMap<>(1)).name("Vertex ID") .withForwardedFields("f1->f0") .groupBy(0).reduceGroup(new ApplyGroupReduceFunction<>(edgesFunction)) .name("GroupReduce on in-edges").returns(typeInfo); case OUT: return edges.map(new ProjectVertexIdMap<>(0)).name("Vertex ID") .withForwardedFields("f0") .groupBy(0).reduceGroup(new ApplyGroupReduceFunction<>(edgesFunction)) .name("GroupReduce on out-edges").returns(typeInfo); case ALL: return edges.flatMap(new EmitOneEdgePerNode<>()).name("Emit edge") .groupBy(0).reduceGroup(new ApplyGroupReduceFunction<>(edgesFunction)) .name("GroupReduce on in- and out-edges").returns(typeInfo); default: throw new IllegalArgumentException("Illegal edge direction"); } }
Groups by vertex and computes a GroupReduce transformation over the edge values of each vertex. The edgesFunction applied on the edges only has access to the vertex id (not the vertex value) of the grouping vertex. <p>For each vertex, the edgesFunction can iterate over all edges of this vertex with the specified direction, and emit any number of output elements, including none. @param edgesFunction the group reduce function to apply to the neighboring edges of each vertex. @param direction the edge direction (in-, out-, all-). @param <T> the output type @param typeInfo the explicit return type. @return a DataSet containing elements of type T @throws IllegalArgumentException
public Graph<K, VV, EV> reverse() throws UnsupportedOperationException { DataSet<Edge<K, EV>> reversedEdges = edges.map(new ReverseEdgesMap<>()).name("Reverse edges"); return new Graph<>(vertices, reversedEdges, this.context); }
Reverse the direction of the edges in the graph. @return a new graph with all edges reversed @throws UnsupportedOperationException
public Graph<K, VV, EV> addVertex(final Vertex<K, VV> vertex) { List<Vertex<K, VV>> newVertex = new ArrayList<>(); newVertex.add(vertex); return addVertices(newVertex); }
Adds the input vertex to the graph. If the vertex already exists in the graph, it will not be added again. @param vertex the vertex to be added @return the new graph containing the existing vertices as well as the one just added
public Graph<K, VV, EV> addVertices(List<Vertex<K, VV>> verticesToAdd) { // Add the vertices DataSet<Vertex<K, VV>> newVertices = this.vertices.coGroup(this.context.fromCollection(verticesToAdd)) .where(0).equalTo(0).with(new VerticesUnionCoGroup<>()).name("Add vertices"); return new Graph<>(newVertices, this.edges, this.context); }
Adds the list of vertices, passed as input, to the graph. If the vertices already exist in the graph, they will not be added again. @param verticesToAdd the list of vertices to add @return the new graph containing the existing and newly added vertices
public Graph<K, VV, EV> addEdge(Vertex<K, VV> source, Vertex<K, VV> target, EV edgeValue) { Graph<K, VV, EV> partialGraph = fromCollection(Arrays.asList(source, target), Collections.singletonList(new Edge<>(source.f0, target.f0, edgeValue)), this.context); return this.union(partialGraph); }
Adds the given edge to the graph. If the source and target vertices do not exist in the graph, they will also be added. @param source the source vertex of the edge @param target the target vertex of the edge @param edgeValue the edge value @return the new graph containing the existing vertices and edges plus the newly added edge
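A usage sketch, assuming a Graph<Long, String, Double> named graph; the vertex and edge values are illustrative:
// Sketch: add an edge between two (possibly new) vertices.
Vertex<Long, String> src = new Vertex<>(6L, "f");
Vertex<Long, String> trg = new Vertex<>(7L, "g");
Graph<Long, String, Double> extended = graph.addEdge(src, trg, 0.5);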
public Graph<K, VV, EV> addEdges(List<Edge<K, EV>> newEdges) { DataSet<Edge<K, EV>> newEdgesDataSet = this.context.fromCollection(newEdges); DataSet<Edge<K, EV>> validNewEdges = this.getVertices().join(newEdgesDataSet) .where(0).equalTo(0) .with(new JoinVerticesWithEdgesOnSrc<>()).name("Join with source") .join(this.getVertices()).where(1).equalTo(0) .with(new JoinWithVerticesOnTrg<>()).name("Join with target"); return Graph.fromDataSet(this.vertices, this.edges.union(validNewEdges), this.context); }
Adds the given list of edges to the graph. <p>When adding an edge for a non-existing set of vertices, the edge is considered invalid and ignored. @param newEdges the data set of edges to be added @return a new graph containing the existing edges plus the newly added edges.
public Graph<K, VV, EV> removeVertex(Vertex<K, VV> vertex) { List<Vertex<K, VV>> vertexToBeRemoved = new ArrayList<>(); vertexToBeRemoved.add(vertex); return removeVertices(vertexToBeRemoved); }
Removes the given vertex and its edges from the graph. @param vertex the vertex to remove @return the new graph containing the existing vertices and edges without the removed vertex and its edges
public Graph<K, VV, EV> removeVertices(List<Vertex<K, VV>> verticesToBeRemoved) { return removeVertices(this.context.fromCollection(verticesToBeRemoved)); }
Removes the given list of vertices and their edges from the graph. @param verticesToBeRemoved the list of vertices to be removed @return the resulting graph containing the initial vertices and edges minus the removed vertices and their edges.
private Graph<K, VV, EV> removeVertices(DataSet<Vertex<K, VV>> verticesToBeRemoved) { DataSet<Vertex<K, VV>> newVertices = getVertices().coGroup(verticesToBeRemoved).where(0).equalTo(0) .with(new VerticesRemovalCoGroup<>()).name("Remove vertices"); DataSet <Edge< K, EV>> newEdges = newVertices.join(getEdges()).where(0).equalTo(0) // if the edge source was removed, the edge will also be removed .with(new ProjectEdgeToBeRemoved<>()).name("Edges to be removed") // if the edge target was removed, the edge will also be removed .join(newVertices).where(1).equalTo(0) .with(new ProjectEdge<>()).name("Remove edges"); return new Graph<>(newVertices, newEdges, context); }
Removes the given DataSet of vertices and their edges from the graph. @param verticesToBeRemoved the DataSet of vertices to be removed @return the resulting graph containing the initial vertices and edges minus the removed vertices and their edges.
public Graph<K, VV, EV> removeEdge(Edge<K, EV> edge) { DataSet<Edge<K, EV>> newEdges = getEdges().filter(new EdgeRemovalEdgeFilter<>(edge)).name("Remove edge"); return new Graph<>(this.vertices, newEdges, this.context); }
Removes all edges that match the given edge from the graph. @param edge the edge to remove @return the new graph containing the existing vertices and edges without the removed edges
public Graph<K, VV, EV> removeEdges(List<Edge<K, EV>> edgesToBeRemoved) { DataSet<Edge<K, EV>> newEdges = getEdges().coGroup(this.context.fromCollection(edgesToBeRemoved)) .where(0, 1).equalTo(0, 1).with(new EdgeRemovalCoGroup<>()).name("Remove edges"); return new Graph<>(this.vertices, newEdges, context); }
Removes all the edges that match the edges in the given list from the graph. @param edgesToBeRemoved the list of edges to be removed @return a new graph where the matching edges have been removed and the vertices remain intact
public Graph<K, VV, EV> union(Graph<K, VV, EV> graph) { DataSet<Vertex<K, VV>> unionedVertices = graph .getVertices() .union(this.getVertices()) .name("Vertices") .distinct() .name("Vertices"); DataSet<Edge<K, EV>> unionedEdges = graph .getEdges() .union(this.getEdges()) .name("Edges"); return new Graph<>(unionedVertices, unionedEdges, this.context); }
Performs union on the vertices and edges sets of the input graphs removing duplicate vertices but maintaining duplicate edges. @param graph the graph to perform union with @return a new graph
public Graph<K, VV, EV> difference(Graph<K, VV, EV> graph) { DataSet<Vertex<K, VV>> removeVerticesData = graph.getVertices(); return this.removeVertices(removeVerticesData); }
Performs difference on the vertex and edge sets of the input graphs, removing common vertices and edges. If a source/target vertex is removed, its corresponding edge will also be removed. @param graph the graph to perform difference with @return a new graph where the common vertices and edges have been removed
public Graph<K, NullValue, EV> intersect(Graph<K, VV, EV> graph, boolean distinctEdges) { DataSet<Edge<K, EV>> intersectEdges; if (distinctEdges) { intersectEdges = getDistinctEdgeIntersection(graph.getEdges()); } else { intersectEdges = getPairwiseEdgeIntersection(graph.getEdges()); } return Graph.fromDataSet(intersectEdges, getContext()); }
Performs intersect on the edge sets of the input graphs. Edges are considered equal, if they have the same source identifier, target identifier and edge value. <p>The method computes pairs of equal edges from the input graphs. If the same edge occurs multiple times in the input graphs, there will be multiple edge pairs to be considered. Each edge instance can only be part of one pair. If the given parameter {@code distinctEdges} is set to {@code true}, there will be exactly one edge in the output graph representing all pairs of equal edges. If the parameter is set to {@code false}, both edges of each pair will be in the output. <p>Vertices in the output graph will have no vertex values. @param graph the graph to perform intersect with @param distinctEdges if set to {@code true}, there will be exactly one edge in the output graph representing all pairs of equal edges, otherwise, for each pair, both edges will be in the output graph @return a new graph which contains only common vertices and edges from the input graphs
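A short sketch of both modes, assuming g1 and g2 are of type Graph<Long, NullValue, Long>:
// Sketch: one edge per group of equal edges vs. both edges of each matching pair.
Graph<Long, NullValue, Long> distinctIntersection = g1.intersect(g2, true);
Graph<Long, NullValue, Long> pairwiseIntersection = g1.intersect(g2, false);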
private DataSet<Edge<K, EV>> getDistinctEdgeIntersection(DataSet<Edge<K, EV>> edges) { return this.getEdges() .join(edges) .where(0, 1, 2) .equalTo(0, 1, 2) .with(new JoinFunction<Edge<K, EV>, Edge<K, EV>, Edge<K, EV>>() { @Override public Edge<K, EV> join(Edge<K, EV> first, Edge<K, EV> second) throws Exception { return first; } }).withForwardedFieldsFirst("*").name("Intersect edges") .distinct() .name("Edges"); }
Computes the intersection between the edge set and the given edge set. For all matching pairs, only one edge will be in the resulting data set. @param edges edges to compute intersection with @return edge set containing one edge for all matching pairs of the same edge
private DataSet<Edge<K, EV>> getPairwiseEdgeIntersection(DataSet<Edge<K, EV>> edges) { return this.getEdges() .coGroup(edges) .where(0, 1, 2) .equalTo(0, 1, 2) .with(new MatchingEdgeReducer<>()) .name("Intersect edges"); }
Computes the intersection between the edge set and the given edge set. For all matching pairs, both edges will be in the resulting data set. @param edges edges to compute intersection with @return edge set containing both edges from all matching pairs of the same edge
public <M> Graph<K, VV, EV> runScatterGatherIteration( ScatterFunction<K, VV, M, EV> scatterFunction, org.apache.flink.graph.spargel.GatherFunction<K, VV, M> gatherFunction, int maximumNumberOfIterations) { return this.runScatterGatherIteration(scatterFunction, gatherFunction, maximumNumberOfIterations, null); }
Runs a ScatterGather iteration on the graph. No configuration options are provided. @param scatterFunction the scatter function @param gatherFunction the gather function @param maximumNumberOfIterations maximum number of iterations to perform @return the updated Graph after the scatter-gather iteration has converged or after maximumNumberOfIterations.
public <M> Graph<K, VV, EV> runScatterGatherIteration( ScatterFunction<K, VV, M, EV> scatterFunction, org.apache.flink.graph.spargel.GatherFunction<K, VV, M> gatherFunction, int maximumNumberOfIterations, ScatterGatherConfiguration parameters) { ScatterGatherIteration<K, VV, M, EV> iteration = ScatterGatherIteration.withEdges( edges, scatterFunction, gatherFunction, maximumNumberOfIterations); iteration.configure(parameters); DataSet<Vertex<K, VV>> newVertices = this.getVertices().runOperation(iteration); return new Graph<>(newVertices, this.edges, this.context); }
Runs a ScatterGather iteration on the graph with configuration options. @param scatterFunction the scatter function @param gatherFunction the gather function @param maximumNumberOfIterations maximum number of iterations to perform @param parameters the iteration configuration parameters @return the updated Graph after the scatter-gather iteration has converged or after maximumNumberOfIterations.
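A configuration sketch; SSSPMessenger and SSSPUpdater are hypothetical names standing in for user implementations of ScatterFunction and GatherFunction, and the graph's Double value/edge types are assumed:
// Sketch: run a scatter-gather iteration with a name and custom parallelism.
ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
parameters.setName("Single Source Shortest Paths");
parameters.setParallelism(4);
Graph<Long, Double, Double> result = graph.runScatterGatherIteration(
        new SSSPMessenger(), new SSSPUpdater(), 20, parameters);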
public <M> Graph<K, VV, EV> runGatherSumApplyIteration( org.apache.flink.graph.gsa.GatherFunction<VV, EV, M> gatherFunction, SumFunction<VV, EV, M> sumFunction, ApplyFunction<K, VV, M> applyFunction, int maximumNumberOfIterations) { return this.runGatherSumApplyIteration(gatherFunction, sumFunction, applyFunction, maximumNumberOfIterations, null); }
Runs a Gather-Sum-Apply iteration on the graph. No configuration options are provided. @param gatherFunction the gather function collects information about adjacent vertices and edges @param sumFunction the sum function aggregates the gathered information @param applyFunction the apply function updates the vertex values with the aggregates @param maximumNumberOfIterations maximum number of iterations to perform @param <M> the intermediate type used between gather, sum and apply @return the updated Graph after the gather-sum-apply iteration has converged or after maximumNumberOfIterations.
public <M> Graph<K, VV, EV> runGatherSumApplyIteration( org.apache.flink.graph.gsa.GatherFunction<VV, EV, M> gatherFunction, SumFunction<VV, EV, M> sumFunction, ApplyFunction<K, VV, M> applyFunction, int maximumNumberOfIterations, GSAConfiguration parameters) { GatherSumApplyIteration<K, VV, EV, M> iteration = GatherSumApplyIteration.withEdges( edges, gatherFunction, sumFunction, applyFunction, maximumNumberOfIterations); iteration.configure(parameters); DataSet<Vertex<K, VV>> newVertices = vertices.runOperation(iteration); return new Graph<>(newVertices, this.edges, this.context); }
Runs a Gather-Sum-Apply iteration on the graph with configuration options. @param gatherFunction the gather function collects information about adjacent vertices and edges @param sumFunction the sum function aggregates the gathered information @param applyFunction the apply function updates the vertex values with the aggregates @param maximumNumberOfIterations maximum number of iterations to perform @param parameters the iteration configuration parameters @param <M> the intermediate type used between gather, sum and apply @return the updated Graph after the gather-sum-apply iteration has converged or after maximumNumberOfIterations.
public <M> Graph<K, VV, EV> runVertexCentricIteration( ComputeFunction<K, VV, EV, M> computeFunction, MessageCombiner<K, M> combiner, int maximumNumberOfIterations) { return this.runVertexCentricIteration(computeFunction, combiner, maximumNumberOfIterations, null); }
Runs a {@link VertexCentricIteration} on the graph. No configuration options are provided. @param computeFunction the vertex compute function @param combiner an optional message combiner @param maximumNumberOfIterations maximum number of iterations to perform @return the updated Graph after the vertex-centric iteration has converged or after maximumNumberOfIterations.
public <M> Graph<K, VV, EV> runVertexCentricIteration( ComputeFunction<K, VV, EV, M> computeFunction, MessageCombiner<K, M> combiner, int maximumNumberOfIterations, VertexCentricConfiguration parameters) { VertexCentricIteration<K, VV, EV, M> iteration = VertexCentricIteration.withEdges( edges, computeFunction, combiner, maximumNumberOfIterations); iteration.configure(parameters); DataSet<Vertex<K, VV>> newVertices = this.getVertices().runOperation(iteration); return new Graph<>(newVertices, this.edges, this.context); }
Runs a {@link VertexCentricIteration} on the graph with configuration options. @param computeFunction the vertex compute function @param combiner an optional message combiner @param maximumNumberOfIterations maximum number of iterations to perform @param parameters the {@link VertexCentricConfiguration} parameters @return the updated Graph after the vertex-centric iteration has converged or after maximumNumberOfIterations.
public <T> GraphAnalytic<K, VV, EV, T> run(GraphAnalytic<K, VV, EV, T> analytic) throws Exception { analytic.run(this); return analytic; }
A {@code GraphAnalytic} is similar to a {@link GraphAlgorithm} but is terminal and results are retrieved via accumulators. A Flink program has a single point of execution. A {@code GraphAnalytic} defers execution to the user to allow composing multiple analytics and algorithms into a single program. @param analytic the analytic to run on the Graph @param <T> the result type @throws Exception
public <T> DataSet<T> groupReduceOnNeighbors(NeighborsFunctionWithVertexValue<K, VV, EV, T> neighborsFunction, EdgeDirection direction) throws IllegalArgumentException { switch (direction) { case IN: // create <edge-sourceVertex> pairs DataSet<Tuple2<Edge<K, EV>, Vertex<K, VV>>> edgesWithSources = edges .join(this.vertices).where(0).equalTo(0).name("Edge with source vertex"); return vertices.coGroup(edgesWithSources) .where(0).equalTo("f0.f1") .with(new ApplyNeighborCoGroupFunction<>(neighborsFunction)).name("Neighbors function"); case OUT: // create <edge-targetVertex> pairs DataSet<Tuple2<Edge<K, EV>, Vertex<K, VV>>> edgesWithTargets = edges .join(this.vertices).where(1).equalTo(0).name("Edge with target vertex"); return vertices.coGroup(edgesWithTargets) .where(0).equalTo("f0.f0") .with(new ApplyNeighborCoGroupFunction<>(neighborsFunction)).name("Neighbors function"); case ALL: // create <edge-sourceOrTargetVertex> pairs DataSet<Tuple3<K, Edge<K, EV>, Vertex<K, VV>>> edgesWithNeighbors = edges .flatMap(new EmitOneEdgeWithNeighborPerNode<>()).name("Forward and reverse edges") .join(this.vertices).where(1).equalTo(0) .with(new ProjectEdgeWithNeighbor<>()).name("Edge with vertex"); return vertices.coGroup(edgesWithNeighbors) .where(0).equalTo(0) .with(new ApplyCoGroupFunctionOnAllNeighbors<>(neighborsFunction)).name("Neighbors function"); default: throw new IllegalArgumentException("Illegal edge direction"); } }
Groups by vertex and computes a GroupReduce transformation over the neighbors (both edges and vertices) of each vertex. The neighborsFunction applied on the neighbors has access to both the vertex id and the vertex value of the grouping vertex. <p>For each vertex, the neighborsFunction can iterate over all neighbors of this vertex with the specified direction, and emit any number of output elements, including none. @param neighborsFunction the group reduce function to apply to the neighboring edges and vertices of each vertex. @param direction the edge direction (in-, out-, all-). @param <T> the output type @return a DataSet containing elements of type T @throws IllegalArgumentException
public <T> DataSet<T> groupReduceOnNeighbors(NeighborsFunction<K, VV, EV, T> neighborsFunction, EdgeDirection direction, TypeInformation<T> typeInfo) throws IllegalArgumentException { switch (direction) { case IN: // create <edge-sourceVertex> pairs DataSet<Tuple3<K, Edge<K, EV>, Vertex<K, VV>>> edgesWithSources = edges .join(this.vertices).where(0).equalTo(0) .with(new ProjectVertexIdJoin<>(1)) .withForwardedFieldsFirst("f1->f0").name("Edge with source vertex ID"); return edgesWithSources.groupBy(0).reduceGroup( new ApplyNeighborGroupReduceFunction<>(neighborsFunction)) .name("Neighbors function").returns(typeInfo); case OUT: // create <edge-targetVertex> pairs DataSet<Tuple3<K, Edge<K, EV>, Vertex<K, VV>>> edgesWithTargets = edges .join(this.vertices).where(1).equalTo(0) .with(new ProjectVertexIdJoin<>(0)) .withForwardedFieldsFirst("f0").name("Edge with target vertex ID"); return edgesWithTargets.groupBy(0).reduceGroup( new ApplyNeighborGroupReduceFunction<>(neighborsFunction)) .name("Neighbors function").returns(typeInfo); case ALL: // create <edge-sourceOrTargetVertex> pairs DataSet<Tuple3<K, Edge<K, EV>, Vertex<K, VV>>> edgesWithNeighbors = edges .flatMap(new EmitOneEdgeWithNeighborPerNode<>()) .join(this.vertices).where(1).equalTo(0) .with(new ProjectEdgeWithNeighbor<>()).name("Edge with vertex ID"); return edgesWithNeighbors.groupBy(0).reduceGroup( new ApplyNeighborGroupReduceFunction<>(neighborsFunction)) .name("Neighbors function").returns(typeInfo); default: throw new IllegalArgumentException("Illegal edge direction"); } }
Groups by vertex and computes a GroupReduce transformation over the neighbors (both edges and vertices) of each vertex. The neighborsFunction applied on the neighbors only has access to the vertex id (not the vertex value) of the grouping vertex. <p>For each vertex, the neighborsFunction can iterate over all neighbors of this vertex with the specified direction, and emit any number of output elements, including none. @param neighborsFunction the group reduce function to apply to the neighboring edges and vertices of each vertex. @param direction the edge direction (in-, out-, all-). @param <T> the output type @param typeInfo the explicit return type @return a DataSet containing elements of type T @throws IllegalArgumentException
public DataSet<Tuple2<K, VV>> reduceOnNeighbors(ReduceNeighborsFunction<VV> reduceNeighborsFunction, EdgeDirection direction) throws IllegalArgumentException { switch (direction) { case IN: // create <vertex-source value> pairs final DataSet<Tuple2<K, VV>> verticesWithSourceNeighborValues = edges .join(this.vertices).where(0).equalTo(0) .with(new ProjectVertexWithNeighborValueJoin<>(1)) .withForwardedFieldsFirst("f1->f0").name("Vertex with in-neighbor value"); return verticesWithSourceNeighborValues.groupBy(0).reduce(new ApplyNeighborReduceFunction<>( reduceNeighborsFunction)).name("Neighbors function"); case OUT: // create <vertex-target value> pairs DataSet<Tuple2<K, VV>> verticesWithTargetNeighborValues = edges .join(this.vertices).where(1).equalTo(0) .with(new ProjectVertexWithNeighborValueJoin<>(0)) .withForwardedFieldsFirst("f0").name("Vertex with out-neighbor value"); return verticesWithTargetNeighborValues.groupBy(0).reduce(new ApplyNeighborReduceFunction<>( reduceNeighborsFunction)).name("Neighbors function"); case ALL: // create <vertex-neighbor value> pairs DataSet<Tuple2<K, VV>> verticesWithNeighborValues = edges .flatMap(new EmitOneEdgeWithNeighborPerNode<>()) .join(this.vertices).where(1).equalTo(0) .with(new ProjectNeighborValue<>()).name("Vertex with neighbor value"); return verticesWithNeighborValues.groupBy(0).reduce(new ApplyNeighborReduceFunction<>( reduceNeighborsFunction)).name("Neighbors function"); default: throw new IllegalArgumentException("Illegal edge direction"); } }
Compute a reduce transformation over the neighbors' vertex values of each vertex. For each vertex, the transformation consecutively calls a {@link ReduceNeighborsFunction} until only a single value for each vertex remains. The {@link ReduceNeighborsFunction} combines a pair of neighbor vertex values into one new value of the same type. @param reduceNeighborsFunction the reduce function to apply to the neighbors of each vertex. @param direction the edge direction (in-, out-, all-) @return a Dataset of Tuple2, with one tuple per vertex. The first field of the Tuple2 is the vertex ID and the second field is the aggregate value computed by the provided {@link ReduceNeighborsFunction}. @throws IllegalArgumentException
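For illustration, summing the neighbor values over all incoming and outgoing edges (graph is an assumed Graph<Long, Long, Double>):
// Sketch: per-vertex sum of neighbor values in both directions.
DataSet<Tuple2<Long, Long>> neighborSums = graph.reduceOnNeighbors(
        new ReduceNeighborsFunction<Long>() {
            @Override
            public Long reduceNeighbors(Long firstNeighborValue, Long secondNeighborValue) {
                return firstNeighborValue + secondNeighborValue;
            }
        }, EdgeDirection.ALL);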
public DataSet<Tuple2<K, EV>> reduceOnEdges(ReduceEdgesFunction<EV> reduceEdgesFunction, EdgeDirection direction) throws IllegalArgumentException { switch (direction) { case IN: return edges.map(new ProjectVertexWithEdgeValueMap<>(1)) .withForwardedFields("f1->f0") .name("Vertex with in-edges") .groupBy(0).reduce(new ApplyReduceFunction<>(reduceEdgesFunction)) .name("Reduce on edges"); case OUT: return edges.map(new ProjectVertexWithEdgeValueMap<>(0)) .withForwardedFields("f0->f0") .name("Vertex with out-edges") .groupBy(0).reduce(new ApplyReduceFunction<>(reduceEdgesFunction)) .name("Reduce on edges"); case ALL: return edges.flatMap(new EmitOneVertexWithEdgeValuePerNode<>()) .withForwardedFields("f2->f1") .name("Vertex with all edges") .groupBy(0).reduce(new ApplyReduceFunction<>(reduceEdgesFunction)) .name("Reduce on edges"); default: throw new IllegalArgumentException("Illegal edge direction"); } }
Compute a reduce transformation over the edge values of each vertex. For each vertex, the transformation consecutively calls a {@link ReduceEdgesFunction} until only a single value for each vertex remains. The {@link ReduceEdgesFunction} combines two edge values into one new value of the same type. @param reduceEdgesFunction the reduce function to apply to the edges of each vertex. @param direction the edge direction (in-, out-, all-) @return a Dataset of Tuple2, with one tuple per vertex. The first field of the Tuple2 is the vertex ID and the second field is the aggregate value computed by the provided {@link ReduceEdgesFunction}. @throws IllegalArgumentException
public CompletableFuture<Void> shutdown() { final CompletableFuture<Void> newShutdownFuture = new CompletableFuture<>(); if (clientShutdownFuture.compareAndSet(null, newShutdownFuture)) { final List<CompletableFuture<Void>> connectionFutures = new ArrayList<>(); for (Map.Entry<InetSocketAddress, EstablishedConnection> conn : establishedConnections.entrySet()) { if (establishedConnections.remove(conn.getKey(), conn.getValue())) { connectionFutures.add(conn.getValue().close()); } } for (Map.Entry<InetSocketAddress, PendingConnection> conn : pendingConnections.entrySet()) { if (pendingConnections.remove(conn.getKey()) != null) { connectionFutures.add(conn.getValue().close()); } } CompletableFuture.allOf( connectionFutures.toArray(new CompletableFuture<?>[connectionFutures.size()]) ).whenComplete((result, throwable) -> { if (throwable != null) { LOG.warn("Problem while shutting down the connections at the {}: {}", clientName, throwable); } if (bootstrap != null) { EventLoopGroup group = bootstrap.group(); if (group != null && !group.isShutdown()) { group.shutdownGracefully(0L, 0L, TimeUnit.MILLISECONDS) .addListener(finished -> { if (finished.isSuccess()) { newShutdownFuture.complete(null); } else { newShutdownFuture.completeExceptionally(finished.cause()); } }); } else { newShutdownFuture.complete(null); } } else { newShutdownFuture.complete(null); } }); return newShutdownFuture; } return clientShutdownFuture.get(); }
Shuts down the client and closes all connections. <p>After a call to this method, all returned futures will be failed. @return A {@link CompletableFuture} that will be completed when the shutdown process is done.
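A small usage sketch (the client variable is assumed):
// Sketch: trigger the shutdown and block until all connections are closed.
client.shutdown()
      .whenComplete((ignored, error) -> {
          if (error != null) {
              // the future is completed exceptionally on failure; handling is illustrative
              error.printStackTrace();
          }
      })
      .join();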
public static java.sql.Date internalToDate(int v, TimeZone tz) { // note that, in this case, can't handle Daylight Saving Time final long t = v * MILLIS_PER_DAY; return new java.sql.Date(t - tz.getOffset(t)); }
Converts the internal representation of a SQL DATE (int) to the Java type used for UDF parameters ({@link java.sql.Date}) with the given TimeZone. <p>The internal int represents the days since January 1, 1970. When we convert it to {@link java.sql.Date} (time milliseconds since January 1, 1970, 00:00:00 GMT), we need a TimeZone.
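A worked example, assuming MILLIS_PER_DAY = 24 * 60 * 60 * 1000 and a static or same-class call context:
// Internal value 0 is 1970-01-01; with UTC the offset is 0, so the result
// is the java.sql.Date for epoch day 0.
java.sql.Date epochDay = internalToDate(0, TimeZone.getTimeZone("UTC"));
// Internal value 1 shifts by one day: 1 * 86_400_000 ms, i.e. 1970-01-02 in UTC.
java.sql.Date secondDay = internalToDate(1, TimeZone.getTimeZone("UTC"));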
public static java.sql.Time internalToTime(int v, TimeZone tz) { // note that, in this case, can't handle Daylight Saving Time return new java.sql.Time(v - tz.getOffset(v)); }
Converts the internal representation of a SQL TIME (int) to the Java type used for UDF parameters ({@link java.sql.Time}). <p>The internal int represents the milliseconds since "00:00:00". When we convert it to {@link java.sql.Time} (time milliseconds since January 1, 1970, 00:00:00 GMT), we need a TimeZone.
public static int dateToInternal(java.sql.Date date, TimeZone tz) { long ts = date.getTime() + tz.getOffset(date.getTime()); return (int) (ts / MILLIS_PER_DAY); }
Converts the Java type used for UDF parameters of SQL DATE type ({@link java.sql.Date}) to internal representation (int). <p>Converse of {@link #internalToDate(int)}.
public static int timeToInternal(java.sql.Time time, TimeZone tz) { long ts = time.getTime() + tz.getOffset(time.getTime()); return (int) (ts % MILLIS_PER_DAY); }
Converts the Java type used for UDF parameters of SQL TIME type ({@link java.sql.Time}) to internal representation (int). <p>Converse of {@link #internalToTime(int)}.
public static Long toTimestamp(String dateStr, TimeZone tz) { int length = dateStr.length(); String format; if (length == 21) { format = DEFAULT_DATETIME_FORMATS[1]; } else if (length == 22) { format = DEFAULT_DATETIME_FORMATS[2]; } else if (length == 23) { format = DEFAULT_DATETIME_FORMATS[3]; } else { // otherwise fall back to the default format = DEFAULT_DATETIME_FORMATS[0]; } return toTimestamp(dateStr, format, tz); }
Parses a datetime string to a timestamp (milliseconds) based on the given time zone, choosing among the default datetime formats by the string length and falling back to "yyyy-MM-dd HH:mm:ss". Returns null if parsing fails. @param dateStr the datetime string @param tz the time zone
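For illustration (the UTC time zone is an assumption, and the contents of DEFAULT_DATETIME_FORMATS are assumed to follow the usual second/millisecond patterns):
// Length 19 falls through to the default "yyyy-MM-dd HH:mm:ss" pattern,
// length 23 selects the millisecond-precision pattern.
Long ts1 = toTimestamp("1999-12-31 12:34:56", TimeZone.getTimeZone("UTC"));
Long ts2 = toTimestamp("1999-12-31 12:34:56.789", TimeZone.getTimeZone("UTC"));
// both return null if the string cannot be parsed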
public static Long toTimestamp(String dateStr, String format, TimeZone tz) { SimpleDateFormat formatter = FORMATTER_CACHE.get(format); formatter.setTimeZone(tz); try { return formatter.parse(dateStr).getTime(); } catch (ParseException e) { return null; } }
Parses a datetime string to a timestamp based on the given time zone and format. Returns null if parsing fails. @param dateStr the datetime string @param format the datetime string format @param tz the time zone
public static Long toTimestampTz(String dateStr, String format, String tzStr) { TimeZone tz = TIMEZONE_CACHE.get(tzStr); return toTimestamp(dateStr, format, tz); }
Parses a datetime string to a timestamp based on the given time zone string and format. Returns null if parsing fails. @param dateStr the datetime string @param format the datetime string format @param tzStr the time zone id string
public static int strToDate(String dateStr, String fromFormat) { // It is OK to use UTC, we just want get the epoch days // TODO use offset, better performance long ts = parseToTimeMillis(dateStr, fromFormat, TimeZone.getTimeZone("UTC")); ZoneId zoneId = ZoneId.of("UTC"); Instant instant = Instant.ofEpochMilli(ts); ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, zoneId); return DateTimeUtils.ymdToUnixDate(zdt.getYear(), zdt.getMonthValue(), zdt.getDayOfMonth()); }
Returns the number of days since the epoch (1970-01-01) for the given date string parsed with the given format.
public static String dateFormat(long ts, String format, TimeZone tz) { SimpleDateFormat formatter = FORMATTER_CACHE.get(format); formatter.setTimeZone(tz); Date dateTime = new Date(ts); return formatter.format(dateTime); }
Formats a timestamp using the given format string and time zone. @param ts the timestamp to format. @param format the format string. @param tz the time zone
public static String dateFormat(String dateStr, String fromFormat, String toFormat, TimeZone tz) { SimpleDateFormat fromFormatter = FORMATTER_CACHE.get(fromFormat); fromFormatter.setTimeZone(tz); SimpleDateFormat toFormatter = FORMATTER_CACHE.get(toFormat); toFormatter.setTimeZone(tz); try { return toFormatter.format(fromFormatter.parse(dateStr)); } catch (ParseException e) { LOG.error("Exception when formatting: '" + dateStr + "' from: '" + fromFormat + "' to: '" + toFormat + "'", e); return null; } }
Reformats a datetime string from one format to another using the given time zone. @param dateStr the datetime string. @param fromFormat the original date format. @param toFormat the target date format. @param tz the time zone.
public static String convertTz(String dateStr, String format, String tzFrom, String tzTo) { return dateFormatTz(toTimestampTz(dateStr, format, tzFrom), tzTo); }
Converts a datetime string from one time zone to another. @param dateStr the datetime string @param format the datetime format @param tzFrom the original time zone @param tzTo the target time zone
public static String timestampToString(long ts, int precision, TimeZone tz) { int p = (precision <= 3 && precision >= 0) ? precision : 3; String format = DEFAULT_DATETIME_FORMATS[p]; return dateFormat(ts, format, tz); }
Converts a timestamp to a string. @param ts the timestamp to convert. @param precision the millisecond precision to preserve (values outside 0-3 default to 3) @param tz the time zone
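A worked sketch, assuming DEFAULT_DATETIME_FORMATS[0] is "yyyy-MM-dd HH:mm:ss" and index 3 adds millisecond precision:
// The precision argument selects the output pattern.
String s0 = timestampToString(0L, 0, TimeZone.getTimeZone("UTC"));
// expected to be something like "1970-01-01 00:00:00"
String s3 = timestampToString(0L, 3, TimeZone.getTimeZone("UTC"));
// expected to be something like "1970-01-01 00:00:00.000"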
private static long parseToTimeMillis(String dateStr, TimeZone tz) { String format; if (dateStr.length() <= 10) { format = DATE_FORMAT_STRING; } else { format = TIMESTAMP_FORMAT_STRING; } return parseToTimeMillis(dateStr, format, tz) + getMillis(dateStr); }
Parses a given datetime string to milliseconds since 1970-01-01 00:00:00 UTC, using the default format "yyyy-MM-dd" or "yyyy-MM-dd HH:mm:ss" depending on the string length.
private static int getMillis(String dateStr) { int length = dateStr.length(); if (length == 19) { // "1999-12-31 12:34:56", no milli second left return 0; } else if (length == 21) { // "1999-12-31 12:34:56.7", return 7 return Integer.parseInt(dateStr.substring(20)) * 100; } else if (length == 22) { // "1999-12-31 12:34:56.78", return 78 return Integer.parseInt(dateStr.substring(20)) * 10; } else if (length >= 23 && length <= 26) { // "1999-12-31 12:34:56.123" ~ "1999-12-31 12:34:56.123456" return Integer.parseInt(dateStr.substring(20, 23)) * 10; } else { return 0; } }
Returns the millisecond part of the datetime string.
public static int extractYearMonth(TimeUnitRange range, int v) { switch (range) { case YEAR: return v / 12; case MONTH: return v % 12; case QUARTER: return (v % 12 + 2) / 3; default: throw new UnsupportedOperationException("Unsupported TimeUnitRange: " + range); } }
Extracts the requested {@code TimeUnitRange} component (YEAR, MONTH, or QUARTER) from an internal year-month value given in months.
public static long timestampFloor(TimeUnitRange range, long ts, TimeZone tz) { // assume that we are at UTC timezone, just for algorithm performance long offset = tz.getOffset(ts); long utcTs = ts + offset; switch (range) { case HOUR: return floor(utcTs, MILLIS_PER_HOUR) - offset; case DAY: return floor(utcTs, MILLIS_PER_DAY) - offset; case MONTH: case YEAR: case QUARTER: int days = (int) (utcTs / MILLIS_PER_DAY + EPOCH_JULIAN); return julianDateFloor(range, days, true) * MILLIS_PER_DAY - offset; default: // for MINUTE and SECONDS etc..., // it is more effective to use arithmetic Method throw new AssertionError(range); } }
Floors a timestamp to the given {@code TimeUnitRange} (HOUR, DAY, MONTH, QUARTER, or YEAR), taking the given time zone into account.
public static long timestampCeil(TimeUnitRange range, long ts, TimeZone tz) { // assume that we are at UTC timezone, just for algorithm performance long offset = tz.getOffset(ts); long utcTs = ts + offset; switch (range) { case HOUR: return ceil(utcTs, MILLIS_PER_HOUR) - offset; case DAY: return ceil(utcTs, MILLIS_PER_DAY) - offset; case MONTH: case YEAR: case QUARTER: int days = (int) (utcTs / MILLIS_PER_DAY + EPOCH_JULIAN); return julianDateFloor(range, days, false) * MILLIS_PER_DAY - offset; default: // for MINUTE and SECONDS etc..., // it is more effective to use arithmetic Method throw new AssertionError(range); } }
Keep the algorithm consistent with Calcite DateTimeUtils.julianDateFloor, but here we take time zone into account.
public static int dateDiff(long t1, long t2, TimeZone tz) { ZoneId zoneId = tz.toZoneId(); LocalDate ld1 = Instant.ofEpochMilli(t1).atZone(zoneId).toLocalDate(); LocalDate ld2 = Instant.ofEpochMilli(t2).atZone(zoneId).toLocalDate(); return (int) ChronoUnit.DAYS.between(ld2, ld1); }
NOTE: (1) The JDK relies on the operating system clock for time. Each operating system has its own method of handling date changes such as leap seconds (e.g. the OS may slow down the clock to accommodate this). (2) DST (Daylight Saving Time) is a legal matter and governments have changed it over time. Some days are NOT exactly 24 hours long; they can be 23 or 25 hours long on the first or last day of daylight saving time. The JDK handles DST correctly. TODO: a carefully written algorithm could improve the performance
public static String dateSub(String dateStr, int days, TimeZone tz) { long ts = parseToTimeMillis(dateStr, tz); if (ts == Long.MIN_VALUE) { return null; } return dateSub(ts, days, tz); }
Subtracts a number of days from a date string. @param dateStr the formatted date string. @param days the number of days to subtract. @param tz the time zone of the datetime string @return the resulting date string, or null if the input cannot be parsed.
public static String dateSub(long ts, int days, TimeZone tz) { ZoneId zoneId = tz.toZoneId(); Instant instant = Instant.ofEpochMilli(ts); ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, zoneId); long resultTs = zdt.minusDays(days).toInstant().toEpochMilli(); return dateFormat(resultTs, DATE_FORMAT_STRING, tz); }
Subtracts a number of days from a timestamp. @param ts the timestamp. @param days the number of days to subtract. @param tz the time zone @return the resulting date string.
public static String fromUnixtime(long unixtime, String format, TimeZone tz) { SimpleDateFormat formatter = FORMATTER_CACHE.get(format); formatter.setTimeZone(tz); Date date = new Date(unixtime * 1000); try { return formatter.format(date); } catch (Exception e) { LOG.error("Exception when formatting.", e); return null; } }
Convert unix timestamp (seconds since '1970-01-01 00:00:00' UTC) to datetime string in the given format.
public static long unixTimestamp(String dateStr, String format, TimeZone tz) { long ts = parseToTimeMillis(dateStr, format, tz); if (ts == Long.MIN_VALUE) { return Long.MIN_VALUE; } else { // return the seconds return ts / 1000; } }
Returns the value of the argument as an unsigned integer in seconds since '1970-01-01 00:00:00' UTC.
@Override public boolean validate(Graph<K, VV, EV> graph) throws Exception { DataSet<Tuple1<K>> edgeIds = graph.getEdges() .flatMap(new MapEdgeIds<>()).distinct(); DataSet<K> invalidIds = graph.getVertices().coGroup(edgeIds).where(0) .equalTo(0).with(new GroupInvalidIds<>()).first(1); return invalidIds.map(new KToTupleMap<>()).count() == 0; }
Checks that the edge set input contains valid vertex Ids, i.e. that they also exist in the vertex input set. @return a boolean stating whether a graph is valid with respect to its vertex ids.
public HCatInputFormatBase<T> getFields(String... fields) throws IOException { // build output schema ArrayList<HCatFieldSchema> fieldSchemas = new ArrayList<HCatFieldSchema>(fields.length); for (String field : fields) { fieldSchemas.add(this.outputSchema.get(field)); } this.outputSchema = new HCatSchema(fieldSchemas); // update output schema configuration configuration.set("mapreduce.lib.hcat.output.schema", HCatUtil.serialize(outputSchema)); return this; }
Specifies the fields which are returned by the InputFormat and their order. @param fields The fields and their order which are returned by the InputFormat. @return This InputFormat with specified return fields. @throws java.io.IOException
public HCatInputFormatBase<T> asFlinkTuples() throws HCatException { // build type information int numFields = outputSchema.getFields().size(); if (numFields > this.getMaxFlinkTupleSize()) { throw new IllegalArgumentException("Only up to " + this.getMaxFlinkTupleSize() + " fields can be returned as Flink tuples."); } TypeInformation[] fieldTypes = new TypeInformation[numFields]; fieldNames = new String[numFields]; for (String fieldName : outputSchema.getFieldNames()) { HCatFieldSchema field = outputSchema.get(fieldName); int fieldPos = outputSchema.getPosition(fieldName); TypeInformation fieldType = getFieldType(field); fieldTypes[fieldPos] = fieldType; fieldNames[fieldPos] = fieldName; } this.resultType = new TupleTypeInfo(fieldTypes); return this; }
Specifies that the InputFormat returns Flink tuples instead of {@link org.apache.hive.hcatalog.data.HCatRecord}. <p>Note: Flink tuples might only support a limited number of fields (depending on the API). @return This InputFormat. @throws org.apache.hive.hcatalog.common.HCatException
private void writeObject(ObjectOutputStream out) throws IOException { out.writeInt(this.fieldNames.length); for (String fieldName : this.fieldNames) { out.writeUTF(fieldName); } this.configuration.write(out); }
Custom serialization for this InputFormat: writes the number of field names, each field name, and the wrapped Hadoop configuration to the output stream.
static byte[] readBinaryFieldFromSegments( MemorySegment[] segments, int baseOffset, int fieldOffset, long variablePartOffsetAndLen) { long mark = variablePartOffsetAndLen & HIGHEST_FIRST_BIT; if (mark == 0) { final int subOffset = (int) (variablePartOffsetAndLen >> 32); final int len = (int) variablePartOffsetAndLen; return SegmentsUtil.copyToBytes(segments, baseOffset + subOffset, len); } else { int len = (int) ((variablePartOffsetAndLen & HIGHEST_SECOND_TO_EIGHTH_BIT) >>> 56); if (SegmentsUtil.LITTLE_ENDIAN) { return SegmentsUtil.copyToBytes(segments, fieldOffset, len); } else { // fieldOffset + 1 to skip header. return SegmentsUtil.copyToBytes(segments, fieldOffset + 1, len); } } }
Gets the binary field; if the length is less than 8, the data is included directly in variablePartOffsetAndLen. <p>Note: the ByteOrder needs to be considered. @param baseOffset base offset of the composite binary format. @param fieldOffset absolute start offset of 'variablePartOffsetAndLen'. @param variablePartOffsetAndLen a long value holding either the data itself or the offset and length.
static BinaryString readBinaryStringFieldFromSegments( MemorySegment[] segments, int baseOffset, int fieldOffset, long variablePartOffsetAndLen) { long mark = variablePartOffsetAndLen & HIGHEST_FIRST_BIT; if (mark == 0) { final int subOffset = (int) (variablePartOffsetAndLen >> 32); final int len = (int) variablePartOffsetAndLen; return new BinaryString(segments, baseOffset + subOffset, len); } else { int len = (int) ((variablePartOffsetAndLen & HIGHEST_SECOND_TO_EIGHTH_BIT) >>> 56); if (SegmentsUtil.LITTLE_ENDIAN) { return new BinaryString(segments, fieldOffset, len); } else { // fieldOffset + 1 to skip header. return new BinaryString(segments, fieldOffset + 1, len); } } }
Gets the binary string; if the length is less than 8, the data is included directly in variablePartOffsetAndLen. <p>Note: the ByteOrder needs to be considered. @param baseOffset base offset of the composite binary format. @param fieldOffset absolute start offset of 'variablePartOffsetAndLen'. @param variablePartOffsetAndLen a long value holding either the data itself or the offset and length.
public static String getVersion() { String version = EnvironmentInformation.class.getPackage().getImplementationVersion(); return version != null ? version : UNKNOWN; }
Returns the version of the code as a String. If the version is null, the JobManager is not running from a Maven build; an example is a source-code checkout that is compiled and run from inside an IDE. @return The version string.
public static RevisionInformation getRevisionInformation() { String revision = UNKNOWN; String commitDate = UNKNOWN; try (InputStream propFile = EnvironmentInformation.class.getClassLoader().getResourceAsStream(".version.properties")) { if (propFile != null) { Properties properties = new Properties(); properties.load(propFile); String propRevision = properties.getProperty("git.commit.id.abbrev"); String propCommitDate = properties.getProperty("git.commit.time"); revision = propRevision != null ? propRevision : UNKNOWN; commitDate = propCommitDate != null ? propCommitDate : UNKNOWN; } } catch (Throwable t) { if (LOG.isDebugEnabled()) { LOG.debug("Cannot determine code revision: Unable to read version property file.", t); } else { LOG.info("Cannot determine code revision: Unable to read version property file."); } } return new RevisionInformation(revision, commitDate); }
Returns the code revision (commit and commit date) of Flink, as generated by the Maven builds. @return The code revision.
public static String getHadoopUser() { try { Class<?> ugiClass = Class.forName( "org.apache.hadoop.security.UserGroupInformation", false, EnvironmentInformation.class.getClassLoader()); Method currentUserMethod = ugiClass.getMethod("getCurrentUser"); Method shortUserNameMethod = ugiClass.getMethod("getShortUserName"); Object ugi = currentUserMethod.invoke(null); return (String) shortUserNameMethod.invoke(ugi); } catch (ClassNotFoundException e) { return "<no hadoop dependency found>"; } catch (LinkageError e) { // hadoop classes are not in the classpath LOG.debug("Cannot determine user/group information using Hadoop utils. " + "Hadoop classes not loaded or compatible", e); } catch (Throwable t) { // some other error occurred that we should log and make known LOG.warn("Error while accessing user/group information via Hadoop utils.", t); } return UNKNOWN; }
Gets the name of the user that is running the JVM, determined via Hadoop's UserGroupInformation when the Hadoop classes are available on the classpath. @return The name of the user that is running the JVM, or UNKNOWN if it cannot be determined.
public static long getMaxJvmHeapMemory() { final long maxMemory = Runtime.getRuntime().maxMemory(); if (maxMemory != Long.MAX_VALUE) { // we have the proper max memory return maxMemory; } else { // max JVM heap size is not set - use the heuristic to use 1/4th of the physical memory final long physicalMemory = Hardware.getSizeOfPhysicalMemory(); if (physicalMemory != -1) { // got proper value for physical memory return physicalMemory / 4; } else { throw new RuntimeException("Could not determine the amount of free memory.\n" + "Please set the maximum memory for the JVM, e.g. -Xmx512M for 512 megabytes."); } } }
The maximum JVM heap size, in bytes. <p>This method uses the <i>-Xmx</i> value of the JVM, if set. If not set, it returns (as a heuristic) 1/4th of the physical memory size. @return The maximum JVM heap size, in bytes.
public static long getSizeOfFreeHeapMemory() { Runtime r = Runtime.getRuntime(); return getMaxJvmHeapMemory() - r.totalMemory() + r.freeMemory(); }
Gets an estimate of the size of the free heap memory. The estimate may vary, depending on the current level of memory fragmentation and the number of dead objects. For a better (but more heavy-weight) estimate, use {@link #getSizeOfFreeHeapMemoryWithDefrag()}. @return An estimate of the size of the free heap memory, in bytes.
public static String getJvmVersion() { try { final RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean(); return bean.getVmName() + " - " + bean.getVmVendor() + " - " + bean.getSpecVersion() + '/' + bean.getVmVersion(); } catch (Throwable t) { return UNKNOWN; } }
Gets the version of the JVM in the form "VM_Name - Vendor - Spec/Version". @return The JVM version.
public static String getJvmStartupOptions() { try { final RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean(); final StringBuilder bld = new StringBuilder(); for (String s : bean.getInputArguments()) { bld.append(s).append(' '); } return bld.toString(); } catch (Throwable t) { return UNKNOWN; } }
Gets the system parameters and environment parameters that were passed to the JVM on startup. @return The options passed to the JVM on startup.
public static String[] getJvmStartupOptionsArray() { try { RuntimeMXBean bean = ManagementFactory.getRuntimeMXBean(); List<String> options = bean.getInputArguments(); return options.toArray(new String[options.size()]); } catch (Throwable t) { return new String[0]; } }
Gets the system parameters and environment parameters that were passed to the JVM on startup. @return The options passed to the JVM on startup.
public static long getOpenFileHandlesLimit() { if (OperatingSystem.isWindows()) { // getMaxFileDescriptorCount method is not available on Windows return -1L; } Class<?> sunBeanClass; try { sunBeanClass = Class.forName("com.sun.management.UnixOperatingSystemMXBean"); } catch (ClassNotFoundException e) { return -1L; } try { Method fhLimitMethod = sunBeanClass.getMethod("getMaxFileDescriptorCount"); Object result = fhLimitMethod.invoke(ManagementFactory.getOperatingSystemMXBean()); return (Long) result; } catch (Throwable t) { LOG.warn("Unexpected error when accessing file handle limit", t); return -1L; } }
Tries to retrieve the maximum number of open file handles. This method will only work on UNIX-based operating systems with Sun/Oracle Java versions. <p>If the number of max open file handles cannot be determined, this method returns {@code -1}.</p> @return The limit of open file handles, or {@code -1}, if the limit could not be determined.
public static void logEnvironmentInfo(Logger log, String componentName, String[] commandLineArgs) { if (log.isInfoEnabled()) { RevisionInformation rev = getRevisionInformation(); String version = getVersion(); String jvmVersion = getJvmVersion(); String[] options = getJvmStartupOptionsArray(); String javaHome = System.getenv("JAVA_HOME"); long maxHeapMegabytes = getMaxJvmHeapMemory() >>> 20; log.info("--------------------------------------------------------------------------------"); log.info(" Starting " + componentName + " (Version: " + version + ", " + "Rev:" + rev.commitId + ", " + "Date:" + rev.commitDate + ")"); log.info(" OS current user: " + System.getProperty("user.name")); log.info(" Current Hadoop/Kerberos user: " + getHadoopUser()); log.info(" JVM: " + jvmVersion); log.info(" Maximum heap size: " + maxHeapMegabytes + " MiBytes"); log.info(" JAVA_HOME: " + (javaHome == null ? "(not set)" : javaHome)); String hadoopVersionString = getHadoopVersionString(); if (hadoopVersionString != null) { log.info(" Hadoop version: " + hadoopVersionString); } else { log.info(" No Hadoop Dependency available"); } if (options.length == 0) { log.info(" JVM Options: (none)"); } else { log.info(" JVM Options:"); for (String s: options) { log.info(" " + s); } } if (commandLineArgs == null || commandLineArgs.length == 0) { log.info(" Program Arguments: (none)"); } else { log.info(" Program Arguments:"); for (String s: commandLineArgs) { log.info(" " + s); } } log.info(" Classpath: " + System.getProperty("java.class.path")); log.info("--------------------------------------------------------------------------------"); } }
Logs information about the environment, like code revision, current user, Java version, and JVM parameters. @param log The logger to log the information to. @param componentName The component name to mention in the log. @param commandLineArgs The arguments accompanying the start of the component.
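A minimal usage sketch for logging the startup banner (hedged: the component name and logger name are arbitrary examples; only SLF4J's LoggerFactory and the method documented above are used).

import org.slf4j.LoggerFactory;

public class EntrypointExample {
    public static void main(String[] args) {
        // prints the banner with version, revision, JVM details, JVM options and program arguments
        logEnvironmentInfo(LoggerFactory.getLogger("EntrypointExample"), "ExampleClusterEntrypoint", args);
    }
}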
public Json jsonSchema(String jsonSchema) { Preconditions.checkNotNull(jsonSchema); this.jsonSchema = jsonSchema; this.schema = null; this.deriveSchema = null; return this; }
Sets the JSON schema string with field names and types according to the JSON schema specification (http://json-schema.org/specification.html). <p>The schema might be nested. @param jsonSchema the JSON schema string
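A usage sketch of the builder with an explicit JSON schema string (hedged: it assumes the builder above is the Json format descriptor with a no-argument constructor; the schema content is an arbitrary example).

Json json = new Json()
    .jsonSchema(
        "{"
        + "  \"type\": \"object\","
        + "  \"properties\": {"
        + "    \"id\":   { \"type\": \"integer\" },"
        + "    \"name\": { \"type\": \"string\" }"
        + "  }"
        + "}");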
public Json schema(TypeInformation<Row> schemaType) { Preconditions.checkNotNull(schemaType); this.schema = TypeStringUtils.writeTypeInfo(schemaType); this.jsonSchema = null; this.deriveSchema = null; return this; }
Sets the schema using type information. <p>JSON objects are represented as ROW types. <p>The schema might be nested. @param schemaType type information that describes the schema
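The same schema can be expressed as type information instead of a schema string. A sketch assuming Flink's org.apache.flink.api.common.typeinfo.Types factory for named row types:

import org.apache.flink.api.common.typeinfo.Types;

Json json = new Json()
    .schema(
        Types.ROW_NAMED(
            new String[] {"id", "name"},
            Types.INT, Types.STRING));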
public static MemorySize getJobManagerHeapMemory(Configuration configuration) { if (configuration.containsKey(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY.key())) { return MemorySize.parse(configuration.getString(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY)); } else if (configuration.containsKey(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY_MB.key())) { return MemorySize.parse(configuration.getInteger(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY_MB) + "m"); } else { //use default value return MemorySize.parse(configuration.getString(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY)); } }
Get job manager's heap memory. This method will check the new key {@link JobManagerOptions#JOB_MANAGER_HEAP_MEMORY} and the old key {@link JobManagerOptions#JOB_MANAGER_HEAP_MEMORY_MB} for backwards compatibility. @param configuration the configuration object @return the memory size of job manager's heap memory.
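An illustration of the key precedence (hedged: the method above is called unqualified, and the concrete sizes are arbitrary): when both keys are present, the new key wins; when only the deprecated MB key is present, it is parsed with an appended "m".

Configuration conf = new Configuration();
conf.setString(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY, "2048m");   // new key
conf.setInteger(JobManagerOptions.JOB_MANAGER_HEAP_MEMORY_MB, 1024);  // deprecated key, ignored when the new key is set

MemorySize heap = getJobManagerHeapMemory(conf);
// heap.getMebiBytes() == 2048; getTaskManagerHeapMemory below behaves analogously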
public static MemorySize getTaskManagerHeapMemory(Configuration configuration) { if (configuration.containsKey(TaskManagerOptions.TASK_MANAGER_HEAP_MEMORY.key())) { return MemorySize.parse(configuration.getString(TaskManagerOptions.TASK_MANAGER_HEAP_MEMORY)); } else if (configuration.containsKey(TaskManagerOptions.TASK_MANAGER_HEAP_MEMORY_MB.key())) { return MemorySize.parse(configuration.getInteger(TaskManagerOptions.TASK_MANAGER_HEAP_MEMORY_MB) + "m"); } else { //use default value return MemorySize.parse(configuration.getString(TaskManagerOptions.TASK_MANAGER_HEAP_MEMORY)); } }
Get task manager's heap memory. This method will check the new key {@link TaskManagerOptions#TASK_MANAGER_HEAP_MEMORY} and the old key {@link TaskManagerOptions#TASK_MANAGER_HEAP_MEMORY_MB} for backwards compatibility. @param configuration the configuration object @return the memory size of task manager's heap memory.
@Nonnull public static String[] parseTempDirectories(Configuration configuration) { return splitPaths(configuration.getString(CoreOptions.TMP_DIRS)); }
Extracts the task manager directories for temporary files as defined by {@link org.apache.flink.configuration.CoreOptions#TMP_DIRS}. @param configuration configuration object @return array of configured directories (in order)
@Nonnull public static String[] parseLocalStateDirectories(Configuration configuration) { String configValue = configuration.getString(CheckpointingOptions.LOCAL_RECOVERY_TASK_MANAGER_STATE_ROOT_DIRS, ""); return splitPaths(configValue); }
Extracts the local state directories as defined by {@link CheckpointingOptions#LOCAL_RECOVERY_TASK_MANAGER_STATE_ROOT_DIRS}. @param configuration configuration object @return array of configured directories (in order)
@Nonnull public static Configuration createConfiguration(Properties properties) { final Configuration configuration = new Configuration(); final Set<String> propertyNames = properties.stringPropertyNames(); for (String propertyName : propertyNames) { configuration.setString(propertyName, properties.getProperty(propertyName)); } return configuration; }
Creates a new {@link Configuration} from the given {@link Properties}. @param properties to convert into a {@link Configuration} @return {@link Configuration} which has been populated by the values of the given {@link Properties}
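A minimal usage sketch of the conversion (hedged: the property keys and values are arbitrary examples, java.util.Properties is assumed to be imported, and the method above is called unqualified).

Properties props = new Properties();
props.setProperty("taskmanager.numberOfTaskSlots", "4");
props.setProperty("parallelism.default", "2");

Configuration conf = createConfiguration(props);
// every property is copied verbatim as a string entry, e.g.
// conf.getString("parallelism.default", null) returns "2"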
@Override public void open() { synchronized (stateLock) { if (!closed) { throw new IllegalStateException("currently not closed."); } closed = false; } // create the partitions final int partitionFanOut = getPartitioningFanOutNoEstimates(this.availableMemory.size()); createPartitions(partitionFanOut); // set up the table structure. the write behind buffers are taken away, as are one buffer per partition final int numBuckets = getInitialTableSize(this.availableMemory.size(), this.segmentSize, partitionFanOut, this.avgRecordLen); initTable(numBuckets, (byte) partitionFanOut); }
Initializes the hash table by creating the in-memory partitions and setting up the bucket table structure. The table must currently be closed, otherwise an IllegalStateException is thrown.
@Override public void close() { // make sure that we close only once synchronized (this.stateLock) { if (this.closed) { return; } this.closed = true; } LOG.debug("Closing hash table and releasing resources."); // release the table structure releaseTable(); // clear the memory in the partitions clearPartitions(); }
Closes the hash table. This effectively releases all internal structures and closes all open files and removes them. The call to this method is valid both as a cleanup after the complete inputs were properly processed, and as a cancellation call, which cleans up all resources that are currently held by the hash join. If another process still accesses the hash table after close has been called, no operations will be performed.
public void buildTableWithUniqueKey(final MutableObjectIterator<T> input) throws IOException { // go over the complete input and insert every element into the hash table T value; while (this.running && (value = input.next()) != null) { insertOrReplaceRecord(value); } }
Builds the hash table from the given input by inserting every record; a record whose key is already present in the table replaces the existing entry. @param input the input to read the records from @throws IOException
public void insertOrReplaceRecord(T record) throws IOException { if (this.closed) { return; } final int searchHashCode = MathUtils.jenkinsHash(this.buildSideComparator.hash(record)); final int posHashCode = searchHashCode % this.numBuckets; // get the bucket for the given hash code final MemorySegment originalBucket = this.buckets[posHashCode >> this.bucketsPerSegmentBits]; final int originalBucketOffset = (posHashCode & this.bucketsPerSegmentMask) << NUM_INTRA_BUCKET_BITS; MemorySegment bucket = originalBucket; int bucketInSegmentOffset = originalBucketOffset; // get the basic characteristics of the bucket final int partitionNumber = bucket.get(bucketInSegmentOffset + HEADER_PARTITION_OFFSET); final InMemoryPartition<T> partition = this.partitions.get(partitionNumber); final MemorySegment[] overflowSegments = partition.overflowSegments; this.buildSideComparator.setReference(record); int countInSegment = bucket.getInt(bucketInSegmentOffset + HEADER_COUNT_OFFSET); int numInSegment = 0; int posInSegment = bucketInSegmentOffset + BUCKET_HEADER_LENGTH; // loop over all segments that are involved in the bucket (original bucket plus overflow buckets) while (true) { while (numInSegment < countInSegment) { final int thisCode = bucket.getInt(posInSegment); posInSegment += HASH_CODE_LEN; // check if the hash code matches if (thisCode == searchHashCode) { // get the pointer to the pair final int pointerOffset = bucketInSegmentOffset + BUCKET_POINTER_START_OFFSET + (numInSegment * POINTER_LEN); final long pointer = bucket.getLong(pointerOffset); // deserialize the key to check whether it is really equal, or whether we had only a hash collision T valueAtPosition = partition.readRecordAt(pointer); if (this.buildSideComparator.equalToReference(valueAtPosition)) { long newPointer = insertRecordIntoPartition(record, partition, true); bucket.putLong(pointerOffset, newPointer); return; } } numInSegment++; } // this segment is done. check if there is another chained bucket long newForwardPointer = bucket.getLong(bucketInSegmentOffset + HEADER_FORWARD_OFFSET); if (newForwardPointer == BUCKET_FORWARD_POINTER_NOT_SET) { // nothing found. append and insert long pointer = insertRecordIntoPartition(record, partition, false); if (countInSegment < NUM_ENTRIES_PER_BUCKET) { // we are good in our current bucket, put the values bucket.putInt(bucketInSegmentOffset + BUCKET_HEADER_LENGTH + (countInSegment * HASH_CODE_LEN), searchHashCode); // hash code bucket.putLong(bucketInSegmentOffset + BUCKET_POINTER_START_OFFSET + (countInSegment * POINTER_LEN), pointer); // pointer bucket.putInt(bucketInSegmentOffset + HEADER_COUNT_OFFSET, countInSegment + 1); // update count } else { insertBucketEntryFromStart(originalBucket, originalBucketOffset, searchHashCode, pointer, partitionNumber); } return; } final int overflowSegNum = (int) (newForwardPointer >>> 32); bucket = overflowSegments[overflowSegNum]; bucketInSegmentOffset = (int) newForwardPointer; countInSegment = bucket.getInt(bucketInSegmentOffset + HEADER_COUNT_OFFSET); posInSegment = bucketInSegmentOffset + BUCKET_HEADER_LENGTH; numInSegment = 0; } }
Replaces the record in the hash table if a record with the same key is already present, or appends the record if it is not. May trigger an expensive compaction. @param record record to insert or replace @throws IOException
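The address computation at the top of the method can be illustrated in isolation. A small self-contained sketch (the segment and bucket sizing constants here are made-up example values, not the table's actual configuration; the hash is assumed non-negative, as produced by the jenkinsHash scrambling above):

static void locateBucket(int scrambledHash, int numBuckets) {
    // example sizing: 2^7 buckets per memory segment, 128-byte (2^7) buckets
    final int bucketsPerSegmentBits = 7;
    final int bucketsPerSegmentMask = (1 << bucketsPerSegmentBits) - 1;
    final int numIntraBucketBits = 7;

    int posHashCode = scrambledHash % numBuckets;                     // logical bucket index
    int segmentIndex = posHashCode >> bucketsPerSegmentBits;          // which bucket segment holds it
    int offsetInSegment = (posHashCode & bucketsPerSegmentMask) << numIntraBucketBits; // byte offset inside the segment

    System.out.println("bucket " + posHashCode + " -> segment " + segmentIndex
        + ", byte offset " + offsetInSegment);
}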
private void insertBucketEntryFromStart(MemorySegment bucket, int bucketInSegmentPos, int hashCode, long pointer, int partitionNumber) throws IOException { boolean checkForResize = false; // find the position to put the hash code and pointer final int count = bucket.getInt(bucketInSegmentPos + HEADER_COUNT_OFFSET); if (count < NUM_ENTRIES_PER_BUCKET) { // we are good in our current bucket, put the values bucket.putInt(bucketInSegmentPos + BUCKET_HEADER_LENGTH + (count * HASH_CODE_LEN), hashCode); // hash code bucket.putLong(bucketInSegmentPos + BUCKET_POINTER_START_OFFSET + (count * POINTER_LEN), pointer); // pointer bucket.putInt(bucketInSegmentPos + HEADER_COUNT_OFFSET, count + 1); // update count } else { // we need to go to the overflow buckets final InMemoryPartition<T> p = this.partitions.get(partitionNumber); final long originalForwardPointer = bucket.getLong(bucketInSegmentPos + HEADER_FORWARD_OFFSET); final long forwardForNewBucket; if (originalForwardPointer != BUCKET_FORWARD_POINTER_NOT_SET) { // forward pointer set final int overflowSegNum = (int) (originalForwardPointer >>> 32); final int segOffset = (int) originalForwardPointer; final MemorySegment seg = p.overflowSegments[overflowSegNum]; final int obCount = seg.getInt(segOffset + HEADER_COUNT_OFFSET); // check if there is space in this overflow bucket if (obCount < NUM_ENTRIES_PER_BUCKET) { // space in this bucket and we are done seg.putInt(segOffset + BUCKET_HEADER_LENGTH + (obCount * HASH_CODE_LEN), hashCode); // hash code seg.putLong(segOffset + BUCKET_POINTER_START_OFFSET + (obCount * POINTER_LEN), pointer); // pointer seg.putInt(segOffset + HEADER_COUNT_OFFSET, obCount + 1); // update count return; } else { // no space here, we need a new bucket. this current overflow bucket will be the // target of the new overflow bucket forwardForNewBucket = originalForwardPointer; } } else { // no overflow bucket yet, so we need a first one forwardForNewBucket = BUCKET_FORWARD_POINTER_NOT_SET; } // we need a new overflow bucket MemorySegment overflowSeg; final int overflowBucketNum; final int overflowBucketOffset; // first, see if there is space for an overflow bucket remaining in the last overflow segment if (p.nextOverflowBucket == 0) { // no space left in last bucket, or no bucket yet, so create an overflow segment overflowSeg = getNextBuffer(); overflowBucketOffset = 0; overflowBucketNum = p.numOverflowSegments; // add the new overflow segment if (p.overflowSegments.length <= p.numOverflowSegments) { MemorySegment[] newSegsArray = new MemorySegment[p.overflowSegments.length * 2]; System.arraycopy(p.overflowSegments, 0, newSegsArray, 0, p.overflowSegments.length); p.overflowSegments = newSegsArray; } p.overflowSegments[p.numOverflowSegments] = overflowSeg; p.numOverflowSegments++; checkForResize = true; } else { // there is space in the last overflow bucket overflowBucketNum = p.numOverflowSegments - 1; overflowSeg = p.overflowSegments[overflowBucketNum]; overflowBucketOffset = p.nextOverflowBucket << NUM_INTRA_BUCKET_BITS; } // next overflow bucket is one ahead. if the segment is full, the next will be at the beginning // of a new segment p.nextOverflowBucket = (p.nextOverflowBucket == this.bucketsPerSegmentMask ? 0 : p.nextOverflowBucket + 1); // insert the new overflow bucket in the chain of buckets // 1) set the old forward pointer // 2) let the bucket in the main table point to this one overflowSeg.putLong(overflowBucketOffset + HEADER_FORWARD_OFFSET, forwardForNewBucket); final long pointerToNewBucket = (((long) overflowBucketNum) << 32) | ((long) overflowBucketOffset); bucket.putLong(bucketInSegmentPos + HEADER_FORWARD_OFFSET, pointerToNewBucket); // finally, insert the values into the overflow buckets overflowSeg.putInt(overflowBucketOffset + BUCKET_HEADER_LENGTH, hashCode); // hash code overflowSeg.putLong(overflowBucketOffset + BUCKET_POINTER_START_OFFSET, pointer); // pointer // set the count to one overflowSeg.putInt(overflowBucketOffset + HEADER_COUNT_OFFSET, 1); if (checkForResize && !this.isResizing) { // check if we should resize buckets if (this.buckets.length <= getOverflowSegmentCount()) { resizeHashTable(); } } } }
IMPORTANT!!! We pass only the partition number, because we must make sure we get a fresh partition reference. The partition reference used during the search for the key may have become invalid during the compaction.
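The overflow chain in both insert paths is linked via a packed long "forward pointer": the upper 32 bits carry the overflow segment number and the lower 32 bits carry the bucket's byte offset inside that segment, mirroring the shifts in the code above. A small round-trip sketch (the helper class and names are for illustration only):

final class ForwardPointer {
    static long encode(int overflowSegNum, int bucketOffset) {
        return (((long) overflowSegNum) << 32) | ((long) bucketOffset);
    }

    static int segmentNumber(long pointer) {
        return (int) (pointer >>> 32);   // upper 32 bits
    }

    static int bucketOffset(long pointer) {
        return (int) pointer;            // lower 32 bits
    }
}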
private void insertBucketEntryFromSearch(MemorySegment originalBucket, MemorySegment currentBucket, int originalBucketOffset, int currentBucketOffset, int countInCurrentBucket, long originalForwardPointer, int hashCode, long pointer, int partitionNumber) throws IOException { boolean checkForResize = false; if (countInCurrentBucket < NUM_ENTRIES_PER_BUCKET) { // we are good in our current bucket, put the values currentBucket.putInt(currentBucketOffset + BUCKET_HEADER_LENGTH + (countInCurrentBucket * HASH_CODE_LEN), hashCode); // hash code currentBucket.putLong(currentBucketOffset + BUCKET_POINTER_START_OFFSET + (countInCurrentBucket * POINTER_LEN), pointer); // pointer currentBucket.putInt(currentBucketOffset + HEADER_COUNT_OFFSET, countInCurrentBucket + 1); // update count } else { // we go to a new overflow bucket final InMemoryPartition<T> partition = this.partitions.get(partitionNumber); MemorySegment overflowSeg; final int overflowSegmentNum; final int overflowBucketOffset; // first, see if there is space for an overflow bucket remaining in the last overflow segment if (partition.nextOverflowBucket == 0) { // no space left in last bucket, or no bucket yet, so create an overflow segment overflowSeg = getNextBuffer(); overflowBucketOffset = 0; overflowSegmentNum = partition.numOverflowSegments; // add the new overflow segment if (partition.overflowSegments.length <= partition.numOverflowSegments) { MemorySegment[] newSegsArray = new MemorySegment[partition.overflowSegments.length * 2]; System.arraycopy(partition.overflowSegments, 0, newSegsArray, 0, partition.overflowSegments.length); partition.overflowSegments = newSegsArray; } partition.overflowSegments[partition.numOverflowSegments] = overflowSeg; partition.numOverflowSegments++; checkForResize = true; } else { // there is space in the last overflow segment overflowSegmentNum = partition.numOverflowSegments - 1; overflowSeg = partition.overflowSegments[overflowSegmentNum]; overflowBucketOffset = partition.nextOverflowBucket << NUM_INTRA_BUCKET_BITS; } // next overflow bucket is one ahead. if the segment is full, the next will be at the beginning // of a new segment partition.nextOverflowBucket = (partition.nextOverflowBucket == this.bucketsPerSegmentMask ? 0 : partition.nextOverflowBucket + 1); // insert the new overflow bucket in the chain of buckets // 1) set the old forward pointer // 2) let the bucket in the main table point to this one overflowSeg.putLong(overflowBucketOffset + HEADER_FORWARD_OFFSET, originalForwardPointer); final long pointerToNewBucket = (((long) overflowSegmentNum) << 32) | ((long) overflowBucketOffset); originalBucket.putLong(originalBucketOffset + HEADER_FORWARD_OFFSET, pointerToNewBucket); // finally, insert the values into the overflow buckets overflowSeg.putInt(overflowBucketOffset + BUCKET_HEADER_LENGTH, hashCode); // hash code overflowSeg.putLong(overflowBucketOffset + BUCKET_POINTER_START_OFFSET, pointer); // pointer // set the count to one overflowSeg.putInt(overflowBucketOffset + HEADER_COUNT_OFFSET, 1); if(checkForResize && !this.isResizing) { // check if we should resize buckets if(this.buckets.length <= getOverflowSegmentCount()) { resizeHashTable(); } } } }
IMPORTANT!!! We pass only the partition number, because we must make sure we get a fresh partition reference. The partition reference used during the search for the key may have become invalid during the compaction.
@Override public <PT> HashTableProber<PT> getProber(TypeComparator<PT> probeSideComparator, TypePairComparator<PT, T> pairComparator) { return new HashTableProber<PT>(probeSideComparator, pairComparator); }
Returns a prober that can be used to look up records in this hash table and update them in place, using the given probe-side comparator and pair comparator.
private void createPartitions(int numPartitions) { this.partitions.clear(); ListMemorySegmentSource memSource = new ListMemorySegmentSource(this.availableMemory); for (int i = 0; i < numPartitions; i++) { this.partitions.add(new InMemoryPartition<T>(this.buildSideSerializer, i, memSource, this.segmentSize, pageSizeInBits)); } this.compactionMemory = new InMemoryPartition<T>(this.buildSideSerializer, -1, memSource, this.segmentSize, pageSizeInBits); }
Creates the given number of in-memory partitions, plus the dedicated compaction partition, all backed by the table's available memory segments.
private long getSize() { long numSegments = 0; numSegments += this.availableMemory.size(); numSegments += this.buckets.length; for(InMemoryPartition<T> p : this.partitions) { numSegments += p.getBlockCount(); numSegments += p.numOverflowSegments; } numSegments += this.compactionMemory.getBlockCount(); return numSegments*this.segmentSize; }
Returns the size of all memory segments owned by this hash table. @return size in bytes
private long getPartitionSize() { long numSegments = 0; for(InMemoryPartition<T> p : this.partitions) { numSegments += p.getBlockCount(); } return numSegments*this.segmentSize; }
Returns the size of all memory segments owned by the partitions of this hash table, excluding the compaction partition. @return size in bytes
private boolean resizeHashTable() throws IOException { final int newNumBuckets = 2*this.numBuckets; final int bucketsPerSegment = this.bucketsPerSegmentMask + 1; final int newNumSegments = (newNumBuckets + (bucketsPerSegment-1)) / bucketsPerSegment; final int additionalSegments = newNumSegments-this.buckets.length; final int numPartitions = this.partitions.size(); if (this.availableMemory.size() < additionalSegments) { for (int i = 0; i < numPartitions; i++) { compactPartition(i); if(this.availableMemory.size() >= additionalSegments) { break; } } } if (this.availableMemory.size() < additionalSegments || this.closed) { return false; } else { this.isResizing = true; // allocate new buckets final int startOffset = (this.numBuckets * HASH_BUCKET_SIZE) % this.segmentSize; final int oldNumBuckets = this.numBuckets; final int oldNumSegments = this.buckets.length; MemorySegment[] mergedBuckets = new MemorySegment[newNumSegments]; System.arraycopy(this.buckets, 0, mergedBuckets, 0, this.buckets.length); this.buckets = mergedBuckets; this.numBuckets = newNumBuckets; // initialize all new buckets boolean oldSegment = (startOffset != 0); final int startSegment = oldSegment ? (oldNumSegments-1) : oldNumSegments; for (int i = startSegment, bucket = oldNumBuckets; i < newNumSegments && bucket < this.numBuckets; i++) { MemorySegment seg; int bucketOffset; if(oldSegment) { // the first couple of new buckets may be located on an old segment seg = this.buckets[i]; for (int k = (oldNumBuckets % bucketsPerSegment) ; k < bucketsPerSegment && bucket < this.numBuckets; k++, bucket++) { bucketOffset = k * HASH_BUCKET_SIZE; // initialize the header fields seg.put(bucketOffset + HEADER_PARTITION_OFFSET, assignPartition(bucket, (byte)numPartitions)); seg.putInt(bucketOffset + HEADER_COUNT_OFFSET, 0); seg.putLong(bucketOffset + HEADER_FORWARD_OFFSET, BUCKET_FORWARD_POINTER_NOT_SET); } } else { seg = getNextBuffer(); // go over all buckets in the segment for (int k = 0; k < bucketsPerSegment && bucket < this.numBuckets; k++, bucket++) { bucketOffset = k * HASH_BUCKET_SIZE; // initialize the header fields seg.put(bucketOffset + HEADER_PARTITION_OFFSET, assignPartition(bucket, (byte)numPartitions)); seg.putInt(bucketOffset + HEADER_COUNT_OFFSET, 0); seg.putLong(bucketOffset + HEADER_FORWARD_OFFSET, BUCKET_FORWARD_POINTER_NOT_SET); } } this.buckets[i] = seg; oldSegment = false; // we write on at most one old segment } int hashOffset; int hash; int pointerOffset; long pointer; IntArrayList hashList = new IntArrayList(NUM_ENTRIES_PER_BUCKET); LongArrayList pointerList = new LongArrayList(NUM_ENTRIES_PER_BUCKET); IntArrayList overflowHashes = new IntArrayList(64); LongArrayList overflowPointers = new LongArrayList(64); // go over all buckets and split them between old and new buckets for (int i = 0; i < numPartitions; i++) { InMemoryPartition<T> partition = this.partitions.get(i); final MemorySegment[] overflowSegments = partition.overflowSegments; int posHashCode; for (int j = 0, bucket = i; j < this.buckets.length && bucket < oldNumBuckets; j++) { MemorySegment segment = this.buckets[j]; // go over all buckets in the segment belonging to the partition for (int k = bucket % bucketsPerSegment; k < bucketsPerSegment && bucket < oldNumBuckets; k += numPartitions, bucket += numPartitions) { int bucketOffset = k * HASH_BUCKET_SIZE; if((int)segment.get(bucketOffset + HEADER_PARTITION_OFFSET) != i) { throw new IOException("Accessed wrong bucket! wanted: " + i + " got: " + segment.get(bucketOffset + HEADER_PARTITION_OFFSET)); } // loop over all segments that are involved in the bucket (original bucket plus overflow buckets) int countInSegment = segment.getInt(bucketOffset + HEADER_COUNT_OFFSET); int numInSegment = 0; pointerOffset = bucketOffset + BUCKET_POINTER_START_OFFSET; hashOffset = bucketOffset + BUCKET_HEADER_LENGTH; while (true) { while (numInSegment < countInSegment) { hash = segment.getInt(hashOffset); if((hash % this.numBuckets) != bucket && (hash % this.numBuckets) != (bucket+oldNumBuckets)) { throw new IOException("wanted: " + bucket + " or " + (bucket + oldNumBuckets) + " got: " + hash%this.numBuckets); } pointer = segment.getLong(pointerOffset); hashList.add(hash); pointerList.add(pointer); pointerOffset += POINTER_LEN; hashOffset += HASH_CODE_LEN; numInSegment++; } // this segment is done. check if there is another chained bucket final long forwardPointer = segment.getLong(bucketOffset + HEADER_FORWARD_OFFSET); if (forwardPointer == BUCKET_FORWARD_POINTER_NOT_SET) { break; } final int overflowSegNum = (int) (forwardPointer >>> 32); segment = overflowSegments[overflowSegNum]; bucketOffset = (int) forwardPointer; countInSegment = segment.getInt(bucketOffset + HEADER_COUNT_OFFSET); pointerOffset = bucketOffset + BUCKET_POINTER_START_OFFSET; hashOffset = bucketOffset + BUCKET_HEADER_LENGTH; numInSegment = 0; } segment = this.buckets[j]; bucketOffset = k * HASH_BUCKET_SIZE; // reset bucket for re-insertion segment.putInt(bucketOffset + HEADER_COUNT_OFFSET, 0); segment.putLong(bucketOffset + HEADER_FORWARD_OFFSET, BUCKET_FORWARD_POINTER_NOT_SET); // refill table if(hashList.size() != pointerList.size()) { throw new IOException("Pointer and hash counts do not match. hashes: " + hashList.size() + " pointer: " + pointerList.size()); } int newSegmentIndex = (bucket + oldNumBuckets) / bucketsPerSegment; MemorySegment newSegment = this.buckets[newSegmentIndex]; // we need to avoid overflows in the first run int oldBucketCount = 0; int newBucketCount = 0; while (!hashList.isEmpty()) { hash = hashList.removeLast(); pointer = pointerList.removeLong(pointerList.size()-1); posHashCode = hash % this.numBuckets; if (posHashCode == bucket && oldBucketCount < NUM_ENTRIES_PER_BUCKET) { bucketOffset = (bucket % bucketsPerSegment) * HASH_BUCKET_SIZE; insertBucketEntryFromStart(segment, bucketOffset, hash, pointer, partition.getPartitionNumber()); oldBucketCount++; } else if (posHashCode == (bucket + oldNumBuckets) && newBucketCount < NUM_ENTRIES_PER_BUCKET) { bucketOffset = ((bucket + oldNumBuckets) % bucketsPerSegment) * HASH_BUCKET_SIZE; insertBucketEntryFromStart(newSegment, bucketOffset, hash, pointer, partition.getPartitionNumber()); newBucketCount++; } else if (posHashCode == (bucket + oldNumBuckets) || posHashCode == bucket) { overflowHashes.add(hash); overflowPointers.add(pointer); } else { throw new IOException("Accessed wrong bucket. Target: " + bucket + " or " + (bucket + oldNumBuckets) + " Hit: " + posHashCode); } } hashList.clear(); pointerList.clear(); } } // reset partition's overflow buckets and reclaim their memory this.availableMemory.addAll(partition.resetOverflowBuckets()); // clear overflow lists int bucketArrayPos; int bucketInSegmentPos; MemorySegment bucket; while(!overflowHashes.isEmpty()) { hash = overflowHashes.removeLast(); pointer = overflowPointers.removeLong(overflowPointers.size()-1); posHashCode = hash % this.numBuckets; bucketArrayPos = posHashCode >>> this.bucketsPerSegmentBits; bucketInSegmentPos = (posHashCode & this.bucketsPerSegmentMask) << NUM_INTRA_BUCKET_BITS; bucket = this.buckets[bucketArrayPos]; insertBucketEntryFromStart(bucket, bucketInSegmentPos, hash, pointer, partition.getPartitionNumber()); } overflowHashes.clear(); overflowPointers.clear(); } this.isResizing = false; return true; } }
Attempts to double the number of buckets. @return true on success, false if not enough memory segments are available @throws IOException
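Because the bucket count exactly doubles, a record hashed to bucket b in the old table can only land in bucket b or bucket b + oldNumBuckets in the new table, which is why the rehashing loop above splits each old bucket into at most two targets (plus an overflow list for bucket overflows). A self-contained check of that invariant (bucket counts are arbitrary example values):

public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldNumBuckets = 8;
        int newNumBuckets = 2 * oldNumBuckets;
        for (int hash = 0; hash < 1000; hash++) {
            int oldBucket = hash % oldNumBuckets;
            int newBucket = hash % newNumBuckets;
            // after doubling, an entry stays at its old index or moves up by oldNumBuckets
            if (newBucket != oldBucket && newBucket != oldBucket + oldNumBuckets) {
                throw new AssertionError("unexpected split for hash " + hash);
            }
        }
        System.out.println("every entry stays in bucket b or moves to b + oldNumBuckets");
    }
}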
private void compactPartition(final int partitionNumber) throws IOException { // do nothing if table was closed, parameter is invalid or no garbage exists if (this.closed || partitionNumber >= this.partitions.size() || this.partitions.get(partitionNumber).isCompacted()) { return; } // release all segments owned by compaction partition this.compactionMemory.clearAllMemory(availableMemory); this.compactionMemory.allocateSegments(1); this.compactionMemory.pushDownPages(); T tempHolder = this.buildSideSerializer.createInstance(); final int numPartitions = this.partitions.size(); InMemoryPartition<T> partition = this.partitions.remove(partitionNumber); MemorySegment[] overflowSegments = partition.overflowSegments; long pointer; int pointerOffset; int bucketOffset; final int bucketsPerSegment = this.bucketsPerSegmentMask + 1; for (int i = 0, bucket = partitionNumber; i < this.buckets.length && bucket < this.numBuckets; i++) { MemorySegment segment = this.buckets[i]; // go over all buckets in the segment belonging to the partition for (int k = bucket % bucketsPerSegment; k < bucketsPerSegment && bucket < this.numBuckets; k += numPartitions, bucket += numPartitions) { bucketOffset = k * HASH_BUCKET_SIZE; if((int)segment.get(bucketOffset + HEADER_PARTITION_OFFSET) != partitionNumber) { throw new IOException("Accessed wrong bucket! wanted: " + partitionNumber + " got: " + segment.get(bucketOffset + HEADER_PARTITION_OFFSET)); } // loop over all segments that are involved in the bucket (original bucket plus overflow buckets) int countInSegment = segment.getInt(bucketOffset + HEADER_COUNT_OFFSET); int numInSegment = 0; pointerOffset = bucketOffset + BUCKET_POINTER_START_OFFSET; while (true) { while (numInSegment < countInSegment) { pointer = segment.getLong(pointerOffset); tempHolder = partition.readRecordAt(pointer, tempHolder); pointer = this.compactionMemory.appendRecord(tempHolder); segment.putLong(pointerOffset, pointer); pointerOffset += POINTER_LEN; numInSegment++; } // this segment is done. check if there is another chained bucket final long forwardPointer = segment.getLong(bucketOffset + HEADER_FORWARD_OFFSET); if (forwardPointer == BUCKET_FORWARD_POINTER_NOT_SET) { break; } final int overflowSegNum = (int) (forwardPointer >>> 32); segment = overflowSegments[overflowSegNum]; bucketOffset = (int) forwardPointer; countInSegment = segment.getInt(bucketOffset + HEADER_COUNT_OFFSET); pointerOffset = bucketOffset + BUCKET_POINTER_START_OFFSET; numInSegment = 0; } segment = this.buckets[i]; } } // swap partition with compaction partition this.compactionMemory.setPartitionNumber(partitionNumber); this.partitions.add(partitionNumber, compactionMemory); this.partitions.get(partitionNumber).overflowSegments = partition.overflowSegments; this.partitions.get(partitionNumber).numOverflowSegments = partition.numOverflowSegments; this.partitions.get(partitionNumber).nextOverflowBucket = partition.nextOverflowBucket; this.partitions.get(partitionNumber).setIsCompacted(true); //this.partitions.get(partitionNumber).pushDownPages(); this.compactionMemory = partition; this.compactionMemory.resetRecordCounter(); this.compactionMemory.setPartitionNumber(-1); this.compactionMemory.overflowSegments = null; this.compactionMemory.numOverflowSegments = 0; this.compactionMemory.nextOverflowBucket = 0; // try to allocate maximum segment count this.compactionMemory.clearAllMemory(this.availableMemory); int maxSegmentNumber = this.getMaxPartition(); this.compactionMemory.allocateSegments(maxSegmentNumber); this.compactionMemory.resetRWViews(); this.compactionMemory.pushDownPages(); }
Compacts (garbage collects) a partition with a copy-compact strategy, using the dedicated compaction partition. @param partitionNumber partition to compact @throws IOException
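A toy illustration of the copy-compact idea, deliberately independent of the InMemoryPartition API (plain lists stand in for memory segments, and nulls stand in for garbage left behind by earlier record replacements): live records are appended densely into a spare partition, which then swaps roles with the old partition, and the old one is recycled.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CopyCompactDemo {
    public static void main(String[] args) {
        List<String> livePartition = new ArrayList<>(Arrays.asList("a", null, "b", null, "c"));
        List<String> compactionPartition = new ArrayList<>();

        for (String record : livePartition) {
            if (record != null) {
                compactionPartition.add(record); // copy live records, densely packed
            }
        }

        // swap: the compacted copy becomes the live partition,
        // the old partition is cleared and reused as the next compaction partition
        List<String> recycled = livePartition;
        livePartition = compactionPartition;
        recycled.clear();

        System.out.println(livePartition); // [a, b, c]
    }
}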
@Override public void close() throws Exception { Throwable exception = null; try { blobStoreService.close(); } catch (Throwable t) { exception = t; } internalClose(); if (exception != null) { ExceptionUtils.rethrowException(exception, "Could not properly close the ZooKeeperHaServices."); } }
Closes the high availability services. The blob store service is closed first; any failure is remembered, the internal resources are closed in any case, and a remembered exception is then rethrown as a failure to properly close the ZooKeeperHaServices.
private void tryDeleteEmptyParentZNodes() throws Exception { // try to delete the parent znodes if they are empty String remainingPath = getParentPath(getNormalizedPath(client.getNamespace())); final CuratorFramework nonNamespaceClient = client.usingNamespace(null); while (!isRootPath(remainingPath)) { try { nonNamespaceClient.delete().forPath(remainingPath); } catch (KeeperException.NotEmptyException ignored) { // We can only delete empty znodes break; } remainingPath = getParentPath(remainingPath); } }
Tries to delete empty parent znodes. <p>IMPORTANT: This method can be removed once all supported ZooKeeper versions support the container {@link org.apache.zookeeper.CreateMode}. @throws Exception if the deletion fails for a reason other than {@link KeeperException.NotEmptyException}
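A hedged sketch of the parent walk, using Curator's ZKPaths utility to derive parent paths (the getParentPath/getNormalizedPath helpers in the method above are assumed to behave similarly); no deletions are attempted here, the loop only shows which znodes would be visited:

import org.apache.curator.utils.ZKPaths;

static void printParentsToTry(String normalizedPath) {
    String remaining = ZKPaths.getPathAndNode(normalizedPath).getPath(); // parent of the given path
    while (!"/".equals(remaining)) {
        System.out.println("would try to delete if empty: " + remaining);
        remaining = ZKPaths.getPathAndNode(remaining).getPath();
    }
}

// printParentsToTry("/flink/default/leader") prints /flink/default and then /flink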
@Override protected void initialize() throws ResourceManagerException { // create and start the worker store try { this.workerStore = mesosServices.createMesosWorkerStore(flinkConfig, getRpcService().getExecutor()); workerStore.start(); } catch (Exception e) { throw new ResourceManagerException("Unable to initialize the worker store.", e); } // Prepare to register with Mesos Protos.FrameworkInfo.Builder frameworkInfo = mesosConfig.frameworkInfo() .clone() .setCheckpoint(true); if (webUiUrl != null) { frameworkInfo.setWebuiUrl(webUiUrl); } try { Option<Protos.FrameworkID> frameworkID = workerStore.getFrameworkID(); if (frameworkID.isEmpty()) { LOG.info("Registering as new framework."); } else { LOG.info("Recovery scenario: re-registering using framework ID {}.", frameworkID.get().getValue()); frameworkInfo.setId(frameworkID.get()); } } catch (Exception e) { throw new ResourceManagerException("Unable to recover the framework ID.", e); } initializedMesosConfig = mesosConfig.withFrameworkInfo(frameworkInfo); MesosConfiguration.logMesosConfig(LOG, initializedMesosConfig); this.selfActor = createSelfActor(); // configure the artifact server to serve the TM container artifacts try { LaunchableMesosWorker.configureArtifactServer(artifactServer, taskManagerContainerSpec); } catch (IOException e) { throw new ResourceManagerException("Unable to configure the artifact server with TaskManager artifacts.", e); } }
Initializes the resource manager: creates and starts the Mesos worker store, prepares the framework registration (re-using a recovered framework ID if one is present), and configures the artifact server with the TaskManager container artifacts. @throws ResourceManagerException if the worker store, the framework ID recovery, or the artifact server configuration fails
@SuppressWarnings("unchecked") @Nonnull @Override public <T extends HeapPriorityQueueElement & PriorityComparable & Keyed> KeyGroupedInternalPriorityQueue<T> create( @Nonnull String stateName, @Nonnull TypeSerializer<T> byteOrderedElementSerializer) { final HeapPriorityQueueSnapshotRestoreWrapper existingState = registeredPQStates.get(stateName); if (existingState != null) { // TODO we implement the simple way of supporting the current functionality, mimicking keyed state // because this should be reworked in FLINK-9376 and then we should have a common algorithm over // StateMetaInfoSnapshot that avoids this code duplication. TypeSerializerSchemaCompatibility<T> compatibilityResult = existingState.getMetaInfo().updateElementSerializer(byteOrderedElementSerializer); if (compatibilityResult.isIncompatible()) { throw new FlinkRuntimeException(new StateMigrationException("For heap backends, the new priority queue serializer must not be incompatible.")); } else { registeredPQStates.put( stateName, existingState.forUpdatedSerializer(byteOrderedElementSerializer)); } return existingState.getPriorityQueue(); } else { final RegisteredPriorityQueueStateBackendMetaInfo<T> metaInfo = new RegisteredPriorityQueueStateBackendMetaInfo<>(stateName, byteOrderedElementSerializer); return createInternal(metaInfo); } }
Creates a key-grouped priority queue state with the given name, or returns the existing one after checking that the new element serializer is compatible and updating the registered meta information accordingly.
@VisibleForTesting @SuppressWarnings("unchecked") @Override public int numKeyValueStateEntries() { int sum = 0; for (StateSnapshotRestore state : registeredKVStates.values()) { sum += ((StateTable<?, ?, ?>) state).size(); } return sum; }
Returns the total number of state entries across all keys/namespaces.