Columns: code (string, lengths 67 to 466k) and docstring (string, lengths 1 to 13.2k).
public @Nullable String build() { StringBuilder queryString = new StringBuilder(); for (NameValuePair param : params) { if (queryString.length() > 0) { queryString.append(PARAM_SEPARATOR); } queryString.append(Escape.urlEncode(param.getName())); queryString.append(VALUE_SEPARATOR); queryString.append(Escape.urlEncode(param.getValue())); } if (queryString.length() > 0) { return queryString.toString(); } else { return null; } }
Builds the query string. @return Query string, or null if the query string contains no parameters at all.
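A minimal, self-contained sketch of the same pattern; URLEncoder, the '&'/'=' separators, and the local names are stand-ins for the class's own Escape helper and separator constants, which are not shown in the snippet above.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStringSketch {
    // Joins name/value pairs into "a=1&b=2"; returns null when there are no parameters.
    static String build(Map<String, String> params) throws UnsupportedEncodingException {
        StringBuilder queryString = new StringBuilder();
        for (Map.Entry<String, String> param : params.entrySet()) {
            if (queryString.length() > 0) {
                queryString.append('&');
            }
            queryString.append(URLEncoder.encode(param.getKey(), "UTF-8"))
                .append('=')
                .append(URLEncoder.encode(param.getValue(), "UTF-8"));
        }
        return queryString.length() > 0 ? queryString.toString() : null;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("q", "hello world");
        params.put("page", "2");
        System.out.println(build(params));  // prints: q=hello+world&page=2
    }
}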
@SuppressWarnings("null") public static <T> @NotNull T notNull(@NotNull Adaptable adaptable, @NotNull Class<T> type) { T object = adaptable.adaptTo(type); if (object == null) { throw new UnableToAdaptException(adaptable, type); } return object; }
Tries to adapt the adaptable to the given type and ensures that it succeeds. @param adaptable Adaptable @param type Type @param <T> Type @return Adaptation result (not null) @throws UnableToAdaptException if the adaptation was not successful
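A usage sketch, assuming the method above lives in a utility class named AdaptTo and that a Sling Resource is already at hand (both the class name and the surrounding context are assumptions, not given by the source):

// resource is an existing org.apache.sling.api.resource.Resource (assumed context).
// Either returns a non-null ValueMap or throws UnableToAdaptException.
ValueMap properties = AdaptTo.notNull(resource, ValueMap.class);
String title = properties.get("jcr:title", String.class);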
public AssemblyResponse getAssembly(String id) throws RequestException, LocalOperationException { Request request = new Request(this); return new AssemblyResponse(request.get("/assemblies/" + id)); }
Returns a single assembly. @param id id of the Assembly to retrieve. @return {@link AssemblyResponse} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
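A hedged usage sketch; the client class name Transloadit, the credentials, and the assembly id are placeholders inferred from the surrounding SDK code:

Transloadit transloadit = new Transloadit("YOUR_AUTH_KEY", "YOUR_AUTH_SECRET");  // placeholder credentials
AssemblyResponse response = transloadit.getAssembly("abc123");                   // placeholder assembly id
System.out.println(response.json());                                             // raw JSON of the assembly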
public AssemblyResponse getAssemblyByUrl(String url) throws RequestException, LocalOperationException { Request request = new Request(this); return new AssemblyResponse(request.get(url)); }
Returns a single assembly. @param url full url of the Assembly. @return {@link AssemblyResponse} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
public AssemblyResponse cancelAssembly(String url) throws RequestException, LocalOperationException { Request request = new Request(this); return new AssemblyResponse(request.delete(url, new HashMap<String, Object>())); }
Cancels a running assembly. @param url full url of the Assembly. @return {@link AssemblyResponse} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
public Response getTemplate(String id) throws RequestException, LocalOperationException { Request request = new Request(this); return new Response(request.get("/templates/" + id)); }
Returns a single template. @param id id of the template to retrieve. @return {@link Response} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
public Response updateTemplate(String id, Map<String, Object> options) throws RequestException, LocalOperationException { Request request = new Request(this); return new Response(request.put("/templates/" + id, options)); }
Updates the template with the specified id. @param id id of the template to update @param options a Map of options to update/add. @return {@link Response} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
public Response deleteTemplate(String id) throws RequestException, LocalOperationException { Request request = new Request(this); return new Response(request.delete("/templates/" + id, new HashMap<String, Object>())); }
Deletes a template. @param id id of the template to delete. @return {@link Response} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
public ListResponse listTemplates(Map<String, Object> options) throws RequestException, LocalOperationException { Request request = new Request(this); return new ListResponse(request.get("/templates", options)); }
Returns a list of all templates under the user account. @param options {@link Map} extra options to send along with the request. @return {@link ListResponse} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
public Response getBill(int month, int year) throws RequestException, LocalOperationException { Request request = new Request(this); return new Response(request.get("/bill/" + year + String.format("-%02d", month))); }
Returns the bill for the month specified. @param month for which bill to retrieve. @param year for which bill to retrieve. @return {@link Response} @throws RequestException if request to transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
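The month is zero-padded, so the request path takes the form /bill/YYYY-MM; a small worked example of the path construction used above (values are illustrative):

int year = 2023, month = 4;                                        // illustrative values
String path = "/bill/" + year + String.format("-%02d", month);
System.out.println(path);                                          // prints: /bill/2023-04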
protected final <R> Iterable<R> mapReduce(String name, final MapReduceResultHandler<R> conv) { return this.dataAccess.mapReduce(name, conv); }
runs a map-reduce job on the collection. Same as {@link #mapReduce(String, DBObject, DBObject, Map, MapReduceResultHandler) mapReduce(name, null, null, null, conv)} @param <R> the type of the result class @param name the name of the map-reduce functions @param conv the converter to convert the result @return an {@link Iterable} with the result entries
protected final <R> Iterable<R> mapReduce(String name, DBObject query, DBObject sort, Map<String, Object> scope, final MapReduceResultHandler<R> conv) { return this.dataAccess.mapReduce(name, query, sort, scope, conv); }
runs a map-reduce job on the collection. The functions are read from the classpath in the folder mongodb. The system reads them from files called &lt;name&gt;.map.js, &lt;name&gt;.reduce.js and optionally &lt;name&gt;.finalize.js. After this the result is converted using the given {@link MapReduceResultHandler} @param <R> the type of the result class @param name the name of the map-reduce functions @param query the query to filter the elements used for the map-reduce @param sort sort query to sort elements before running map-reduce @param scope the global scope for the JavaScript run @param conv the converter to convert the result @return an {@link Iterable} with the result entries @throws RuntimeException if resources cannot be read
protected final List<T> findByQuery(String query, Object... params) { return this.dataAccess.findByQuery(query, params); }
finds all elements matching the given query @param query the query to search for @param params the parameters to replace # symbols @return the list of elements found
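A usage sketch inside a concrete DAO subclass; the entity type, field names, and values are illustrative, and the '#' placeholders are filled from params in order, as the docstring describes:

// Hypothetical subclass method using the helper above (T = User):
public List<User> findActiveByRole(String role) {
    return findByQuery("{role: #, active: #}", role, true);
}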
protected final List<T> findSortedByQuery(String query, String sort, Object... params) { return this.dataAccess.findSortedByQuery(query, sort, params); }
finds all elements matching the given query and sorts them accordingly @param query the query to search for @param sort the sort query to apply @param params the parameters to replace # symbols @return the list of elements found
protected final <P> List<P> findSortedByQuery(String query, String sort, String projection, ResultHandler<P> handler, Object... params) { return this.dataAccess.findSortedByQuery(query, sort, projection, handler, params); }
finds all elements matching the given query and sorts them accordingly. With this method it is possible to specify a projection to rename or filter fields in the result elements. Instead of returning typed objects it returns objects converted by the given {@link ResultHandler} @param query the query to search for @param sort the sort query to apply @param projection the projection of fields to use @param handler the handler to convert result elements with @param params the parameters to replace # symbols @param <P> the element type @return the list of elements found
protected final List<T> findSortedByQuery(String query, String sort, Integer skip, Integer limit, Object... params) { return this.dataAccess.findSortedByQuery(query, sort, skip, limit, params); }
finds all elements matching the given query and sorts them accordingly @param query the query to search for @param sort the sort query to apply @param skip the number of elements to skip @param limit the number of elements to fetch @param params the parameters to replace # symbols @return the list of elements found
protected final <P> List<P> findSortedByQuery(String query, String sort, Integer skip, Integer limit, String projection, Class<P> as, Object... params) { return this.dataAccess.findSortedByQuery(query, sort, skip, limit, projection, as, params); }
finds all elements matching the given query and sorts them accordingly. With this method it is possible to specify a projection to rename or filter fields in the result elements. Instead of returning typed objects it returns objects of type <code>as</code> @param query the query to search for @param sort the sort query to apply @param skip the number of elements to skip @param limit the number of elements to fetch @param projection the projection of fields to use @param as the target to convert result elements to @param params the parameters to replace # symbols @param <P> the element type @return the list of elements found
protected final List<T> searchSorted(String searchString, String sort) { return this.dataAccess.searchSorted(searchString, sort); }
finds all elements containing the given searchString in any text field and sorts them accordingly. @param searchString the searchString to search for @param sort the sort query to apply @return the list of elements found
protected final T findFirstByQuery(String query, String sort, Object... params) { return this.dataAccess.findFirstByQuery(query, sort, params).orElse(null); }
queries with the given string, sorts the result and returns the first element. <code>null</code> is returned if no element is found. @param query the query string @param sort the sort string @param params the parameters to replace # symbols @return the first element found or <code>null</code> if none is found
protected String getTableName() { Class<T> entityClass = this.getEntityClass(); if (!entityClass.isAnnotationPresent(DynamoDBTable.class)) { throw new IllegalStateException("Used getTableName on entity without @DynamoDBTable annotation"); } return entityClass.getAnnotation(DynamoDBTable.class).tableName(); }
Override to use a custom table name instead of the @{@link DynamoDBTable} annotation. @return the name of the table to use
@SuppressWarnings("unchecked") public static <T> T convert(Object fromValue, Class<T> toType) throws TypeCastException { return convert(fromValue, toType, (String) null); }
Converts the given value to the given target type. @param fromValue the value to convert @param toType the target type @param <T> the type of the result @return the value converted to the given type @throws TypeCastException if conversion was not possible
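A hedged usage sketch; the enclosing utility class name (TypeCast here, borrowed from the TypeCastException in the signature) and the values are assumptions:

Integer port = TypeCast.convert("8080", Integer.class);   // "8080" -> 8080
Boolean flag = TypeCast.convert("true", Boolean.class);   // "true" -> true, or TypeCastException if not convertible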
public String getInterfaceImplements() { StringBuilder builder = new StringBuilder(); for(ImplementsDef i : this.implementsDef) { builder.append(", "); builder.append(i.getName()); } if(builder.toString().trim().length() < 1) { return ""; } return "extends " + builder.substring(2); }
For Velocity use. @return the interface implementations
@SuppressWarnings("resource") public static void main(String[] args) { try { System.out.println("Select (k=generate key; c=crypt; d=decrypt):"); System.out.println(); Scanner scan = new Scanner(System.in, "UTF-8"); if (!scan.hasNextLine()) { return; } switch (scan.nextLine()) { case "k": MessageCryptoUtil.generateKey(); break; case "c": System.out.print("Input data: "); final String data = scan.nextLine(); System.out.println(MessageCryptoUtil.crypt(data)); break; case "d": System.out.print("Input data: "); final String ddata = scan.nextLine(); System.out.println(MessageCryptoUtil.decrypt(ddata)); break; default: break; } } catch (final Exception e) { e.printStackTrace(); } }
Start this to generate an AES key or to encrypt/decrypt data. @param args the CLI arguments
public Map<String, String> getInstanceTags() { Map<String, String> map = new HashMap<>(); Instance instance = this.getInstance(); for (Tag tag : instance.getTags()) { map.put(tag.getKey(), tag.getValue()); } return map; }
<br> Needed AWS actions: <ul> <li>ec2:DescribeInstances</li> </ul> @return the tags of the current instance
public String getAutoScalingGroup() { DescribeAutoScalingInstancesRequest req = new DescribeAutoScalingInstancesRequest(); req.setInstanceIds(Collections.singleton(this.getInstanceId())); DescribeAutoScalingInstancesResult result = this.autoScalingClient.describeAutoScalingInstances(req); if (result.getAutoScalingInstances().size() != 1) { throw new IllegalStateException("Expected exactly one auto scaling instance"); } return result.getAutoScalingInstances().get(0).getAutoScalingGroupName(); }
<br> Needed AWS actions: <ul> <li>autoscaling:DescribeAutoScalingInstances</li> </ul> @return the name of the auto scaling group the current instance is a member of
public List<String> getAutoScalingMembers(String autoScalingGroupName) { Preconditions.checkArgument(autoScalingGroupName != null && !autoScalingGroupName.isEmpty()); AutoScalingGroup autoScalingGroup = this.getAutoScalingGroup(autoScalingGroupName); List<String> list = new ArrayList<>(); for (com.amazonaws.services.autoscaling.model.Instance instance : autoScalingGroup.getInstances()) { if (instance.getHealthStatus().equals("Healthy")) { list.add(instance.getInstanceId()); } } return list; }
<br> Needed AWS actions: <ul> <li>autoscaling:DescribeAutoScalingGroups</li> </ul> @param autoScalingGroupName the name to search for @return the members of the given auto scaling group
public List<String> getPrivateAutoScalingMemberIPs(String autoScalingGroupName) { Preconditions.checkArgument(autoScalingGroupName != null && !autoScalingGroupName.isEmpty()); List<String> members = this.getAutoScalingMembers(autoScalingGroupName); DescribeInstancesRequest req = new DescribeInstancesRequest(); req.setInstanceIds(members); DescribeInstancesResult result = this.ec2Client.describeInstances(req); List<String> list = new ArrayList<>(); for (Reservation reservation : result.getReservations()) { for (Instance instance : reservation.getInstances()) { if (instance.getState().getName().equals("running")) { list.add(instance.getPrivateIpAddress()); } } } return list; }
<br> Needed AWS actions: <ul> <li>autoscaling:DescribeAutoScalingGroups</li> <li>ec2:DescribeInstances</li> </ul> @param autoScalingGroupName the name of the group @return the list of private IP addresses of the members
public AutoScalingGroup getAutoScalingGroup(String autoScalingGroupName) { Preconditions.checkArgument(autoScalingGroupName != null && !autoScalingGroupName.isEmpty()); DescribeAutoScalingGroupsRequest req = new DescribeAutoScalingGroupsRequest(); req.setAutoScalingGroupNames(Collections.singleton(autoScalingGroupName)); DescribeAutoScalingGroupsResult result = this.autoScalingClient.describeAutoScalingGroups(req); if (result.getAutoScalingGroups().size() != 1) { throw new IllegalStateException("Expected exactly one auto scaling group"); } return result.getAutoScalingGroups().get(0); }
<br> Needed AWS actions: <ul> <li>autoscaling:DescribeAutoScalingGroups</li> </ul> @param autoScalingGroupName the name to search for @return the given auto scaling group
public Map<String, String> getAutoScalingGroupTags(String autoScalingGroupName) { Preconditions.checkArgument(autoScalingGroupName != null && !autoScalingGroupName.isEmpty()); AutoScalingGroup group = this.getAutoScalingGroup(autoScalingGroupName); Map<String, String> tags = new HashMap<>(); for (TagDescription tagDescription : group.getTags()) { tags.put(tagDescription.getKey(), tagDescription.getValue()); } return tags; }
<br> Needed AWS actions: <ul> <li>autoscaling:DescribeAutoScalingGroups</li> </ul> @param autoScalingGroupName the name of the group @return the tags of the given auto scaling group
public Instance getInstance() { DescribeInstancesRequest req = new DescribeInstancesRequest(); req.setInstanceIds(Collections.singleton(this.getInstanceId())); DescribeInstancesResult result = this.ec2Client.describeInstances(req); if (result.getReservations().size() != 1) { throw new IllegalStateException("Expected exactly one reservation"); } List<Instance> instances = result.getReservations().get(0).getInstances(); if (instances.size() != 1) { throw new IllegalStateException("Expected exactly one instance"); } return instances.get(0); }
<br> Needed AWS actions: <ul> <li>ec2:DescribeInstances</li> </ul> @return the current EC2 instance
public static String stringFor(int n) { switch (n) { case CUDNN_REDUCE_TENSOR_ADD: return "CUDNN_REDUCE_TENSOR_ADD"; case CUDNN_REDUCE_TENSOR_MUL: return "CUDNN_REDUCE_TENSOR_MUL"; case CUDNN_REDUCE_TENSOR_MIN: return "CUDNN_REDUCE_TENSOR_MIN"; case CUDNN_REDUCE_TENSOR_MAX: return "CUDNN_REDUCE_TENSOR_MAX"; case CUDNN_REDUCE_TENSOR_AMAX: return "CUDNN_REDUCE_TENSOR_AMAX"; case CUDNN_REDUCE_TENSOR_AVG: return "CUDNN_REDUCE_TENSOR_AVG"; case CUDNN_REDUCE_TENSOR_NORM1: return "CUDNN_REDUCE_TENSOR_NORM1"; case CUDNN_REDUCE_TENSOR_NORM2: return "CUDNN_REDUCE_TENSOR_NORM2"; case CUDNN_REDUCE_TENSOR_MUL_NO_ZEROS: return "CUDNN_REDUCE_TENSOR_MUL_NO_ZEROS"; } return "INVALID cudnnReduceTensorOp: "+n; }
Returns a string representation of the given constant @return A string representation of the given constant
public SignedJWT signToken(JWTClaimsSet claims) { try { SignedJWT signedJWT = new SignedJWT(new JWSHeader(JWSAlgorithm.HS256), claims); signedJWT.sign(new MACSigner(this.jwtSharedSecret)); return signedJWT; } catch (JOSEException e) { throw new RuntimeException("Error signing JSON Web Token", e); } }
Sign the given claims @param claims the claims to sign @return the created and signed JSON Web Token
public String signToken(AuthenticatedUser user) { Date expiry = new Date(System.currentTimeMillis() + this.jwtTimeout); JWTClaimsSet claimsSet = user.toClaimSet(this.jwtIssuer, expiry); SignedJWT jwt = this.signToken(claimsSet); return jwt.serialize(); }
Sign the given claims @param user the user to create the claims from @return the created and signed JSON Web Token as string
public SignedJWT verifyToken(String jwtString) throws ParseException { try { SignedJWT jwt = SignedJWT.parse(jwtString); if (jwt.verify(new MACVerifier(this.jwtSharedSecret))) { return jwt; } return null; } catch (JOSEException e) { throw new RuntimeException("Error verifying JSON Web Token", e); } }
Check the given JWT @param jwtString the JSON Web Token @return the parsed and verified token or null if token is invalid @throws ParseException if the token cannot be parsed
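A round-trip sketch using the Nimbus JOSE+JWT types that appear in the code above; the variable jwtService stands in for whatever object exposes signToken/verifyToken, and the subject and timeout values are illustrative:

JWTClaimsSet claims = new JWTClaimsSet.Builder()
    .subject("alice")                                              // illustrative subject
    .expirationTime(new Date(System.currentTimeMillis() + 60_000))
    .build();
String token = jwtService.signToken(claims).serialize();          // HS256-signed, serialized JWT
SignedJWT verified = jwtService.verifyToken(token);               // null if the signature check fails; may throw ParseException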
public void bind(T service, Map<String, Object> props) { synchronized (serviceMap) { serviceMap.put(ServiceUtil.getComparableForServiceRanking(props), service); updateSortedServices(); } }
Handle bind service event. @param service Service instance @param props Service reference properties
public void unbind(T service, Map<String, Object> props) { synchronized (serviceMap) { serviceMap.remove(ServiceUtil.getComparableForServiceRanking(props)); updateSortedServices(); } }
Handle unbind service event. @param service Service instance @param props Service reference properties
private void updateSortedServices() { List<T> copiedList = new ArrayList<T>(serviceMap.values()); sortedServices = Collections.unmodifiableList(copiedList); if (changeListener != null) { changeListener.changed(); } }
Updates the list of sorted services by copying it from the service map and making it unmodifiable.
public static boolean hasSelector(@NotNull SlingHttpServletRequest request, @NotNull String expectedSelector) { String[] selectors = request.getRequestPathInfo().getSelectors(); return ArrayUtils.contains(selectors, expectedSelector); }
Checks if the given selector is present in the current URL request (at any position). @param request Sling request @param expectedSelector Selector string to check for. @return true if the selector was found
@SuppressWarnings("null") public static boolean hasAnySelector(@NotNull SlingHttpServletRequest request, @NotNull String @NotNull... expectedSelectors) { String[] selectors = request.getRequestPathInfo().getSelectors(); if (selectors != null && expectedSelectors != null) { for (String expectedSelector : expectedSelectors) { if (ArrayUtils.contains(selectors, expectedSelector)) { return true; } } } return false; }
Checks if one of the given selectors is present in the current URL request (at any position). @param request Sling request @param expectedSelectors Selector strings to check for. @return true if any of the selectors was found
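For a request like /content/page.teaser.mobile.html the selectors are ["teaser", "mobile"]. A usage sketch, assuming the methods above sit in a utility class called Selectors and that a SlingHttpServletRequest named request is in scope (both are assumptions):

boolean isTeaser = Selectors.hasSelector(request, "teaser");                   // true
boolean printOrMobile = Selectors.hasAnySelector(request, "print", "mobile");  // true ("mobile" matches)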
public static <T> JacksonParser<T> json(Class<T> contentType) { return new JacksonParser<>(null, contentType); }
Creates typed parser @param contentType class of parsed object @param <T> type of parsed object @return parser of objects of given type
public static String stringFor(int n) { switch (n) { case CUDNN_STATUS_SUCCESS: return "CUDNN_STATUS_SUCCESS"; case CUDNN_STATUS_NOT_INITIALIZED: return "CUDNN_STATUS_NOT_INITIALIZED"; case CUDNN_STATUS_ALLOC_FAILED: return "CUDNN_STATUS_ALLOC_FAILED"; case CUDNN_STATUS_BAD_PARAM: return "CUDNN_STATUS_BAD_PARAM"; case CUDNN_STATUS_INTERNAL_ERROR: return "CUDNN_STATUS_INTERNAL_ERROR"; case CUDNN_STATUS_INVALID_VALUE: return "CUDNN_STATUS_INVALID_VALUE"; case CUDNN_STATUS_ARCH_MISMATCH: return "CUDNN_STATUS_ARCH_MISMATCH"; case CUDNN_STATUS_MAPPING_ERROR: return "CUDNN_STATUS_MAPPING_ERROR"; case CUDNN_STATUS_EXECUTION_FAILED: return "CUDNN_STATUS_EXECUTION_FAILED"; case CUDNN_STATUS_NOT_SUPPORTED: return "CUDNN_STATUS_NOT_SUPPORTED"; case CUDNN_STATUS_LICENSE_ERROR: return "CUDNN_STATUS_LICENSE_ERROR"; case CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING: return "CUDNN_STATUS_RUNTIME_PREREQUISITE_MISSING"; case CUDNN_STATUS_RUNTIME_IN_PROGRESS: return "CUDNN_STATUS_RUNTIME_IN_PROGRESS"; case CUDNN_STATUS_RUNTIME_FP_OVERFLOW: return "CUDNN_STATUS_RUNTIME_FP_OVERFLOW"; } return "INVALID cudnnStatus: "+n; }
Returns a string representation of the given constant @return A string representation of the given constant
public void addStep(String name, String robot, Map<String, Object> options) { all.put(name, new Step(name, robot, options)); }
Adds a new step to the list of steps. @param name Name of the step to add. @param robot The name of the robot to use with the step. @param options extra options required for the step.
protected boolean determineFeatureState(final ITemplateContext context, final IProcessableElementTag tag, final AttributeName attributeName, final String attributeValue, boolean defaultState) { final IStandardExpressionParser expressionParser = StandardExpressions.getExpressionParser(context.getConfiguration()); final IStandardExpression expression = expressionParser.parseExpression(context, attributeValue); final Object value = expression.execute(context); if (value != null) { return isFeatureActive(value.toString()); } else { return defaultState; } }
Determines the feature state @param context the template context @param tag the tag @param attributeName the attribute name @param attributeValue the attribute value @param defaultState the default state if the expression evaluates to null @return the feature state
public void init(Configuration configuration) { if (devMode && reload && !listeningToDispatcher) { // this is the only way I found to be able to get added to the // ConfigurationProvider list // listening to events in Dispatcher listeningToDispatcher = true; Dispatcher.addDispatcherListener(this); } }
Not used.
public void addFile(File file, String name) { files.put(name, file); // remove duplicate key if (fileStreams.containsKey(name)) { fileStreams.remove(name); } }
Adds a file to your assembly. If the field name specified already exists, it will override the content of the existing name. @param file {@link File} the file to be uploaded. @param name {@link String} the field name of the file when submitted to Transloadit.
public void addFile(File file) { String name = "file"; files.put(normalizeDuplicateName(name), file); }
Adds a file to your assembly but automatically generates the field name of the file. @param file {@link File} the file to be uploaded.
public void addFile(InputStream inputStream, String name) { fileStreams.put(name, inputStream); // remove duplicate key if (files.containsKey(name)) { files.remove(name); } }
Adds a file to your assembly. If the field name specified already exists, it will override the content of the existing name. @param inputStream {@link InputStream} the file to be uploaded. @param name {@link String} the field name of the file when submitted to Transloadit.
public void addFile(InputStream inputStream) { String name = "file"; fileStreams.put(normalizeDuplicateName(name), inputStream); }
Adds a file to your assembly but automatically generates the field name of the file. @param inputStream {@link InputStream} the file to be uploaded.
public void removeFile(String name) { if(files.containsKey(name)) { files.remove(name); } if(fileStreams.containsKey(name)) { fileStreams.remove(name); } }
Removes a file from your assembly. @param name field name of the file to remove.
public AssemblyResponse save(boolean isResumable) throws RequestException, LocalOperationException { Request request = new Request(getClient()); options.put("steps", steps.toMap()); // only do tus uploads if files will be uploaded if (isResumable && getFilesCount() > 0) { Map<String, String> tusOptions = new HashMap<String, String>(); tusOptions.put("tus_num_expected_upload_files", Integer.toString(getFilesCount())); AssemblyResponse response = new AssemblyResponse( request.post("/assemblies", options, tusOptions, null, null), true); // check if the assembly returned an error if (response.hasError()) { throw new RequestException("Request to Assembly failed: " + response.json().getString("error")); } try { handleTusUpload(response); } catch (IOException e) { throw new LocalOperationException(e); } catch (ProtocolException e) { throw new RequestException(e); } return response; } else { return new AssemblyResponse(request.post("/assemblies", options, null, files, fileStreams)); } }
Submits the configured assembly to Transloadit for processing. @param isResumable boolean value that tells the assembly whether or not to use tus. @return {@link AssemblyResponse} the response received from the Transloadit server. @throws RequestException if request to Transloadit server fails. @throws LocalOperationException if something goes wrong while running non-http operations.
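An end-to-end sketch combining the pieces above; the credentials, the file, and the resize options are placeholders, and newAssembly() as the factory method on the client is an assumption:

Transloadit transloadit = new Transloadit("YOUR_AUTH_KEY", "YOUR_AUTH_SECRET");
Assembly assembly = transloadit.newAssembly();
Map<String, Object> resizeOptions = new HashMap<>();
resizeOptions.put("width", 75);
resizeOptions.put("height", 75);
assembly.addStep("resize", "/image/resize", resizeOptions);   // step name and robot are illustrative
assembly.addFile(new File("portrait.jpg"));                   // placeholder file
AssemblyResponse response = assembly.save(true);              // true -> resumable tus upload
System.out.println(response.json());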
protected void processTusFiles(String assemblyUrl) throws IOException, ProtocolException { tusClient.setUploadCreationURL(new URL(getClient().getHostUrl() + "/resumable/files/")); tusClient.enableResuming(tusURLStore); for (Map.Entry<String, File> entry : files.entrySet()) { processTusFile(entry.getValue(), entry.getKey(), assemblyUrl); } for (Map.Entry<String, InputStream> entry : fileStreams.entrySet()) { processTusFile(entry.getValue(), entry.getKey(), assemblyUrl); } }
Prepares all files added for tus uploads. @param assemblyUrl the assembly url affiliated with the tus upload. @throws IOException when there's a failure with file retrieval. @throws ProtocolException when there's a failure with tus upload.
protected void processTusFile(InputStream inptStream, String fieldName, String assemblyUrl) throws IOException { TusUpload upload = getTusUploadInstance(inptStream, fieldName, assemblyUrl); Map<String, String> metadata = new HashMap<String, String>(); metadata.put("filename", fieldName); metadata.put("assembly_url", assemblyUrl); metadata.put("fieldname", fieldName); upload.setMetadata(metadata); uploads.add(upload); }
Prepares a file for tus upload. @param inptStream {@link InputStream} @param fieldName the form field name assigned to the file. @param assemblyUrl the assembly url affiliated with the tus upload. @throws IOException when there's a failure with reading the input stream.
protected void processTusFile(File file, String fieldName, String assemblyUrl) throws IOException { TusUpload upload = getTusUploadInstance(file); Map<String, String> metadata = new HashMap<String, String>(); metadata.put("filename", file.getName()); metadata.put("assembly_url", assemblyUrl); metadata.put("fieldname", fieldName); upload.setMetadata(metadata); uploads.add(upload); }
Prepares a file for tus upload. @param file {@link File} @param fieldName the form field name assigned to the file. @param assemblyUrl the assembly url affiliated with the tus upload. @throws IOException when there's a failure with file retrieval.
protected TusUpload getTusUploadInstance(InputStream inputStream, String fieldName, String assemblyUrl) throws IOException { TusUpload tusUpload = new TusUpload(); tusUpload.setInputStream(inputStream); tusUpload.setFingerprint(String.format("%s-%d-%s", fieldName, inputStream.available(), assemblyUrl)); tusUpload.setSize(inputStream.available()); return tusUpload; }
Returns the {@link TusUpload} instance that would be used to upload a file. @param inputStream {@link InputStream} @param fieldName {@link String} the field name assigned to the file @param assemblyUrl {@link String} the assembly url @return {@link TusUpload} @throws IOException when there's a failure with reading the input stream.
protected void uploadTusFiles() throws IOException, ProtocolException { while (uploads.size() > 0) { final TusUploader tusUploader = tusClient.resumeOrCreateUpload(uploads.get(0)); TusExecutor tusExecutor = new TusExecutor() { @Override protected void makeAttempt() throws ProtocolException, IOException { int uploadedChunk = 0; while (uploadedChunk > -1) { uploadedChunk = tusUploader.uploadChunk(); } tusUploader.finish(); } }; tusExecutor.makeAttempts(); // remove upload instance from list uploads.remove(0); } }
Does the actual uploading of files (when tus is enabled). @throws IOException when there's a failure with file retrieval. @throws ProtocolException when there's a failure with tus upload.
@SuppressWarnings("unchecked") public static <T extends Serializable> T makeClone(T from) { return (T) SerializationUtils.clone(from); }
Creates a clone using java serialization @param from Object to be cloned @param <T> type of the cloned object @return Clone of the object
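A small usage fragment; any Serializable object works, and the copy is a deep copy that is independent of the original (the enclosing utility class name is omitted here):

ArrayList<String> original = new ArrayList<>(Arrays.asList("a", "b"));
ArrayList<String> copy = makeClone(original);   // deep copy via Java serialization
copy.add("c");                                  // does not affect original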
public static String stringFor(int n) { switch (n) { case CUDNN_ACTIVATION_SIGMOID: return "CUDNN_ACTIVATION_SIGMOID"; case CUDNN_ACTIVATION_RELU: return "CUDNN_ACTIVATION_RELU"; case CUDNN_ACTIVATION_TANH: return "CUDNN_ACTIVATION_TANH"; case CUDNN_ACTIVATION_CLIPPED_RELU: return "CUDNN_ACTIVATION_CLIPPED_RELU"; case CUDNN_ACTIVATION_ELU: return "CUDNN_ACTIVATION_ELU"; case CUDNN_ACTIVATION_IDENTITY: return "CUDNN_ACTIVATION_IDENTITY"; } return "INVALID cudnnActivationMode: "+n; }
Returns a string representation of the given constant @return A string representation of the given constant
public static @NotNull String makeAbsolute(@NotNull String resourceType, @NotNull ResourceResolver resourceResolver) { if (StringUtils.isEmpty(resourceType) || StringUtils.startsWith(resourceType, "/")) { return resourceType; } // first try to resolve path via component manager - because on publish instance the original resource may not accessible ComponentManager componentManager = resourceResolver.adaptTo(ComponentManager.class); if (componentManager != null) { Component component = componentManager.getComponent(resourceType); if (component != null) { return component.getPath(); } else { return resourceType; } } // otherwise use resource resolver directly Resource resource = resourceResolver.getResource(resourceType); if (resource != null) { return resource.getPath(); } else { return resourceType; } }
Converts the resource type to an absolute path. If it does not start with "/", the resource type is resolved via the search paths using the resource resolver. If no matching resource is found, it is returned unchanged. @param resourceType Resource type @param resourceResolver Resource resolver @return Absolute resource type
public static @NotNull String makeRelative(@NotNull String resourceType, @NotNull ResourceResolver resourceResolver) { String[] searchPaths = resourceResolver.getSearchPath(); for (String prefix : searchPaths) { if (StringUtils.startsWith(resourceType, prefix)) { return resourceType.substring(prefix.length()); } } return resourceType; }
Makes the given resource type relative by stripping off any search path prefix. In case the given resource type does not start with any of these prefixes it is returned unmodified. @param resourceType The resource type to make relative. @param resourceResolver Resource resolver @return Relative resource type
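A usage sketch, assuming the default /apps/ and /libs/ search paths and that the two methods above live in a class called ResourceType (consistent with the ResourceType.equals call in the code further below); the component path and the resolver variable are illustrative:

String abs = ResourceType.makeAbsolute("myapp/components/teaser", resolver);        // e.g. /apps/myapp/components/teaser
String rel = ResourceType.makeRelative("/apps/myapp/components/teaser", resolver);  // myapp/components/teaser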
@Deprecated public static @NotNull String makeRelative(@NotNull String resourceType) { if (StringUtils.startsWith(resourceType, APPS_PREFIX)) { return resourceType.substring(APPS_PREFIX.length()); } else if (StringUtils.startsWith(resourceType, LIBS_PREFIX)) { return resourceType.substring(LIBS_PREFIX.length()); } return resourceType; }
Makes the given resource type relative by stripping off an /apps/ or /libs/ prefix. In case the given resource type does not start with any of these prefixes it is returned unmodified. This method does not take the real configured search paths into account, but in case of AEM usually only /apps/ and /libs/ are used. @param resourceType The resource type to make relative. @return Relative resource type @deprecated Please use {@link #makeRelative(String, ResourceResolver)} instead.
public static boolean is(@Nullable Resource resource, @Nullable String resourceType) { if (resource == null || resourceType == null) { return false; } ResourceResolver resolver = resource.getResourceResolver(); // Check if the resource is of the given type. This method first checks the // resource type of the resource, then its super resource type and continues // to go up the resource super type hierarchy. boolean result = false; if (ResourceType.equals(resourceType, resource.getResourceType())) { result = true; } else { Set<String> superTypesChecked = new HashSet<>(); String superType = resolver.getParentResourceType(resource); while (!result && superType != null) { if (ResourceType.equals(resourceType, superType)) { result = true; } else { superTypesChecked.add(superType); superType = resolver.getParentResourceType(superType); if (superType != null && superTypesChecked.contains(superType)) { throw new SlingException("Cyclic dependency for resourceSuperType hierarchy detected on resource " + resource.getPath(), null); } } } } return result; }
Returns <code>true</code> if the resource type or any of the resource's super type(s) equals the given resource type. This implementation is equal to {@link ResourceResolver#isResourceType(Resource, String)} - but in earlier Sling versions the comparison check did not take potential mixtures of relative and absolute resource types into account. This method does. @param resource The resource to check @param resourceType The resource type to check this resource against. @return <code>true</code> if the resource type or any of the resource's super type(s) equals the given resource type. <code>false</code> is also returned if <code>resource</code> or <code>resourceType</code> are <code>null</code>.
private static int checkResult(int result) { if (exceptionsEnabled && result != cudnnStatus.CUDNN_STATUS_SUCCESS) { throw new CudaException(cudnnStatus.stringFor(result)); } return result; }
If the given result is not cudnnStatus.CUDNN_STATUS_SUCCESS and exceptions have been enabled, this method will throw a CudaException with an error message that corresponds to the given result code. Otherwise, the given result is simply returned. @param result The result to check @return The result that was given as the parameter @throws CudaException If exceptions have been enabled and the given result code is not cudnnStatus.CUDNN_STATUS_SUCCESS
public static int cudnnSetTensor4dDescriptorEx( cudnnTensorDescriptor tensorDesc, int dataType, /** image data type */ int n, /** number of inputs (batch size) */ int c, /** number of input feature maps */ int h, /** height of input section */ int w, /** width of input section */ int nStride, int cStride, int hStride, int wStride) { return checkResult(cudnnSetTensor4dDescriptorExNative(tensorDesc, dataType, n, c, h, w, nStride, cStride, hStride, wStride)); }
Sets a 4D tensor descriptor, specifying the dimension sizes (n, c, h, w) and their strides explicitly.
public static int cudnnTransformTensor( cudnnHandle handle, Pointer alpha, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnTransformTensorNative(handle, alpha, xDesc, x, beta, yDesc, y)); }
Tensor layout conversion helper (y = alpha * x + beta * y)
public static int cudnnAddTensor( cudnnHandle handle, Pointer alpha, cudnnTensorDescriptor aDesc, Pointer A, Pointer beta, cudnnTensorDescriptor cDesc, Pointer C) { return checkResult(cudnnAddTensorNative(handle, alpha, aDesc, A, beta, cDesc, C)); }
Tensor Bias addition : C = alpha * A + beta * C
public static int cudnnOpTensor( cudnnHandle handle, cudnnOpTensorDescriptor opTensorDesc, Pointer alpha1, cudnnTensorDescriptor aDesc, Pointer A, Pointer alpha2, cudnnTensorDescriptor bDesc, Pointer B, Pointer beta, cudnnTensorDescriptor cDesc, Pointer C) { return checkResult(cudnnOpTensorNative(handle, opTensorDesc, alpha1, aDesc, A, alpha2, bDesc, B, beta, cDesc, C)); }
B tensor is ignored for CUDNN_OP_TENSOR_SQRT, CUDNN_OP_TENSOR_NOT.
public static int cudnnGetReductionIndicesSize( cudnnHandle handle, cudnnReduceTensorDescriptor reduceTensorDesc, cudnnTensorDescriptor aDesc, cudnnTensorDescriptor cDesc, long[] sizeInBytes) { return checkResult(cudnnGetReductionIndicesSizeNative(handle, reduceTensorDesc, aDesc, cDesc, sizeInBytes)); }
Helper function to return the minimum size of the index space to be passed to the reduction given the input and output tensors
public static int cudnnGetReductionWorkspaceSize( cudnnHandle handle, cudnnReduceTensorDescriptor reduceTensorDesc, cudnnTensorDescriptor aDesc, cudnnTensorDescriptor cDesc, long[] sizeInBytes) { return checkResult(cudnnGetReductionWorkspaceSizeNative(handle, reduceTensorDesc, aDesc, cDesc, sizeInBytes)); }
Helper function to return the minimum size of the workspace to be passed to the reduction given the input and output tensors
public static int cudnnReduceTensor( cudnnHandle handle, cudnnReduceTensorDescriptor reduceTensorDesc, Pointer indices, long indicesSizeInBytes, Pointer workspace, long workspaceSizeInBytes, Pointer alpha, cudnnTensorDescriptor aDesc, Pointer A, Pointer beta, cudnnTensorDescriptor cDesc, Pointer C) { return checkResult(cudnnReduceTensorNative(handle, reduceTensorDesc, indices, indicesSizeInBytes, workspace, workspaceSizeInBytes, alpha, aDesc, A, beta, cDesc, C)); }
The indices space is ignored for reduce ops other than min or max.
public static int cudnnSetTensor( cudnnHandle handle, cudnnTensorDescriptor yDesc, Pointer y, Pointer valuePtr) { return checkResult(cudnnSetTensorNative(handle, yDesc, y, valuePtr)); }
Set all values of a tensor to a given value : y[i] = value[0]
public static int cudnnScaleTensor( cudnnHandle handle, cudnnTensorDescriptor yDesc, Pointer y, Pointer alpha) { return checkResult(cudnnScaleTensorNative(handle, yDesc, y, alpha)); }
Scale all values of a tensor by a given factor : y[i] = alpha * y[i]
public static int cudnnGetFilter4dDescriptor( cudnnFilterDescriptor filterDesc, int[] dataType, /** image data type */ int[] format, int[] k, /** number of output feature maps */ int[] c, /** number of input feature maps */ int[] h, /** height of each input filter */ int[] w)/** width of each input filter */ { return checkResult(cudnnGetFilter4dDescriptorNative(filterDesc, dataType, format, k, c, h, w)); }
Retrieves the parameters of a 4D filter descriptor (data type, format, and the k/c/h/w filter dimensions).
public static int cudnnSetFilterNdDescriptor( cudnnFilterDescriptor filterDesc, int dataType, /** image data type */ int format, int nbDims, int[] filterDimA) { return checkResult(cudnnSetFilterNdDescriptorNative(filterDesc, dataType, format, nbDims, filterDimA)); }
Sets an Nd filter descriptor from the given data type, format, and filter dimensions.
public static int cudnnGetConvolution2dForwardOutputDim( cudnnConvolutionDescriptor convDesc, cudnnTensorDescriptor inputTensorDesc, cudnnFilterDescriptor filterDesc, int[] n, int[] c, int[] h, int[] w) { return checkResult(cudnnGetConvolution2dForwardOutputDimNative(convDesc, inputTensorDesc, filterDesc, n, c, h, w)); }
Helper function to return the dimensions of the output tensor given a convolution descriptor
public static int cudnnGetConvolutionNdDescriptor( cudnnConvolutionDescriptor convDesc, int arrayLengthRequested, int[] arrayLength, int[] padA, int[] strideA, int[] dilationA, int[] mode, int[] computeType)/** convolution data type */ { return checkResult(cudnnGetConvolutionNdDescriptorNative(convDesc, arrayLengthRequested, arrayLength, padA, strideA, dilationA, mode, computeType)); }
Retrieves the parameters of an Nd convolution descriptor (padding, strides, dilations, mode, and compute data type).
public static int cudnnGetConvolutionNdForwardOutputDim( cudnnConvolutionDescriptor convDesc, cudnnTensorDescriptor inputTensorDesc, cudnnFilterDescriptor filterDesc, int nbDims, int[] tensorOuputDimA) { return checkResult(cudnnGetConvolutionNdForwardOutputDimNative(convDesc, inputTensorDesc, filterDesc, nbDims, tensorOuputDimA)); }
Helper function to return the dimensions of the output tensor given a convolution descriptor
public static int cudnnGetConvolutionForwardWorkspaceSize( cudnnHandle handle, cudnnTensorDescriptor xDesc, cudnnFilterDescriptor wDesc, cudnnConvolutionDescriptor convDesc, cudnnTensorDescriptor yDesc, int algo, long[] sizeInBytes) { return checkResult(cudnnGetConvolutionForwardWorkspaceSizeNative(handle, xDesc, wDesc, convDesc, yDesc, algo, sizeInBytes)); }
Helper function to return the minimum size of the workspace to be passed to the convolution given an algo
public static int cudnnConvolutionForward( cudnnHandle handle, Pointer alpha, cudnnTensorDescriptor xDesc, Pointer x, cudnnFilterDescriptor wDesc, Pointer w, cudnnConvolutionDescriptor convDesc, int algo, Pointer workSpace, long workSpaceSizeInBytes, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnConvolutionForwardNative(handle, alpha, xDesc, x, wDesc, w, convDesc, algo, workSpace, workSpaceSizeInBytes, beta, yDesc, y)); }
Function to perform the forward pass for batch convolution
public static int cudnnConvolutionBiasActivationForward( cudnnHandle handle, Pointer alpha1, cudnnTensorDescriptor xDesc, Pointer x, cudnnFilterDescriptor wDesc, Pointer w, cudnnConvolutionDescriptor convDesc, int algo, Pointer workSpace, long workSpaceSizeInBytes, Pointer alpha2, cudnnTensorDescriptor zDesc, Pointer z, cudnnTensorDescriptor biasDesc, Pointer bias, cudnnActivationDescriptor activationDesc, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnConvolutionBiasActivationForwardNative(handle, alpha1, xDesc, x, wDesc, w, convDesc, algo, workSpace, workSpaceSizeInBytes, alpha2, zDesc, z, biasDesc, bias, activationDesc, yDesc, y)); }
Fused conv/bias/activation operation : y = Act( alpha1 * conv(x) + alpha2 * z + bias )
public static int cudnnConvolutionBackwardBias( cudnnHandle handle, Pointer alpha, cudnnTensorDescriptor dyDesc, Pointer dy, Pointer beta, cudnnTensorDescriptor dbDesc, Pointer db) { return checkResult(cudnnConvolutionBackwardBiasNative(handle, alpha, dyDesc, dy, beta, dbDesc, db)); }
Function to compute the bias gradient for batch convolution
public static int cudnnGetConvolutionBackwardFilterWorkspaceSize( cudnnHandle handle, cudnnTensorDescriptor xDesc, cudnnTensorDescriptor dyDesc, cudnnConvolutionDescriptor convDesc, cudnnFilterDescriptor gradDesc, int algo, long[] sizeInBytes) { return checkResult(cudnnGetConvolutionBackwardFilterWorkspaceSizeNative(handle, xDesc, dyDesc, convDesc, gradDesc, algo, sizeInBytes)); }
Helper function to return the minimum size of the workspace to be passed to the convolution given an algo
public static int cudnnGetConvolutionBackwardDataWorkspaceSize( cudnnHandle handle, cudnnFilterDescriptor wDesc, cudnnTensorDescriptor dyDesc, cudnnConvolutionDescriptor convDesc, cudnnTensorDescriptor dxDesc, int algo, long[] sizeInBytes) { return checkResult(cudnnGetConvolutionBackwardDataWorkspaceSizeNative(handle, wDesc, dyDesc, convDesc, dxDesc, algo, sizeInBytes)); }
Helper function to return the minimum size of the workspace to be passed to the convolution given an algo
public static int cudnnSoftmaxForward( cudnnHandle handle, int algo, int mode, Pointer alpha, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnSoftmaxForwardNative(handle, algo, mode, alpha, xDesc, x, beta, yDesc, y)); }
Function to perform forward softmax
public static int cudnnSoftmaxBackward( cudnnHandle handle, int algo, int mode, Pointer alpha, cudnnTensorDescriptor yDesc, Pointer y, cudnnTensorDescriptor dyDesc, Pointer dy, Pointer beta, cudnnTensorDescriptor dxDesc, Pointer dx) { return checkResult(cudnnSoftmaxBackwardNative(handle, algo, mode, alpha, yDesc, y, dyDesc, dy, beta, dxDesc, dx)); }
Function to perform backward softmax
public static int cudnnPoolingForward( cudnnHandle handle, cudnnPoolingDescriptor poolingDesc, Pointer alpha, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnPoolingForwardNative(handle, poolingDesc, alpha, xDesc, x, beta, yDesc, y)); }
Function to perform forward pooling
public static int cudnnPoolingBackward( cudnnHandle handle, cudnnPoolingDescriptor poolingDesc, Pointer alpha, cudnnTensorDescriptor yDesc, Pointer y, cudnnTensorDescriptor dyDesc, Pointer dy, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor dxDesc, Pointer dx) { return checkResult(cudnnPoolingBackwardNative(handle, poolingDesc, alpha, yDesc, y, dyDesc, dy, xDesc, x, beta, dxDesc, dx)); }
Function to perform backward pooling
public static int cudnnGetActivationDescriptor( cudnnActivationDescriptor activationDesc, int[] mode, int[] reluNanOpt, double[] coef)/** ceiling for clipped RELU, alpha for ELU */ { return checkResult(cudnnGetActivationDescriptorNative(activationDesc, mode, reluNanOpt, coef)); }
Retrieves the parameters of an activation descriptor (mode, NaN propagation option, and coef: ceiling for clipped RELU, alpha for ELU).
public static int cudnnActivationForward( cudnnHandle handle, cudnnActivationDescriptor activationDesc, Pointer alpha, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnActivationForwardNative(handle, activationDesc, alpha, xDesc, x, beta, yDesc, y)); }
Function to perform forward activation
public static int cudnnActivationBackward( cudnnHandle handle, cudnnActivationDescriptor activationDesc, Pointer alpha, cudnnTensorDescriptor yDesc, Pointer y, cudnnTensorDescriptor dyDesc, Pointer dy, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor dxDesc, Pointer dx) { return checkResult(cudnnActivationBackwardNative(handle, activationDesc, alpha, yDesc, y, dyDesc, dy, xDesc, x, beta, dxDesc, dx)); }
Function to perform backward activation
public static int cudnnSetLRNDescriptor( cudnnLRNDescriptor normDesc, int lrnN, double lrnAlpha, double lrnBeta, double lrnK) { return checkResult(cudnnSetLRNDescriptorNative(normDesc, lrnN, lrnAlpha, lrnBeta, lrnK)); }
<pre> Uses a window [center-lookBehind, center+lookAhead], where lookBehind = floor( (lrnN-1)/2 ), lookAhead = lrnN-lookBehind-1. Values of double parameters cast to tensor data type. </pre>
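A worked example of the window arithmetic described above, with an illustrative lrnN:

int lrnN = 5;                             // illustrative window size
int lookBehind = (lrnN - 1) / 2;          // floor((5-1)/2) = 2
int lookAhead = lrnN - lookBehind - 1;    // 5 - 2 - 1 = 2 -> window [center-2, center+2]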
public static int cudnnGetLRNDescriptor( cudnnLRNDescriptor normDesc, int[] lrnN, double[] lrnAlpha, double[] lrnBeta, double[] lrnK) { return checkResult(cudnnGetLRNDescriptorNative(normDesc, lrnN, lrnAlpha, lrnBeta, lrnK)); }
<pre> Retrieve the settings currently stored in an LRN layer descriptor Any of the provided pointers can be NULL (no corresponding value will be returned) </pre>
public static int cudnnLRNCrossChannelForward( cudnnHandle handle, cudnnLRNDescriptor normDesc, int lrnMode, Pointer alpha, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnLRNCrossChannelForwardNative(handle, normDesc, lrnMode, alpha, xDesc, x, beta, yDesc, y)); }
LRN cross-channel forward computation. Double parameters cast to tensor data type
public static int cudnnLRNCrossChannelBackward( cudnnHandle handle, cudnnLRNDescriptor normDesc, int lrnMode, Pointer alpha, cudnnTensorDescriptor yDesc, Pointer y, cudnnTensorDescriptor dyDesc, Pointer dy, cudnnTensorDescriptor xDesc, Pointer x, Pointer beta, cudnnTensorDescriptor dxDesc, Pointer dx) { return checkResult(cudnnLRNCrossChannelBackwardNative(handle, normDesc, lrnMode, alpha, yDesc, y, dyDesc, dy, xDesc, x, beta, dxDesc, dx)); }
LRN cross-channel backward computation. Double parameters cast to tensor data type
public static int cudnnDivisiveNormalizationForward( cudnnHandle handle, cudnnLRNDescriptor normDesc, int mode, Pointer alpha, cudnnTensorDescriptor xDesc, /** same desc for means, temp, temp2 */ Pointer x, Pointer means, /** if NULL, means are assumed to be zero */ Pointer temp, Pointer temp2, Pointer beta, cudnnTensorDescriptor yDesc, Pointer y) { return checkResult(cudnnDivisiveNormalizationForwardNative(handle, normDesc, mode, alpha, xDesc, x, means, temp, temp2, beta, yDesc, y)); }
LCN/divisive normalization functions: y = alpha * normalize(x) + beta * y
public static int cudnnDeriveBNTensorDescriptor( cudnnTensorDescriptor derivedBnDesc, cudnnTensorDescriptor xDesc, int mode) { return checkResult(cudnnDeriveBNTensorDescriptorNative(derivedBnDesc, xDesc, mode)); }
<pre> Derives a tensor descriptor from layer data descriptor for BatchNormalization scale, invVariance, bnBias, bnScale tensors. Use this tensor desc for bnScaleBiasMeanVarDesc and bnScaleBiasDiffDesc in Batch Normalization forward and backward functions. </pre>
public static int cudnnBatchNormalizationForwardTraining( cudnnHandle handle, int mode, Pointer alpha, /** alpha[0] = result blend factor */ Pointer beta, /** beta[0] = dest layer blend factor */ cudnnTensorDescriptor xDesc, Pointer x, /** NxCxHxW */ cudnnTensorDescriptor yDesc, Pointer y, /** NxCxHxW */ /** * <pre> * Shared desc for the next 6 tensors in the argument list. Data type to be set as follows: type = (typeOf(x) == double) ? double : float Dimensions for this descriptor depend on normalization mode - Spatial Normalization : tensors are expected to have dims 1xCx1x1 (normalization is performed across NxHxW) - Per-Activation Normalization : tensors are expected to have dims of 1xCxHxW * (normalization is performed across N) * </pre> */ cudnnTensorDescriptor bnScaleBiasMeanVarDesc, /** 'Gamma' and 'Beta' respectively in Ioffe and Szegedy's paper's notation */ Pointer bnScale, Pointer bnBias, /** * <pre> * MUST use factor=1 in the very first call of a complete training cycle. Use a factor=1/(1+n) at N-th call to the function to get Cumulative Moving Average (CMA) behavior CMA[n] = (x[1]+...+x[n])/n Since CMA[n+1] = (n*CMA[n]+x[n+1])/(n+1) = ((n+1)*CMA[n]-CMA[n])/(n+1) + x[n+1]/(n+1) = * CMA[n]*(1-1/(n+1)) + x[n+1]*1/(n+1) * </pre> */ double exponentialAverageFactor, /** Used in Training phase only. runningMean = newMean*factor + runningMean*(1-factor) */ Pointer resultRunningMean, /** Output in training mode, input in inference. Is the moving average of variance[x] (factor is applied in the same way as for runningMean) */ Pointer resultRunningVariance, /** Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ double epsilon, /** Optionally save intermediate results from the forward pass here - can be reused to speed up backward pass. NULL if unused */ Pointer resultSaveMean, Pointer resultSaveInvVariance) { return checkResult(cudnnBatchNormalizationForwardTrainingNative(handle, mode, alpha, beta, xDesc, x, yDesc, y, bnScaleBiasMeanVarDesc, bnScale, bnBias, exponentialAverageFactor, resultRunningMean, resultRunningVariance, epsilon, resultSaveMean, resultSaveInvVariance)); }
Computes y = BN(x). Also accumulates moving averages of mean and inverse variances
public static int cudnnBatchNormalizationForwardTrainingEx( cudnnHandle handle, int mode, int bnOps, Pointer alpha, /** alpha[0] = result blend factor */ Pointer beta, /** beta[0] = dest layer blend factor */ cudnnTensorDescriptor xDesc, Pointer xData, cudnnTensorDescriptor zDesc, Pointer zData, cudnnTensorDescriptor yDesc, Pointer yData, cudnnTensorDescriptor bnScaleBiasMeanVarDesc, Pointer bnScale, Pointer bnBias, double exponentialAverageFactor, Pointer resultRunningMean, Pointer resultRunningVariance, /** Has to be >= CUDNN_BN_MIN_EPSILON. Should be the same in forward and backward functions. */ double epsilon, /** Optionally save intermediate results from the forward pass here - can be reused to speed up backward pass. NULL if unused */ Pointer resultSaveMean, Pointer resultSaveInvVariance, cudnnActivationDescriptor activationDesc, Pointer workspace, long workSpaceSizeInBytes, Pointer reserveSpace, long reserveSpaceSizeInBytes) { return checkResult(cudnnBatchNormalizationForwardTrainingExNative(handle, mode, bnOps, alpha, beta, xDesc, xData, zDesc, zData, yDesc, yData, bnScaleBiasMeanVarDesc, bnScale, bnBias, exponentialAverageFactor, resultRunningMean, resultRunningVariance, epsilon, resultSaveMean, resultSaveInvVariance, activationDesc, workspace, workSpaceSizeInBytes, reserveSpace, reserveSpaceSizeInBytes)); }
Computes y = relu(BN(x) + z). Also accumulates moving averages of mean and inverse variances
public static int cudnnBatchNormalizationForwardInference( cudnnHandle handle, int mode, Pointer alpha, /** alpha[0] = result blend factor */ Pointer beta, /** beta[0] = dest layer blend factor */ cudnnTensorDescriptor xDesc, Pointer x, /** NxCxHxW */ cudnnTensorDescriptor yDesc, Pointer y, /** NxCxHxW */ cudnnTensorDescriptor bnScaleBiasMeanVarDesc, Pointer bnScale, Pointer bnBias, Pointer estimatedMean, Pointer estimatedVariance, double epsilon) { return checkResult(cudnnBatchNormalizationForwardInferenceNative(handle, mode, alpha, beta, xDesc, x, yDesc, y, bnScaleBiasMeanVarDesc, bnScale, bnBias, estimatedMean, estimatedVariance, epsilon)); }
<pre> Performs Batch Normalization during Inference: y[i] = bnScale[k]*(x[i]-estimatedMean[k])/sqrt(epsilon+estimatedVariance[k]) + bnBias[k] with bnScale, bnBias, runningMean, runningInvVariance tensors indexed according to spatial or per-activation mode. Refer to cudnnBatchNormalizationForwardTraining above for notes on function arguments. </pre>
public static int cudnnBatchNormalizationBackward( cudnnHandle handle, int mode, Pointer alphaDataDiff, Pointer betaDataDiff, Pointer alphaParamDiff, Pointer betaParamDiff, cudnnTensorDescriptor xDesc, /** same desc for x, dx, dy */ Pointer x, cudnnTensorDescriptor dyDesc, Pointer dy, cudnnTensorDescriptor dxDesc, Pointer dx, /** Shared tensor desc for the 4 tensors below */ cudnnTensorDescriptor dBnScaleBiasDesc, Pointer bnScale, /** bnBias doesn't affect backpropagation */ /** scale and bias diff are not backpropagated below this layer */ Pointer dBnScaleResult, Pointer dBnBiasResult, /** Same epsilon as forward pass */ double epsilon, /** Optionally cached intermediate results from forward pass */ Pointer savedMean, Pointer savedInvVariance) { return checkResult(cudnnBatchNormalizationBackwardNative(handle, mode, alphaDataDiff, betaDataDiff, alphaParamDiff, betaParamDiff, xDesc, x, dyDesc, dy, dxDesc, dx, dBnScaleBiasDesc, bnScale, dBnScaleResult, dBnBiasResult, epsilon, savedMean, savedInvVariance)); }
Performs backward pass of Batch Normalization layer. Returns x gradient, bnScale gradient and bnBias gradient
public static int cudnnRestoreDropoutDescriptor( cudnnDropoutDescriptor dropoutDesc, cudnnHandle handle, float dropout, Pointer states, long stateSizeInBytes, long seed) { return checkResult(cudnnRestoreDropoutDescriptorNative(dropoutDesc, handle, dropout, states, stateSizeInBytes, seed)); }
Restores the dropout descriptor to a previously saved-off state