Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
Using the Scala Shell Currently, the Flink Scala shell for EMR clusters is only configured to start new YARN sessions. You can use the Scala shell by following the procedure below. Using the Flink Scala shell on the master node Log in to the master node using SSH as described in Connect to the Master Node using SSH. Type the following to start a shell: In Amazon EMR version 5.5.0 and later, you can use: % flink-scala-shell yarn -n 1 In earlier versions of Amazon EMR, use: % /usr/lib/flink/bin/start-scala-shell.sh yarn -n 1 This starts the Flink Scala shell so you can interactively use Flink. Just as with other interfaces and options, you can scale the -n option value used in the example based on the number of tasks you want to run from the shell.
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/flink-scala.html
2018-06-17T21:45:10
CC-MAIN-2018-26
1529267859817.15
[]
docs.aws.amazon.com
What is Kumu? Kumu is a powerful visualization platform for mapping systems and better understanding relationships. We blend systems thinking, stakeholder mapping, and social network analysis to help the world’s top influencers turn ideas into impact. Kumu is a Hawaiian term for "teacher" or "source of wisdom." We're a remote team with roots in Oahu and Silicon Valley. Read more about our story on the About Us page. Who's using Kumu? Kumu helps hundreds of organizations across the world engage complex issues more effectively. Make sure to check out a few of our highlighted case studies: - Humanity United: Building a Better Brick Market in Nepal - Stanford ChangeLabs: Launching Large Scale Sustainable Transformations - Hewlett Foundation: Making Congress Work Again A tool AND a community Kumu is more than just a tool. It's also a robust community of do-ers who are paving the way for how to create lasting impact. Don't miss the manifesto, and make sure to check out the Kumus, our curated team of expert consultants available to help on your next project.
https://docs.kumu.io/about/what-is-kumu.html
2018-06-17T22:08:32
CC-MAIN-2018-26
1529267859817.15
[]
docs.kumu.io
- About Snapshots - Take a VM Snapshot - Revert to a Snapshot - Create a New VM From a Snapshot - Create a New Template From a Snapshot - Export a Snapshot to a File - Delete a Snapshot While it is not possible to copy a VM snapshot directly, you can create a new VM template from a snapshot and then use that to make copies of the snapshot. Templates are "gold images": ordinary VMs which are intended to be used as master copies from which to create new VMs. Once you have set up a VM the way you want it and taken a snapshot of it, you can save the snapshot as a new template and use it to create copies of your specially configured VM in the same resource pool. Note that the snapshot's memory state will not be saved when you do this. If the original VM used to create the snapshot has been deleted, you can save the snapshot as a new template as follows:
https://docs.citrix.com/en-us/xencenter/6-5/xs-xc-vms-snapshots/xs-xc-vms-snapshots-newtemplate.html
2018-06-17T21:59:33
CC-MAIN-2018-26
1529267859817.15
[]
docs.citrix.com
Appearances can make a DriveWorks 3D Model more realistic and add realism to a 3D scene. Appearances can also play a big part in the configuration experience of a DriveWorks 3D File. You can configure colors and materials without going back through SOLIDWORKS. In this topic we are going to look at Appearances, how you use them, and how to create appealing 3D scenes. Below is a list of Appearance properties and how they affect the Appearance of your model. Ambient Intensity This is a multiplier for the intensity of the Ambient Light on this Appearance. You can use this property to make objects look like they are glowing. This is great for light bulbs. You may need to reduce the Ambient Intensity for transparent models. Reflectivity Affects the strength of the Environment Map. The higher the value, the more visible the Environment Map is on the model. It also makes Specular highlights smaller. You may need to increase Specular Intensity as you increase Reflectivity. Reflectivity can be used to make models look glossy or wet. Diffuse Color Diffuse Color determines the color of diffused light that hits the model. The alpha channel can be used to make objects transparent. For things like glass this can be set very low. The diffuse color is multiplied with the light color. This means that black will always be black no matter how many lights you shine at it. Diffuse Intensity This is a multiplier for the intensity of the Diffuse light on this Appearance. For transparent objects, the Diffuse Intensity should be almost 0. A very low Diffuse Intensity should be avoided on objects that are not transparent. Specular Color This is the color of the Specular highlight on the Appearance. This is the shiny area on a model. Specular Color should always be white on organic objects. For metallic materials, use a similar color to the Diffuse Color. Like Diffuse Color, this is multiplied with the light color. Specular Intensity Specular Intensity is the multiplier for the intensity of the Specular highlight on this Appearance. This is the intensity of the shiny spots on a model. You can turn this property up to make things look shiny. This simulates how light gets reflected by a surface. Texture - Source File Path Texture allows you to browse for an image that exists in the DriveWorks 3D Document or browse for a new image to be used as the Texture for this Appearance. Source File Path shows you the path where the image is stored. You can also build a rule for this property to make the Texture image dynamic. Texture Scale X / Y Texture Scale X and Y affect the size of the Texture image in the horizontal and vertical direction. The values can be set independently of each other. The higher the number, the smaller the Texture image will become. For example: setting the scale to 2 will make the texture tile twice. Setting it to 0.5 will display only half the image. Texture Offset X / Y Texture Offset X and Y affect the position of the Texture image on a surface. They allow you to pan a texture and line it up with other textures on other faces or models. These values can be set independently of each other, allowing for greater control. Texture Angle Texture Angle allows you to rotate the texture around the top left-hand corner. This allows you to rotate the texture and line it up in the correct orientation. Static Models Static Models are nodes in a DriveWorks 3D Document that use the Model Entity. The Model Entity allows you to assign Geometry to the Node in the tree.
Geometries can have one or multiple Appearances applied to them from the CAD package. Each Appearance will be imported into the 3D Document when the Geometry is imported. The Geometry will contain a number of Appearances with certain names; if these exist in the 3D Document, then they will be used. If these Appearances are changed at the Document level, then the Geometry will change as well. It is possible to override all Appearances on a Geometry by using the property called "Override Appearance". Override Appearance will override all Appearances on this instance of the Geometry and replace them with the Appearance specified. To individually control each Appearance on a Geometry, each Appearance needs to exist at the top level of the 3D Document. Replacement Models 3D Models can be swapped into a DriveWorks 3D Document using the Replace Model Entity. When swapping Models into a 3D Document, there are times where you may want to modify their Appearances to show highlighting or selection. To do this you would use the Remap Appearance Entity. The Remap Appearance Entity lets you remap an Appearance on the replaced model to be another Appearance in your 3D Document. With this Entity, you have two choices. You can remap all Appearances using an asterisk * or you can type the name of the Appearance you want to replace. It is worth noting that you cannot swap multiple individual Appearances; you can only swap 1 for 1 or all for 1. If the Replace Entity is on a Model Node, i.e. it contains a Model Entity, the Override Appearance property will not work on the replaced file. When working with Textures there are a few things that you should consider. DriveWorks will tile textures if the surface is larger than the Texture image. Therefore it is important that you use seamless Textures. This prevents any nasty lines appearing on your models. Very large images increase file size and cause performance decreases. In addition, GPUs won't render Textures over a certain size. Square texture images provide the best-looking models. If a texture is not square, GPUs can pad the image to make it square, adding in white space. This can leave white lines on your models. JPG images are smaller than PNG. You should only use PNG files if you require transparency on the Appearance. Some surfaces don't require large textures. Where you can, compress images, make them smaller in size, and make sure they fit your surface. Appearances can be used to imitate shadows in a DriveWorks 3D Document. Shadows add another depth of realism to your 3D scene and model. We simulate shadows by adding a 3D model of a Plane under the model we are configuring. We then add an Appearance to that plane. The Appearance contains a Texture image that looks like a shadow. We do this on demos like the Trailer, Conveyor and Play System. Steps: We create Shadow Textures in applications such as Photoshop or Gimp. These photo editing applications let you draw shapes that can simply be used as Shadows in your 3D scene.
http://docs.driveworkspro.com/Topic/DriveWorks3DUsingAppearances
2018-06-17T22:23:01
CC-MAIN-2018-26
1529267859817.15
[]
docs.driveworkspro.com
Draggable<T>. On multitouch devices, multiple drags can occur simultaneously because there can be multiple pointers in contact with the device at once. To limit the number of simultaneous drags, use the maxSimultaneousDrags property. The default is to allow an unlimited number of simultaneous drags. This widget displays child when zero drags are under way. If childWhenDragging is non-null, this widget instead displays childWhenDragging when one or more drags are under way. Otherwise, this widget always displays child. See also: - Inheritance - Implementers - Constructors - Draggable({Key key, @required Widget child, @required Widget feedback, T data, Axis axis, Widget childWhenDragging, Offset feedbackOffset: Offset.zero, DragAnchor dragAnchor: DragAnchor.child, Axis affinity, int maxSimultaneousDrags, VoidCallback onDragStarted, DraggableCanceledCallback onDraggableCanceled, DragEndCallback onDragEnd, VoidCallback onDragCompleted, bool ignoringFeedbackSemantics: true }) - Creates a widget that can be dragged to a DragTarget. [...] const Properties - affinity → Axis - Controls how this widget competes with other gestures to initiate a drag. [...] final - axis → Axis - The Axis to restrict this draggable's movement, if specified. [...] final - child → Widget - The widget below this widget in the tree. [...] final - childWhenDragging → Widget - The widget to display instead of child when one or more drags are under way. [...] final - data → T - The data that will be dropped by this draggable. final - dragAnchor → DragAnchor - Where this widget should be anchored during a drag. final - feedback → Widget - The widget to show under the pointer when a drag is under way. [...] final - feedbackOffset → Offset - The feedbackOffset can be used to set the hit test target point for the purposes of finding a drag target. It is especially useful if the feedback is transformed compared to the child. final - ignoringFeedbackSemantics → bool - Whether the semantics of the feedback widget is ignored when building the semantics tree. [...] final - maxSimultaneousDrags → int - How many simultaneous drags to support. [...] final - onDragCompleted → VoidCallback - Called when the draggable is dropped and accepted by a DragTarget. [...] final - onDragEnd → DragEndCallback - Called when the draggable is dropped. [...] final - onDraggableCanceled → DraggableCanceledCallback - Called when the draggable is dropped without being accepted by a DragTarget. [...] final - onDragStarted → VoidCallback - Called when the draggable starts being dragged. final - hashCode → int - The hash code for this object. [...] read-only, inherited - key → Key - Controls how one widget replaces another widget in the tree. [...] final, inherited - runtimeType → Type - A representation of the runtime type of the object. read-only, inherited Methods - createRecognizer(GestureMultiDragStartCallback onStart) → MultiDragGestureRecognizer<MultiDragPointerState> - Creates a gesture recognizer that recognizes the start of the drag. [...] @protected - createState() → _DraggableState<
https://docs.flutter.io/flutter/widgets/Draggable-class.html
2018-12-10T07:39:32
CC-MAIN-2018-51
1544376823318.33
[]
docs.flutter.io
Your RhoConnect source adapter model can use any of these methods to interact with your backend service. Refer to the source adapter sample for a complete example. Log in to your backend service (optional). def login MyWebService.login(current_user.login) end Log off from your backend service (optional). def logoff MyWebService.logoff(current_user.login) end Query your backend service and build a hash of hashes (required). The query_params parameter in this method is nil by default, but you can use it to pass custom data from the client to the RhoConnect server using the RhoConnectClient.doSyncSource() method. Typically this is done when you want to implement some custom query behavior. def query(query_params = nil) parsed = JSON.parse(RestClient.get("#{@base}.json").body) @result = {} parsed.each do |item| @result[item["product"]["id"].to_s] = item["product"] end if parsed end Search your backend based on params and build a hash of hashes (optional). Similar to query; however, the master document accumulates the data in @result instead of replacing it when search runs. def search(params) parsed = JSON.parse(RestClient.get("#{@base}.json").body) @result = {} parsed.each do |item| if item["product"]["name"].downcase == params['name'].downcase @result[item["product"]["id"].to_s] = item["product"] end end if parsed end Next, you will need to add search to your Rhodes application. For details, see the Rhodes search section. Create a new record in the backend (optional). def create(create_hash) res = MyWebService.create(create_hash) # return new product id so we establish a client link res.new_id end Update an existing record in the backend (optional). def update(update_hash) end Delete an existing record in the backend (optional). def delete(delete_hash) MyWebService.delete(delete_hash['id']) end Returns the current user that called the adapter. For example, you could filter results for a specific user in your query method: def query @result = MyWebService.get_records_for_user(current_user.login) end Saves the current state of @result to Redis and sets it to nil. Typically this is used when your adapter has to paginate through backend service data. def query @result = {} ('a'..'z').each_with_index do |letter,i| @result ||= {} @result.merge!( DictionaryService.get_records_for(letter) ) stash_result if i % 2 == 0 end end If your Rhodes application sends blobs as part of a create/update operation, you must implement this method inside your Source Adapter Model and must not use the default implementation, where the blob is stored in a tempfile provided by Rack, because RhoConnect processing is asynchronous and there is no guarantee that the temporary file will exist at the time create is actually called. The following example stores the file permanently and keeps its :filename argument as another object attribute. def store_blob(obj,field_name,blob) # ... custom code to store the blob file ... my_stored_filename = do_custom_store[blob[:filename]] obj['filename'] = my_stored_filename end Sometimes, different groups of users share common source data. To leverage this, you can implement the following method in your Source Adapter Model to provide custom partition names for the users with shared data. In this case, RhoConnect will store the data for the MD (master document) of the grouped users using your custom partition name, which will reduce the memory footprint in Redis. From this standpoint, app partition is the edge case of custom user partitioning where all users share the same data for the particular source.
To use the custom user partitioning, implement the following class method in your Source Adapter's model: class Product < Rhoconnect::Model::Base # group users by the first letter def self.partition_name(user_id) return user_id[0] end end You can access your model's document data with the get_data method. By default, when called without arguments, it returns the Master Document (:md). # some custom controller's method that has access to the model def my_custom_method my_model_md = @model.get_data end
http://docs.tau-technologies.com/en/6.0/rhoconnectapi/source-adapter-model-api-ruby
2018-12-10T05:56:39
CC-MAIN-2018-51
1544376823318.33
[]
docs.tau-technologies.com
Return an antiderivative (indefinite integral) of a polynomial. The returned order m antiderivative P of polynomial p satisfies \frac{d^m}{dx^m} P(x) = p(x) and is defined up to m - 1 integration constants k. The constants determine the low-order polynomial part \frac{k_{m-1}}{0!} x^0 + \ldots + \frac{k_0}{(m-1)!} x^{m-1} of P so that P^{(j)}(0) = k_{m-j-1}. See also Examples The defining property of the antiderivative: >>> p = np.poly1d([1,1,1]) >>> P = np.polyint(p) >>> P poly1d([ 0.33333333, 0.5 , 1. , 0. ]) >>> np.polyder(P) == p True The integration constants default to zero, but can be specified: >>> P = np.polyint(p, 3) >>> P(0) 0.0 >>> np.polyder(P)(0) 0.0 >>> np.polyder(P, 2)(0) 0.0 >>> P = np.polyint(p, 3, k=[6,5,3]) >>> P poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ]) Note that 3 = 6 / 2!, and that the constants are given in the order of integrations. Constant of the highest-order polynomial term comes first: >>> np.polyder(P, 2)(0) 6.0 >>> np.polyder(P, 1)(0) 5.0 >>> P(0) 3.0
https://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.polyint.html
2016-10-20T22:00:59
CC-MAIN-2016-44
1476988717954.1
[]
docs.scipy.org
About playlists You can create both standard and automatic playlists for songs. You create a standard playlist by manually adding songs that are on your BlackBerry® device or media card. You create an automatic playlist by specifying criteria for artists, albums, or genres of songs. When you add a song to your device that meets the criteria, your device adds it to the automatic playlist. An indicator appears beside automatic playlists in your list of playlists.
http://docs.blackberry.com/en/smartphone_users/deliverables/20405/About_playlists_293904_11.jsp
2013-12-05T01:23:52
CC-MAIN-2013-48
1386163037952
[]
docs.blackberry.com
public abstract class AbstractDelegatingSmartContextLoader extends Object implements SmartContextLoader AbstractDelegatingSmartContextLoader serves as an abstract base class for implementations of the SmartContextLoader SPI that delegate to a set of candidate SmartContextLoaders (i.e., one that supports XML configuration files and one that supports annotated classes) to determine which context loader is appropriate for a given test class's configuration. Each candidate is given a chance to process the ContextConfigurationAttributes for each class in the test class hierarchy that is annotated with @ContextConfiguration, and the candidate that supports the merged, processed configuration will be used to actually load the context. Placing an empty @ContextConfiguration annotation on a test class signals that default resource locations (i.e., XML configuration files) or default configuration classes should be detected. Furthermore, if a specific ContextLoader or SmartContextLoader is not explicitly declared via @ContextConfiguration, a concrete subclass of AbstractDelegatingSmartContextLoader will be used as the default loader, thus providing automatic support for either XML configuration files or annotated classes, but not both simultaneously. As of Spring 3.2, a test class may optionally declare neither XML configuration files nor annotated classes and instead declare only application context initializers. In such cases, an attempt will still be made to detect defaults, but their absence will not result in an exception. See also: SmartContextLoader. Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public AbstractDelegatingSmartContextLoader() protected abstract SmartContextLoader getXmlLoader() Gets the delegate SmartContextLoader that supports XML configuration files. protected abstract SmartContextLoader getAnnotationConfigLoader() Gets the delegate SmartContextLoader that supports annotated classes. public void processContextConfiguration(ContextConfigurationAttributes configAttributes) Delegates to candidate SmartContextLoaders to process the supplied ContextConfigurationAttributes. Delegation is based on explicit knowledge of the implementations of the default loaders for XML configuration files and annotated classes. Specifically, the delegation algorithm is as follows: if the resource locations or annotated classes in the supplied ContextConfigurationAttributes are not empty, the appropriate candidate loader will be allowed to process the configuration as is, without any checks for detection of defaults; otherwise, if a candidate loader detects defaults for the configuration, an info message will be logged. processContextConfiguration in interface SmartContextLoader configAttributes - the context configuration attributes to process IllegalArgumentException - if the supplied configuration attributes are null, or if the supplied configuration attributes include both resource locations and annotated classes IllegalStateException - if the XML-based loader detects default configuration classes; if the annotation-based loader detects default resource locations; if neither candidate loader detects defaults for the supplied context configuration; or if both candidate loaders detect defaults for the supplied context configuration public ApplicationContext loadContext(MergedContextConfiguration mergedConfig) throws Exception Delegates to an appropriate candidate SmartContextLoader to load an ApplicationContext. Delegation is based on explicit knowledge of the implementations of the default loaders for XML configuration files and annotated classes.
Specifically, the delegation algorithm is as follows: if the resource locations in the supplied MergedContextConfiguration are not empty and the annotated classes are empty, the XML-based loader will load the ApplicationContext; if the annotated classes in the supplied MergedContextConfiguration are not empty and the resource locations are empty, the annotation-based loader will load the ApplicationContext. loadContext in interface SmartContextLoader mergedConfig - the merged context configuration to use to load the application context IllegalArgumentException - if the supplied merged configuration is null IllegalStateException - if neither candidate loader is capable of loading an ApplicationContext from the supplied merged context configuration Exception - if context loading failed See also: SmartContextLoader.processContextConfiguration(ContextConfigurationAttributes), #registerAnnotationConfigProcessors(org.springframework.beans.factory.support.BeanDefinitionRegistry), MergedContextConfiguration.getActiveProfiles(), ConfigurableApplicationContext.getEnvironment() public final String[] processLocations(Class<?> clazz, String... locations) AbstractDelegatingSmartContextLoader does not support the ContextLoader.processLocations(Class, String...) method. Call processContextConfiguration(ContextConfigurationAttributes) instead. processLocations in interface ContextLoader clazz - the class with which the locations are associated: used to determine how to process the supplied locations locations - the unmodified locations to use for loading the application context (can be null or empty) UnsupportedOperationException public final ApplicationContext loadContext(String... locations) throws Exception AbstractDelegatingSmartContextLoader does not support the ContextLoader.loadContext(String...) method. Call loadContext(MergedContextConfiguration) instead. loadContext in interface ContextLoader locations - the resource locations to use to load the application context UnsupportedOperationException Exception - if context loading failed
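As an illustration of the delegation described above, here is a minimal, hedged sketch of the two configuration styles a concrete delegating loader chooses between; the class and resource names (XmlConfiguredTests, AnnotatedConfiguredTests, AppConfig, app-context.xml) are hypothetical and not taken from this Javadoc:
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
// XML style: the XML-based candidate loader processes and loads the context.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:app-context.xml")
public class XmlConfiguredTests {
    @Test
    public void contextLoads() { /* the ApplicationContext is loaded from XML */ }
}
// In a separate source file. Annotated-class style: the annotation-based candidate
// loader is used instead (AppConfig is an assumed @Configuration class).
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class)
public class AnnotatedConfiguredTests {
    @Test
    public void contextLoads() { /* the ApplicationContext is loaded from AppConfig */ }
}
// Declaring both locations and classes in one @ContextConfiguration is rejected by
// processContextConfiguration with an IllegalArgumentException, as documented above.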
http://docs.spring.io/spring-framework/docs/3.2.0.RELEASE/javadoc-api/org/springframework/test/context/support/AbstractDelegatingSmartContextLoader.html
2013-12-05T01:22:34
CC-MAIN-2013-48
1386163037952
[]
docs.spring.io
A ThreadPoolTaskExecutor can also back a thread pool that may need to be shared by both Quartz and non-Quartz components. Spring provides annotation support for both task scheduling and asynchronous method execution. For example: public class SampleBeanInitializer { private final SampleBean bean; public SampleBeanInitializer(SampleBean bean) { this.bean = bean; } @PostConstruct public void initialize() { bean.doSomething(); } } Beginning with Spring 3.0, there is an XML namespace for configuring TaskExecutor and TaskScheduler instances. It also provides a convenient way to configure tasks to be scheduled with a trigger.
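As a small illustration of the annotation support mentioned above, the following sketch schedules a method at a fixed rate. It assumes annotation-driven scheduling has been enabled (for example via the task XML namespace or, in later versions, @EnableScheduling), and the class name and message are illustrative only:
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@Component
public class PeriodicReporter {
    // Invoked by the configured TaskScheduler every 5000 milliseconds.
    @Scheduled(fixedRate = 5000)
    public void report() {
        System.out.println("periodic task executed");
    }
}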
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html
2013-12-05T01:25:14
CC-MAIN-2013-48
1386163037952
[]
docs.spring.io
Getting to know your smartphone Find out about apps and indicators, and what the keys do on your BlackBerry® smartphone. - Important keys - About the slider and keyboard - Select commands using toolbars and pop-up menus - Tips: Doing things quickly - Applications - Status indicators - Tips: Managing indicators - Phone - Contacts - Browser - Camera - Messages - Cut, copy, and paste - Type text - Feature availability
http://docs.blackberry.com/en/smartphone_users/deliverables/18577/Getting_to_know_your_device_60_1295786_11.jsp
2013-12-05T01:36:25
CC-MAIN-2013-48
1386163037952
[]
docs.blackberry.com
Zoom in to or out from a webpage On a webpage, press the Menu key > Zoom. After you finish: To turn off zoom mode, press the Escape key.
http://docs.blackberry.com/en/smartphone_users/deliverables/23895/Zoom_in_to_a_web_page_60_1065587_11.jsp
2013-12-05T01:22:53
CC-MAIN-2013-48
1386163037952
[['menu_key_bb_bullets_39752_11.jpg', 'Menu'], ['escape_key_arrow_curve_up_left_39748_11.jpg', 'Escape']]
docs.blackberry.com
public interface Cache Interface that defines common cache operations (it is recommended that implementations allow storage of null values, for example to cache methods that return null). String getName() Returns the cache name. Object getNativeCache() Returns the underlying native cache provider. Cache.ValueWrapper get(Object key) Returns the value to which this cache maps the specified key, or null if the cache contains no mapping for this key. key - key whose associated value is to be returned.
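A brief usage sketch of this interface (the cache name "books" and the surrounding lookup class are illustrative; a CacheManager is assumed to be available for obtaining the named Cache):
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
public class BookLookup {
    private final CacheManager cacheManager;
    public BookLookup(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }
    public Object findCached(Object isbn) {
        Cache cache = cacheManager.getCache("books");    // obtain a named cache
        Cache.ValueWrapper wrapper = cache.get(isbn);    // null if the cache contains no mapping for this key
        return (wrapper != null) ? wrapper.get() : null; // unwrap the stored value (which may itself be null)
    }
}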
http://docs.spring.io/spring/docs/3.2.1.RELEASE/javadoc-api/org/springframework/cache/Cache.html
2013-12-05T01:23:26
CC-MAIN-2013-48
1386163037952
[]
docs.spring.io
As part of the Jenkow Jenkins Plugin which leverages the Activiti workflow engine for Jenkins job orchestration, I have added a new task type "Jenkins task" to the Activiti Modeler. Because, unlike the Designer, the Modeler doesn't (yet) have a mechanism for incrementally adding new task types, I had to modify the Activiti code base. This Wiki page summarizes the changes I had to make. The enhanced Activiti code base is available on GitHub. My enhancement is done on top of Activiti 5.11. My new task type is mapped onto the existing ServiceTask, using a custom task delegate class at run time. It's good to take an existing task type as the starting point. User task or script task is a good choice. The artwork The new artwork went into modules/activiti-webapp-explorer2/src/main/webapp/editor/stencilsets/bpmn2.0/icons/jenkins. - Created an icon PNG file - Created a stencil SVG file - I used Inkscape with "Save as Plain SVG" - Manually copied the relevant <g> element (SVG group) defining the stencil logo into the desired stencil SVG. The SVG saved with Inkscape didn't fully work. - Initially my logo <g> element had a transform attribute which confused the Modeler's calculation of the stencil's bounding box. Ungrouping & regrouping made Inkscape remove the transform attribute and place the recalculated path coordinates into the <g> element. That solved the bounding box problem. - I had to add an oryx:anchors="top left" attribute to each path element, otherwise the logo would not remain intact when the stencil got resized. New stencil In modules/activiti-webapp-explorer2/src/main/resources/stencilset.json: - added a task base - added the actual task I don't fully understand the semantics of all existing propertyPackages. Java Code Declared a new stencil name constant in modules/activiti-json-converter/src/main/java/org/activiti/editor/constants/StencilConstants.java: The heavy lifting is done by a new class modules/activiti-json-converter/src/main/java/org/activiti/editor/language/json/converter/MyNewTaskJsonConverter.java. It is added to the existing converters in modules/activiti-json-converter/src/main/java/org/activiti/editor/language/json/converter/BpmnJsonConverter.java by calling MyNewTaskJsonConverter.fillTypes() in the static initializer. It's important to also add the stencil name into the right DI_... shape map, otherwise edges to the new stencil miss a waypoint. Here, the method convertJsonToElement() is used to populate a ServiceTask model object which later gets converted into XML. Added another element in the static initializer of the diagram generator modules/activiti-engine/src/main/java/org/activiti/engine/impl/bpmn/diagram/ProcessDiagramGenerator.java:
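To make the converter description above more concrete, here is a rough, hedged sketch of what such a class can look like. The class name, stencil constant, and delegate class are hypothetical, imports are omitted, and the method signatures follow the general pattern of Activiti 5.x's built-in JSON converters rather than the actual Jenkow code; other required overrides (getStencilId, convertElementToJson) are left out for brevity:
public class MyNewTaskJsonConverter extends BaseBpmnJsonConverter {
    // Called from BpmnJsonConverter's static initializer to register this converter
    // under the new stencil name declared in StencilConstants.
    public static void fillTypes(Map<String, Class<? extends BaseBpmnJsonConverter>> convertersToBpmnMap,
            Map<Class<? extends BaseElement>, Class<? extends BaseBpmnJsonConverter>> convertersToJsonMap) {
        convertersToBpmnMap.put(StencilConstants.STENCIL_TASK_JENKINS, MyNewTaskJsonConverter.class);
    }
    @Override
    protected FlowElement convertJsonToElement(JsonNode elementNode, JsonNode modelNode,
            Map<String, JsonNode> shapeMap) {
        // Map the new stencil onto a plain ServiceTask that delegates to a custom class at run time.
        ServiceTask task = new ServiceTask();
        task.setImplementationType(ImplementationType.IMPLEMENTATION_TYPE_CLASS);
        task.setImplementation("com.example.JenkinsTaskDelegate"); // hypothetical delegate class
        return task;
    }
}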
http://docs.codehaus.org/pages/viewpage.action?pageId=230398663
2013-12-05T01:23:49
CC-MAIN-2013-48
1386163037952
[]
docs.codehaus.org
Schema for ebMS-3 XML Infoset This schema defines the XML Infoset of ebMS-3 headers. These headers are placed within the SOAP Header element of either a SOAP 1.1 or SOAP 1.2 message. The eb:Messaging element is the top element of ebMS-3 headers, and it is placed within the SOAP Header element (either SOAP 1.1 or SOAP 1.2). The eb:Messaging element may contain several instances of eb:SignalMessage and eb:UserMessage elements. However, in the core part of the ebMS-3 specification, only one instance of either eb:UserMessage or eb:SignalMessage must be present. The second part of the ebMS-3 specification may need to include multiple instances of either eb:SignalMessage, eb:UserMessage or both. Therefore, this schema allows multiple instances of eb:SignalMessage and eb:UserMessage elements for part 2 of the ebMS-3 specification. Note that the eb:Messaging element cannot be empty (at least one eb:SignalMessage or eb:UserMessage element must be present). In the core part of the ebMS-3 specification, an eb:SignalMessage is allowed to contain eb:MessageInfo, at most one Receipt Signal, at most one eb:PullRequest element, and/or a series of eb:Error elements. In part 2 of the ebMS-3 specification, new signals may be introduced, and for this reason an extensibility point is added here to the eb:SignalMessage element to allow it to contain any elements. If SOAP 1.1 is being used, this attribute is required. If SOAP 1.2 is being used, this attribute is required.
http://docs.oasis-open.org/ebxml-msg/ebms/v3.0/core/cs02/ebms-header-3_0-200704.xsd
2009-07-04T18:38:41
crawl-002
crawl-002-011
[]
docs.oasis-open.org
Section 1804(a) requires that a customer who intends to seek intervenor compensation shall file a NOI to claim compensation within 30 days after the prehearing conference is held, or, when a prehearing conference is not held, according to the procedure specified by the Commission. In this matter, no prehearing conference was held before the Commission acted upon Greenlining's petition by issuing the rulemaking. The Commission held a prehearing conference in the rulemaking on June 23, 2003. Greenlining did not file a NOI in either the petition or the rulemaking. Generally, the failure of an intervenor to file an NOI would be fatal to its claim for intervenor compensation. Although we occasionally excused lateness or even omission of an NOI filing, we have also placed great importance on the NOI as a tool to ensure intervenor accountability. (See D.98-04-059, 79 CPUC2d 628.) Since the issuance of D.98-04-059, we have held intervenors to the statutory NOI filing standard unless exceptional circumstances are present and have denied compensation if the intervenor fails to comply with the statute. In this case, however, Greenlining filed the petition requesting the Commission to amend a regulation by instituting a rulemaking. If Greenlining had not filed the petition, there would not have been the rulemaking. Because § 1708.5 is a relatively new statute, taking effect on January 1, 2000, intervenors have not had much experience in determining the appropriate time to file an NOI in § 1708.5 proceedings. Therefore, solely for this proceeding, we determine that Greenlining's petition filed pursuant to § 1708.5 serves in substance as its NOI with respect to P.02-10-035 and R.03-02-035. This holding is limited to this case only and shall not serve as precedent for future intervenors filing a § 1708.5 petition. In the future, when the Commission acts on a § 1708.5 petition, we will provide direction to potential claimants regarding the timeframe for filing NOIs. However, in the absence of any direction, if a rulemaking issues in response to a petition, and a prehearing conference is then held, potential intervenors must file the NOIs no later than 30 days after the prehearing conference. (See § 1804(a).)
http://docs.cpuc.ca.gov/published/Comment_decision/38264-02.htm
2009-07-04T18:38:37
crawl-002
crawl-002-011
[]
docs.cpuc.ca.gov
For many years, telephone companies charged higher rates for some services in order to subsidize the provision of affordable local exchange service for residential customers located in high-cost areas of the State. This system of internal cross subsidies was replaced by the CHCF-B in Decision (D.) 96-10-066. The CHCF-B currently provides Roseville and other large local exchange carriers (LECs) with more than $530 million per year to subsidize residential local exchange service in high-cost areas. Funding for the CHCF-B comes from a surcharge paid by the end users of intrastate telecommunications services. To ensure that the large LECs did not reap a windfall from the CHCF-B, D.96-10-066 directed the large LECs to reduce their rates as follows: Concurrent with the effective date of the [CHCF-B], the... LECs...shall reduce all of their rates, except for residential basic service and existing contracts, by an equal percentage. This overall reduction shall equal the anticipated monthly draw... from the [CHCF-B]...We shall [also] afford the . . . LECs the opportunity to decide what rates or price caps should be reduced...to permanently offset the explicit subsidy support. Until that is accomplished, a monthly surcredit shall be used to offset any anticipated monthly draw. The LECs shall be permitted to file applications describing what rates or price caps they seek to permanently [reduce] as a result of receiving monies from the CHCF-B. (D.96-10-066, mimeo., p. 209.) Roseville currently draws about $400,000 per year from the CHCF-B, and has a surcredit in place to offset its draws. In D.98-09-039, the Commission gave Roseville until September 3, 2000, to file an application to propose targeted rate reductions to offset its draws from the CHCF-B. On July 26, 2000, the Commission's Executive Director granted Roseville a one-year extension to file the application authorized by D.98-09-039. On August 24, 2001, the Executive Director granted Roseville until the date when I.01-04-026 is closed to file the application authorized by D.98-09-039. In I.01-04-026, the Commission is investigating whether to increase Roseville's rates to offset the loss of $11.5 million in annual revenues for Extended Area Service (EAS) that Roseville previously received from Pacific Bell. To this end, the Commission ordered Roseville to submit a rate-design proposal in I.01-04-026 to recover $11.5 million.1 Roseville submitted its rate-design proposal on September 17, 2001. On November 30, 2001, the assigned Commissioner in I.01-04-026 issued a scoping memo. Page 2 of the memo provided instructions regarding the content of the rate-design proposal that Roseville is required to submit in that proceeding. On December 3, 2001, Roseville filed a petition to modify D.98-09-039 (Petition). In its Petition, Roseville asks the Commission to modify D.98-09-039 by adding the following sentence to the end of O.P. 17: "In addition, parties are authorized to present proposals to adjust Roseville's rates and price ceilings by an amount equal to its CHCF-B draws in I.01-04-026." Roseville states that it would be far more efficient for the Commission and the parties to consider revisions to Roseville's rates in one proceeding rather than two. Roseville represents in its Petition that the rate-design proposal it submitted in I.01-04-026 on September 17, 2001, already reflects its draws from the CHCF-B. Roseville also represents that no party in I.01-04-026 has objected to the inclusion of Roseville's CHCF-B draws in its rate-design proposal.
In addition, Roseville notes that the assigned Commissioner's scoping memo issued in I.01-04-026 on November 30, 2001, determined that Roseville's draws from the CHCF-B could be included in its rate-design proposal, but only if Roseville filed a petition to modify D.98-09-039 and the Commission approved the petition. The Office of Ratepayer Advocates (ORA) does not object to Roseville's Petition, but states that Roseville's draws from the CHCF-B do not appear to be included in the rate-design proposal that Roseville filed in I.01-04-026. Therefore, if the Petition is granted, ORA states that it might be necessary for Roseville to update its rate-design proposal. 1 Order Instituting Investigation 01-04-026, mimeo., p. 3.
http://docs.cpuc.ca.gov/published/Agenda_decision/13050-01.htm
2009-07-04T18:40:20
crawl-002
crawl-002-011
[]
docs.cpuc.ca.gov
Decision 00-09-043 September 7, 2000 ORDER DENYING REHEARING OF DECISION 00-05-023 AND MODIFYING THE DECISION This matter concerns the merger of AT&T Corporation ("AT&T") and MediaOne Group, Inc. ("MediaOne"). The application came before the Commission because the national merger includes AT&T's acquisition of MediaOne Telecommunications of California, Inc. ("MediaOne Telecom"), which is a cable company authorized by the Commission to provide local exchange telephone service in Los Angeles. The merger also includes AT&T's acquisition of MediaOne's minority interest in Time Warner Telecom, Inc. ("TWT"). A subsidiary of TWT, TWT-California, is a competitive local exchange company ("CLEC") serving customers in the San Diego area. In D.00-05-023, the Commission approved the transfer of MediaOne Telecom pursuant to a public interest evaluation under Sections 852 and 854(a) of the California Public Utilities Code. 1 However, the Commission exempted AT&T's acquisition of the indirect, minority interest in TWT-California from the merger approval requirement of Section 852. GTE Internetworking Inc. and GTE Media Ventures Inc. (together, "GTE," or "applicants") have jointly filed an application for rehearing of D.00-05-023. They contend that, contrary to precedent, the Commission did not "consider" the question of anticompetitive effects of the merger "on the nascent market for residential broadband Internet services." (GTE Application for Rehearing, at 1.) On this basis, the applicants claim an abuse of discretion by the Commission. (GTE Application for Rehearing, at 2.) AT&T filed a response to the rehearing application arguing that GTE has mischaracterized prior Commission decisions. After carefully reviewing all issues raised, we find that the applicants have not demonstrated that the Commission violated its constitutional and statutory mandates, or failed to act pursuant to those mandates. Nor have they shown that the Commission's decision is contrary to established precedent. 2 Finding no legal error, we deny GTE's application for rehearing of D.00-05-023. However, subsequent to the issuance of D.00-05-023, the Federal Communications Commission ("FCC") issued an order approving the AT&T/MediaOne merger with comments regarding certain actions the merging companies are to take to preclude anticompetitive behavior in their provision of cable broadband Internet services. (In the Matter of Applications for Consent to the Transfer of Control of Licenses and Section 214 Authorizations from MediaOne Group, Inc., to AT&T Corp., ("FCC Memorandum and Order"), FCC 00-202, CS Docket No. 99-251, June 5, 2000.) Also in June 2000, the United States Court of Appeals for the Ninth Circuit issued an opinion which holds that access to the Internet via cable broadband is a telecommunications service and, therefore, is not subject to the Cable Communications Policy Act of 1984 [47 U.S.C. § 521 et seq.] (AT&T Corp. et al v. City of Portland ("City of Portland") 2000 U.S. App. Lexis 14383 (9th Cir. June 22, 2000).) In light of the FCC and Ninth Circuit decisions, and other federal court opinions recently issued, it is appropriate that we modify and supplement our discussion on jurisdiction and Internet access issues. In doing so, however, we do not disturb our approval of AT&T's acquisition of MediaOne Telecom or our conclusion that GTE's rehearing request is without merit, as we shall explain. 1 All statutory references shall be to the California Public Utilities Code unless otherwise noted.
2 We read GTE's application as being directed solely at the acquisition of MediaOne Telecom of California. Applicants have not explained how AT&T's acquisition of a passive interest in TWT-California fits into their argument regarding the alleged need to consider cable broadband competition.
http://docs.cpuc.ca.gov/published/Final_decision/2132.htm
2009-07-04T18:40:41
crawl-002
crawl-002-011
[]
docs.cpuc.ca.gov
California Public Utilities Commission 505 Van Ness Ave., San Francisco _________________________________________________________________________________ FOR IMMEDIATE RELEASE PRESS RELEASE Media Contact: Terrie Prosper, 415.703.1366, [email protected] Docket #: A.06-03-005 CPUC GIVES CUSTOMERS GREATER CONTROL OVER THEIR ELECTRICITY BILLS WITH DYNAMIC PRICING RATES FOR PG&E SAN FRANCISCO, July 31, 2008 - The California Public Utilities Commission (CPUC) today continued its commitment to empower energy consumers by setting a timetable for Pacific Gas and Electric Company (PG&E) to propose new "dynamic pricing" rate structures for all of its customers. Dynamic pricing will enable PG&E customers to take advantage of the new advanced meters that PG&E is installing throughout its service territory. With the new advanced meters, customers will no longer have to wait until the end of the month to see how much energy they used. The new meters will tell customers how much energy they are using from day-to-day and hour-to-hour. Dynamic pricing will give consumers a tool to take advantage of the new meters and reduce their electricity bills. Dynamic pricing refers to electric rates that reflect actual wholesale market conditions. One example is critical peak pricing (CPP), which is a rate that includes a short-term rate increase during critical conditions. Another example is real time pricing - a rate linked to actual hourly wholesale energy prices. Dynamic pricing can enable customers to better manage their electricity usage and reduce their bills. Dynamic pricing does this by charging customers more when power is in short supply, such as on a hot summer afternoon, and less when supplies are plentiful, such as at night and in the winter. Thereby, customers that can cut their electricity use during peak periods will be rewarded with a reduction in their bills. Dynamic pricing also connects rates with California's greenhouse gas policies. When wholesale energy prices are high, the most inefficient generation sources with high greenhouse gas emissions are operating. By linking retail rates to wholesale market conditions, dynamic pricing encourages customers to avoid consuming polluting power. Today's decision adopts a timetable that requires PG&E to propose dynamic pricing rates for all commercial, industrial, and agricultural electric customers by 2011. The decision also requires PG&E to introduce new dynamic pricing options for residential customers in 2011. "California's Energy Action Plan identifies demand response, along with energy efficiency, as the state's preferred way to meet growing energy needs," said CPUC President Michael R. Peevey. "These new rates will give consumers a new tool to control their energy bills and reduce their impact on the environment." "Every PG&E customer will be getting a new, smarter meter during the next four years. Dynamic pricing will give customers a new tool to use the real-time usage information from the meters to manage their energy use and cut their bills," said Commissioner Rachelle Chong. For more information on the CPUC, please visit. ###
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/85953.htm
2009-07-04T18:41:37
crawl-002
crawl-002-011
[]
docs.cpuc.ca.gov
OASIS ebXML Messaging Services 3.0 Conformance Profiles Committee Draft 02, 25 July 2007 Specification URIs: This Version: Previous Version: Table of Contents 1.2 Normative References 1.3 Non-normative References 2 The Gateway Conformance Profile 2.2 Conformance Profile: Gateway RM V3 2.3 Conformance Profile: Gateway RX V3 2.3.2 WS-I Conformance Requirements 2.3.3 Processing Mode Parameters 2.4 Conformance Profile: Gateway RM V2/3 2.4.2 WS-I Conformance Requirements 2.4.3 Processing Mode Parameters 2.5 Conformance Profile: Gateway RX V2/3 2.5.2 WS-I Conformance Requirements 2.5.3 Processing Mode Parameters 3 Examples of Alternate Conformance Profiles 3.2 Conformance Profile: Light Handler (LH-RM CP) 3.2.2 WS-I Conformance Requirements 3.3 Conformance Profile: Activity Monitor (AM-CP) 3.3.2 WS-I Conformance Requirements Appendix A Conformance Profile Template and Terminology Appendix B Acknowledgments Appendix C Revision History. The document is non-normative in the sense that conformance profiles only refer to selected options and features that are already described in a normative way in the referenced specifications, e.g. the ebXML Message Service Specification Version 2.0, April 1, 2002, and [ebMS3]. [QAFrameW] Karl Dubost, et al., eds., QA Framework: Specification Guidelines, 2005. The Gateway conformance profile (or G-CP) addresses an already pervasive class of deployments. Gateway RM V3 is defined as follows, using the table template and terminology provided in Appendix F (“Conformance”) of the core ebXML Messaging Services V3.0 specification [ebMS3]. The Gateway RX V3 is identified by a URI. To ensure interoperability across different SOAP stacks, MIME and HTTP implementations, this conformance profile requires compliance with the following WS-I profiles: Basic Security Profile (BSP) 1.1 [WSIBSP11]; Attachment Profile (AP) 1.0 [WSIAP10], with regard to the use of MIME and SwA. To facilitate the definition and comparison of conformance profiles, it is recommended to use the following template for describing a conformance profile: Acknowledgments: Hamid Ben Malek, Fujitsu Software <[email protected]> Rajashekar Kailar, Centers for Disease Control and Prevention <[email protected]> Dale Moberg, Axway Inc. <[email protected]> Sacha Schlegel, Individual <[email protected]> Pete Wenzel, Sun Microsystems <[email protected]>
http://docs.oasis-open.org/ebxml-msg/ebms/v3.0/prof/cd02/ebms-3.0-confprofiles-cd-02.html
2009-07-04T18:43:35
crawl-002
crawl-002-011
[]
docs.oasis-open.org
dbForge Studio allows creating two types of relations: physically existing foreign key relations and virtual relations. They represent physically existing foreign key relations between tables. On the diagram, these relations are displayed as connecting lines. Their appearance depends on the selected notation. Options available for managing physical relations are as follows: Click a relation and press the DELETE key, then click Yes in the dialog box that appears. Note When you delete a relation from the diagram, the corresponding foreign key is deleted from the database. To change the relation parent table, drag the parent end of the relation to another table. You cannot change the relation child table. Note The relation cardinality is determined automatically from the child table constraints and cannot be changed by a user except for editing child table constraints. Note Relations on the diagram correspond to existing foreign keys in a database. When you create a relation on the diagram, a foreign key constraint is created in the database. If the foreign key cannot be created in the database, the relation cannot be created on the diagram. They allow you to create relations between tables if their storage engine does not support foreign keys. Virtual relations do not exist physically but are only stored on the diagram; however, they can be converted to physical foreign keys. On the diagram they are displayed as thin dotted lines connecting tables. You can move, bend, reroute virtual relations, and drag a relation’s parent end to another table the same way as you do with physical relations. You can also select entire tables to create a virtual relation between them. In this case, the Virtual Relation Properties dialog box will appear, where you should select constraints and referenced columns. Do either of these actions: Use the Virtual Relation Manager window to view, edit, remove virtual relations, generate their DDL, convert them to foreign keys, etc. To open it, click the Virtual Relation Manager icon on the toolbar or, on the View menu, point to Other Windows, and click Virtual Relation Manager. Right-click a required relation in the grid and select a required command on the menu. Each relation has a comment - a yellow rectangle with a relation foreign key name. Select the Show Relation Comment check box in the diagram options or click Show Comments on the Diagram toolbar to display comments. By default, they are displayed near the child end of relations. You can drag them to any other place, and they will move with the relation. To hide a comment for a specific relation, click it and press the DELETE key. To display it again, right-click the relation and select Show Comment on the shortcut menu.
https://docs.devart.com/studio-for-mysql/designing-databases-with-database-designer/working-with-relations.html
2021-02-24T23:06:08
CC-MAIN-2021-10
1614178349708.2
[]
docs.devart.com
Row totals are rows containing a calculated sum of row field values. Column totals are columns containing a calculated sum of column field values. Totals are calculated based on the selected summary functions and are displayed in the Data area, highlighted in gray. The default function calculates the sum of all values. You can cancel the default function or, conversely, add other functions.
https://docs.devart.com/studio-for-postgresql/data-analysis/totals.html
2021-02-24T23:19:53
CC-MAIN-2021-10
1614178349708.2
[]
docs.devart.com
You can configure the schedule at which Unified Manager Snapshot backups are created by using the Unified Manager UI. Snapshot backups are created in just a few minutes, and the Unified Manager database is locked only for a few seconds. You can add capacity to the volume using ONTAP System Manager or the ONTAP CLI so that the Unified Manager database does not run out of space. You can delete older Snapshots using ONTAP System Manager or the ONTAP CLI so that there is always room for new Snapshot backups. You configure alerts in the Alert Setup page.
https://docs.netapp.com/ocum-98/topic/com.netapp.doc.onc-um-ag/GUID-2AFD1064-B58D-4605-857A-73C3E888516B.html
2021-02-25T00:26:26
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
You can add a new virtual network to a cluster configuration to enable a multi-tenant environment connection to a cluster running NetApp Element software. When a virtual network is added, an interface for each node is created, and each interface requires a virtual network IP address. The number of IP addresses you specify when creating a new virtual network must be equal to or greater than the number of nodes in the cluster. Virtual network addresses are bulk provisioned and assigned to individual nodes automatically. You do not need to manually assign virtual network addresses to the nodes in the cluster.
https://docs.netapp.com/sfe-118/topic/com.netapp.doc.sfe-mg-vcp/GUID-651650C4-9F85-414E-8C2A-4C1A295DC2EE.html
2021-02-24T23:50:00
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
Welcome to PokemonGo-Map’s Documentation! Pokemon-Go Map gives you a live visualization map of nearby Pokémon, Pokéstops, and gyms in the form of a web app as well as a native phone application [ Official Homepage ] [ Official GitHub ] [ Discord Support ] [ GitHub Issues ] - Community Tools - Spawnpoint Scanning Scheduler - Speed Scheduler - Apache2 Reverse Proxy - Beehive - Command Line - Configuration files - Windows ENV Fix - External Access to Map - Common Questions and Answers - Detailed Gym Information - Using Multiple Accounts - Using a MySQL Server - Nginx - SSL Certificate Windows - Status Page - Supervisord on Linux - Workers
https://pgm.readthedocs.io/en/develop/
2021-02-24T23:36:56
CC-MAIN-2021-10
1614178349708.2
[['_images/cover.png', '_images/cover.png']]
pgm.readthedocs.io
Interchange 5.12 Administrator Guide AS4 Can you use AS4? You can use AS4 if your software license enables both WebServices and AS4 functionality. To check enabled license keys, select Help > License information on the toolbar in the user interface. The list of enabled license keys must include the following two keys: MessageProtocolAS4 MessageProtocolWebServices AS4 overview AS4 is a Business to Business (B2B) protocol that extends the functionality of AS2 with ebMS-based Web Services technology. AS4 is easier to implement and lower in cost to set up and operate than traditional B2B protocols. The main benefits of AS4 compared to AS2 are: Compatibility with Web Services standards Message pulling capability Built-in receipt mechanisms AS4, like AS2, only supports HTTP for Internet transport. AS4 provides two conformance profiles for ebMS 3.0: The ebHandler conformance profile – supports both sending and receiving roles, and both message pushing and message pulling for each role. The Light Client conformance profile – supports both sending and receiving roles, but only message pushing for sending and message pulling for receiving. In other words, it does not support incoming HTTP requests, and may have no IP address. AS4 supports non-repudiation of receipts, similar to the MDN used in AS2, specified as an XML schema. Non-repudiation receipts are returned using a dedicated signal message; by default, message recipients are required to return a signed receipt containing the digests necessary for non-repudiation. The receipt may also contain error handling information if there was some problem with the document exchange. AS4 supports duplicate message detection and message retry/resending scenarios for when receipts for messages are not received by the sender. Related topics Concepts: AS4 messages AS4 metadata Message Partition Channels (MPC) AS4 user authentication Large message splitting and joining Enable handling of empty SOAP Body messages Examples: Configure a one-way client pull AS4 use case: One-way push (MMD initiated) AS4 use case: One-way pull (MMD initiated) Tasks: Add an AS4 trading delivery Modify an AS4 trading delivery Add an AS4 embedded server pickup Modify an AS4 embedded server pickup Add an AS4 polling client pickup Modify an AS4 polling client pickup Manage AS4 polling queues
https://docs.axway.com/bundle/Interchange_512_AdministratorsGuide_allOS_en_HTML5/page/Content/Transports/AS4/trdng_AS4_intro.htm
2021-02-24T23:51:36
CC-MAIN-2021-10
1614178349708.2
[]
docs.axway.com
This topic describes how to import data from a Google Sheets file. Note Data Import Wizard pages can differ slightly depending on the product you are using. Decide what table to import the data to: a new table or an existing table. On the Data Import > Destination wizard tab, specify the connection, database, and table to import the data to, and click Next. Note If you selected the table in Database Explorer before opening the Data Import wizard, the wizard will open with the predefined connection parameters of the selected table. To create or edit the Oracle connection, use the corresponding options and follow the instructions. On the Data Import > Destination wizard tab, set additional importing options to customize the import, and click Next. On the Data Import > Mapping wizard tab, map the Source columns to the Target columns; the Target columns are shown at the top and the Source columns at the bottom of the wizard page. Note To cancel mapping of all the columns, click Clear Mappings on the toolbar. To restore it, click Fill Mapping. If you are importing to a new table, you can edit the Target column properties by double-clicking them on the top grid. Select the Key check box for a column with a primary key and click Next. Note You should select at least one column with a primary key, otherwise some of the import modes on the Modes wizard page will be disabled. On the Data Import > Output wizard tab, use one of the following output options to manage the data import script and click Next. Note You can save the import settings as a template for future use. Click Save on any wizard page to save the selected settings. Next time, you only need to select a template and specify the location of the Source data - all the other settings will already be set.
https://docs.devart.com/studio-for-oracle/exporting-and-importing-data/google-sheets-import.html
2021-02-25T00:00:16
CC-MAIN-2021-10
1614178349708.2
[]
docs.devart.com
The SSM Events component relays logged events from hardware drivers. Interpretation of these numbers depends on the hardware and drivers used for the server. Treat this data as a general indicator of server problems. The Last Event attribute captures the last event detected by the grid node. You can perform a custom query to generate a list of the event messages generated by the server over time. These messages can contain useful troubleshooting information and can be used to help determine the source of a problem. To view the last SSM event, select.
https://docs.netapp.com/sgws-110/topic/com.netapp.doc.sg-troubleshooting/GUID-993EEDE7-105A-4CAC-AB91-4EB17653A015.html?lang=en
2021-02-24T23:50:40
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
Add device settings to your configuration In the device settings, you define the protocol for the reader-host communication. If you use Autoread mode, you can optionally also define reader feedback (e.g. LED and beeper). Note for Ethernet developers If you want to run your application as an Ethernet server, you'll need to use a special form in which you can also configure the host (learn more). Keep factory device settings or create your own? BALTECH readers are shipped with factory device settings defining default protocols and feedback depending on the reader type. Do the factory device settings fully meet your requirements? Then you don't need to create your own device settings. If, however, you want to make any changes, you need to create your own device settings that fully overwrite the factory device settings. Do you have multiple reader types in your project? You may have multiple reader types with different host protocols and/or different user feedback in the same project, e.g. ACCESS2xx readers for access control and ID-engine readers for time and attendance. If this is the case and the factory device settings do not meet your needs, you have to create individual device settings per reader type. Create a device settings > Settings > Device Settings. Only 1 device settings component per configuration file If you already have a device settings component, you need to remove it before you can create a new one. Fill out device settings Specify host protocol and parameters - In the Protocol drop-down, select the protocol that the host system uses to communicate with the reader. Find more details in the sections below. - In the Autoread drop-down, keep the option EnabledIfDefinedRules selected. If applicable for your protocol, specify the Parameters as described in the sections below. Please refer to your host system documentation or ask your host system provider for this information. Ccid - Protocol description: PC/SC (based on USB/CCID) - Parameters: none BrpTcp - Protocol description: Ethernet/TCP; host system is implemented as a client. - Parameters: none BrpHid - Protocol description: USB/HID - Parameters: none BrpSerial - Protocol description: RS-232/virtual COM port - Parameters: - Baud rate: Specify the number of bits transferred per second - Parity: Specify if an even, odd, or no parity bit used RawSerial - Protocol description: RS-232/virtual COM port The communication is unidirectional, i.e. the reader transmits a number that can be processed by any software supporting serial communication, e.g. a terminal software. Parameters: Serial Baudrate: Specify the number of bits transferred per second Note for developers: Specify the same baud rate as in your application. Parity: Specify if an even, odd, or no parity bit used - Prefix and postfix: Specify characters to be added to the output before transmission to the host system KeyboardEmulation - Protocol description: Reader output emulates keystrokes - Parameters: - Prefix and postfix: Specify characters to be added to the output before transmission to the host system. OSDP To enable OSDP on ACCESS2xx readers, you don't need device settings. Instead, you can set a bus address on each reader using BALTECH AdrCard. You only need device settings if you want to change the default parameters or set a fixed bus address on all readers. - Parameters: - Baud rate: Specify the number of bits transferred per second - Spec compliance: Specify which version of OSDP you want to use. For unencrypted communication, select V1. 
To enable encrypted communication, select V2. - Install mode (OSDP v2 only): When enabled, the host uses the default encryption key to initially authenticate with the reader. This is needed so the host can write the project-specific encryption key to the reader. Once this is done, the host will disable Install mode, so the default encryption key can no longer be used. - Secure mode (OSDP v2 only): When enabled, the communication will be encrypted whenever this is possible. Only certain non-critical commands will still be transmitted in plain text. - Bus address: To set an individual bus address on each reader, select Set with BALTECH AdrCard. You can later use BALTECH AdrCard to set the address when the readers are installed. If your host only supports 1 reader per bus, select Set fixed address and specify it in the field below. - Prefix and postfix: Specify characters to be added to the output before transmission to the host system. Wiegand - Parameters: - Length in Bit: Length including parity bit - Prefix and postfix: Specify characters to be added to the output before transmission to the host system. Define feedback to card holder Only possible in Autoread mode Defining feedback in the device settings is optional, but only possible in Autoread mode. If you develop your own application based on BRP, you can alternatively define feedback with the UI command groupcall_made, no matter if you use Autoread, VHL, or low-level commands. - Enable the LED / Beeper settings checkbox. - Click at the bottom of the input box. - In the Events drop-down, select an event in which you want the reader to emit a signal, e.g. when powering up or when accepting a card. - In the Beeper and LED sections, define the signal to be emitted in the selected event. Now you're done with the device settings and can continue in your process. Unlike project settings, you cannot test device settings in ID-engine Explorer, but only in your test application or test environment.
https://docs.baltech.de/project-setup/add-device-settings.html
2021-02-24T22:40:52
CC-MAIN-2021-10
1614178349708.2
[array(['../img/scr-confed-config-led-beeper-settings.png#scr', 'Screenshot: Dialog to define custom LED and beeper signals for certain events in BALTECH ConfigEditor'], dtype=object) ]
docs.baltech.de
About This Book The InterSystems SQL Gateway provides access from InterSystems IRIS® data platform to external databases via JDBC and ODBC. You can use various wizards to create links to tables, views, or stored procedures in external sources, allowing you to access the data in the same way you access any InterSystems IRIS object: Access data stored in third-party relational databases within InterSystems IRIS applications using objects and/or SQL queries. Store persistent InterSystems IRIS objects in external relational databases. Create class methods that perform the same actions as corresponding external stored procedures. This book covers the following topics: Using the SQL Gateway — gives an overview of the Gateway and describes how to link to external sources. Connecting with the JDBC Driver — describes how to create a JDBC logical connection definition for the SQL Gateway. Connecting with the ODBC Driver — describes how to create an ODBC logical connection definition for the SQL Gateway. Using the ODBC SQL Gateway Programmatically — describes how to use the %SQLGatewayConnection class to call ODBC functions from ObjectScript. The following documents contain related material: Using InterSystems SQL — describes how to use InterSystems SQL, which provides standard relational access to data stored in an InterSystems IRIS database. Using Java with InterSystems Software — provides an overview of all InterSystems Java technologies enabled by the InterSystems JDBC driver, and describes how to use the driver to access data sources via SQL. Using ODBC with InterSystems Software — describes how to connect to InterSystems IRIS from an external application via InterSystems ODBC, and how to access external ODBC data sources from InterSystems IRIS.
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=BSQG_PREFACE
2021-02-24T23:43:18
CC-MAIN-2021-10
1614178349708.2
[]
docs.intersystems.com
IF, IFN Synopsis IF condition command IF condition label IFN condition command IFN condition label where condition: x {= | # | < | >} y x {= | #} (patcode) [#]E [#]S[n] Arguments Description The IF PROC command applies a boolean test on a condition statement, and, if the condition is true, either executes the specified command, or goes to the specified label location. A condition is true if any one of the value comparisons are true. IF performs a string condition comparison. IFN performs a numeric condition comparison. A condition can use the = (equal to) or # (not equal to) operator. The following conditions are supported: x {= | #} y An equality condition compares a value to a value. A value can be a literal, an A command, or a reference to a buffer or select list. When used with IF an A reference cannot contain a char delimiter character. If x is a reference, only the first value found in the buffer or select list is compared to y. If y is a reference, all of the values found in the buffer or select list are compared to x. x {= | #} (patcode) A pattern match condition compares a value to a pattern match code. A pattern match code is enclosed in parentheses; for example, (3A). A=alphabetic characters; N=numbers. If x is a reference, only the first value found in the buffer or select list is matched to the pattern code. [#]E An error condition tests whether an error code exists (E) or does not exist (#E). Because E contains the error code value, you can also perform an IF test on the value of the error code: E < 1. Error codes are integers beginning with 260, through 277. [#]Sn A select list condition tests whether the specified select list is active. For example, S3 determines if select list 3 is active; #S3 determines if select list 3 is not active. Compound IF Expressions You can use the ] character to create compound IF expressions. In a compound expression, the right side of the condition consists of two (or more) match conditions, each of which has its corresponding command or label. In the following example, the value of x is first compared to all of the values in %3; if any of these match, command1 is executed. If x does not match %3, x is then matched with all of the values in %4; if any of these match, command2 is executed. IF x = %3]%4 command1]command2 In the following example, the value of x is first compared to the literal Foo; if this is a match, a goto operation is performed to label 100. If x does not match Foo, x is then matched with Bar; if this is a match, a goto operation is performed to label 200. IF x = Foo]Bar 100]200
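To make the syntax concrete, here is a small, hedged fragment combining the condition forms documented above (pattern match, select list test, and error test). The buffer reference, labels, and comment lines are illustrative only and assume the usual PROC conventions; adapt them to your own program.

C Branch to label 100 if the first input-buffer parameter is three numeric characters
IF A1 = (3N) 100
C Branch to label 200 if select list 2 is not active
IF #S2 200
C Branch to label 300 if an error code exists
IF E 300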
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RVPC_CIF
2021-02-24T23:28:34
CC-MAIN-2021-10
1614178349708.2
[]
docs.intersystems.com
Glossary¶ The following terminology is commonly used within the Jina framework. Jina is an active development, meaning these terms may be updated and refined regularly as new and improved features are added. - Chunk¶ Chunks are semantic units from a larger parent Document. A Chunk is a Document itself. Subdividing parent Documents into Chunks is performed by the Segmenter class of Executors. Examples of individual units would be sentences from large documents or pixel patches from an image. For further information see the Understand Jina Recursive Document Representation guide. - Client¶ A Python client that connects to Jina gateway. - Container¶ A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Jina Peas and Pods can be deployed within containers to avail of these features. - Crafter¶ Crafters are a class of Executors. They transform a Document. This includes tasks such as resizing images or reducing all text in a document to lowercase. - Document¶ Document is the fundamental data type within Jina. Anything you want to search within Jina is considered a Document. This could include images, sounds clips or text documents. - Driver¶ Drivers wrap an Executor. Drivers interpret network traffic into a format the Executor can understand. All Executors must have at least one driver. A Driver is further wrapped by a Pea. - Embedding space¶ Embedding space is the vector space in which the data is embedded after encoding. Depending on how the space is created, semantically similar items are closer. Position (distance and direction) in the vector space potentially encodes semantics. [reference_1] - Encoder¶ Encoders are a class of Executors. Their purpose is to generate meaningful vector representations from high dimensional data. This is achieved by passing the input to a pretrained model which returns a fixed length vector. - Evaluator¶ Evaluators are a class of Executors. Evaluators provide advanced evaluation metrics on the performance of a search system. Therefore, they compare a Document against a ground truth Document. Evaluators provide several kinds of metrics: Distance metrics, such as cosine and euclidean distance. Evaluation metrics, such as precision and recall. - Executor¶ Executors represent an algorithmic class within the Jina framework. Examples include Ranker classes, Evaluator classes etc. - Flow¶ A Flow is created for each high-level task within Jina, such as querying or indexing. It manages the state and context of the Pods or Peas who work together to complete this high-level task. - gRPC¶ gRPC is a modern open source high performance RPC (add wikipedia link) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. - HeadPea¶ There are two possible reasons for a HeadPea to exist: uses_before is defined in a Pod or parallel > 1 for a given Pod. In both cases, traffic directed to a Pod arrives at the HeadPea and it distributes it afterwards to the Peas in the Pod. Its counterpart it the TailPea - Indexer¶ Indexers are a stateful class of Executors. Indexers save/retrieve vectors and key-value information to/from storage. - Indexing¶ Indexing is the process of collecting, parsing, and storing of the data to facilitate fast and accurate information retrieval. 
This includes adding, updating, deleting, and reading of Jina Documents. - JAML¶ JAML is a Jina YAML parser that supports loading, dumping and substituting variables. - Jina Box¶ Jina Box is a light-weight, highly customizable JavaScript-based front-end component. Jina Box enables devs to easily create front-end applications and GUIs for their end-users. - Jina Dashboard¶ Jina Dashboard is a low code environment to create, deploy, manage and monitor Jina flows. It is also tightly integrated with our Hub to create a seamless end-to-end experience with Jina. - Jina Hub¶ Jina Hub is a centralized repository that hosts key Jina executors and integrations contributed by the community or the Jina Dev team. The components (pods) or full flows (apps) are offered on an accessible, easy to use platform. - JinaD¶ JinaD stands for Jina Daemon. It is a daemon enabling Jina on remote and distributed machines. [reference_2] - Match¶ A result retrieved from the Flow. This is attached to the Document in the response, in the matches attribute. - Neural Search¶ Neural Search is Semantic search created using deep learning neural networks to create vector embeddings. The search itself is typically performed by measuring distances between these vector embeddings. - Optimizer¶ Optimizer is a Jina feature that runs the given flows on multiple parameter configurations in order to find the best performing parameters. In order to run them, an Evaluator needs to be defined. - Parallelization¶ Parallelization means data is processed simultaneously on several machines (or virtual machines). Data could be split in equal parts across those machines, or it could be duplicated. - Pea¶ A Pea wraps an Executor and driver and lets it exchange data with other Peas. This can occur over a remote network or locally within the same system. Peas can also run in standalone Docker containers, which manages all dependencies and context in one place. Peas are stored within Pods. - Pod¶ A Pod is a wrapper around a group of Peas with the same executor. Since Peas hold the same type of Executor, the Pod unifies the network interfaces of those Peas. The Pod makes them look like one single Pea from the rest of the components of a Flow. After a Flow is started, the Pod itself does not receive any traffic anymore. - Primitive data types¶ Jina offers a Pythonic interface to allow users access to and manipulation of Protobuf objects without working with Protobuf itself through its defined primitive data types. - Protobuf¶. - QueryLanguage¶ Within the context of Jina, QueryLang is a specific set of commands. QueryLang adds logical statements to search queries, such as filter, select, sort, reverse. To see the full list see here - REST¶ REST is an application programming interface (API or web API) that conforms to the constraints of REST architectural style and allows for interaction with RESTful web services. - Ranker¶ Rankers are a class of Executors. They provide ranking functionality, based on users’ business logic needs. - Runtime¶ A Jina Runtime is a procedure that blocks the main process once running. It begins when a program is opened (or executed) and ends when the program ends or is closed. - Searching¶ Searching is the process of retrieving previously indexed Documents for a given query. A query in Jina can be text, an image, a video or even more complex objects, like a pdf. - Segmenter¶ Segmenters are a class of Executors. Segmenters divide large Documents into smaller parts. For example, they divide a text document into paragraphs. 
A user can determine the granularity or method by which data should be divided. For further information see the Understand Jina Recursive Document Representation guide. - Semantic Search¶ Semantic search is search with meaning, as distinguished from lexical search, where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query [reference_3] - Sharding¶ Sharding is splitting data across multiple Peas, which are all stored inside a single Pod. - Shards¶ A section of the data stored or processed in separate Peas inside a single Pod. - TailPea¶ There are two possible reasons for a TailPea to exist: uses_after is defined in a Pod or parallel > 1 for a given Pod. In both cases, the TailPea collects the calculated results from the Peas in the Pod and forwards it to the next Pod. For example, when using sharding with indexers, the TailPea merges the retrieval results. This is achieved by adding uses_after. - Vector embedding¶ Vector embedding is a vector representation of the semantic meaning of a single document. - Workspace¶ A workspace is a directory that stores the indexed files (embeddings and documents) plus the serialization of executors if needed. A workspace is automatically created after the first indexing. - YAML¶ YAML (a recursive acronym for “YAML Ain’t Markup Language”) is a human-readable data-serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted.
https://docs.jina.ai/master/chapters/glossary.html
2021-02-24T23:20:38
CC-MAIN-2021-10
1614178349708.2
[]
docs.jina.ai
Add existing constraints to a DataSet Applies to: .NET Framework, .NET Core, .NET Standard The Fill method of the SqlDataAdapter adds table columns and rows to a DataSet, but it does not add primary key constraint information by default; to include it, either call the FillSchema method or set the MissingSchemaAction property to MissingSchemaAction.AddWithKey before calling Fill. Note Foreign key constraint information is not included and must be created explicitly. The following code example shows how to add schema information to a DataSet using FillSchema:

// Assumes that connection is a valid SqlConnection object.
string queryString = "SELECT CustomerID, CompanyName FROM dbo.Customers";
SqlDataAdapter adapter = new SqlDataAdapter(queryString, connection);

DataSet customers = new DataSet();
adapter.FillSchema(customers, SchemaType.Source, "Customers");
adapter.Fill(customers, "Customers");

The following code example shows how to add schema information to a DataSet using the MissingSchemaAction property and the Fill method:

// Assumes that customerConnection and orderConnection are valid SqlConnection objects.
SqlDataAdapter custAdapter = new SqlDataAdapter(
    "SELECT * FROM dbo.Customers", customerConnection);
SqlDataAdapter ordAdapter = new SqlDataAdapter(
    "SELECT * FROM dbo.Orders", orderConnection);

DataSet customerOrders = new DataSet();

custAdapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
ordAdapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;

custAdapter.Fill(customerOrders, "Customers");
ordAdapter.Fill(customerOrders, "Orders");

Handling multiple result sets If the DataAdapter encounters multiple result sets returned from the SelectCommand, it will create multiple tables in the DataSet. The tables are given a zero-based incremental default name of TableN, starting with "Table" instead of "Table0". If a table name is passed as an argument to the FillSchema method, the tables are given a zero-based incremental name of TableNameN, starting with "TableName" instead of "TableName0".
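To illustrate the multiple result set behavior, here is a short, hedged sketch; the batch query and variable names are assumptions, and connection is again presumed to be a valid SqlConnection. Because a table name is passed to Fill, the two result sets land in tables named "Customers" and "Customers1".

string batch = "SELECT * FROM dbo.Customers; SELECT * FROM dbo.Orders";
SqlDataAdapter batchAdapter = new SqlDataAdapter(batch, connection);

DataSet dataSet = new DataSet();
batchAdapter.Fill(dataSet, "Customers");

// Prints "Customers" followed by "Customers1".
foreach (DataTable table in dataSet.Tables)
{
    Console.WriteLine(table.TableName);
}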
https://docs.microsoft.com/en-us/sql/connect/ado-net/add-existing-constraints-to-dataset?view=sql-server-ver15
2021-02-25T00:06:53
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
DialogContext.EndDialogAsync(Object, CancellationToken) Method Definition Ends a dialog by popping it off the stack and returns an optional result to the dialog's parent. The parent dialog is the dialog that started the one being ended via a call to either BeginDialogAsync(String, Object, CancellationToken) or PromptAsync(String, PromptOptions, CancellationToken). The parent dialog will have its ResumeDialogAsync(DialogContext, DialogReason, Object, CancellationToken) method invoked with any returned result. If the parent dialog hasn't implemented a ResumeDialogAsync method, then it will be automatically ended as well and the result passed to its parent. If there are no more parent dialogs on the stack then processing of the turn will end. public System.Threading.Tasks.Task<Microsoft.Bot.Builder.Dialogs.DialogTurnResult> EndDialogAsync (object result = default, System.Threading.CancellationToken cancellationToken = default); member this.EndDialogAsync : obj * System.Threading.CancellationToken -> System.Threading.Tasks.Task<Microsoft.Bot.Builder.Dialogs.DialogTurnResult> Public Function EndDialogAsync (Optional result As Object = Nothing, Optional cancellationToken As CancellationToken = Nothing) As Task(Of DialogTurnResult) Parameters - result - Object An optional result to return to the parent dialog. - cancellationToken - CancellationToken A cancellation token that can be used by other objects or threads to receive notice of cancellation. Returns A task that represents the work queued to execute. Remarks If the task is successful, the result indicates that the dialog ended after the turn was processed by the dialog. In general, the parent context is the dialog or bot turn handler that started the dialog. If the parent is a dialog, the stack calls the parent's ResumeDialogAsync(DialogContext, DialogReason, Object, CancellationToken) method to return a result to the parent dialog. If the parent dialog does not implement `ResumeDialogAsync`, then the parent will end, too, and the result is passed to the next parent context. The returned DialogTurnResult contains the return value in its Result property.
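As a usage illustration, here is a minimal, hedged sketch of a final waterfall step that ends its dialog and hands a value back to the parent. The step name and the "profile" key are assumptions; only EndDialogAsync itself comes from this page.

private async Task<DialogTurnResult> FinalStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // Value gathered in earlier steps (hypothetical key).
    var profile = stepContext.Values["profile"];

    // Pops this dialog off the stack; the parent's ResumeDialogAsync receives profile as the result.
    return await stepContext.EndDialogAsync(profile, cancellationToken);
}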
https://docs.microsoft.com/ko-kr/dotnet/api/microsoft.bot.builder.dialogs.dialogcontext.enddialogasync?view=botbuilder-dotnet-stable
2021-02-25T00:19:37
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
Make your lookup automatic When you create a lookup configuration in transforms.conf, you invoke it by running searches that reference it. However, you can optionally create an additional props.conf configuration that makes the lookup "automatic." This means that it runs in the background at search time and automatically adds output fields to events that have the correct match fields. You can make all lookup types automatic. However, KV Store lookups have an additional setup step that you must complete before you configure them as automatic lookups in props.conf. See Enable replication for a KV Store collection. Each automatic lookup configuration you create is limited to events that belong to a specific host, source, or source type. Automatic lookups can access any data in a lookup table that belongs to you or which you have shared. When your lookup is automatic you do not need to invoke its transforms.conf configuration with the lookup command. Splunk software does not support nested automatic lookups. The automatic lookup format in props.conf An automatic lookup configuration in props.conf: - References the lookup table you configured in transforms.conf. - Specifies the fields in your events that the lookup should match in the lookup table. - Specifies the corresponding fields that the lookup should output from the lookup table to your events. At search time, the LOOKUP-<class> configuration identifies a lookup and describes how that lookup should be applied to your events. To create an automatic lookup, follow this syntax: [<spec>] LOOKUP-<class> = $TRANSFORM <match_field_in_lookup_table> OUTPUT|OUTPUTNEW <output_field_in_lookup_table> -. $TRANSFORM: References the transforms.confstanza that defines the lookup table. match_field_in_lookup_table: This variable is the field in your lookup table that matches a field in your events with the same host, source, or source as this props.confstanza. If the match field in your events has a different name from the match field in the lookup table, use the AS clause as specified in Step 3, below. output_field_from_lookup_table: The corresponding field in the lookup table that you want to add to your events. If the output field in your events should have a different name from the output field in the lookup table, use the AS clause as specified in Step 3, below. You can have multiple fields on either side of the lookup. For example, you can have: $TRANSFORM <match_field_in_lookup_table1>, <match_field_in_lookup_table>OUTPUT|OUTPUTNEW <output_field_from_lookup_table1>, <output_field_from_lookup_table2> You can also have one matching field return two output fields, three matching fields return one output field, and so on. If you do not include an OUTPUT|OUTPUTNEW clause, Splunk software adds all the field names and values from the lookup table to your events. When you use OUTPUTNEW, Splunk software can add only the output fields that are "new" to the event. If you use OUTPUT, output fields that already exist in the event are overwritten. 
If the "match" field names in the lookup table and your events are not identical, or if you want to "rename" the output field or fields that get added to your events, use the AS clause: [<stanza name>] LOOKUP-<class> = $TRANSFORM <match_field_in_lookup_table> AS <match_field_in_event>OUTPUT|OUTPUTNEW <output_field_from_lookup_table> AS <output_field_in_event> For example, if the lookup table has a field named dept and you want the automatic lookup to add it to your events as department_name, set department_name as the value of <output_field_in_event>. Note: You can have multiple LOOKUP-<class> configurations in a single props.conf stanza. Each lookup should have its own unique lookup name. For example, if you have multiple lookups, you can name them LOOKUP-table1, LOOKUP-table2, and so on. You can also have different props.conf automatic lookup stanzas that each reference the same lookup stanza in transforms.conf. Create an automatic lookup stanza in props.conf - Create a stanza header that references the host, source, or source type that you are associating the lookup with. - Add a LOOKUP-<class>configuration to the stanza that you have identified or created. - As described in the preceding section this configuration specifies: - What fields in your events it should match to fields in the lookup table. - What corresponding output fields it should add to your events from the lookup table. - Be sure to make the <class>value unique. You can run into trouble if two or more automatic lookup configurations have the same <class>name. See "Do not use identical names in automatic lookup configurations." - (Optional) Include the AS clause in the configuration when the "match" field names in the lookup table and your events are not identical, or when you want to "rename" the output field or fields that get added to your events, use the ASclause. - Restart Splunk Enterprise to apply your changes. - If you have set up an automatic lookup, after restart you should see the outputfields from your lookup table listed in the fields sidebar. From there, you can select the fields to display in each of the matching search results. Enable replication for a KV store collection In Splunk Enterprise, KV Store collections are not bundle-replicated to indexers by default, and lookups run locally on the search head rather than on remote peers. When you enable replication for a KV Store collection, you can run the lookups on your indexers which let you use automatic lookups with your KV Store collections. To enable replication for a KV Store collection and allow lookups against that collection to be automatic: - Open collections.conf. - Set replicateto truein the stanza for the collection. - This parameter is set to falseby default. - Restart Splunk Enterprise to apply your changes. If your indexers are running a version of Splunk Enterprise that is older than 6.3, attempts to run an automatic lookup fail with a "lookup does not exist" error. You must upgrade your indexers to 6.3 or later to use this functionality. For more information, see Use configuration files to create a KV Store collection at the Splunk Developer Portal. Example configuration of an automatic KV Store lookup This configuration references the example KV Store lookup configuration in Configure KV Store lookups, in this manual. The KV Store lookup is defined in transforms.conf, in a stanza named employee_info. 
[access_combined] LOOKUP-http = employee_info CustID AS cust_ID OUTPUT CustName AS cust_name, CustCity AS cust_city This configuration uses the employee_info lookup in transforms.conf to add fields to your events. Specifically it adds cust_name and cust_city fields to any access_combined event with a cust_ID value that matches a custID value in the kvstorecoll KV Store collection. It also uses the AS clause to: - Find matching fields in the KV Store collection. - Rename output fields when they are added to your events.
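For context, the following is a hedged sketch of the transforms.conf and collections.conf settings that the props.conf example above presumes. The stanza, collection, and field names are taken from the example; your own names and field lists will differ.

# transforms.conf
[employee_info]
external_type = kvstore
collection = kvstorecoll
fields_list = _key, CustID, CustName, CustCity

# collections.conf
[kvstorecoll]
replicate = true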
https://docs.splunk.com/Documentation/Splunk/6.5.3/Knowledge/Makeyourlookupautomatic
2021-02-24T23:48:35
CC-MAIN-2021-10
1614178349708.2
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
- Setting different access priorities for different types of requests. - Monitoring resource usage patterns, performance tuning, and capacity planning. - Limiting the number of requests or sessions that can run at the same time. Workload management is the act of managing Teradata Database workload performance by monitoring system activity and acting when pre-defined limits are reached. Workload management uses rules, and each rule applies only to some database requests. However, the collection of all rules applies to all active work on the platform. Users can employ Teradata Workload Management as a performance tool to direct how Teradata Database accomplishes expectations. Teradata Workload Management offers two different strategies. See your Teradata representative for details about the license required for each strategy. - Teradata Active System Management (TASM) - TASM performs full workload management in Teradata Database. - TASM gives administrators the ability to prioritize workloads, tune performance, and monitor and manage workload and system health. TASM automates tasks that were previously labor-intensive for application DBAs and operational DBAs. - Teradata Integrated Workload Management (TIWM) - TIWM provides basic workload management capabilities to customers without full TASM. TIWM offers a subset of TASM. While this book touches upon TIWM, it focuses primarily on TASM. For more information on the differences between TASM and TIWM, see A Comparison of TASM and TIWM Features.
https://docs.teradata.com/r/XE1o9QrvcquDZ7akeQqyag/HsKbz8Dj8TpWiyZfB2FJ7Q
2021-02-25T00:28:30
CC-MAIN-2021-10
1614178349708.2
[]
docs.teradata.com
Gateways are separated into two categories: Merchant and Non-Merchant. Typically, Merchant gateways require a merchant account and payment is all handled seamlessly through Blesta. Non-Merchant gateways usually do not require a merchant account and payment is processed offsite on the gateways website. Not an exhaustive list. This is not an exhaustive list of gateways for Blesta. This list contains the gateways that ship with Blesta, or can be found on our Github. There are many third party gateways, and some can be found on the Marketplace. We are also available to hire for custom development. If you're looking to create your own gateway, see Getting Started with Gateways in the developer manual. Installing Gateways Merchant Merchant gateways are used to process credit card or ACH transactions seamlessly to the end-user. These gateways typically require a merchant account through a bank. - Authorize.net — Authorize.net is a popular payment gateway for US merchants, and many banks offer Authorize.net as a gateway option with a merchant account. - BluePay — BluePay is a popular payment gateway for US merchants. - Braintree — Braintree is a merchant payment gateway owned by PayPal. - Converge — Converge (Formerly VirtualMerchant) is a payment platform that flexes with your business. - Cornerstone — Cornerstone is one of the nation's leading Christian owned and operated independent sales organizations in the merchant processing industry. - eWay — eWay is a popular payment gateway for Australian merchants. - PayJunction — PayJunction is a popular payment gateway in the United States. - PayPal Payflow Pro — PayPal Payflow Pro is a popular payment gateway offered by PayPal. - Quantum Gateway — Quantum Gateway is a popular gateway for US merchants. - SagePay — SagePay is a popular UK payment gateway. - Stripe — Stripe is a popular payment gateway in North America, and Europe. Stripe is built for developers and offers flat fee processing. - Stripe Payments — Stripe Payment is a popular payment gateway in North America, and Europe. Stripe is built for developers and offers flat fee processing. This integration supports 3DS and SCA. Non-Merchant Non-Merchant gateways take the user to a third party site to complete payment, which is then posted back to Blesta to be recorded. These gateways do not usually require a merchant account. - 2Checkout — 2Checkout is a globally accepted payment processor that accepts Credit Cards, PayPal, and Debit Cards. - Alipay — Alipay is a popular Chinese payment gateway. - Bitpay — Bitpay is a payment processor for bitcoin, which also accepts a multitude of currencies. - CCAvenue — CCAvenue is a payment processor for the Indian rupee. - CoinGate — Pay with Bitcoin or Altcoins via CoinGate.com - CoinPayments — CoinPayments is a popular payment gateway, accepting many different cryptocurrencies. - GoCardless — GoCardless is the easy way to collect Direct Debit. Already serving more than 30000 businesses, perfect for recurring billing and B2B invoicing. - Google Checkout — Google Checkout is a payment gateway by Google. - Hubtel — Hubtel is a popular African payment gateway. - iDeal - Kassa Compleet - Offline Payment — Offline Payment allows instructions to be displayed to the client informing them on how to submit payment offline. - PagSeguro — PagSeguro is a payment processor for the Brazilian real. - PayPal Payments Standard — PayPal Payments Standard is one of the most popular payment gateways in the world. - Paystack — Paystack is a payment processor for Africa. 
- PayUmoney — PayUmoney is a popular Indian payment gateway. - Payza — Payza is a globally-accepted payment processor. - Perfect Money — Perfect Money is a popular European payment gateway. - Razorpay — Razorpay is the only payments solution in India which allows businesses to accept, process and disburse payments with its product suite. - Skrill — Skrill (Formerly Moneybookers) is one of the worlds leading digital payment companies. - Square — Square is a popular payment gateway for the US, UK, and Canada. - SSLCommerz — SSLCommerz is the first payment gateway in Bangladesh opening doors for merchants to receive payments on the internet via their online stores. - Wide Pay — Wide Pay is a Brazilian payment processor that accepts Credit Cards and bank transfers. - Yandex — Yandex is a popular Russian payment gateway.
https://docs.blesta.com/display/user/Gateways
2021-02-24T23:07:53
CC-MAIN-2021-10
1614178349708.2
[]
docs.blesta.com
Relationship between a User and a Group A user can be associated with a group by two types of relationships: user relationship and owner relationship. A user and a group are related to each other with two-way links. When a group member is added or removed or a user or a group is deleted, Kii Cloud automatically updates those two-way links. Member A member is a user who joins a group. A group member can access the group data (i.e., KiiObjects in the group-scope bucket) by default. Owner An owner is a user who owns a group. The user who creates a group will become the initial owner. The owner can add and remove group members (For more information, see Group Access Control). A group owner is also a member of the group. You can change the group owner through the REST API. When the new owner is specified, they automatically become a group member. Relationship between a User and a Group An owner and a member are expressed as two-way relationships between a user and a group. The KiiUser and KiiGroup classes of the Kii Cloud SDK have methods that return a relationship between a user and a group. The methods of the KiiUser class point to KiiGroup objects and vice versa as shown in the figure below. The figure also shows the multiplicity of instances in each method in the UML notation. That is, "0..*" means 0 or more and "0..1" means 0 or 1. Each instance can be related to 0 or more instances except that a group can be related only to 0 or 1 owner. Getting a Group List for a Member The memberOfGroups(_:)method of the KiiUserclass gets a list of KiiGroupsthat a KiiUserobject is a member of. If the KiiUserobject does not belong to any group, an empty list is returned. Getting a Group List for an Owner The ownerOfGroups(_:)method of the KiiUserclass gets a list of KiiGroupsthat a KiiUserobject is the owner of. The returned KiiGroupsare always included in the result of the memberOfGroups(_:)method because the owner is also a member of their group. If the KiiUserobject does not own any group, an empty list is returned. Getting a Member List of a Group The getMemberList(_:)method of the KiiGroupclass gets a list of KiiUsersthat are members of the KiiGroupobject. This method works in the opposite way of the memberOfGroups(_:)method of the KiiUserclass. Getting the Owner of a Group The getOwnerWith(_:)method of the KiiGroupclass gets a KiiUserthat is the owner of the KiiGroupobject. One KiiUseris returned because a group cannot have multiple owners. This method works in the opposite way of the ownerOfGroups(_:)method of the KiiUserclass. When a group owner who is the sole member of their group is deleted, the group has no member. In this case, the getMemberList(_:) method returns an empty list and the getOwnerWith(_:) method returns a nil value. For more information, see Considerations in deleting users. Example of manipulating user-group relationships See the figure below for the results of the following three actions. Alice creates the Sales Div. group. Relationships created from this action are indicated by the red arrows. Alice adds Bob to the Sales Div. group. Relationships created from this action are indicated by the blue arrows. Bob creates the Tennis Club group. Relationships created from this action are indicated by the green arrows. The figure indicates that all the relationships between the users and the groups are symmetrical. However, the methods for listing users and groups return a different number of objects. For example, the memberOfGroups(_:) method of Bob returns Sales Div. 
and Tennis Club while the getMemberList(_:) method of Tennis Club returns only Bob. Also, the getMemberList(_:) method of Sales Div. returns Bob and Alice. When you implement your mobile app, be careful about the type of objects, KiiUsers or KiiGroups, in your listing operation. Retrieving a group for its member or owner You can retrieve group data by using the group listing feature of the KiiUser class in addition to the methods of using the group URI or ID. You can get groups that are related to the logged-in user or a specific user by getting a list of KiiGroups with the memberOfGroups(_:) method of the target user. Then, you can perform desired actions through accessing KiiObjects in the group scope and getting a list of group members. You need to implement logic to distinguish retrieved groups in your mobile app. The group listing feature just gets KiiGroups that are related to the target KiiUser and it is not possible to know the role of each KiiGroup in the list. Suppose your mobile app has a bulletin board feature and a group chat feature. You could not distinguish KiiGroups in the group list for the bulletin board and the group chat. In order to use those groups separately by purpose, you would need to develop your mobile app accordingly.
https://docs.kii.com/en/guides/cloudsdk/ios/managing-groups/user-and-group/
2021-02-24T23:35:37
CC-MAIN-2021-10
1614178349708.2
[array(['01.png', None], dtype=object) array(['02.png', None], dtype=object) array(['03.png', None], dtype=object)]
docs.kii.com
Split Storyboard Dialog Box The Split Storyboard dialog box lets you divide a storyboard project into two parts at the position of the red playhead. For tasks related to this dialog box, see Project Management. - In the Timeline, drag the red playhead to the panel where you want the split to take place. - Select File > Project Management > Split.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/reference/dialogs/split-storyboard-dialog.html
2021-02-24T22:49:02
CC-MAIN-2021-10
1614178349708.2
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/SBP/Storyboard_Supervision/split_storyboard.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Masks a sequence by using a mask value to skip timesteps. Consider a NumPy data array x of shape (samples, timesteps, features), to be fed to an LSTM layer. You want to mask timesteps #3 and #5 because you lack data for these timesteps. You can set x[:, 3, :] = 0. and x[:, 5, :] = 0., and insert a Masking layer with mask_value=0. before the LSTM layer. Licensed under the Creative Commons Attribution License 3.0. Code samples licensed under the Apache 2.0 License.
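A minimal, runnable sketch of that scenario; the layer sizes and random data are illustrative only.

import numpy as np
import tensorflow as tf

samples, timesteps, features = 32, 10, 8
x = np.random.random((samples, timesteps, features))
x[:, 3, :] = 0.
x[:, 5, :] = 0.

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0., input_shape=(timesteps, features)),
    tf.keras.layers.LSTM(16),
])

output = model(x)
print(output.shape)  # (32, 16); timesteps 3 and 5 are skipped by the LSTM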
https://docs.w3cub.com/tensorflow~2.3/keras/layers/masking
2021-02-24T23:41:02
CC-MAIN-2021-10
1614178349708.2
[]
docs.w3cub.com
After you have compared data, dbForge Studio gives you an easy and convenient way to synchronize data. You may manually specify which tables and even which records to synchronize. Use the check boxes in the first column of the data comparison document grid to include or exclude objects to synchronization. Note that it is highly recommended to backup target database before data synchronization. If you synchronize data with different data type, you may encounter synchronization warnings. They are displayed at the Summary page of the Data Synchronization Wizard. If you have any synchronization warnings, that means that during synchronizing data you may encounter errors or data loss because of rounding, truncation, etc. The Data Synchronization Wizard allows you to either apply updates immediately or create an update script for the Target database and save it to a file. Before launching the synchronization, you may view Action Plan that contains all the actions that will be performed during synchronization. The action plan represents the synchronization script structure, so it is good to analyze it before this script will be generated or executed if you have chosen to execute the script immediately. The action Plan is displayed on the Summary page of the Data Synchronization Wizard.
https://docs.devart.com/studio-for-oracle/data-comparison-and-synchronization/data-synchronization-process.html
2021-02-24T23:15:52
CC-MAIN-2021-10
1614178349708.2
[]
docs.devart.com
3.1.4.3 String Handling A server holds values of properties for objects. Some of these values are strings. The Exchange Server NSPI Protocol allows string values to be represented as 8-bit character strings or Unicode strings. All string valued properties held by a server are categorized as either natively of property type PtypString or natively of property type PtypString8. Those properties natively of property type PtypString8 are further categorized as either case-sensitive or case-insensitive.
https://docs.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxnspi/5bc58da5-c337-44d8-b1c7-f4eee7021ab3
2021-02-25T00:40:09
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
When a connection in a terminal window is lost, you can re-establish the connection from the Client Connections window. If no terminal windows are open, the commands are not available for use. - From the Client Connections window, select the Connection menu, and then select any of the following commands:
https://docs.teradata.com/r/ULK3h~H_CWRoPgUHHeFjyA/bPNzS6MzefuxIP714y2pyg
2021-02-24T22:51:09
CC-MAIN-2021-10
1614178349708.2
[]
docs.teradata.com
Some parameters in rulesets can be changed in different states and some cannot. You can customize system-level attributes, such as filters and throttles, for different states. Different states can have filters and throttles enabled or disabled or they can have different throttles limits. You can also customize some workload characteristics for different states. To understand what you can change about a workload when a state change occurs and what you cannot change, consider that workloads contain these two types of properties: - Fixed attributes: The details that define the workload, which include the following items: - Classification criteria - Exception definitions and actions - Position (or tier) in the priority hierarchy - Evaluation order of the workload - Working values: The variables that are part of the Workload Definition, which include the following items: - Workload or virtual partition share percent values - Workload throttles - Exception enabling or disabling - Service Level Goals - Minimum response time (called Hold Query Responses in Viewpoint Workload Designer) The following workload working values may vary based on the planned environment in effect: - Service Level Goals - Hold Query Responses - Exception enabling or disabling - Workload management method priority values (workload distribution or timeshare access levels) The following settings may vary based on the state in effect: - Session controls - Workload throttles - System throttles - Resource limits - Filters - Query session limits - Utility limits Often, workloads do not keep the same level of importance throughout the day, week, month, or year. A load workload may be more important at night, and a request workload may be more important during the day. Also, when the system is degraded, it may be more important to complete tactical workloads than strategic workloads.
https://docs.teradata.com/r/XE1o9QrvcquDZ7akeQqyag/YgyC64KTICy76d6T9IRlyA
2021-02-25T00:10:23
CC-MAIN-2021-10
1614178349708.2
[]
docs.teradata.com
BadgerStockMarket is a stock market script that is based on the real stock market! The prices and graphs actually come from the real stock market! I wanted to make a nifty script to interact with the stock market and actually invest in it, but without investing real money. You can do that with this script! You use ESX money in-game to invest into the stock market. The stock market updates/refreshes every time you open the phone, so this updates in real time. Make sure you sell your stocks when you are making money, not when you are losing money! /stocks - Opens up the phone menu for the stocks
Config = {
    maxStocksOwned = {
        ['BadgerStockMarket.Stocks.Normal'] = 20,
        ['BadgerStockMarket.Stocks.Bronze'] = 100,
        ['BadgerStockMarket.Stocks.Silver'] = 500,
        ['BadgerStockMarket.Stocks.Gold'] = 99999999999999999999999999999999999999999999
    },
    stocks = {
        ['Apple Stock'] = {
            link = '',
            tags = {'Technology', 'Software'}
        },
        ['Citigroup Stock'] = {
            link = '',
            tags = {'Bank'}
        },
        ['General Electric Stock'] = {
            link = '',
            tags = {'Automobiles', 'Vehicles', 'Cars'}
        }
    }
}
You must use the website for this to work properly (for each Stock). Add as many stocks as you would like :) 'BadgerStockMarket.Stocks.Normal' is the permission node (ACE permission node) that a player must have in order to own that many stocks. You can set up multiple different permission nodes and allow a certain number of stocks for different groups of people. Maybe donators get more stocks? For liability reasons, I wanted to include this at the bottom. BadgerStockMarket is in no way affiliated with the Stock Market and/or its proprietors. BadgerStockMarket was created as an educational tool to be used within the GTA V modification known as FiveM. BadgerStockMarket is also not affiliated with the Robinhood application, although the script uses the Robinhood logo. Once again, BadgerStockMarket was created as an educational tool. If anyone has a problem with this, please contact me and we can get the changes adjusted appropriately, thank you.
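As a usage illustration, here is a hedged sketch of how you might extend the config above. The stock name, tags, and the extra permission tier are made up for the example, and the link value would be the stock's page on whatever market-data website the script expects.

-- Add another stock entry:
Config.stocks['Microsoft Stock'] = {
    link = '',
    tags = {'Technology', 'Software'}
}

-- Give a hypothetical VIP tier a higher cap on owned stocks:
Config.maxStocksOwned['BadgerStockMarket.Stocks.VIP'] = 1000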
https://docs.badger.store/fivem-misc.-scripts/badgerstockmarket
2021-02-24T23:03:56
CC-MAIN-2021-10
1614178349708.2
[]
docs.badger.store
Backup and restore Contents: Backups vary by our offering and their retention is governed by our data retention. This section details our Recovery Point Objective (RPO) and Recovery Time Objective (RTO) for our Platform.sh Professional and Platform.sh Enterprise offerings. Platform.sh Professional With Platform.sh Professional, we have enabled our clients to manage their own backup and restore functions. Please see the backup and restore page for details. RPO: User defined. The RPO is configured by our clients. RTO: Variable. Recovery time will depend upon the size of the data we are recovering Platform.sh Enterprise-Grid and Enterprise-Dedicated RPO: 6 hours. Platform.sh takes a backup of Platform.sh Enterprise environments every 6 hours. RTO: Variable. Recovery time will depend upon the size of the data we are recovering
https://docs.platform.sh/security/backups.html
2021-02-24T22:48:10
CC-MAIN-2021-10
1614178349708.2
[]
docs.platform.sh
Amazon Web Services (AWS) legacy discovery connection End-of-Life announcement As of February 8th, 2020, Rapid7 will no longer support the AWS Legacy Discovery Connection in Nexpose. The AWS Legacy Discovery Connection allowed you to import your AWS assets into Nexpose. A newer option, called AWS Asset Sync, has replaced the AWS Legacy Discovery Connection, so the latter will be removed from Nexpose. As a result of this change, you will need to change your AWS Legacy Discovery Connection to AWS Asset Sync. AWS Asset Sync Migration Overview Here’s a high-level overview of how to migrate to AWS Asset Sync: - Add or edit your Scan Engine. - Configure your AWS environment with Nexpose. - Create the AWS Asset Sync Connection in the Security Console. Add or Edit Your Existing Scan Engine The first thing you need to do is confirm whether your Scan Engine is deployed inside your AWS environment in the form of the AMI, or deployed as standard Scan Engine. For more information, see the following resources: Configure Your AWS Environment with Nexpose Next, configure the AWS environment by creating security groups and establishing IAM Users or Roles. Create Security Groups We recommend creating two security groups: one for the scan engine and one for the scan targets. For more information, see “Configuring Your AWS Environment with Nexpose" in the Amazon Web Services documentation. The steps needed to create a scan engine and scan targets security group follow that section. Create an IAM User or Role In order to give the Security Console access to the AWS environment, you will need to add permissions to your existing account using CloudTrail logs. See “Creating an IAM User or Role” section in our Amazon Web Services documentation. Add the AWS Asset Sync Connection in the Security Console To add an AWS Asset Sync Connection in Nexpose, use the AWS Asset Sync option as the “Connection Type” instead of the AWS Legacy Discovery Connection. NOTE You must have Global Administrator permissions to add an AWS Asset Sync Connection. See the Discovering Amazon Web Services instances article for step-by-step instructions. This process allows Nexpose to scan and manage assets on the AWS server. CAUTION When migrating from the legacy connection to the AWS Asset Sync, deleting the original site with the legacy connection will result in losing that site’s asset scan history. These assets will be synced back to Nexpose with the new connection but history will not carry over. Also, legacy connections cannot be deleted until associated sites are deleted. For more details on the AWS Asset Sync, see our AWS Asset Sync Connection: More Visibility Into Your AWS Infrastructure blog. Schedule of Events Frequently Asked Questions What is the difference between AWS Legacy Discovery Connection and AWS Asset Sync? All capabilities of the AWS Legacy Discovery Connection are replicated and improved. AWS Asset Sync offers additional capabilities over the legacy connection. Here are some improvements: - When instances are terminated on the AWS side, they are automatically deleted from Nexpose, keeping license counts up-to-date - A single connection allows cross-region and AWS cross-account capabilities - Supports ingesting AWS metadata as tags in Nexpose Who can I contact if I have more questions that are not addressed in this announcement? Customers should contact their Customer Success Manager or Support with any questions or concerns.
https://docs.rapid7.com/nexpose/amazon-web-services-aws-legacy-discovery-connection-end-of-life-announcement/
2021-02-24T22:33:33
CC-MAIN-2021-10
1614178349708.2
[]
docs.rapid7.com
CentOS installation¶ This guide describes the fastest way to install Graylog on CentOS 7. All links and packages are present at the time of writing but might need to be updated later on. Warning This setup should not be done on publicly exposed servers. This guide does not cover security settings! Prerequisites¶ Taking a minimal server setup as base will need these additional packages: $ sudo. Additionally, run these last steps to start MongoDB during the operating system's boot and start it right away: $ sudo chkconfig --add mongod $ sudo systemctl daemon-reload $ sudo systemctl enable mongod.service $ sudo systemctl start mongod.service Elasticsearch¶ Graylog 2.0.0 and higher requires Elasticsearch 2.x, so we took the installation instructions from the Elasticsearch installation guide. First install the Elastic GPG key with rpm --import then add the repository file /etc/yum.repos.d/elasticsearch.repo with the following contents: [elasticsearch-2.x] name=Elasticsearch repository for 2.x packages baseurl= gpgcheck=1 gpgkey= enabled=1 followed by the installation of the latest release with sudo yum install elasticsearch. If SELinux is enabled, allow Graylog access to each port individually: - Graylog REST API and web interface: sudo semanage port -a -t http_port_t -p tcp 9000 - Elasticsearch (only if the HTTP API is being used): sudo semanage port -a -t http_port_t -p tcp 9200 Allow using MongoDB's default port (27017/tcp): sudo semanage port -a -t mongod_port_t -p tcp 27017 If you run a single server environment with NGINX or Apache proxy, enabling the Graylog REST API is enough. All other rules are only required in a multi-node setup. Hint Depending on your actual setup and configuration, you might need to add more SELinux rules to get to a running setup.
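After installing the Elasticsearch package, you would typically enable and start the service the same way the guide does for MongoDB. A hedged sketch of those steps on CentOS 7 (the service name is the standard one shipped with the Elasticsearch RPM and may differ in your setup):

$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch.service
$ sudo systemctl start elasticsearch.service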
http://docs.graylog.org/en/2.1/pages/installation/os/centos.html
2018-06-18T02:08:51
CC-MAIN-2018-26
1529267859923.59
[]
docs.graylog.org
CaptchaValidator validates that the attribute value is the same as the verification code displayed in the CAPTCHA. CaptchaValidator should be used together with yii\captcha\CaptchaAction. The route of the controller action that renders the CAPTCHA image. public string $captchaAction = 'site/captcha' Whether the comparison is case sensitive. Defaults to false. public boolean $caseSensitive = false Whether to skip this validator if the input is empty. public boolean $skipOnEmpty = false Creates the CAPTCHA action object from the route specified by $captchaAction.
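A minimal usage sketch, assuming a standard Yii 2 form model and a site/captcha action wired to yii\captcha\CaptchaAction (the model and attribute names are illustrative):

```php
<?php

class ContactForm extends \yii\base\Model
{
    public $verifyCode;

    public function rules()
    {
        return [
            // 'captcha' is the built-in alias for yii\captcha\CaptchaValidator
            ['verifyCode', 'captcha', 'captchaAction' => 'site/captcha', 'caseSensitive' => false],
        ];
    }
}
```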
http://docs.w3cub.com/yii~2.0/yii-captcha-captchavalidator/
2018-06-18T01:58:45
CC-MAIN-2018-26
1529267859923.59
[]
docs.w3cub.com
Visual Basic Reference Invalid property array index (Error 381) See Also An inappropriate property array index value is being used. This error has the following cause and solution: You tried to set a property array index to a value outside its permissible range. Change the index value of the property array to a valid setting. For example, the index value of the List property for a ListBox must be from 0 to 32,766.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-basic-6/aa277298(v=vs.60)
2018-06-18T01:46:05
CC-MAIN-2018-26
1529267859923.59
[]
docs.microsoft.com
When you create a design for a Horizon 7 production environment that accommodates more than 500 desktops, several considerations affect whether to use one vCenter Server instance rather than multiple instances. Starting with View 5.2, VMware supports managing up to 10,000 desktop virtual machines within a single Horizon 7 pod with a single vCenter 5.1 or later server. Before you attempt to manage 10,000 virtual machines with a single vCenter Server instance, take the following considerations into account: Duration of your company's maintenance windows Capacity for tolerating Horizon 7 component failures Frequency of power, provisioning, and refit operations Simplicity of infrastructure Duration of Maintenance Windows Concurrency settings for virtual machine power, provisioning, and maintenance operations are determined per vCenter Server instance.. Capacity for Tolerating Component Failures The role of vCenter Server in Horizon 7 pods is to provide power, provisioning, and refit (refresh, recompose, and rebalance) operations. After a virtual machine desktop is deployed and powered on, Horizon 7 does not rely on vCenter Server for the normal course of operations. Because each vSphere cluster must be managed by a single vCenter Server instance, this server represents a single point of failure in every Horizon 7 design. This risk is also true for each View Composer instance. (There is a one-to-one mapping between each View Composer instance and vCenter Server instance.) Using one of the following products can mitigate the impact of a vCenter Server or View Composer outage: VMware vSphere High Availability (HA) VMware vCenter Server Heartbeat™ Compatible third-party failover products To use one of these failover strategies, the vCenter Server instance must not be installed in a virtual machine that is part of the cluster that the vCenter Server instance manages. In addition to these automated options for vCenter Server failover, you can also choose to rebuild the failed server on a new virtual machine or physical server. Most key information is stored in the vCenter Server database. Risk tolerance is an important factor in determining whether to use one or multiple vCenter Server instances in your pod design. If your operations require the ability to perform desktop management tasks such as power and refit of all desktops simultaneously, you should spread the impact of an outage across fewer desktops at a time by deploying multiple vCenter Server instances. If you can tolerate your desktop environment being unavailable for management or provisioning operations for a long period, or if you choose to use a manual rebuild process, you can deploy a single vCenter Server instance for your pod. Frequency of Power, Provisioning, and Refit Operations Certain virtual machine desktop power, provisioning, and refit operations are initiated only by administrator actions, are usually predictable and controllable, and can be confined to established maintenance windows. Other virtual machine desktop power and refit operations are triggered by user behavior, such as using the Refresh on Logoff or Suspend on Logoff settings, or by scripted action, such as using Distributed Power Management (DPM) during windows of user inactivity to power off idle ESXi hosts. If your Horizon 7 design does not require user-triggered power and refit operations, a single vCenter Server instance can probably suit your needs. 
Without a high frequency of user-triggered power and refit operations, no long queue of operations can accumulate that might cause Horizon Connection Server to time-out waiting for vCenter Server to complete the requested operations within the defined concurrency setting limits. Many customers elect to deploy floating pools and use the Refresh on Logoff setting to consistently deliver desktops that are free of stale data from previous sessions. Examples of stale data include unclaimed memory pages in pagefile.sys or Windows temp files. Floating pools can also minimize the impact of malware by frequently resetting desktops to a known clean state. Some customers are reducing electricity usage by configuring Horizon 7 to power off desktops not in use so that vSphere DRS (Distributed Resources Scheduler) can consolidate the running virtual machines onto a minimum number of ESXi hosts. VMware Distributed Power Management then powers off the idle hosts. In scenarios such as these, multiple vCenter Server instances can better accommodate the higher frequency of power and refit operations required to avoid operations time-outs. Simplicity of Infrastructure A single vCenter Server instance in a large-scale Horizon 7 design offers some compelling benefits, such as a single place to manage golden master images and parent virtual machines, a single vCenter Server view to match the Horizon Administrator console view, and fewer production back-end databases and database servers. Disaster Recovery planning is simpler for one vCenter Server than it is for multiple instances. Make sure you weigh the advantages of multiple vCenter Server instances, such as duration of maintenance windows and frequency of power and refit operations, against the disadvantages, such as the additional administrative overhead of managing parent virtual machine images and the increased number of infrastructure components required. Your design might benefit from a hybrid approach. You can choose to have very large and relatively static pools managed by one vCenter Server instance and have several smaller, more dynamic desktop pools managed by multiple vCenter Server instances. The best strategy for upgrading existing large-scale pods is to first upgrade the VMware software components of your existing pod. Before changing your pod design, gauge the impact of the improvements of the latest version's power, provisioning, and refit operations, and later experiment with increasing the size of your desktop pools to find the right balance of more large desktop pools on fewer vCenter Server instances.
https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.horizon-view.planning.doc/GUID-979A744D-9F91-4A90-8430-3D7510AB4F96.html
2018-06-18T02:25:25
CC-MAIN-2018-26
1529267859923.59
[]
docs.vmware.com
int i = 123;
// The following line boxes i.
object o = i;

Boxing

The following statement implicitly applies the boxing operation on the variable i:

int i = 123;
// Boxing copies the value of i into object o.
object o = i;

[Figure: Boxing Conversion]

int i = 123;      // a value type
object o = i;     // boxing
int j = (int)o;   // unboxing

The following figure demonstrates the result of the previous statements.

[Figure: Unboxing Conversion]

Example: see the hedged sketch below.

C# Language Specification

For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage.

Related Sections

For more information:
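The page's original Example listing did not survive extraction. The following is a minimal sketch of the kind of program that section illustrates; the InvalidCastException behavior on incorrect unboxing is standard C#, but the exact original listing is an assumption.

```csharp
using System;

class TestBoxing
{
    static void Main()
    {
        int i = 123;
        object o = i;        // implicit boxing

        try
        {
            // Unboxing to the wrong value type (short instead of int)
            // throws InvalidCastException at run time.
            short s = (short)o;
            Console.WriteLine(s);
        }
        catch (InvalidCastException e)
        {
            Console.WriteLine("Error: incorrect unboxing. " + e.Message);
        }

        int j = (int)o;      // correct unboxing
        Console.WriteLine("Unboxed: " + j);
    }
}
```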
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/types/boxing-and-unboxing
2018-06-17T23:18:16
CC-MAIN-2018-26
1529267859817.15
[array(['media/vcboxingconversion.gif', 'vcBoxingConversion BoxingConversion graphic'], dtype=object) array(['media/vcunboxingconversion.gif', 'vcUnBoxingConversion UnBoxing Conversion graphic'], dtype=object)]
docs.microsoft.com
Let's get started! Please read these brief onboarding articles thoroughly prior to meeting Misty. You now have (or soon will) a hand-built Misty I robot from the Misty Robotics team. Because she's hand-built, she'd appreciate that you start by reading Safety & Handling. Once you power Misty up, you can quickly control Misty via her companion app, Blockly, and the Misty API Explorer. You can program Misty to do a range of activities, such as drive from one room to the next, issue sounds to speakers, and so on. Misty's abilities are growing, and more will become available as they are developed by the Misty engineering team and by you, the Misty community. If you're interested in a deeper dive, check out Misty's API, available via these interfaces: More Help & Links: - help @ mistyrobotics.com - Misty Community forum - Misty GitHub repo - Misty Community Slack. When you're accepted into the Misty Developer Program, you receive an invitation to the Misty Community Slack channel. This channel serves as the first line of response for all issues and questions you may have with your robot. The support channel is monitored for immediate response (5-10 minutes) from 7:00am-10:00pm MST on weekdays and from 7:00am-7:00pm MST on weekends.
https://docs.mistyrobotics.com/
2018-06-17T22:33:30
CC-MAIN-2018-26
1529267859817.15
[]
docs.mistyrobotics.com
ActiveMQ Integration Apache ActiveMQ is a popular open source messaging provider which is easy to integrate with Mule. ActiveMQ supports the JMS 1.1 and J2EE 1.4 specifications and is released under the Apache 2.0 License. Usage To configure ActiveMQ connector with most common settings, use <jms:activemq-connector> or <jms:activemq-xa-connector> (for XA transaction support) element in your Mule configuration, for example: Also copy the JAR files you need to the Mule lib directory ( $MULE_HOME/lib/user) or your application lib directory. Dependencies Conflict With JAR Files The use of[activemq-all-x.x.x.jar] creates conflicts with other dependencies, most notably with log4j. Only add the JAR files required by the deployment mode you have chosen for ActiveMQ: JARs for Embedded ActiveMQ: apache-activemq-5.12.0/lib/activemq-broker-5.12.0.jar apache-activemq-5.12.0/lib/activemq-client-5.12.0.jar apache-activemq-5.12.0/lib/activemq-kahadb-store-5.12.0.jar apache-activemq-5.12.0/lib/activemq-openwire-legacy-5.12.0.jar apache-activemq-5.12.0/lib/activemq-protobuf-1.1.jar apache-activemq-5.12.0/lib/geronimo-j2ee-management_1.1_spec-1.0.1.jar apache-activemq-5.12.0/lib/hawtbuf-1.11.jar JARs for External JARs for Failover To include these .jar files in your project, simply right-click on your project and select Build Path > Add External Archives… ActiveMQ Failover and VM Initialization Mule initializes the ActiveMQ connector with the default instance of the ActiveMQ connection factory and establishes a TCP connection to the remote standalone broker running on local host and listening on port 61616. Use the failover: protocol to connect to the cluster of brokers, and pass additional ActiveMQ options as URI parameters, for example: To create an embedded instance of ActiveMQ broker, such as a broker running on the same Java VM as Mule, use the vm: protocol, for example: <jms:activemq-connector You may also use additional connector attributes (See “Configuration Reference” for more details). Sometimes it might be necessary to explicitly configure an instance of ActiveMQ connection factory, for example, to set redelivery policy, or other ActiveMQ-specific features that are not exposed through Mule connector parameters. To create custom ActiveMQ connection factory instance, first configure it using Spring, for example: Reference this bean in <jms:activemq-connector>, for example: <jms:activemq-connector Debugging ActiveMQ Debugging information for ActiveMQ can be enabled with the following log4j2 configuration. This provides debugging information on the REST queries sent between the connector and ActiveMQ service: <AsyncLogger name="com.mulesoft.mq" level="DEBUG"/> ActiveMQ Connector Reference The activemq-connector element configures an ActiveMQ version of the JMS connector.
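Several of the XML snippets referenced above ("for example:") were lost in extraction. The following hedged sketches show typical Mule 3 configurations; connector names, broker hosts, and the redelivery settings are illustrative rather than taken from this page.

```xml
<!-- Basic ActiveMQ connector pointing at a standalone broker -->
<jms:activemq-connector name="Active_MQ" specification="1.1"
    brokerURL="tcp://localhost:61616" validateConnections="true"/>

<!-- Failover across a cluster of brokers, passing extra ActiveMQ options as URI parameters -->
<jms:activemq-connector name="Active_MQ_Failover" specification="1.1"
    brokerURL="failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false"/>

<!-- Embedded (in-VM) broker running in the same Java VM as Mule -->
<jms:activemq-connector name="Active_MQ_Embedded" brokerURL="vm://localhost"/>

<!-- Custom connection factory configured as a Spring bean, then referenced by the connector -->
<spring:bean name="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <spring:property name="brokerURL" value="tcp://localhost:61616"/>
    <spring:property name="redeliveryPolicy">
        <spring:bean class="org.apache.activemq.RedeliveryPolicy">
            <spring:property name="maximumRedeliveries" value="5"/>
        </spring:bean>
    </spring:property>
</spring:bean>

<jms:activemq-connector name="Active_MQ_Custom" connectionFactory-ref="connectionFactory"/>
```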
https://docs.mulesoft.com/mule-user-guide/v/3.7/activemq-integration
2018-06-17T22:28:04
CC-MAIN-2018-26
1529267859817.15
[]
docs.mulesoft.com
Quick Start Vcpkg helps you get C and C++ libraries on Windows. This tool and ecosystem are currently in a preview state; your involvement is vital to its success. Examples User Help Maintainer help Specifications Blog posts - Vcpkg: introducing the export command - Binary Compatibility and Pain-free Upgrade Why Moving to Visual Studio 2017 is almost "too easy" - Vcpkg recent enhancements - Vcpkg 3 Months Anniversary, Survey - Vcpkg updates: Static linking is now available - Vcpkg: a tool to acquire and build C++ open source libraries on Windows
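As a quick orientation, a typical first session with the classic vcpkg workflow looks like the following; the library name is just an example.

```bat
> git clone https://github.com/Microsoft/vcpkg
> .\vcpkg\bootstrap-vcpkg.bat
> .\vcpkg\vcpkg install zlib            & rem build and install a library
> .\vcpkg\vcpkg integrate install       & rem make installed libraries visible to MSBuild/Visual Studio
> .\vcpkg\vcpkg export zlib --zip       & rem the export command covered in the blog posts
```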
https://vcpkg.readthedocs.io/en/latest/
2018-06-17T22:05:43
CC-MAIN-2018-26
1529267859817.15
[]
vcpkg.readthedocs.io
set the invisible of player "Splash Screen" to false set the invisible of the mouseControl to true Use the invisible property to determine whether an object is hidden or not. Value: The invisible of an object is true or false. A hidden object is still present and still takes up memory, and a handler can access its properties and contents, but the user cannot see or interact with it. An object that cannot be seen only because it's behind another object is still visible, in the sense that its invisible property is still false. The invisible property of grouped controls is independent of the invisible property of the group. Setting a group's invisible property to true doesn't change the invisible property of its controls; their invisible property is still false, even though the controls cannot be seen because the group is invisible. You can set the invisible property of a card, but doing so has no effect. Cards cannot be made invisible. The invisible property is the logical inverse of the visible property. When an object's invisible is true, its visible is false, and vice versa.
http://docs.runrev.com/Property/invisible
2018-06-17T21:49:08
CC-MAIN-2018-26
1529267859817.15
[]
docs.runrev.com
Install Waves Free Version Prerequisites - A WordPress installation Installing Waves Theme - Download the theme zip file from - Log in to the WordPress Dashboard - Go to Dashboard => Appearance => Themes and click on Add New - Click on Upload Theme - Click on Choose File - Browse and select the theme zip - Click on Install Now - Click on the Activate link Setting Up the Home Page - Go to Dashboard => Pages => Add New - Create 2 pages, e.g. Home Page and Blog - Go to Dashboard => Settings => Reading - Select the 'A static page' option - Select 'Home Page' from the Front Page drop-down menu - Select 'Blog' from the Posts Page drop-down menu - Save Changes - Next Step: Continue to Set Up the Home Page
http://docs.webulous.in/waves-free/install.php
2018-06-17T21:38:38
CC-MAIN-2018-26
1529267859817.15
[]
docs.webulous.in
NiFi Processor Guide¶ ImportSqoop Processor¶ About¶ The ImportSqoop processor allows loading data from a relational system into HDFS. This document discusses the setup required to use this processor. Starter template¶ A starter template for using the processor is provided at: samples/templates/nifi-1.0/template-starter-sqoop-import.xml Configuration¶ For use with Kylo UI, configure values for the two properties (nifi.service.<controller service name in NiFi>.password, config.sqoop.hdfs.ingest.root) in the below section in the properties file: /opt/kylo/kylo-services/conf/application.properties ### Sqoop import # DB Connection password (format: nifi.service.<controller service name in NiFi>.password=<password> #nifi.service.sqoop-mysql-connection.password=hadoop # Base HDFS landing directory #config.sqoop.hdfs.ingest.root=/sqoopimport Note The DB Connection password section will have the name of the key derived from the controller service name in NiFi. In the above snippet, the controller service name is called sqoop-mysql-connection. Drivers¶ Sqoop requires the JDBC drivers for the specific database server in order to transfer data. The processor has been tested on MySQL, Oracle, Teradata and SQL Server databases, using Sqoop v1.4.6. The drivers need to be downloaded, and the .jar files must be copied over to Sqoop’s /lib directory. As an example, consider that the MySQL driver is downloaded and available in a file named: mysql-connector-java.jar. This would have to be copied over into Sqoop’s /lib directory, which may be in a location similar to: /usr/hdp/current/sqoop-client/lib. The driver class can then be referred to in the property Source Driver in StandardSqoopConnectionService controller service configuration. For example: com.mysql.jdbc.Driver. Tip Avoid providing the driver class name in the controller service configuration. Sqoop will try to infer the best connector and driver for the transfer on the basis of the Source Connection String property configured for StandardSqoopConnectionService controller service. Passwords¶ The processor’s connection controller service allows three modes of providing the password: - Entered as clear text - Entered as encrypted text - Encrypted text in a file on HDFS For modes (2) and (3), which allow encrypted passwords to be used, details are provided below: Encrypt the password by providing the: - Password to encrypt - Passphrase - Location to write encrypted file to The following command can be used to generate the encrypted password: /opt/kylo/bin/encryptSqoopPassword.sh The above utility will output a base64 encoded encrypted password, which can be entered directly in the controller service configuration via the SourcePassword and Source Password Passphrase properties (mode 2). The above utility will also output a file on disk that contains the encrypted password. This can be used with mode 3 as described below: Say, the file containing encrypted password is named: /user/home/sec-pwd.enc. Put this file in HDFS and secure it by restricting permissions to be only read by nifi user. Provide the file location and passphrase via the Source Password File and Source Password Passphrase properties in the StandardSqoopConnectionService controller service configuration. During the processor execution, password will be decrypted for modes 2 and 3, and used for connecting to the source system. 
TriggerFeed¶ Trigger Feed Overview¶ In Kylo, the TriggerFeed Processor allows feeds to be configured in such a way that a feed depending upon other feeds is automatically triggered when the dependent feed(s) complete successfully. Obtaining the Dependent Feed Execution Context¶ To get dependent feed execution context data, specify the keys that you are looking for. This is done through the “Matching Execution Context Keys” property. The dependent feed execution context will only be populated the specified matching keys. For example: Feed_A runs and has the following attributes in the flow-file as it runs: -property.name = "first name" -property.age=23 -feedts=1478283486860 -another.property= "test" Feed_B depends on Feed A and has a Trigger Feed that has “Matching Execution Context Keys” set to “property”. It will then get the ExecutionContext for Feed A populated with 2 properties: "Feed_A":{property.name:"first name", property.age:23} Trigger Feed JSON Payload¶ The FlowFile content of the Trigger feed includes a JSON string of the following structure: { "feedName":"string", "feedId":"string", "dependentFeedNames":[ "string" ], "feedJobExecutionContexts":{ }, "latestFeedJobExecutionContext":{ } } JSON structure with a field description: { "feedName":"<THE NAME OF THIS FEED>", "feedId":"<THE UUID OF THIS FEED>", "dependentFeedNames":[<array of the dependent feed names], "feedJobExecutionContexts":{<dependent_feed_name>:[ { "jobExecutionId":<Long ops mgr job id>, "startTime":<millis>, "endTime":<millis>, "executionContext":{ <key,value> matching any of the keys defined as being "exported" in this trigger feed } } ] }, "latestFeedJobExecutionContext":{ <dependent_feed_name>:{ "jobExecutionId":<Long ops mgr job id>, "startTime":<millis>, "endTime":<millis>, "executionContext":{ <key,value> matching any of the keys defined as being "exported" in this trigger feed } } } } Example JSON for a Feed: { "feedName":"companies.check_test", "feedId":"b4ed909e-8e46-4bb2-965c-7788beabf20d", "dependentFeedNames":[ "companies.company_data" ], "feedJobExecutionContexts":{ "companies.company_data":[ { "jobExecutionId":21342, "startTime":1478275338000, "endTime":1478275500000, "executionContext":{ } } ] }, "latestFeedJobExecutionContext":{ "companies.company_data":{ "jobExecutionId":21342, "startTime":1478275338000, "endTime":1478275500000, "executionContext":{ } } } } Example Flow¶ The screenshot shown here is an example of a flow in which the inspection of the payload triggers dependent feed data. The EvaluateJSONPath processor is used to extract JSON content from the flow file. Refer to the Data Confidence Invalid Records flow for an example: High-Water Mark Processors¶ The high-water mark processors are used to manage one or more high-water marks for a feed. High-water marks support incremental batch processing by storing the highest value of an increasing field in the source records (such as a timestamp or record number) so that subsequent batches can pick up where the previous one left off. The water mark processors have two roles: - To load the current value of a water mark of a feed as a flow file attribute, and to later commit (or rollback on error) the latest value of that attribute as the new water mark value - To bound a section of a flow so that only one flow file at a time is allowed to process data for the latest water mark value There are two water mark processors: LoadHighWaterMark and ReleaseHighWaterMark. 
The section of a NiFi flow where a water mark becomes active is starts when a flow file passes through a LoadHighWaterMark processor and ends when it passes through a ReleaseHighWaterMark. After a flow file passes through a LoadHighWaterMark processor there must be a ReleaseHighWaterMark present to release that water mark somewhere along every possible subsequent route in the flow. LoadHighWaterMark Processor¶ This processor is used, when a flow files is created by it or passes through it, to load the value of a single high-water mark for the feed and to store that value in a particular attribute in the flow file. It also marks that water mark as active; preventing other flow files from passing through this processor until the active water mark is released (committed or rolled back.) It is up to other processors in the flow to make use of the water mark value stored in the flow file and to update it to some new high-water value as data is successfully processed. Processor Properties: Processor Relationships: ReleaseHighWaterMark Processor¶ This processor is used to either commit or reject the latest high-water value of a water mark (or the values of all water marks) for a feed, and to release that water mark so that other flow files can activate it and make use of the latest high-water value in their incremental processing. Since other flow files are blocked from entering the section of the flow while the current flow file is using the active water mark, it is very important to make sure that ever possible path a flow may take after passing through a LoadHighWaterMark processor also passes through a ReleaseHighWaterMark processor. For the successful path it should pass through a ReleaseHighWaterMark processor in Commit mode, and any failure paths should pass through ReleaseHighWaterMark processor in Reject mode. It is also necessary for some processor in the flow to have updated the water mark attribute value in the flow file to the latest high-water value reached during data processing. Whatever that value happens to be is written to the feed’s metadata when it is committed by ReleaseHighWaterMark. Processor Properties: Processor Relationships: Example¶ Say you have a feed that will wake up periodically and process any new records in a data source that have arrived since it last ran based a timestamp column marking when each record was created. This feed can make use of the high-water mark processors to accomplish this task. A successful flow of the feed would perform the following steps: - The flow might start with LoadHighWaterMark processor scheduled to periodically load the feed’s latest water mark timestamp value, store that value in a new flow file, and set the water mark to the active state - Subsequent processors will query the data source for all records with a timestamp that is greater than the water mark value in the flow file and process those records - A processor (such as UpdateAttribute) will reset the water mark flow file attribute to the highest timestamp value found in the records that were processed - A ReleaseHighWaterMark processor which will commit the updated water mark attribute value as the new high-water mark in the feed’s metadata and release the active state of the water mark If at step #1 the LoadHighWaterMark processor sees that the water mark is already active for a prior flow file then processing is delayed by yielding the processor. 
If processing failure occurs anytime after step #1 then the flow would route through a different ReleaseHighWaterMark processor configured to reject any updates to the water mark attribute and simply release the active state of the water mark.
http://kylo.readthedocs.io/en/latest/how-to-guides/NiFiProcessorsDocs.html
2018-06-17T22:19:51
CC-MAIN-2018-26
1529267859817.15
[]
kylo.readthedocs.io
Test your Windows app for Windows 10 in S mode You can test your Windows app to ensure that it will operate correctly on devices that run Windows 10 in S mode. In fact, if you plan to publish your app to the Microsoft Store, you must do this because it is a store requirement. To test your app, you can apply a Device Guard Code Integrity policy on a device that is running Windows 10 Pro. Note The device on which you apply the Device Guard Code Integrity policy must be running Windows 10 Creators Edition (10.0; Build 15063) or later. The Device Guard Code Integrity policy enforces the rules that apps must conform to in order to run on Windows 10 S. Important We recommend that you apply these policies to a virtual machine, but if you want to apply them to your local machine, make sure to review our best practice guidance in the "Next, install the policy and restart your system" section of this topic before you apply a policy. First, download the policies and then choose one Download the Device Guard Code Integrity policies here. Then, choose the one that makes the most sense to you. Here's summary of each policy. We recommend that you start with audit mode policy. You can review the Code Integrity Event Logs and use that information to help you make adjustments to your app. Then, apply the Production mode policy when you're ready for final testing. Here’s a bit more information about each policy. Audit mode policy With this mode, your app runs even if it performs tasks that aren’t supported on Windows 10 S. Windows logs any executables that would have been blocked into the Code Integrity Event Logs. You can find those logs by opening the Event Viewer, and then browsing to this location: Application and Services Logs->Microsoft->Windows->CodeIntegrity->Operational. This mode is safe and it won't prevent your system from starting. (Optional) Find specific failure points in the call stack To find specific points in the call stack where blocking issues occur, add this registry key, and then set up a kernel-mode debugging environment. Production mode policy This policy enforces code integrity rules that match Windows 10 S so that you can simulate running on Windows 10 S. This is the strictest policy, and it is great for final production testing. In this mode, your app is subject to the same restrictions as it would be subject to on a user's device. To use this mode, your app must be signed by the Microsoft Store. Production mode policy with self-signed apps This mode is similar to the Production mode policy, but it also allows things to run that are signed with the test certificate that is included in the zip file. Install the PFX file that is included in the AppxTestRootAgency folder of this zip file. Then, sign your app with it. That way, you can quickly iterate without requiring Store signing. Because the publisher name of your certificate must match the publisher name of your app, you'll have to temporarily change the value of the Identity element's Publisher attribute to "CN=Appx Test Root Agency Ex". You can change that attribute back to it's original value after you've completed your tests. Next, install the policy and restart your system We recommend that you apply these policies to a virtual machine because these policies might lead to boot failures. That's because these policies block the execution of code that isn't signed by the Microsoft Store, including drivers. If you want to apply these policies to your local machine, it's best to start with the Audit mode policy. 
With this policy, you can review the Code Integrity Event Logs to ensure that nothing critical would be blocked in an enforced policy. When you're ready to apply a policy, find the .P7B file for the policy that you chose, rename it to SIPolicy.P7B, and then save that file to this location on your system: C:\Windows\System32\CodeIntegrity\. Then, restart your system. Note To remove a policy from your system, delete the .P7B file and then restart your system. Next steps Find answers to your questions Have questions? Ask us on Stack Overflow. Our team monitors these tags. You can also ask us here. Give feedback or make feature suggestions Review a detailed blog article that was posted by our App Consult Team See Porting and testing your classic desktop applications on Windows 10 S with the Desktop Bridge. Learn about tools that make it easier to test for Windows in S Mode See Unpackage, modify, repackage, sign an APPX.
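A hedged sketch of the two manual steps described above, installing a policy and signing with the test certificate. The policy file name, package name, PFX file name, and password are placeholders, not values taken from the downloaded zip.

```powershell
# Install the chosen policy and restart (run from an elevated PowerShell prompt)
Copy-Item .\ProductionMode.p7b -Destination C:\Windows\System32\CodeIntegrity\SIPolicy.P7B
Restart-Computer

# Sign your package with the Appx Test Root Agency certificate (self-signed testing mode)
signtool sign /fd SHA256 /f .\AppxTestRootAgency\AppxTestRootAgency.pfx /p <password> .\MyApp.appx
```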
https://docs.microsoft.com/en-us/windows/uwp/porting/desktop-to-uwp-test-windows-s
2018-06-17T21:38:58
CC-MAIN-2018-26
1529267859817.15
[array(['images/desktop-to-uwp/code-integrity-logs.png', 'code-integrity-event-logs'], dtype=object) array(['images/desktop-to-uwp/ci-debug-setting.png', 'reg-setting'], dtype=object) ]
docs.microsoft.com
blockade¶ Blockade is a utility for testing network failures and partitions in distributed applications. Blockade uses Docker containers to run application processes and manages the network from the host system to create various failure scenarios. A common use is to run a distributed application such as a database or cluster and create network partitions, then observe the behavior of the nodes. For example in a leader election system, you could partition the leader away from the other nodes and ensure that the leader steps down and that another node emerges as leader. Blockade features: - A flexible YAML format to describe the containers in your application - Support for dependencies between containers, using named links - A CLI tool for managing and querying the status of your blockade - Creation of arbitrary partitions between containers - Giving a container a flaky network connection to others (drop packets) - Giving a container a slow network connection to others (latency) - While under partition or network failure control, containers can freely communicate with the host system – so you can still grab logs and monitor the application. Blockade is written and maintained by the Dell Cloud Manager (formerly Enstratius) team and is used internally to test the behaviors of our software. We also release a number of other internal components as open source, most notably Dasein Cloud. Blockade is inspired by the excellent Jepsen article series. Reference Documentation¶ Development and Support¶ Blockade is available on github. Bug reports should be reported as issues there.
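A minimal CLI session sketch; the container names c1, c2, and c3 are illustrative and assume a blockade.yml that defines them.

```bash
blockade up                      # start the containers defined in blockade.yml
blockade status                  # show container state and current partitions
blockade partition c1            # isolate container c1 from the others
blockade flaky c2                # make c2 drop packets
blockade slow c3                 # add latency to c3's network connection
blockade join                    # heal all partitions
blockade destroy                 # tear everything down
```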
http://blockade.readthedocs.io/en/0.1.1/
2018-06-17T22:22:20
CC-MAIN-2018-26
1529267859817.15
[]
blockade.readthedocs.io
Deletewppack: Stsadm operation (Windows SharePoint Services) Applies To: Windows SharePoint Services 3.0 Topic Last Modified: 2007-04-23 Operation name: Deletewppack Description Removes the Web Parts in a Web Part package from a virtual server. When you delete the last instance of a Web Part package on a server or server farm, the Stsadm command-line tool also deletes the Web Part package from the configuration database. Syntax stsadm -o deletewppack -name <name> [-lcid] <language> [-url] <URL name>
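A hypothetical invocation, assuming a Web Part package named MyWebParts.cab deployed to a virtual server at http://contoso-server (both names are illustrative):

```
stsadm -o deletewppack -name MyWebParts.cab -url http://contoso-server
```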
https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-2007-products-and-technologies/cc288646(v=office.12)
2018-06-17T22:31:13
CC-MAIN-2018-26
1529267859817.15
[]
docs.microsoft.com
Difference between revisions of "Joomla Installation Resources" From Joomla! Documentation Latest revision as of 10:04, 29 April 2013.4 (Current Short Term Support Version 3.4) - Joomla! 3.x Upgrade Instructions - Joomla! 2.5 Upgrade Instructions - Security
https://docs.joomla.org/index.php?title=Joomla_Installation_Resources&diff=86025&oldid=36291
2016-02-06T04:09:09
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
Revision history of "JDatabaseMySQL::test/1.6" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 05:23, 3 May 2013 Wilsonge (Talk | contribs) deleted page JDatabaseMySQL::test/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDatabaseMySQL::test== ===Description=== Test to see if the MySQL connector is available. {{Description:JDatabaseMySQL::test}} <span cl..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JDatabaseMySQL::test/1.6&action=history
2016-02-06T03:31:48
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
Difference between revisions of "Installing Joomla! 1.7" From Joomla! Documentation Redirect page Revision as of 18:03, 13 August 2013 (view source)Nwheeler (Talk | contribs) (redirect needed after all -- it's in official text documentation distributed with the source.) Latest revision as of 18:04, 13 August 2013 (view source) Nwheeler (Talk | contribs) (Redirected page to Installing Joomla) Line 1: Line 1: −#REDIRECT [[Installing_Joomla!]]+#REDIRECT [[Installing_Joomla]] Latest revision as of 18:04, 13 August 2013 Installing Joomla Retrieved from ‘’
https://docs.joomla.org/index.php?title=Installing_Joomla!_1.7&diff=prev&oldid=102270
2016-02-06T03:32:19
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
Get Count of API Resources for Company App getApiresourceCountApp GET Get Count of API Resources for Company App Gets the count of API resources for a company app. The API resources are aggregated across all API products with which the company app is associated. In other words, this call returns the total number of API resources (URIs) that a company app is able to access. Resource URL /organizations/{org_name}/companies/{company_name}/apps/{app_name}?)
http://docs.apigee.com/management/apis/get/organizations/%7Borg_name%7D/companies/%7Bcompany_name%7D/apps/%7Bapp_name%7D?rate=_8iWoGvEp8NCqVHuV8y-jUeZ3pGdat4EjnuO-rfZsHg
2016-02-06T03:23:13
CC-MAIN-2016-07
1454701145751.1
[]
docs.apigee.com
JTableUsergroup::store From Joomla! Documentation Revision as of 20:44. JTableUsergroup::store Description Inserts a new row if the id is zero, or updates an existing row in the database table. public function store ($updateNulls=false) - Returns boolean: True if successful, false otherwise (an internal error message is set) - Defined on line 130 of libraries/joomla/database/table/usergroup.php - Since See also JTableUsergroup::store source code on BitBucket Class JTableUsergroup Subpackage Database - Other versions of JTableUsergroup::store User contributed notes
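A brief usage sketch. The load/store/getError calls are standard JTable methods; the group id and title used here are illustrative and not taken from this page.

```php
<?php
// Get a usergroup table instance and save a modified row.
$table = JTable::getInstance('Usergroup');
$table->load(8);               // load an existing group by id (id is illustrative)
$table->title = 'Editors';     // change a field
if (!$table->store()) {
    echo $table->getError();   // the internal error message set on failure
}
```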
https://docs.joomla.org/index.php?title=API17:JTableUsergroup::store&oldid=97058
2016-02-06T03:39:21
CC-MAIN-2016-07
1454701145751.1
[]
docs.joomla.org
Help System Blender has a range of built-in and web-based help options. Tooltips Tooltip of the Renderer selector in the Info Editor. When hovering the mouse cursor over a button or setting, a tooltip appears after a short delay. Elements The context-sensitive Tooltip might contain some of these elements: - Short Description Related details depending on the control. - Shortcut A keyboard or mouse shortcut associated with the tool. - Value The value of the property. - Library Source file of the active object. See also Linked Libraries. - Disabled (red) The reason why the value is not editable. - Python When Python Tooltips are enabled, a Python expression is displayed for scripting (usually an operator or property). Context-Sensitive Manual Access Reference - Mode: All modes - Menu - Shortcut: F1 You may want to access help for a tool or area from within Blender. Use the keyboard shortcut or context menu item to visit pages of this reference manual from within Blender. This opens a web page relating to the button under the cursor, supporting both tool and value buttons. Note We do not currently have 100% coverage. You may see an alert in the info header if a tool does not have a link to the manual. Other times, buttons may link to more general sections of the documentation.
https://docs.blender.org/manual/zh-hant/dev/getting_started/help.html
2022-09-24T22:49:37
CC-MAIN-2022-40
1664030333541.98
[array(['../_images/getting-started_help_tooltip.png', '../_images/getting-started_help_tooltip.png'], dtype=object)]
docs.blender.org
Administering Hosts This chapter describes creating, registering, administering, and removing hosts. Creating a Host in orcharhino Use this procedure to create a host in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure. In the orcharhino management orcharhino. For more information, see Adding Network Interfaces. "My_Host_Name" \ --hostgroup "My_Host_Group" \ --interface="primary=true, \ provision=true, \ mac=mac_address, \ ip=ip_address" \ --organization "My_Organization" \ --location . Cloning Hosts You can clone existing hosts. In the orcharhino management UI, navigate to Hosts > All Hosts. In the Actions menu, click Clone. On the Host tab, ensure to provide a Name different from the original host. On the Interfaces tab, ensure to provide a different IP address. Click Submit to clone the host. For more information, see Creating a Host. orcharhino management(Amazon Amazon orcharhino management" \ --architecture "My_Architecture" \ --content-source-id _My_Content_Source_ID_ \ --content-view "_My_Content_View_" \ --domain "_My_Domain_" \ --lifecycle-environment "_My_Lifecycle_Environment_" \ --locations "_My_Location_" \ --medium-id _My_Installation_Medium_ID_ \ --operatingsystem "_My_Operating_System_" \ --organizations "_My_Organization_" \ --partition-table "_My_Partition_Table_" \ --puppet-ca-proxy-id _My_Puppet_CA_Proxy_ID_ \ --puppet-environment "_My_Puppet_Environment_" \ --puppet-proxy-id _My_Puppet_Proxy_ID_ \ --root-pass "My_Password" \ --subnet "_My_Subnet_" Creating a Host Group for Each Lifecycle Environment Use this procedure to create a host group for the Library lifecycle environment and add nested host groups for other lifecycle environments. To create a host group for each life cycle environment, run the following Bash script: MAJOR="My_Major_OS_Version" ARCH="My_Architecture" ORG="My_Organization" LOCATIONS="My_Location" PTABLE_NAME="My_Partition_Table" DOMAIN="My_Domain" Changing the Host Group of a Host Use this procedure to change the Host Group of a host. In the orcharhino management UI, navigate to Hosts > All hosts. Select the check box of the host you want to change. From the Select Action list, select Change Group. A new option window opens. From the Host Group list, select the group that you want for your host. Click Submit. Changing the Environment of a Host Use this procedure to change the environment of a host. In the orcharhino management UI, navigate to Hosts > All hosts. Select the check box of the host you want to change. From the Select Action list, select Change Environment. A new option window opens. From the Environment list, select the new environment for your host. Click Submit. Changing the Managed Status of a Host Hosts provisioned by orcharhino are Managed by default. When a host is set to Managed, you can configure additional host parameters from orcharhino orcharhino, set the host to Unmanaged. In the orcharhino management UI, navigate to Hosts > All hosts. Select the host. Click Edit. Click Manage host or Unmanage host to change the host’s status. Click Submit. Assigning a Host to a Specific Organization Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in the Managing Organizations and Locations in orcharhino Guide. In the orcharhino management. Assigning a Host to a Specific Location Use this procedure to assign a host to a specific location. 
For general information about locations and how to configure them, see Creating a Location in the Content Management Guide. In the orcharhino management. Removing a Host from orcharhino Use this procedure to remove a host from orcharhino. In the orcharhino management UI, navigate to Hosts > All hosts or Hosts > Content Hosts. Note that there is no difference from what page you remove a host, from All hosts or Content Hosts. In both cases, orcharhino removes a host completely. Select the hosts that you want to remove. From the Select Action list, select Delete Hosts. Click Submit to remove the host from orcharhino permanently. Creating Snapshots of a Managed Host You can use the Foreman Snapshot Management plug-in to create snapshots of managed hosts. You have installed the Foreman Snapshot Management plug-in successfully. Your managed host is running on VMware vSphere or Proxmox. In the orcharhino management UI, navigate to Hosts > All Hosts and select a host. On the Snapshots tab, click Create Snapshot. Enter a Name. Optional: Enter a Description. Select the Include RAM checkbox if you want to include the RAM in your snapshot. Click Submit to create a snapshot. Disassociating A Virtual Machine from orcharhino without Removing It from a Hypervisor In the orcharhino management UI, navigate to Hosts > All Hosts and select the check box to the left of the hosts to be disassociated. From the Select Action list, select the Disassociate Hosts button. Optional: Select the check box to keep the hosts for future action. Click Submit.
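The hammer CLI fragments above lost their leading command during extraction. For reference, a host-creation call normally begins with hammer host create, as in the sketch below. All option values are placeholders; check hammer host create --help on your orcharhino server for the authoritative option list.

```bash
hammer host create \
  --name "My_Host_Name" \
  --hostgroup "My_Host_Group" \
  --interface "primary=true, provision=true, mac=My_MAC, ip=My_IP" \
  --organization "My_Organization" \
  --location "My_Location"
```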
https://docs.orcharhino.com/or/docs/sources/guides/amazon_linux/managing_hosts/administering_hosts.html
2022-09-24T23:49:30
CC-MAIN-2022-40
1664030333541.98
[]
docs.orcharhino.com
Security Advisory - Cross-Site Scripting (XSS) First published on April 9, 2021. Last edited on April 9, 2021. A recent automated security scan of the File Fabric detected a potential XSS issue in v1906. Although we are not aware of any real-world exploits taking advantage of this issue, as is our standard practice we are issuing this advisory with the recommended resolution. Please note that this issue does not exist in versions of the File Fabric later than v1906.09. Recommended Resolution Please follow these instructions to upgrade your File Fabric to the latest version, v2006.03. Alternative Resolution Although we strongly recommend upgrading to v2006.03 as described above, customers using v1906.09 also have the option of applying a patch to resolve the issue. The patch can be downloaded here, and instructions for applying a patch are here. If you are running v1906 with a service pack older than 1906.09, you should first upgrade to v1906.09 by following these instructions, and then apply the patch as described above. Please contact us at [email protected] if you have any questions.
https://docs.storagemadeeasy.com/advisories/security/advisory/1906_xss
2022-09-24T23:04:18
CC-MAIN-2022-40
1664030333541.98
[]
docs.storagemadeeasy.com
Table of Contents Amazon S3 Storage Last updated on November 16, 2021. The service aims to maximize the benefits of scale and to pass those benefits on to developers. The File Fabric enables easy access, management, use of Amazon S3 storage to anyone, not just developers. The AWS GovCloud (US) is also supported. When adding the provider choose Amazon S3 GovCloud US Non-FIPS or FIPS. See also Using Glacier and Glacier Deep Archive Storage. See also Adding an S3 Compatible Cloud Provider 1 Adding the S3 Cloud You can choose to add the the Amazon S3 Service to the File Fabric by first navigating to your Cloud Dashboard Menu>My Dashboard tab and then choosing the Add new Provider and following the wizard therein. 2 Obtaining your Amazon Credentials Login to your Amazon Web Services Account. From the Dashboard view click on the account link and then click on the security credentials link. It is from here that you will be able to obtain the relevant keys needed to connect your Amazon S3 Account with the File Fabric. 3 Entering your Amazon Details From the Wizard that is launched when clicking the “add new provider” from the DashBoard enter the Amazon S3 keys that you retrieved from the prior step and click 'continue'. If you have a problem authenticating consider re-generating your secret key from your Amazon Web Services Account. If your account has access to the File Fabric's Content Search feature and you want the contents of the files stored on this provider to be indexed, tick the box labelled “Index content for search”. Advanced S3 Options The advanced option section can be exposed or hidden by clicking on “Advanced options”. The “Amazon S3 Bucket name” and “Amazon S3 Bucket location” settings are provided for the case where your S3 credentials do not allow you to list the buckets in the S3 account you are using, but give you access to one of the buckets in that account. Enter the bucket name and region here. The “Enable Customer-Provided Encryption Keys (SSE-C)” provides access to S3's SSE-C option. If you select this option then you will have to provide an encryption phrase which SSE-C will use to encrypt the files that are uploaded to this provider through the File Fabric. - Note that if SSE-C is enabled on the File Fabric it is assumed that all files in the bucket are SSE-C encrypted and the File Fabric will not be able to ready objects they are not SSE-C encrypted. 4 Selecting Buckets After you enter your authentication details and these are accepted the File Fabric will discover any S3 buckets that are available. You can choose which buckets you wish to add to your Account, and which will be the default bucket. As part of this process you can choose to create a new default bucket if you wish, and also choose the reason. Any buckets you choose not to index / sync will not be available to be worked with and you would need to go back to the S3 settings from the DashBoard to add them to your account. This is also the case with any new buckets you add directly from S3. If S3 is selected as the primary provider for the File Fabric then the default bucket will be used for interactions with Smart folders. 5 Indexing / Syncing file information After your authentication has been verified and you have chosen the buckets to work with you will be required to sync your meta data which enables the File Fabric to interact with your files. 
You can choose to do this directly in the browser in which case you need to keep the browser open until it completes, or you can choose to have the service do this server-side. You will be still notified in the browser but if you close the browser the meta-sync will continue. 6 Post Sync report When the sync / index has completed you can choose to access the report of what was indexed. At this point your files can be accessed and managed directly from the Cloud File Manager and also the different desktop and mobile access clients. 7 Amazon S3 Settings The S3 settings can be accessed from navigating to the 'Dashboard→Amazon S3 Settings link. From here you can: - Choose to totally remove the provider which removes all metadata associated with your account from S3. - Enable Amazon S3 server-side encryption for your files - Add / Manage buckets - Activate Reduced Redundancy options for buckets. You can also choose to resync the meta-data of the provider Changing Access Keys You can also enter both the new Access Key and Secret Key here if you are rotating your Access Keys. The keys should be in the same AWS account with access to the same buckets. Direct Download Enabling Direct Download allows client applications and share links to download objects directly from the AWS servers (via signed URLs) rather than through the File Fabric. Direct download is not supported when Customer-Provided Encryption Keys (SSE-C) are enabled, nor for files stored in Glacier or Glacier Deep Archive. 8 Limiting Access (Optional) To restrict the account's access to only required S3 operations and resources create an custom IAM policy and add to the Amazon user for the account. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:ListAllMyBuckets", "s3:AbortMultipartUpload", "s3:RestoreObject", "s3:ListBucket", "s3:DeleteObject", "s3:GetBucketLocation", "s3:DeleteBucket" ], "Resource": "*" } ] } To restrict the account's access to a specific bucket, you could create a policy like this: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListBucket" ], "Resource":"arn:aws:s3:::thisbucketonly" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:RestoreObject" ], "Resource": "arn:aws:s3:::thisbucketonly/*" }, { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" } ] } 9 Working with S3 files When you work with Amazon S3 files you can choose to do so from the web file manager or any of the access clients Apps. When you first import the meta-data from Amazon S3 you will be asked to set a default bucket. A default bucket is what is used if you add data to any smart folders such as 'My Syncs' etc. When you add data to these folders then the data actually resides on your default S3 bucket that you chose at setup. For example if you added data to the smart folder 'My Syncs' then in your default S3 bucket there would be a folder created called 'My Syncs' where this information would reside. Similarly if you create something in the root of “My Amazon S3 files” or whatever else you choose to call this we will automatically try to create a bucket, but if the bucket name is taken the default behaviour is to create a folder, and this folder would reside in your default bucket. If you want precise control over only adding a new bucket then navigate to your S3 Settings from your DashBoard and add a new bucket from here. 
Any folders/files that you create in normal buckets are stored directly within the buckets on S3. The rules above only apply when using smart folders, which you can choose to use or not to use. Also we don't do anything to your files, such as other S3 provider can do ie. we don't rename them or apply an encoding to the file name etc. Your files are stored with the same name and format as you upload them We also keep additional meta-data as compared to S3. An example of this is the local timestamp. If you upload files to S3 using our desktop sync tools then we are able to keep the local timestamp which direct uploads to S3 do not. For objects uploaded to S3 the metadata property Content-Type is added based on the file extension. Provider Requirements Restrictions The Amazon S3 provider doesn't impose limits such as number of buckets, object size, number of parts (for multi-part upload) and length of object keys except where the S3 API is also limited. Bucket names are also not restricted with the exception that bucket names with dots (periods) are not supported due to security issues with virtual-host-style addressing over HTTPS. If S3 restricts an operation, and an error is returned to the File Fabric, an error will be returned to the client application. Rate Limiting Occasionally Amazon S3 may limit the rate at which it processes requests. This page: File Fabric Handling of Rate-Limiting Storage Providers explains how the File Fabric responds to rate limiting.
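The Direct Download option described above works by handing clients time-limited signed URLs. Outside the File Fabric you can generate the same kind of URL yourself with the AWS SDK; a short boto3 sketch follows, with bucket and key names as placeholders.

```python
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/2021/summary.pdf"},
    ExpiresIn=3600,  # seconds the link stays valid
)
print(url)
```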
https://docs.storagemadeeasy.com/cloudproviders/amazons3
2022-09-24T22:14:36
CC-MAIN-2022-40
1664030333541.98
[]
docs.storagemadeeasy.com
Hardware Integration: IG-1-IMX-5 The Inertial Sense IG-1 is a PCB module with IMX-5 and dual ublox ZED-F9P multi-frequency GNSS receivers. - Surface mount reflowable. - Onboard dual GNSS for simultaneous RTK positioning and GPS compassing. - Micro USB and 14 pin I/O header for convenient evaluation. Pinout Module Pinout Header H1 Pinout The module and header H1 have the same pinout assignment for pins 1-14. All pins 15 and above are only on the module. Soldering The IMX-5 can be reflow soldered. Reflow information can be found in the Reflow Information page of this manual. Hardware Design Recommended PCB Footprint and Layout The default forward direction is indicated in the PCB footprint figure.
http://docs.inertialsense.com/user-manual/hardware/IG1/
2022-09-24T22:48:55
CC-MAIN-2022-40
1664030333541.98
[array(['../../images/ig-1.1-g2.png', 'uINS_rugged_thumb'], dtype=object) array(['../images/ig-1.1_h1_pinout.png', 'IG1 H1 Pinout'], dtype=object)]
docs.inertialsense.com
Contents - Is there an app for Destiny? - Does Destiny 2 have an app? - Can you play Destiny on phone? - What is the Destiny 2 Companion app? - Can you get bounties from the Destiny app? - What is the best app for Destiny 2? - Does Destiny have Crossplay? - Can you play Destiny 2 without spending money? - What happened to the Destiny 1 companion app? - Is the destiny 2 companion app good? - Is Destiny 2 Vault shared? - Can PS4 and ps3 play Destiny 2 together? - Can you enjoy Destiny 2 solo? - Is there a vault app for Destiny 1? - Can you get bounties from Destiny 2 app? The official Destiny 2 Companion app keeps you connected to your Destiny adventure wherever life takes you. Sign in using PlayStation Network, Xbox Live, Steam, and Stadia. DIRECTOR - See the latest featured content. Track your progress towards bounties, quests, and challenges.
http://docs.promisepay.com/zinotizum84506.html
2022-09-24T23:25:13
CC-MAIN-2022-40
1664030333541.98
[]
docs.promisepay.com
Receiving data When planning a task in Dime.Scheduler, an appointment is created. The appointment is immediately sent back to the CRM web services. For every change to an appointment in Dime.Scheduler, whether a new appointment is created or an existing appointment is modified or deleted, a new DS appointment entity is created. For each resource linked to an appointment – Dime.Scheduler supports multiple resources linked to a single appointment – a new DS appointment resource entity is created and linked to this DS appointment. You can follow up on these operations in the DS appointments view. Go to: Dime.Studio -> DS Appointments The "DS appointment" record and its linked "DS Appointment Resource" records are processed by the ProcessDsAppointment.cs plugin. The source type that was sent with the task (to which the appointment belongs) and which is returned with the appointment is then used to determine which 'Processing Actions' need to be taken to further process the appointment’s data into CRM. This offers all the flexibility to handle planning or scheduling data in standard entities, custom entities or entities that are part of a vertical solution. This also guarantees that the appointment data is processed entirely through the business logic of CRM. Developing a Receive method To create your own processing actions, add a new partial class in the "Custom > ReceiveMethods" folder of the Visual Studio Solution. Each processing method should be contained in a partial class of the DsAppointmentHandler class. You can also group multiple processing methods into a single C# file. Each processing method should always be created without parameters. You can access the DS Appointment and the DS Appointment Resources from within the Processing Method by using the DsAppointment and DsResources properties of the DsAppointmentHandler instance. - Make sure your class is a partial DsAppointmentHandler class - Make sure the method name is unique and maps to a DS Source Type Processing Action in CRM - Execute any logic based on the DsAppointment variable’s values. Example: see the sketch below. Each DsAppointmentHandler instance contains the following objects: Calling a Receive-method Receive methods are automatically called whenever a DS Appointment record needs to be processed. Processing occurs whenever the status of the record is set to New, or when manual processing is triggered from CRM. Manual Processing You can manually call the processing triggers by opening the DS Appointment record in CRM. Any error that occurred during its processing will be visible. By using the 'Process' button in the command bar, the DS appointment will be re-processed. The appointment table:
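The original example code, the object listing, and the appointment table on this page were images and did not survive extraction. The following is a minimal sketch of what a receive method could look like, based only on the rules stated above; the file name, method name, and everything inside the method body are assumptions for illustration.

```csharp
// Custom/ReceiveMethods/ProcessServiceOrders.cs (hypothetical file and method names)
public partial class DsAppointmentHandler
{
    // The method takes no parameters, and its name must map to a
    // DS Source Type Processing Action configured in CRM.
    public void ProcessServiceOrders()
    {
        // DsAppointment holds the incoming DS Appointment record;
        // DsResources holds the linked DS Appointment Resource records.
        var appointment = this.DsAppointment;

        foreach (var resource in this.DsResources)
        {
            // Create or update the target CRM entities here
            // (standard, custom, or vertical-solution entities), so the
            // data flows through CRM's own business logic.
        }
    }
}
```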
https://docs.dimescheduler.com/docs/en/developer-manual/crm/crm-receivedata
Appliance Backups

General Considerations

- File Fabric Appliance administrators should implement backup strategies that are consistent with their organisations' Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
- Both the Web server component of the File Fabric Appliance and the database - which may be internal or external to the Web server VM - should be backed up.

Appliance Backup

Because the contents of an Appliance that does not contain a database (perhaps better termed a "Web server") change infrequently, RPO is generally a less significant driver for appliance backups than is RTO.

A backup should be made of an Appliance with an external database when the Appliance-level configuration has changed or when the branding has been revised. A static snapshot will work well for this purpose and will allow the Appliance to be recovered to a recent state quickly in the event of VM failure.

An alternative approach is to plan to install a new Appliance in the event of VM failure. The Appliance administrator's Backup feature can be used to create backups of the files that customise the Appliance. These would be copied onto the newly created Appliance to complete the recovery.

When several Appliances with identical configurations are used in an HA configuration, making Appliance backups may not be necessary.

Database Backups

Ensuring Consistency

To ensure consistency in the backed up data, the database should be backed up during periods when it is not being updated. While it is being backed up, both user access and scheduled processing (including cron jobs) should be suspended.

Physical and Logical Backups

SME's MySQL (MariaDB) database can be backed up in two ways:

- Physical Backup - A copy of the files that store the database tables and indices.
- Logical Backup - A dump of the SQL commands needed to rebuild and repopulate the database.

Logical backups are generally easier to work with than physical backups, but recovery may take longer with logical backups.

You may wish to retain several generations of database backups so you will have a choice of points in time for recovery. To meet this requirement the database can be backed up to a timestamped file, with several generations of backups retained on a rolling basis (a minimal example script is sketched at the end of this page). SME can provide professional services to automate this process.

Appliances with Internal Databases

For Appliances with internal databases one backup can serve both purposes: Appliance backup and database backup. In this situation it is the database changes, which occur frequently and in large numbers on a typical system, that drive the backup frequency.

Appliances with External Databases

Where the database is external, the Appliance and the database need to be handled separately. Database backup files can be made and rotated as described in the previous section, but stored on the database server or on some other reliable storage.
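To make the rotation scheme above concrete, here is a minimal sketch of a logical-backup script using mysqldump. The database name, backup directory and retention count are placeholders, credentials are assumed to come from a protected MySQL option file, and the script assumes it runs while user access and scheduled jobs are suspended, as recommended above. Treat it as a starting point rather than a supported SME tool.

```bash
#!/bin/bash
# Minimal logical-backup sketch: dump the database to a timestamped file
# and keep only the most recent N generations on a rolling basis.
# All names below (database, paths, retention) are examples - adjust them.

DB_NAME="smestorage"                      # placeholder database name
BACKUP_DIR="/var/backups/filefabric-db"   # where the dumps are stored
KEEP=7                                    # number of generations to retain
STAMP=$(date +%Y%m%d-%H%M%S)
OUTFILE="${BACKUP_DIR}/${DB_NAME}-${STAMP}.sql.gz"

mkdir -p "${BACKUP_DIR}"

# Logical backup: the SQL statements needed to rebuild and repopulate the
# database. Credentials are read from a protected option file (e.g. ~/.my.cnf)
# rather than being placed on the command line.
mysqldump --single-transaction "${DB_NAME}" | gzip > "${OUTFILE}"

# Rolling retention: delete the oldest dumps beyond the last $KEEP generations.
# The dump file names contain no spaces, so a simple ls/xargs pipeline is enough.
ls -1t "${BACKUP_DIR}/${DB_NAME}"-*.sql.gz | tail -n +$((KEEP + 1)) | xargs -r rm -f
```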
https://docs.storagemadeeasy.com/cloudappliance/backupbp
25.13 Nroff Mode

M-n
Move to the beginning of the next line that isn't an nroff command (nroff-forward-text-line). An argument is a repeat count.

M-p
Like M-n, but move up (nroff-backward-text-line).

M-?
Displays in the echo area the number of text lines (lines that are not nroff commands) in the region (nroff-count-text-lines).

Electric Nroff mode is a buffer-local minor mode that can be used with Nroff mode. To toggle this minor mode, type M-x nroff-electric-mode.
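The manual only mentions toggling the mode interactively. If you prefer Electric Nroff mode enabled automatically in every nroff buffer, a one-line hook in your init file is enough; this is a standard Emacs idiom rather than something prescribed by this section.

```elisp
;; Enable Electric Nroff mode whenever Nroff mode is turned on.
(add-hook 'nroff-mode-hook #'nroff-electric-mode)
```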
https://emacsdocs.org/docs/emacs/Nroff-Mode
Reactivate your subscription

You can reactivate your subscription in the admin center if the subscription expired, was disabled by Microsoft, or was canceled in the middle of a subscription term.

Before you begin

You must be a Global or Billing admin to reactivate a subscription. For more information, see About admin roles. Not an admin? Contact your administrator to reactivate your subscription.

Reactivate a subscription

- In the admin center, go to the Billing > Your products page.

Related content

- Try or buy a Microsoft 365 for business subscription (article)
- Renew Microsoft 365 for business (article)
- Cancel your subscription (article)
https://docs.microsoft.com/en-US/microsoft-365/commerce/subscriptions/reactivate-your-subscription?view=o365-worldwide
Crate manger

A performant, low-level, lightweight and intuitive combinatoric parser library.

Manger allows for translation of the intuition developed for Rust's primitive and standard library types into your intuition for using this library. Most of the behaviour is defined with the Consumable trait, which can be easily implemented using the consume_struct and consume_enum macros.

This library is suited for deterministic regular languages. It is optimally used in addition to a predefined syntax. For example, if you have a predefined EBNF, it is really easy to implement the syntax within this crate.

Getting Started

To get started with implementing Consumable on your own types, I suggest taking a look at the consume_struct or consume_enum documentation. Then you can come back here and look at some common patterns.

Common patterns

Parsing, and thus consuming, has a lot of often-used patterns. Of course, these are easily available here as well.

Concatenation

Often we want to express that two patterns follow each other in a source string. For example, you might want to express that every Line is followed by a ';'. In manger there are two ways to do this.

Macros

The first way, and the preferred way, is with the consume_struct or consume_enum macros, where you can present sequential consume instructions. You can see in the following example that we are first consuming a '(', followed by an i32, followed by a ')'.

```rust
use manger::{ Consumable, consume_struct };

struct EncasedInteger(i32);
consume_struct!(
    EncasedInteger => [
        > '(',
        value: i32,
        > ')';
        (value)
    ]
);
```

Tuples

Another way to represent the same concept is with the tuple type syntax. This can be done with up to 10 types. Here we are again parsing the same (i32) structure.

```rust
use manger::chars;

type EncasedInteger = (chars::OpenParenthese, i32, chars::CloseParenthese);
```

Repetition

Most of the time you want to represent some kind of repetition. There are a lot of different ways to represent repetition. Here are two easy ways.

Vec

The easiest way to do repetition is with Vec<T>. This will consume 0 or more instances of type T. Of course, the type T has to have Consumable implemented. Here you can see what that looks like:

Since Vec<T> will consume instances of type T until it finds an error, it can never fail itself. You are therefore safe to unwrap the result.

```rust
use manger::{ Consumable, consume_struct };

struct EncasedInteger(i32);
consume_struct!(
    EncasedInteger => [
        > '[',
        value: i32,
        > ']';
        (value)
    ]
);

let source = "[3][-4][5]";

let (encased_integers, _) = <Vec<EncasedInteger>>::consume_from(source)?;

let sum: i32 = encased_integers
    .iter()
    .map(|EncasedInteger(value)| value)
    .sum();

assert_eq!(sum, 4);
```

OneOrMore

The other easy way to do repetition is with OneOrMore<T>. This allows for consuming 1 or more instances of type T. And again, type T has to have Consumable implemented. Here you can see what that looks like:

```rust
use manger::{ Consumable, consume_struct };
use manger::common::OneOrMore;

struct EncasedInteger(i32);
consume_struct!(
    EncasedInteger => [
        > '[',
        value: i32,
        > ']';
        (value)
    ]
);

let source = "[3][-4][5]";

let (encased_integers, _) = <OneOrMore<EncasedInteger>>::consume_from(source)?;

let product: i32 = encased_integers
    .into_iter()
    .map(|EncasedInteger(value)| value)
    .product();

assert_eq!(product, -60);
```

Optional value

To express optional values you can use the Option<T> standard Rust type. This will consume either 0 or 1 instances of type T.

Since Option<T> will consume an instance of type T if it finds no error, it can never fail itself. You are therefore safe to unwrap the result.

```rust
use manger::consume_struct;
use manger::chars;

struct PossiblyEncasedInteger(i32);
consume_struct!(
    PossiblyEncasedInteger => [
        : Option<chars::OpenParenthese>,
        value: i32,
        : Option<chars::CloseParenthese>;
        (value)
    ]
);
```

Recursion

Another common pattern seen within combinatoric parsers is recursion. Since Rust types need to have a predefined size, we cannot do direct type recursion and we need to do heap allocation with the Box<T> type from the standard library. We can make a prefixed math expression parser as follows:

```rust
use manger::consume_enum;
use manger::common::{OneOrMore, Whitespace};

enum Expression {
    Times(Box<Expression>, Box<Expression>),
    Plus(Box<Expression>, Box<Expression>),
    Constant(u32),
}
consume_enum!(
    Expression {
        Times => [
            > '*',
            : OneOrMore<Whitespace>,
            left: Box<Expression>,
            : OneOrMore<Whitespace>,
            right: Box<Expression>;
            (left, right)
        ],
        Plus => [
            > '+',
            : OneOrMore<Whitespace>,
            left: Box<Expression>,
            : OneOrMore<Whitespace>,
            right: Box<Expression>;
            (left, right)
        ],
        Constant => [
            value: u32;
            (value)
        ]
    }
);
```

Whitespace

For whitespace we can use the manger::common::Whitespace struct. This will consume any UTF-8 character that is identified as a whitespace character by the char::is_whitespace function.

Either

If two possibilities are present for consuming, there are two options to choose from. Both are valid in certain scenarios.

Macro

Using consume_enum you can create a type that can be consumed in a number of ways and still see which option was selected. If you need to know which of the different options was used, this should be your choice.

Either<L, R>

You can also use the Either<L, R> type to represent the either relationship. This option is preferred if we do not care about which option is selected.
https://docs.rs/manger/latest/manger/
AI Paraphraser

The paraphrasing tool completely reformulates any paragraph using AI and generates a different, unique version of the article.

To start, paste your original text into the app and highlight the paragraph you want to rewrite, keeping the selection within the 1,000-character limit. The rephrase button appears only when the highlighted paragraph is within this limit; if you highlight more than 1,000 characters, the button will not appear. Next, choose the tone you would like to use for the rewrite. Finally, pick the version that best suits your purposes, make any changes needed, and choose the most suitable version - one that reads naturally and is fully distinct from your original content - before publishing it on a website or submitting it as an article.

We recommend not entering an entire article, because this violates Google's policies and can infringe copyright. Google's search algorithms can easily distinguish original articles from rephrased ones, so we highly recommend using content curation before rephrasing. You can curate content by collecting elements of your article from multiple sources - for example, copying one paragraph from one website and another paragraph from a different website that discusses the same point.

Rephrasing costs 1 point from your account balance per 1,000 characters.
https://docs.similarcontent.com/article/17-ai-paraphraser
How to compile ABINIT¶ This tutorial explains how to compile ABINIT including the external dependencies without relying on pre-compiled libraries, package managers and root privileges. You will learn how to use the standard configure and make Linux tools to build and install your own software stack including the MPI library and the associated mpif90 and mpicc wrappers required to compile MPI applications. It is assumed that you already have a standard Unix-like installation that provides the basic tools needed to build software from source (Fortran/C compilers and make). The changes required for macOS are briefly mentioned when needed. Windows users should install cygwin that provides a POSIX-compatible environment or, alternatively, use a Windows Subsystem for Linux. Note that the procedure described in this tutorial has been tested with Linux/macOS hence feedback and suggestions from Windows users are welcome. Tip In the last part of the tutorial, we discuss more advanced topics such as using modules in supercomputing centers, compiling and linking with the intel compilers and the MKL library as well as OpenMP threads. You may want to jump directly to this section if you are already familiar with software compilation. In the following, we will make extensive use of the bash shell hence familiarity with the terminal is assumed. For a quick introduction to the command line, please consult this Ubuntu tutorial. If this is the first time you use the configure && make approach to build software, we strongly recommend to read this guide before proceeding with the next steps. If, on the other hand, you are not interested in compiling all the components from source, you may want to consider the following alternatives: Compilation with external libraries provided by apt-based Linux distributions (e.g. Ubuntu). More info available here. Compilation with external libraries on Fedora/RHEL/CentOS Linux distributions. More info available here. Homebrew bottles or macports for macOS. More info available here. Automatic compilation and generation of modules on clusters with EasyBuild. More info available here. Compiling Abinit using the internal fallbacks and the build-abinit-fallbacks.sh script automatically generated by configure if the mandatory dependencies are not found. Using precompiled binaries provided by conda-forge (for Linux and macOS users). Before starting, it is also worth reading this document prepared by Marc Torrent that introduces important concepts and provides a detailed description of the configuration options supported by the ABINIT build system. Note that these slides have been written for Abinit v8 hence some examples should be changed in order to be compatible with the build system of version 9, yet the document represents a valuable source of information. Important The aim of this tutorial is to teach you how to compile code from source but we cannot guarantee that these recipes will work out of the box on every possible architecture. We will do our best to explain how to setup your environment and how to avoid the typical pitfalls but we cannot cover all the possible cases. Fortunately, the internet provides lots of resources. Search engines and stackoverflow are your best friends and in some cases one can find the solution by just copying the error message in the search bar. 
For more complicated issues, you can ask for help on the ABINIT discourse forum or contact the sysadmin of your cluster but remember to provide enough information about your system and the problem you are encountering. Getting started¶ Since ABINIT is written in Fortran, we need a recent Fortran compiler that supports the F2003 specifications as well as a C compiler. At the time of writing (November 09, 2021 ), the C++ compiler is optional and required only for advanced features that are not treated in this tutorial. In what follows, we will be focusing on the GNU toolchain i.e. gcc for C and gfortran for Fortran. These “sequential” compilers are adequate if you don’t need to compile parallel MPI applications. The compilation of MPI code, indeed, requires the installation of additional libraries and specialized wrappers (mpicc, mpif90 or mpiifort ) replacing the “sequential” compilers. This very important scenario is covered in more detail in the next sections. For the time being, we mainly focus on the compilation of sequential applications/libraries. First of all, let’s make sure the gfortran compiler is installed on your machine by issuing in the terminal the following command: which gfortran /usr/bin/gfortran Tip The which command, returns the absolute path of the executable. This Unix tool is extremely useful to pinpoint possible problems and we will use it a lot in the rest of this tutorial. In our case, we are lucky that the Fortran compiler is already installed in /usr/bin and we can immediately use it to build our software stack. If gfortran is not installed, you may want to use the package manager provided by your Linux distribution to install it. On Ubuntu, for instance, use: sudo apt-get install gfortran To get the version of the compiler, use the --version option: gfortran --version GNU Fortran (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6) Copyright (C) 2015 Free Software Foundation, Inc. Starting with version 9, ABINIT requires gfortran >= v5.4. Consult the release notes to check whether your gfortran version is supported by the latest ABINIT releases. Now let’s check whether make is already installed using: which make /usr/bin/make Hopefully, the C compiler gcc is already installed on your machine. which gcc /usr/bin/gcc At this point, we have all the basic building blocks needed to compile ABINIT from source and we can proceed with the next steps. Tip Life gets hard if you are a macOS user as Apple does not officially support Fortran (😞) so you need to install gfortran and gcc either via homebrew or macport. Alternatively, one can install gfortran using one of the standalone DMG installers provided by the gfortran-for-macOS project. Note also that macOS users will need to install make via Xcode. More info can be found in this page. How to compile BLAS and LAPACK¶ BLAS and LAPACK represent the workhorse of many scientific codes and an optimized implementation is crucial for achieving good performance. In principle this step can be skipped as any decent Linux distribution already provides pre-compiled versions but, as already mentioned in the introduction, we are geeks and we prefer to compile everything from source. Moreover the compilation of BLAS/LAPACK represents an excellent exercise that gives us the opportunity to discuss some basic concepts that will reveal very useful in the other parts of this tutorial. 
First of all, let’s create a new directory inside your $HOME (let’s call it local) using the command: cd $HOME && mkdir local Tip $HOME is a standard shell variable that stores the absolute path to your home directory. Use: echo My home directory is $HOME to print the value of the variable. The && syntax is used to chain commands together, such that the next command is executed if and only if the preceding command exited without errors (or, more accurately, exits with a return code of 0). We will use this trick a lot in the other examples to reduce the number of lines we have to type in the terminal so that one can easily cut and paste the examples in the terminal. Now create the src subdirectory inside $HOME/local with: cd $HOME/local && mkdir src && cd src The src directory will be used to store the packages with the source files and compile code, whereas executables and libraries will be installed in $HOME/local/bin and $HOME/local/lib, respectively. We use $HOME/local because we are working as normal users and we cannot install software in /usr/local where root privileges are required and a sudo make install would be needed. Moreover, working inside $HOME/local allows us to keep our software stack well separated from the libraries installed by our Linux distribution so that we can easily test new libraries and/or different versions without affecting the software stack installed by our distribution. Now download the tarball from the openblas website with: wget If wget is not available, use curl with the -o option to specify the name of the output file as in: curl -L -o v0.3.7.tar.gz Tip To get the URL associated to a HTML link inside the browser, hover the mouse pointer over the link, press the right mouse button and then select Copy Link Address to copy the link to the system clipboard. Then paste the text in the terminal by selecting the Copy action in the menu activated by clicking on the right button. Alternatively, one can press the central button (mouse wheel) or use CMD + V on macOS. This trick is quite handy to fetch tarballs directly from the terminal. Uncompress the tarball with: tar -xvf v0.3.7.tar.gz then cd to the directory with: cd OpenBLAS-0.3.7 and execute make -j2 USE_THREAD=0 USE_LOCKING=1 to build the single thread version. Tip By default, openblas activates threads (see FAQ page) but in our case we prefer to use the sequential version as Abinit is mainly optimized for MPI. The -j2 option tells make to use 2 processes to build the code in order to speed up the compilation. Adjust this value according to the number of physical cores available on your machine. At the end of the compilation, you should get the following output (note Single threaded): OpenBLAS build complete. (BLAS CBLAS LAPACK LAPACKE) OS ... Linux Architecture ... x86_64 BINARY ... 64bit C compiler ... GCC (command line : cc) Fortran compiler ... GFORTRAN (command line : gfortran) Library Name ... libopenblas_haswell-r0.3.7.a (Single threaded) To install the library, you can run "make PREFIX=/path/to/your/installation install". You may have noticed that, in this particular case, make is not just building the library but is also running unit tests to validate the build. This means that if make completes successfully, we can be confident that the build is OK and we can proceed with the installation. Other packages use a more standard approach and provide a make check option that should be executed after make in order to run the test suite before installing the package. 
To install openblas in $HOME/local, issue: make PREFIX=$HOME/local/ install At this point, we should have the following include files installed in $HOME/local/include: ls $HOME/local/include/ cblas.h f77blas.h lapacke.h lapacke_config.h lapacke_mangling.h lapacke_utils.h openblas_config.h and the following libraries installed in $HOME/local/lib: ls $HOME/local/lib/libopenblas* /home/gmatteo/local/lib/libopenblas.a /home/gmatteo/local/lib/libopenblas_haswell-r0.3.7.a /home/gmatteo/local/lib/libopenblas.so /home/gmatteo/local/lib/libopenblas_haswell-r0.3.7.so /home/gmatteo/local/lib/libopenblas.so.0 Files ending with .so are shared libraries ( .so stands for shared object) whereas .a files are static libraries. When compiling source code that relies on external libraries, the name of the library (without the lib prefix and the file extension) as well as the directory where the library is located must be passed to the linker. The name of the library is usually specified with the -l option while the directory is given by -L. According to these simple rules, in order to compile source code that uses BLAS/LAPACK routines, one should use the following option: -L$HOME/local/lib -lopenblas We will use a similar syntax to help the ABINIT configure script locate the external linear algebra library. Important You may have noticed that we haven’t specified the file extension in the library name. If both static and shared libraries are found, the linker gives preference to linking with the shared library unless the -static option is used. Dynamic is the default behaviour on several Linux distributions so we assume dynamic linking in what follows. If you are compiling C or Fortran code that requires include files with the declaration of prototypes and the definition of named constants, you will need to specify the location of the include files via the -I option. In this case, the previous options should be augmented by: -L$HOME/local/lib -lopenblas -I$HOME/local/include This approach is quite common for C code where .h files must be included to compile properly. It is less common for modern Fortran code in which include files are usually replaced by .mod files i.e. Fortran modules produced by the compiler whose location is usually specified via the -J option. Still, the -I option for include files is valuable also when compiling Fortran applications as libraries such as FFTW and MKL rely on (Fortran) include files whose location should be passed to the compiler via -I instead of -J, see also the official gfortran documentation. Do not worry if this rather technical point is not clear to you. Any external library has its own requirements and peculiarities and the ABINIT build system provides several options to automate the detection of external dependencies and the final linkage. The most important thing is that you are now aware that the compilation of ABINIT requires the correct specification of -L, -l for libraries, -I for include files, and -J for Fortran modules. We will elaborate more on this topic when we discuss the configuration options supported by the ABINIT build system. Since we have installed the package in a non-standard directory ($HOME/local), we need to update two important shell variables: $PATH and $LD_LIBRARY_PATH. If this is the first time you hear about $PATH and $LD_LIBRARY_PATH, please take some time to learn about the meaning of these environment variables. More information about $PATH is available here. See this page for $LD_LIBRARY_PATH. 
Add these two lines at the end of your $HOME/.bash_profile file export PATH=$HOME/local/bin:$PATH export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH then execute: source $HOME/.bash_profile to activate these changes without having to start a new terminal session. Now use: echo $PATH echo $LD_LIBRARY_PATH to print the value of these variables. On my Linux box, I get: echo $PATH /home/gmatteo/local/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin echo $LD_LIBRARY_PATH /home/gmatteo/local/lib: Note how /home/gmatteo/local/bin has been prepended to the previous value of $PATH. From now on, we can invoke any executable located in $HOME/local/bin by just typing its base name in the shell without having to the enter the full path. Warning Using: export PATH=$HOME/local/bin is not a very good idea as the shell will stop working. Can you explain why? Tip macOS users should replace LD_LIBRARY_PATH with DYLD_LIBRARY_PATH Remember also that one can use env to print all the environment variables defined in your session and pipe the results to other Unix tools. Try e.g.: env | grep LD_ to print only the variables whose name starts with LD_ We conclude this section with another tip. From time to time, some compilers complain or do not display important messages because language support is improperly configured on your computer. Should this happen, we recommend to export the two variables: export LANG=C export LC_ALL=C This will reset the language support to its most basic defaults and will make sure that you get all messages from the compilers in English. How to compile libxc¶ At this point, it should not be so difficult to compile and install libxc, a library that provides many useful XC functionals (PBE, meta-GGA, hybrid functionals, etc). Libxc is written in C and can be built using the standard configure && make approach. No external dependency is needed, except for basic C libraries that are available on any decent Linux distribution. Let’s start by fetching the tarball from the internet: # Get the tarball. # Note the -O option used in wget to specify the name of the output file cd $HOME/local/src wget -O libxc.tar.gz tar -zxvf libxc.tar.gz Now configure the package with the standard --prefix option to specify the location where all the libraries, executables, include files, Fortran modules, man pages, etc. will be installed when we execute make install (the default is /usr/local) cd libxc-4.3.4 && ./configure --prefix=$HOME/local Finally, build the library, run the tests and install it with: make -j2 make check && make install At this point, we should have the following include files in $HOME/local/include [gmatteo@bob libxc-4.3.4]$ ls ~/local/include/*xc* /home/gmatteo/local/include/libxc_funcs_m.mod /home/gmatteo/local/include/xc_f90_types_m.mod /home/gmatteo/local/include/xc.h /home/gmatteo/local/include/xc_funcs.h /home/gmatteo/local/include/xc_f03_lib_m.mod /home/gmatteo/local/include/xc_funcs_removed.h /home/gmatteo/local/include/xc_f90_lib_m.mod /home/gmatteo/local/include/xc_version.h where .mod are Fortran modules generated by the compiler that are needed when compiling Fortran source using the libxc Fortran API. Warning The .mod files are compiler- and version-dependent. In other words, one cannot use these .mod files to compile code with a different Fortran compiler. Moreover, you should not expect to be able to use modules compiled with a different version of the same compiler, especially if the major version has changed. 
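Before moving on to libxc, it can be reassuring to check that the freshly installed OpenBLAS can actually be found by the compiler and the dynamic linker. The short Fortran program below is not part of the official tutorial, just a minimal sketch: it calls the double-precision dot-product routine ddot from BLAS, and the compile command illustrates the -I/-L/-l options discussed above.

```fortran
! test_blas.f90 -- minimal check that we can link against libopenblas
program test_blas
  implicit none
  integer, parameter :: n = 3
  double precision :: x(n), y(n)
  double precision, external :: ddot   ! BLAS level-1 routine

  x = [1.0d0, 2.0d0, 3.0d0]
  y = [4.0d0, 5.0d0, 6.0d0]

  ! 1*4 + 2*5 + 3*6 = 32
  print *, "ddot(x, y) = ", ddot(n, x, 1, y, 1)
end program test_blas
```

Compile and run it with:

```
gfortran test_blas.f90 -I$HOME/local/include -L$HOME/local/lib -lopenblas -o test_blas
./test_blas
```

If the executable runs and prints 32, both the link step (-L/-l) and the run-time search path (LD_LIBRARY_PATH) are set up correctly.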
This is one of the reasons why the version of the Fortran compiler employed to build our software stack is very important. Finally, we have the following static libraries installed in ~/local/lib ls ~/local/lib/libxc* /home/gmatteo/local/lib/libxc.a /home/gmatteo/local/lib/libxcf03.a /home/gmatteo/local/lib/libxcf90.a /home/gmatteo/local/lib/libxc.la /home/gmatteo/local/lib/libxcf03.la /home/gmatteo/local/lib/libxcf90.la where: - libxc is the C library - libxcf90 is the library with the F90 API - libxcf03 is the library with the F2003 API Both libxcf90 and libxcf03 depend on the C library where most of the work is done. At present, ABINIT requires the F90 API only so we should use -L$HOME/local/lib -lxcf90 -lxc for the libraries and -I$HOME/local/include for the include files. Note how libxcf90 comes before the C library libxc. This is done on purpose as libxcf90 depends on libxc (the Fortran API calls the C implementation). Inverting the order of the libraries will likely trigger errors (undefined references) in the last step of the compilation when the linker tries to build the final application. Things become even more complicated when we have to build applications using many different interdependent libraries as the order of the libraries passed to the linker is of crucial importance. Fortunately the ABINIT build system is aware of this problem and all the dependencies (BLAS, LAPACK, FFT, LIBXC, MPI, etc) will be automatically put in the right order so you don’t have to worry about this point although it is worth knowing about it. Compiling and installing FFTW¶ FFTW is a C library for computing the Fast Fourier transform in one or more dimensions. ABINIT already provides an internal implementation of the FFT algorithm implemented in Fortran hence FFTW is considered an optional dependency. Nevertheless, we do not recommend the internal implementation if you really care about performance. The reason is that FFTW (or, even better, the DFTI library provided by intel MKL) is usually much faster than the internal version. Important FFTW is very easy to install on Linux machines once you have gcc and gfortran. The fftalg variable defines the implementation to be used and 312 corresponds to the FFTW implementation. The default value of fftalg is automatically set by the configure script via pre-preprocessing options. In other words, if you activate support for FFTW (DFTI) at configure time, ABINIT will use fftalg 312 (512) as default. The FFTW source code can be downloaded from fftw.org, and the tarball of the latest version is available at cd $HOME/local/src wget tar -zxvf fftw-3.3.8.tar.gz && cd fftw-3.3.8 The compilation procedure is very similar to the one already used for the libxc package. Note, however, that ABINIT needs both the single-precision and the double-precision version. This means that we need to configure, build and install the package twice. To build the single precision version, use: ./configure --prefix=$HOME/local --enable-single make -j2 make check && make install During the configuration step, make sure that configure finds the Fortran compiler because ABINIT needs the Fortran interface. checking for gfortran... gfortran checking whether we are using the GNU Fortran 77 compiler... yes checking whether gfortran accepts -g... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... no checking whether to build static libraries... 
yes Let’s have a look at the libraries we’ve just installed: ls $HOME/local/lib/libfftw3* /home/gmatteo/local/lib/libfftw3f.a /home/gmatteo/local/lib/libfftw3f.la the f at the end stands for float (C jargon for single precision). Note that only static libraries have been built. To build shared libraries, one should use --enable-shared when configuring. Now we configure for the double precision version (this is the default behaviour so no extra option is needed) ./configure --prefix=$HOME/local make -j2 make check && make install After this step, you should have two libraries with the single and the double precision API: ls $HOME/local/lib/libfftw3* /home/gmatteo/local/lib/libfftw3.a /home/gmatteo/local/lib/libfftw3f.a /home/gmatteo/local/lib/libfftw3.la /home/gmatteo/local/lib/libfftw3f.la To compile ABINIT with FFTW3 support, one should use: -L$HOME/local/lib -lfftw3f -lfftw3 -I$HOME/local/include Note that, unlike in libxc, here we don’t have to specify different libraries for Fortran and C as FFTW3 bundles both the C and the Fortran API in the same library. The Fortran interface is included by default provided the FFTW3 configure script can find a Fortran compiler. In our case, we know that our FFTW3 library supports Fortran as gfortran was found by configure but this may not be true if you are using a precompiled library installed via your package manager. To make sure we have the Fortran API, use the nm tool to get the list of symbols in the library and then use grep to search for the Fortran API. For instance we can check whether our library contains the Fortran routine for multiple single-precision FFTs (sfftw_plan_many_dft) and the version for multiple double-precision FFTs (dfftw_plan_many_dft) [gmatteo@bob fftw-3.3.8]$ nm $HOME/local/lib/libfftw3f.a | grep sfftw_plan_many_dft 0000000000000400 T sfftw_plan_many_dft_ 0000000000003570 T sfftw_plan_many_dft__ 0000000000001a90 T sfftw_plan_many_dft_c2r_ 0000000000004c00 T sfftw_plan_many_dft_c2r__ 0000000000000f60 T sfftw_plan_many_dft_r2c_ 00000000000040d0 T sfftw_plan_many_dft_r2c__ [gmatteo@bob fftw-3.3.8]$ nm $HOME/local/lib/libfftw3.a | grep dfftw_plan_many_dft 0000000000000400 T dfftw_plan_many_dft_ 0000000000003570 T dfftw_plan_many_dft__ 0000000000001a90 T dfftw_plan_many_dft_c2r_ 0000000000004c00 T dfftw_plan_many_dft_c2r__ 0000000000000f60 T dfftw_plan_many_dft_r2c_ 00000000000040d0 T dfftw_plan_many_dft_r2c__ If you are using a FFTW3 library without Fortran support, the ABINIT configure script will complain that the library cannot be called from Fortran and you will need to dig into config.log to understand what’s going on. Note At present, there is no need to compile FFTW with MPI support because ABINIT implements its own version of the MPI-FFT algorithm based on the sequential FFTW version. The MPI algorithm implemented in ABINIT is optimized for plane-waves codes as it supports zero-padding and composite transforms for the applications of the local part of the KS potential. Also,. Installing MPI¶ In this section, we discuss how to compile and install the MPI library. This step is required if you want to run ABINIT with multiple processes and/or you need to compile MPI-based libraries such as PBLAS/Scalapack or the HDF5 library with support for parallel IO. It is worth stressing that the MPI installation provides two scripts (mpif90 and mpicc) that act as a sort of wrapper around the sequential Fortran and the C compilers, respectively. 
These scripts must be used to compile parallel software using MPI instead of the “sequential” gfortran and gcc. The MPI library also provides launcher scripts installed in the bin directory (mpirun or mpiexec) that must be used to execute an MPI application EXEC with NUM_PROCS MPI processes with the syntax: mpirun -n NUM_PROCS EXEC [EXEC_ARGS] Warning Keep in mind that there are several MPI implementations available around (openmpi, mpich, intel mpi, etc) and you must choose one implementation and stick to it when building your software stack. In other words, all the libraries and executables requiring MPI must be compiled, linked and executed with the same MPI library. Don’t try to link a library compiled with e.g. mpich if you are building the code with the mpif90 wrapper provided by e.g. openmpi. By the same token, don’t try to run executables compiled with e.g. intel mpi with the mpirun launcher provided by openmpi unless you are looking for troubles! Again, the which command is quite handy to pinpoint possible problems especially if there are multiple installations of MPI in your $PATH (not a very good idea!). In this tutorial, we employ the mpich implementation that can be downloaded from this webpage. In the terminal, issue: cd $HOME/local/src wget tar -zxvf mpich-3.3.2.tar.gz cd mpich-3.3.2/ to download and uncompress the tarball. Then configure/compile/test/install the library with: ./configure --prefix=$HOME/local make -j2 make check && make install Once the installation is completed, you should obtain this message (possibly not the last message, you might have to look for it). ---------------------------------------------------------------------- Libraries have been installed in: /home/gmatte. ---------------------------------------------------------------------- The reason why we should add $HOME/local/lib to $LD_LIBRARY_PATH now should be clear to you. Let’s have a look at the MPI executables we have just installed in $HOME/local/bin: ls $HOME/local/bin/mpi* /home/gmatteo/local/bin/mpic++ /home/gmatteo/local/bin/mpiexec /home/gmatteo/local/bin/mpifort /home/gmatteo/local/bin/mpicc /home/gmatteo/local/bin/mpiexec.hydra /home/gmatteo/local/bin/mpirun /home/gmatteo/local/bin/mpichversion /home/gmatteo/local/bin/mpif77 /home/gmatteo/local/bin/mpivars /home/gmatteo/local/bin/mpicxx /home/gmatteo/local/bin/mpif90 Since we added $HOME/local/bin to $PATH, we should see that mpi90 is actually pointing to the version we have just installed: which mpif90 ~/local/bin/mpif90 As already mentioned, mpif90 is a wrapper around the sequential Fortran compiler. To show the Fortran compiler invoked by mpif90, use: mpif90 -v mpifort for MPICH version 3.3.2 Using built-in specs. 
COLLECT_GCC=gfortran COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/5.3.1/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortr-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-isl --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC) The C include files (.h) and the Fortran modules (.mod) have been installed in $HOME/local/include ls $HOME/local/include/mpi* /home/gmatteo/local/include/mpi.h /home/gmatteo/local/include/mpicxx.h /home/gmatteo/local/include/mpi.mod /home/gmatteo/local/include/mpif.h /home/gmatteo/local/include/mpi_base.mod /home/gmatteo/local/include/mpio.h /home/gmatteo/local/include/mpi_constants.mod /home/gmatteo/local/include/mpiof.h /home/gmatteo/local/include/mpi_sizeofs.mod In principle, the location of the directory must be passed to the Fortran compiler either with the -J ( mpi.mod module for MPI2+) or the -I option ( mpif.h include file for MPI1). Fortunately, the ABINIT build system can automatically detect your MPI installation and set all the compilation options automatically if you provide the installation root ($HOME/local). Installing HDF5 and netcdf4¶ Abinit developers are trying to move away from Fortran binary files as this format is not portable and difficult to read from high-level languages such as python. For this reason, in Abinit v9, HDF5 and netcdf4 have become hard-requirements. This means that the configure script will abort if these libraries are not found. In this section, we explain how to build HDF5 and netcdf4 from source including support for parallel IO. Netcdf4 is built on top of HDF5 and consists of two different layers: The low-level C library The Fortran bindings i.e. Fortran routines calling the low-level C implementation. This is the high-level API used by ABINIT to perform all the IO operations on netcdf files. To build the libraries required by ABINIT, we will compile the three different layers in a bottom-up fashion starting from the HDF5 package (HDF5 → netcdf-c → netcdf-fortran). Since we want to activate support for parallel IO, we need to compile the libraries using the wrappers provided by our MPI installation instead of using gcc or gfortran directly. Let’s start by downloading the HDF5 tarball from this download page. Uncompress the archive with tar as usual, then configure the package with: ./configure --prefix=$HOME/local/ \ CC=$HOME/local/bin/mpicc --enable-parallel --enable-shared where we’ve used the CC variable to specify the C compiler. This step is crucial in order to activate support for parallel IO. At the end of the configuration step, you should get the following output: AM C Flags: Shared C Library: yes Static C Library: yes Fortran: no C++: no Java: no Features: --------- Parallel HDF5: yes Parallel Filtered Dataset Writes: yes Large Parallel I/O: yes High-level library: yes Threadsafety: no Default API mapping: v110 With deprecated public symbols: yes I/O filters (external): deflate(zlib) MPE: Direct VFD: no dmalloc: no Packages w/ extra debug output: none API tracing: no Using memory checker: no Memory allocation sanity checks: no Function stack tracing: no Strict file format checks: no Optimization instrumentation: no The line with: Parallel HDF5: yes tells us that our HDF5 build supports parallel IO. 
The Fortran API is not activated but this is not a problem as ABINIT will be interfaced with HDF5 through the Fortran bindings provided by netcdf-fortran. In other words, ABINIT requires netcdf-fortran and not the HDF5 Fortran bindings. Again, issue make -j NUM followed by make check and finally make install. Note that make check may take some time so you may want to install immediately and run the tests in another terminal so that you can continue with the tutorial. Now let’s move to netcdf. Download the C version and the Fortran bindings from the netcdf website and unpack the tarball files as usual. wget tar -xvf netcdf-c-4.7.3.tar.gz wget tar -xvf netcdf-fortran-4.5.2.tar.gz To compile the C library, use: cd netcdf-c-4.7.3 ./configure --prefix=$HOME/local/ \ CC=$HOME/local/bin/mpicc \ LDFLAGS=-L$HOME/local/lib CPPFLAGS=-I$HOME/local/include where mpicc is used as C compiler (CC environment variable) and we have to specify LDFLAGS and CPPFLAGS as we want to link against our installation of hdf5. At the end of the configuration step, we should obtain # NetCDF C Configuration Summary ============================== # General ------- NetCDF Version: 4.7.3 Dispatch Version: 1 Configured On: Wed Apr 8 00:53:19 CEST 2020 Host System: x86_64-pc-linux-gnu Build Directory: /home/gmatteo/local/src/netcdf-c-4.7.3 Install Prefix: /home/gmatteo/local # Compiling Options ----------------- C Compiler: /home/gmatteo/local/bin/mpicc CFLAGS: CPPFLAGS: -I/home/gmatteo/local/include LDFLAGS: -L/home/gmatteo/local/lib AM_CFLAGS: AM_CPPFLAGS: AM_LDFLAGS: Shared Library: yes Static Library: yes Extra libraries: -lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl # Features -------- NetCDF-2 API: yes HDF4 Support: no HDF5 Support: yes NetCDF-4 API: yes NC-4 Parallel Support: yes PnetCDF Support: no DAP2 Support: yes DAP4 Support: yes Byte-Range Support: no Diskless Support: yes MMap Support: no JNA Support: no CDF5 Support: yes ERANGE Fill Support: no Relaxed Boundary Check: yes The section: HDF5 Support: yes NetCDF-4 API: yes NC-4 Parallel Support: yes tells us that configure detected our installation of hdf5 and that support for parallel-IO is activated. Now use the standard sequence of commands to compile and install the package: make -j2 make check && make install Once the installation is completed, use the nc-config executable to inspect the features provided by the library we’ve just installed. which nc-config /home/gmatteo/local/bin/nc-config # installation directory nc-config --prefix /home/gmatteo/local/ To get a summary of the options used to build the C layer and the available features, use nc-config --all This netCDF 4.7.3 has been built with the following features: --cc -> /home/gmatteo/local/bin/mpicc --cflags -> -I/home/gmatteo/local/include --libs -> -L/home/gmatteo/local/lib -lnetcdf --static -> -lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl .... <snip> nc-config is quite useful as it prints the compiler options required to build C applications requiring netcdf-c ( --cflags and --libs). Unfortunately, this tool is not enough for ABINIT as we need the Fortran bindings as well. To compile the Fortran bindings, execute: cd netcdf-fortran-4.5.2 ./configure --prefix=$HOME/local/ \ FC=$HOME/local/bin/mpif90 \ LDFLAGS=-L$HOME/local/lib CPPFLAGS=-I$HOME/local/include where FC points to our mpif90 wrapper (CC is not needed here). For further info on how to build netcdf-fortran, see the official documentation. 
Now issue: make -j2 make check && make install To inspect the features activated in our Fortran library, use nf-config instead of nc-config (note the nf- prefix): which nf-config /home/gmatteo/local/bin/nf-config # installation directory nf-config --prefix /home/gmatteo/local/ To get a summary of the options used to build the Fortran bindings and the list of available features, use nf-config --all This netCDF-Fortran 4.5.2 has been built with the following features: --cc -> gcc --cflags -> -I/home/gmatteo/local/include -I/home/gmatteo/local/include --fc -> /home/gmatteo/local/bin/mpif90 --fflags -> -I/home/gmatteo/local/include --flibs -> -L/home/gmatteo/local/lib -lnetcdff -L/home/gmatteo/local/lib -lnetcdf -lnetcdf -ldl -lm --has-f90 -> --has-f03 -> yes --has-nc2 -> yes --has-nc4 -> yes --prefix -> /home/gmatteo/local --includedir-> /home/gmatteo/local/include --version -> netCDF-Fortran 4.5.2 Tip nf-config is quite handy to pass options to the ABINIT configure script. Instead of typing the full list of libraries ( --flibs) and the location of the include files ( --fflags) we can delegate this boring task to nf-config using backtick syntax: NETCDF_FORTRAN_LIBS=`nf-config --flibs` NETCDF_FORTRAN_FCFLAGS=`nf-config --fflags` Alternatively, one can simply pass the installation directory (here we use the $(...)syntax): --with-netcdf-fortran=$(nf-config --prefix) and then let configure detect NETCDF_FORTRAN_LIBS and NETCDF_FORTRAN_FCFLAGS for us. How to compile ABINIT¶ In this section, we finally discuss how to compile ABINIT using the MPI compilers and the libraries installed previously. First of all, download the ABINIT tarball from this page using e.g. wget Here we are using version 9.0.2 but you may want to download the latest production version to take advantage of new features and benefit from bug fixes. Once you got the tarball, uncompress it by typing: tar -xvzf abinit-9.0.2.tar.gz Then cd into the newly created abinit-9.0.2 directory. Before actually starting the compilation, type: ./configure --help and take some time to read the documentation of the different options. The documentation mentions the most important environment variables that can be used to specify compilers and compilation flags. We already encountered some of these variables in the previous examples: CXX C++ compiler command CXXFLAGS C++ compiler flags FC Fortran compiler command FCFLAGS Fortran compiler flags Besides the standard environment variables: CC, CFLAGS, FC, FCFLAGS etc. the build system also provides specialized options to activate support for external libraries. For libxc, for instance, we have: LIBXC_CPPFLAGS C preprocessing flags for LibXC. LIBXC_CFLAGS C flags for LibXC. LIBXC_FCFLAGS Fortran flags for LibXC. LIBXC_LDFLAGS Linker flags for LibXC. LIBXC_LIBS Library flags for LibXC. According to what we have seen during the compilation of libxc, one should pass to configure the following options: LIBXC_LIBS="-L$HOME/local/lib -lxcf90 -lxc" LIBXC_FCFLAGS="-I$HOME/local/include" Alternatively, one can use the high-level interface provided by the --with-LIBNAME options to specify the installation directory as in: --with-libxc="$HOME/local/lib" In this case, configure will try to automatically detect the other options. This is the easiest approach but if configure cannot detect the dependency properly, you may need to inspect config.log for error messages and/or set the options manually. 
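To see how these pieces fit together before we introduce the configuration file, here is a hedged sketch of what a direct command-line invocation of configure could look like with all the libraries installed under $HOME/local as in the previous sections. It is only an illustration of the --with-LIBNAME style options and the explicit FFTW3/LINALG variables that also appear in the configuration file below; the configuration-file approach described next is the recommended way, and the exact option values should be adapted to your own installation.

```
./configure --prefix=$HOME/local \
  --with-mpi="$HOME/local" \
  --with-libxc="$HOME/local" \
  --with-hdf5="$HOME/local" \
  --with-netcdf="$HOME/local" \
  --with-netcdf-fortran="$HOME/local" \
  --with-fft-flavor="fftw3" \
  --with-linalg-flavor="openblas" \
  FFTW3_LIBS="-L$HOME/local/lib -lfftw3f -lfftw3" \
  FFTW3_FCFLAGS="-I$HOME/local/include" \
  LINALG_LIBS="-L$HOME/local/lib -lopenblas"
```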
In the previous examples, we executed configure in the top level directory of the package but for ABINIT we prefer to do things in a much cleaner way using a build directory The advantage of this approach is that we keep object files and executables separated from the source code and this allows us to build different executables using the same source tree. For example, one can have a build directory with a version compiled with gfortran and another build directory for the intel ifort compiler or other builds done with same compiler but different compilation options. Let’s call the build directory build_gfortran: mkdir build_gfortran && cd build_gfortran Now we should define the options that will be passed to the configure script. Instead of using the command line as done in the previous examples, we will be using an external file (myconf.ac9) to collect all our options. The syntax to read options from file is: ../configure --with-config-file="myconf.ac9" where double quotation marks may be needed for portability reasons. Note the use of ../configure as we are working inside the build directory build_gfortran while the configure script is located in the top level directory of the package. Important The name of the options in myconf.ac9 is in normalized form that is the initial _. Following these simple rules, the configure option --with-mpi becomes with_mpi in the ac9 file. --is removed from the option name and all the other -characters in the string are replaced by an underscore Also note that in the configuration file it is possible to use shell variables and reuse the output of external tools using backtick syntax as is nf-config --flibs or, if you prefer, ${nf-config --flibs}. This tricks allow us to reduce the amount of typing and have configuration files that can be easily reused for other machines. This is an example of configuration file in which we use the high-level interface ( with_LIBNAME=dirpath) as much as possible, except for linalg and FFTW3. The explicit value of LIBNAME_LIBS and LIBNAME_FCFLAGS is also reported in the commented sections. 
# -------------------------------------------------------------------------- # # MPI support # # -------------------------------------------------------------------------- # # * the build system expects to find subdirectories named bin/, lib/, # include/ inside the with_mpi directory # with_mpi=$HOME/local/ # Flavor of linear algebra libraries to use (default is netlib) # with_linalg_flavor="openblas" # Library flags for linear algebra (default is unset) # LINALG_LIBS="-L$HOME/local/lib -lopenblas" # -------------------------------------------------------------------------- # # Optimized FFT support # # -------------------------------------------------------------------------- # # Flavor of FFT framework to support (default is auto) # # The high-level interface does not work yet so we pass options explicitly #with_fftw3="$HOME/local/lib" # Explicit options for fftw3 with_fft_flavor="fftw3" FFTW3_LIBS="-L$HOME/local/lib -lfftw3f -lfftw3" FFTW3_FCFLAGS="-L$HOME/local/include" # -------------------------------------------------------------------------- # # LibXC # -------------------------------------------------------------------------- # # Install prefix for LibXC (default is unset) # with_libxc="$HOME/local" # Explicit options for libxc #LIBXC_LIBS="-L$HOME/local/lib -lxcf90 -lxc" #LIBXC_FCFLAGS="-I$HOME/local/include" # -------------------------------------------------------------------------- # # NetCDF # -------------------------------------------------------------------------- # # install prefix for NetCDF (default is unset) # with_netcdf=$(nc-config --prefix) with_netcdf_fortran=$(nf-config --prefix) # Explicit options for netcdf #with_netcdf="yes" #NETCDF_FORTRAN_LIBS=`nf-config --flibs` #NETCDF_FORTRAN_FCFLAGS=`nf-config --fflags` # install prefix for HDF5 (default is unset) # with_hdf5="$HOME/local" # Explicit options for hdf5 #HDF5_LIBS=`nf-config --flibs` #HDF5_FCFLAGS=`nf-config --fflags` # Enable OpenMP (default is no) enable_openmp="no" A documented template with all the supported options can be found here # # Generic config file for ABINIT (documented template) # # After editing this file to suit your needs, you may save it as # "~/.abinit/build/<hostname>.ac9" to keep these parameters as per-user # defaults. Just replace "<hostname>" by the name of your machine, # excluding the domain name. # # Example: if your machine is called "myhost.mydomain", you will save # this file as "~/.abinit/build/myhost.ac9". # # You may put this file at the top level of an ABINIT source tree as well, # in which case its definitions will apply to this particular tree only. In # some situations, you may even want to put it at your current top build # directory, in which case it will replace any other config file. # # Hint: If you do not know the name of your machine, just type "hostname" # in a terminal, or "hostname -s" to obtain the name without domain name. # # # IMPORTANT NOTES # # 1. Setting CPPFLAGS, CFLAGS, CXXFLAGS, or FCFLAGS manually is not # recommended and will override any setting made by the build system. # A gentler way to do is to use the CFLAGS_EXTRA, CXXFLAGS_EXTRA and # FCFLAGS_EXTRA environment variables, or to override only one kind # of flags. See the sections dedicated to C, C++ and Fortran below # for details. # # 2. Do not forget to remove the leading "#" on a line when you customize # an option. 
# # -------------------------------------------------------------------------- # # Global build options # # -------------------------------------------------------------------------- # # Where to install ABINIT (default is /usr/local) # #prefix="${HOME}/hpc" # Select debug level (default is basic) # # Allowed values: # # * none : strip debugging symbols # * custom : allow for user-defined debug flags # * basic : add '-g' option when the compiler allows for it # * verbose : like basic + definition of the DEBUG_VERBOSE CPP option # * enhanced : disable optimizations and debug verbosely # * paranoid : enhanced debugging with additional warnings # * naughty : paranoid debugging with array bound checks # # Levels other than no and yes are "profile mode" levels in which # user-defined flags are overriden and optimizations disabled (see # below) # # Note: debug levels are incremental, i.e. the flags of one level are # appended to those of the previous ones # #with_debug_flavor="custom" # Select optimization level whenever possible (default is standard, # except when debugging is in profile mode - see above - in which case # optimizations are turned off) # # Supported levels: # # * none : disable optimizations # * custom : enable optimizations with user-defined flags # * safe : build slow and very reliable code # * standard : build fast and reliable code # * aggressive : build very fast code, regardless of reliability # # Levels other than no and yes are "profile mode" levels in which # user-defined flags are overriden # # Note: # # * this option is ignored when the debug is level is higher than basic # #with_optim_flavor="aggressive" # Reduce AVX optimizations in sensitive subprograms (default is no) # #enable_avx_safe_mode="yes" # Enable compiler hints (default is yes) # # Allowed values: # # * no : do not apply any hint # * yes : apply all available hints # #enable_hints="no" # -------------------------------------------------------------------------- # # C support # # -------------------------------------------------------------------------- # # C preprocessor (should not be set in most cases) # #CPP="/usr/bin/cpp" # C preprocessor custom debug flags (when with_debug_flavor=custom) # #CPPFLAGS_DEBUG="-DDEV_MG_DEBUG_MODE" # C preprocessor custom optimization flags (when with_optim_flavor=custom) # #CPPFLAGS_OPTIM="-DDEV_DIAGO_DP" # C preprocessor additional custom flags # #CPPFLAGS_EXTRA="-P" # Forced C preprocessor flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #CPPFLAGS="-P" # ------------------------------ # # C compiler # #CC="gcc" # C compiler custom debug flags (when with_debug_flavor=custom) # #CFLAGS_DEBUG="-g3" # C compiler custom optimization flags (when with_optim_flavor=custom) # #CFLAGS_OPTIM="-O5" # C compiler additional custom flags # #CFLAGS_EXTRA="-O2" # Forced C compiler flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #CFLAGS="-O2" # ------------------------------ # # C linker custom debug flags (when with_debug_flavor=custom) # #CC_LDFLAGS_DEBUG="-Wl,-debug" # C linker custom optimization flags (when with_optim_flavor=custom) # #CC_LDFLAGS_OPTIM="-Wl,-ipo" # C linker additional custom flags # #CC_LDFLAGS_EXTRA="-Bstatic" # Forced C linker flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! 
# #CC_LDFLAGS="-Bstatic" # C linker custom debug libraries (when with_debug_flavor=custom) # #CC_LIBS_DEBUG="-ldebug" # C linker custom optimization libraries (when with_optim_flavor=custom) # #CC_LIBS_OPTIM="-lopt_funcs" # C linker additional custom libraries # #CC_LIBS_EXTRA="-lrt" # Forced C linker libraries # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #CC_LIBS="-lrt" # -------------------------------------------------------------------------- # # C++ support # # -------------------------------------------------------------------------- # # Note: the XPP* environment variables will likely have no effect # C++ preprocessor (should not be set in most cases) # #XPP="/usr/bin/cpp" # C++ preprocessor custom debug flags (when with_debug_flavor=custom) # #XPPFLAGS_DEBUG="-DDEV_MG_DEBUG_MODE" # C++ preprocessor custom optimization flags (when with_optim_flavor=custom) # #XPPFLAGS_OPTIM="-DDEV_DIAGO_DP" # C++ preprocessor additional custom flags # #XPPFLAGS_EXTRA="-P" # Forced C++ preprocessor flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #XPPFLAGS="-P" # ------------------------------ # # C++ compiler # #CXX="g++" # C++ compiler custom debug flags (when with_debug_flavor=custom) # #CXXFLAGS_DEBUG="-g3" # C++ compiler custom optimization flags (when with_optim_flavor=custom) # #CXXFLAGS_OPTIM="-O5" # C++ compiler additional custom flags # #CXXFLAGS_EXTRA="-O2" # Forced C++ compiler flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #CXXFLAGS="-O2" # ------------------------------ # # C++ linker custom debug flags (when with_debug_flavor=custom) # #CXX_LDFLAGS_DEBUG="-Wl,-debug" # C++ linker custom optimization flags (when with_optim_flavor=custom) # #CXX_LDFLAGS_OPTIM="-Wl,-ipo" # C++ linker additional custom flags # #CXX_LDFLAGS_EXTRA="-Bstatic" # Forced C++ linker flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #CXX_LDFLAGS="-Bstatic" # C++ linker custom debug libraries (when with_debug_flavor=custom) # #CXX_LIBS_DEBUG="-ldebug" # C++ linker custom optimization libraries (when with_optim_flavor=custom) # #CXX_LIBS_OPTIM="-lopt_funcs" # C++ linker additional custom libraries # #CXX_LIBS_EXTRA="-lblitz" # Forced C++ linker libraries # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #CXX_LIBS="-lblitz" # -------------------------------------------------------------------------- # # Fortran support # # -------------------------------------------------------------------------- # # Fortran preprocessor (should not be set in most cases) # #FPP="/usr/local/bin/fpp" # Fortran preprocessor custom debug flags (when with_debug_flavor=custom) # #FPPFLAGS_DEBUG="-DDEV_MG_DEBUG_MODE" # Fortran preprocessor custom optimization flags (when with_optim_flavor=custom) # #FPPFLAGS_OPTIM="-DDEV_DIAGO_DP" # Fortran preprocessor additional custom flags # #FPPFLAGS_EXTRA="-P" # Forced Fortran preprocessor flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! 
# #FPPFLAGS="-P" # ------------------------------ # # Fortran compiler # #FC="gfortran" # Fortran 77 compiler (addition for the Windows/Cygwin environment) # #F77="gfortran" # Fortran compiler custom debug flags (when with_debug_flavor=custom) # #FCFLAGS_DEBUG="-g3" # Fortran compiler custom OpenMP flags # #FCFLAGS_OPENMP="-fopenmp" # Fortran compiler custom optimization flags (when with_optim_flavor=custom) # #FCFLAGS_OPTIM="-O5" # Fortran compiler additional custom flags # #FCFLAGS_EXTRA="-O2" # Forced Fortran compiler flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #FCFLAGS="-O2" # Fortran flags for fixed-form source files # #FCFLAGS_FIXEDFORM="-ffixed-form" # Fortran flags for free-form source files # #FCFLAGS_FREEFORM="-ffree-form" # Fortran compiler flags to use a module directory # #FCFLAGS_MODDIR=="-J$(abinit_moddir)" # Tricky Fortran compiler flags # #FCFLAGS_HINTS="-ffree-line-length-none" # Fortran linker custom debug flags (when with_debug_flavor=custom) # #FC_LDFLAGS_DEBUG="-Wl,-debug" # Fortran linker custom optimization flags (when with_optim_flavor=custom) # #FC_LDFLAGS_OPTIM="-Wl,-ipo" # Fortran linker custom flags # #FC_LDFLAGS_EXTRA="-Bstatic" # Forced Fortran linker flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #FC_LDFLAGS="-Bstatic" # Fortran linker custom debug libraries (when with_debug_flavor=custom) # #FC_LIBS_DEBUG="-ldebug" # Fortran linker custom optimization libraries (when with_optim_flavor=custom) # #FC_LIBS_OPTIM="-lopt_funcs" # Fortran linker additional custom libraries # #FC_LIBS_EXTRA="-lsvml" # Forced Fortran linker libraries # Note: will override build-system configuration - USE AT YOUR OWN RISKS! # #FC_LIBS="-lsvml" # ------------------------------ # # Use C clock instead of Fortran clock for timings (default is no) # #enable_cclock="yes" # Wrap Fortran compiler calls (default is auto-detected) # Combine this option with with_debug_flavor="basic" (or some other debug flavor) to keep preprocessed source # files (they are removed by default, except if their build fails) # #enable_fc_wrapper="yes" # Choose whether to read file lists from standard input or "ab.files" # (default is yes = standard input) # #enable_stdin="no" # Set per-directory Fortran optimizations (useful when a Fortran compiler # crashes during the build) # # Note: this option is not available through the command line # #fcflags_opt_95_drive="-O0" # -------------------------------------------------------------------------- # # Python support # # -------------------------------------------------------------------------- # # Flags to pass to the Python interpreter (default is unset) # #PYFLAGS="-B" # Preprocessing flags for C/Python bindings # #PYTHON_CPPFLAGS="-I/usr/local/include/numpy" # -------------------------------------------------------------------------- # # Libraries and linking # # -------------------------------------------------------------------------- # # Set archiver name # #AR="xiar" # Archiver custom debug flags (when with_debug_flavor=custom) # #ARFLAGS_DEBUG="" # Archiver custom optimization flags (when with_optim_flavor=custom) # #ARFLAGS_OPTIM="" # Archiver additional custom flags # #ARFLAGS_EXTRA="-X 64" # Forced archiver flags # Note: will override build-system configuration - USE AT YOUR OWN RISKS! 
# #ARFLAGS="-X 32_64" # ------------------------------ # # Note: the following definitions are necessary for MINGW/WINDOW$ only # and should be left unset on other architectures # Archive index generator #RANLIB="ranlib" # Object symbols lister #NM="nm" # Generic linker #LD="ld" # Language-independent libraries to add to the build configuration #LIBS="-lstdc++ -lpython2.7" # -------------------------------------------------------------------------- # # MPI support # # -------------------------------------------------------------------------- # # Determine whether to build parallel code (default is auto) # # Permitted values: # # * no : disable MPI support # * yes : enable MPI support, assuming the compiler is MPI-aware # * <prefix> : look for MPI in the <prefix> directory # # If left unset, the build system will take all appropriate decisions by # itself, and MPI will be enabled only if the build environment supports # it. If set to "yes", the configure script will stop if it does not find # a working MPI environment. # # Note: # # * the build system expects to find subdirectories named bin/, lib/, # include/ under the prefix. # #with_mpi="/usr/local/openmpi-gcc" # Define MPI flavor (optional, only useful on some systems) # # Permitted values: # # * auto : Let the build system configure MPI (recommended). # # * double-wrap : Internally wrap the MPI compiler wrappers, only for # severely bugged and oldish MPI implementations. # # * flags : Do not look for MPI compiler wrappers, only use # compiler flags to enable MPI. # # * native : Assume that the compilers have native MPI support. # # * prefix : Assume that MPI wrappers are located under # specified prefix and exclude any other # configuration. # #with_mpi_flavor="native" # Activate the MPI_IN_PLACE option whenever possible (default is no) # WARNING: this feature requires MPI2, ignored if the MPI library # is not MPI2 compliant. # #enable_mpi_inplace="no" # Activate parallel I/O (default is auto) # # Permitted values: # # * auto : let the configure script auto-detect MPI I/O support # * no : disable MPI I/O support # * yes : enable MPI I/O support # # If left unset, the build system will take all appropriate decisions by # itself, and MPI I/O will be enabled only if the build environment supports # it. If set to "yes", the configure script will stop if it does not find # a working MPI I/O environment. # #enable_mpi_io="yes" # Enable MPI-IO mode in Abinit (use MPI I/O as default I/O library, # change the default values of iomode) # Beware that not all the features of Abinit support MPI I/O, # This options is mainly used by developers for debugging purposes. 
# #enable_mpi_io_default="no" # Set MPI standard level (default is auto-detected) # Note: all current implementations should support at least level 2 # # Supported levels: # # * 1 : use 'mpif.h' header # * 2 : use mpi Fortran module # * 3 : use mpi_f08 Fortran module # #with_mpi_level="2" # C preprocessor flags for MPI (default is unset) # #MPI_CPPFLAGS="-I/usr/local/include" # C flags for MPI (default is unset) # #MPI_CFLAGS="" # C++ flags for MPI (default is unset) # #MPI_CXXFLAGS="" # Fortran flags for MPI (default is unset) # #MPI_FCFLAGS="" # Link flags for MPI (default is unset) # #MPI_LDFLAGS="" # Library flags for MPI (default is unset) # #MPI_LIBS="-L/usr/local/lib -lmpi" # -------------------------------------------------------------------------- # # GPU support # # -------------------------------------------------------------------------- # # Requirement: go through README.GPU before doing anything # # Note: this is highly experimental - USE AT YOUR OWN RISKS! # Trigger and install prefix for GPU libraries and compilers # # Permitted values: # # * no : disable GPU support # * yes : enable GPU support, assuming the build environment # is properly set # * <prefix> : look for GPU in the <prefix> directory # # Note: The build system expects to find subdirectories named bin/, lib/, # include/ under the prefix. # #with_gpu="/usr/local/cuda" # Flavor of the GPU library to use (default is cuda-single) # # Supported libraries: # # * cuda-single : Cuda with single-precision arithmetic # * cuda-double : Cuda with double-precision arithmetic # * none : not implemented (will replace enable_gpu) # #with_gpu_flavor="cuda-double" # GPU C preprocessor flags (default is unset) # #GPU_CPPFLAGS="-I/usr/local/include/cuda" # GPU C flags (default is unset) # #GPU_CFLAGS="-I/usr/local/include/cuda" # GPU C++ flags (default is unset) # #GPU_CXXFLAGS="-std=c++11" # GPU Fortran flags (default is unset) # #GPU_FCFLAGS="-I/usr/local/include/cuda" # GPU link flags (default is unset) # #GPU_LDFLAGS="-L/usr/local/cuda/lib64 -lcublas -lcufft -lcudart" # GPU link flags (default is unset) # #GPU_LIBS="-L/usr/local/cuda/lib64 -lcublas -lcufft -lcudart" # ------------------------------ # # Advanced GPU options (experts only) # # DO NOT EDIT THIS SECTION UNLESS YOU *TRULY* KNOW WHAT YOU ARE DOING! # In any case, the outcome of setting the following options is highly # impredictible. # nVidia C compiler (should not be set) # #NVCC="/usr/local/cuda/bin/nvcc" # Forced nVidia C compiler preprocessing flags (should not be set) # #NVCC_CPPFLAGS="-DHAVE_CUDA_SDK" # Forced nVidia C compiler flags (should not be set) # #NVCC_CFLAGS="-arch=sm_13" # Forced nVidia linker flags (should not be set) # #NVCC_LDFLAGS="" # Forced nVidia linker libraries (should not be set) # #NVCC_LIBS="" # -------------------------------------------------------------------------- # # Linear algebra support # # -------------------------------------------------------------------------- # # A large set of linear algebra libraries are available on the Web. # See e.g. # Among all those, many are supported by ABINIT. # WARNING: when setting the value of the linear algebra flavor to "custom", # the associated CPP options may be defined in an impredictable # manner by the build system. In such a case, checking which # CPP options are enabled by looking at the output of abinit # is thus highly recommended before running production # calculations. 
# Supported libraries: # # * AOCL : AMD Optimizing CPU Libraries # # # * atlas : Automatically Tuned Linear Algebra Software # # # * auto : automatically look for linear algebra libraries # depending on the build environment (default) # # * easybuild : EasyBuild - Building software with ease # # # * elpa : Eigen soLvers for Petaflops Applications # # # * magma : Matrix Algebra on GPU and Multicore Architectures # (MAGMA minimal version>=1.1.0, requires Cuda) # # # * mkl : Intel Math Kernel Library # # # * netlib : Netlib repository reference libraries # # # # * none : just check user-specified libraries, do not try to # detect their type # # * openblas : OpenBLAS - An Optimized BLAS Library # # # * plasma : Parallel Linear Algebra for Scalable Multicore # Architectures (requires MPI) # # # Notes: # # * you may combine "magma" and/or "plasma" with # any other flavor, using '+' as a separator # # * "custom" also works when the Fortran compiler provides a full # BLAS+LAPACK implementation internally (e.g. Lahey Fortran) # # * the include and link flags for MAGMA and ScaLAPACK have to be # specified together with those of BLAS and LAPACK (see options below) # # * please consult the MKL link line advisor if you experience # problems with MKL, by going to # # # * if with_linalg is set to "no", linear algebra tests will be disabled # and the configuration will be assumed to work as-is by the build # system (USE AT YOUR OWN RISKS) # # * with_linalg_incs and with_linalg_libs systematically override the # contents of with_linalg # # * in order to enable SCALAPACK, you might have to specify -lscalapack # or use another appropriate syntax for your system. # # Additional information might be obtained from # # and # # Install prefix for linear algebra # #with_linalg="/usr/local" # Flavor of linear algebra libraries to use (default is netlib) # #with_linalg_flavor="openblas" # C preprocessing flags for linear algebra (default is unset) # #LINALG_CPPFLAGS="-I/usr/local/include" # C flags for linear algebra (default is unset) # #LINALG_CFLAGS="-m64" # C++ flags for linear algebra (default is unset) # #LINALG_CXXFLAGS="-m64" # Fortran flags for linear algebra (default is unset) # #LINALG_FCFLAGS="-I/usr/local/include" # Link flags for linear algebra (default is unset) # #LINALG_LDFLAGS="" # Library flags for linear algebra (default is unset) # #LINALG_LIBS="-L/usr/local/lib -llapack -lblas -lscalapack" # -------------------------------------------------------------------------- # # Optimized FFT support # # -------------------------------------------------------------------------- # # Supported libraries: # # * auto : select library depending on build environment (default) # * custom : bypass build-system checks # * dfti : native MKL FFT library # * fftw3 : serial FFTW3 library # * fftw3-threads : threaded FFTW3 library # * pfft : MPI-parallel PFFT library (for maintainers only) # * goedecker : Abinit internal FFT # # Notes: # # * only one flavor can be selected at a time, flavors being mutually # exclusive # # The following lines relate to a generic FFT library (e.g. 
they are used for dfti) # Install prefix for the FFT library # #with_fft="/usr/local" # Flavor of FFT framework to support (default is auto) # #with_fft_flavor="fftw3" # C preprocessor flags for the FFT framework (default is unset) # #FFT_CPPFLAGS="-I/usr/local/include/fftw" # C flags for the FFT framework (default is unset) # #FFT_CFLAGS="-I/usr/local/include/fftw" # Fortran flags for the FFT framework (default is unset) # #FFT_FCFLAGS="-I/usr/local/include/fftw" # Link flags for the FFT framework (default is unset) # #FFT_LDFLAGS="-L/usr/local/lib/fftw -lfftw3" # Library flags for the FFT framework (default is unset) # #FFT_LIBS="-L/usr/local/lib/fftw -lfftw3" # ------------------------------ # # The following lines relate specifically to the FFTW3 library # Install prefix for the FFTW3 library # #with_fftw3="/usr/local" # C preprocessor flags for the FFTW3 library (default is unset) # #FFTW3_CPPFLAGS="-I/usr/local/include/fftw" # C flags for the FFTW3 library (default is unset) # #FFTW3_CFLAGS="-I/usr/local/include/fftw" # Fortran flags for the FFTW3 library (default is unset) # #FFTW3_FCFLAGS="-I/usr/local/include/fftw" # Link flags for the FFTW3 library (default is unset) # #FFTW3_LDFLAGS="-L/usr/local/lib/fftw -lfftw3" # Library flags for the FFTW3 library (default is unset) # #FFTW3_LIBS="-L/usr/local/lib/fftw -lfftw3" # ------------------------------ # # The following lines relate specifically to the PFFT library # Install prefix for the PFFT library # #with_pfft="/usr/local" # C preprocessor flags for the PFFT library (default is unset) # #PFFT_CPPFLAGS="-I/usr/local/include/fftw" # C flags for the PFFT library (default is unset) # #PFFT_CFLAGS="-I/usr/local/include/fftw" # Fortran flags for the PFFT library (default is unset) # #PFFT_FCFLAGS="-I/usr/local/include/fftw" # Link flags for the PFFT library (default is unset) # #PFFT_LDFLAGS="-L/usr/local/lib/fftw -lfftw3" # Library flags for the PFFT library (default is unset) # #PFFT_LIBS="-L/usr/local/lib/fftw -lfftw3" # -------------------------------------------------------------------------- # # Feature triggers # # -------------------------------------------------------------------------- # # Through feature triggers, the build system of Abinit tries to link # prioritarily with external libraries to provide the requested # functionality. When unsuccessful, Abinit will run in degraded mode, # which means that it will provide poor performance and scalability, as # well as refuse to run some standard calculations. However, in some # cases, for historical reasons, it can resort to a limited internal # implementation. # # Enabling feature triggers is necessary for packaging and is recommended # in most other cases. Relying upon external optimized libraries is always # smarter than embedding their source code, as their performance and # integration within the local environment are always significantly # better. # # The following optional dependencies are sorted by alphabetical order. # Please note that some of them may depend on others, as indicated. 
# Notes: # # * when specifying with_package="prefix", the build system automatically # looks for the relevant libraries in prefix/include and prefix/lib # # * the 'with_package' options can also be set to yes or no, to let the # build system find out the corresponding parameters on plaftorms where # the external packages are available system-wide # # * the 'with_package_incs' and 'with_package_libs' options systematically # override the settings provided by corresponding 'with_package' options # # ------------------------------ # # BigDFT (depends on LibXC, see below) # Website: # Enable BigDFT support (default is no) # #with_bigdft="yes" # Fortran flags for BigDFT (default is unset) # #BIGDFT_FCFLAGS="-I/usr/local/include/bigdft" # Link flags for BigDFT (default is unset) # #BIGDFT_LDFLAGS="" # Library flags for BigDFT (default is unset) # #BIGDFT_LIBS="-L/usr/local/lib/bigdft -lbigdft -lpoissonsolver -labinit" # ------------------------------ # # Levenberg-Marquardt algorithm # Website: # Trigger and install prefix for Levmar (default is unset) # #with_levmar="yes" # C preprocessor flags for Levmar (default is unset) # #LEVMAR_CPPFLAGS="-I/opt/etsf/include" # C flags for Levmar (default is unset) # #LEVMAR_CFLAGS="-I/opt/etsf/include" # Fortran flags for Levmar (default is unset) # #LEVMAR_FCFLAGS="-I/opt/etsf/include" # Link flags for Levmar (default is unset) # #LEVMAR_LDFLAGS="" # Library flags for Levmar (default is unset) # #LEVMAR_LIBS="-L/opt/etsf/lib -llevmar" # ------------------------------ # # LibXC # Website: # Trigger and install prefix for LibXC (default is unset) # #with_libxc="/usr/local" # C preprocessor flags for LibXC (default is unset) # #LIBXC_CPPFLAGS="-I/usr/local/include" # C flags for LibXC (default is unset) # #LIBXC_CFLAGS="-I/usr/local/include" # Fortran flags for LibXC (default is unset) # #LIBXC_FCFLAGS="-I/usr/local/include" # Link flags for the LibXC library (default is unset) # #LIBXC_LDFLAGS="" # Library flags for LibXC (default is unset) # #LIBXC_LIBS="-L/usr/local/lib -lxcf90 -lxc" # ------------------------------ # # LibXML2 # Website: # Trigger and install prefix for LibXML2 (default is unset) # #with_libxml2="/usr/local" # C preprocessor flags for LibXML2 (default is unset) # #LIBXML2_CPPFLAGS="-I/usr/local/include" # C flags for LibXML2 (default is unset) # #LIBXML2_CFLAGS="-I/usr/local/include" # Fortran flags for LibXML2 (default is unset) # #LIBXML2_FCFLAGS="-I/usr/local/include" # Link flags for the LibXML2 library (default is unset) # #LIBXML2_LDFLAGS="" # Library flags for LibXML2 (default is unset) # #LIBXML2_LIBS="-L/usr/local/lib -lxcf90 -lxc" # ------------------------------ # # HDF5 # Website: # Trigger and install prefix for HDF5 (default is unset) # #with_hdf5="/usr/local" # Location of the h5cc (or h5pcc) compiler (default is unset) # # Note: This parameter can greatly help the build system of ABINIT in # selecting the correct version of HDF5. 
# #H5CC="/opt/local/hdf5-serial/bin/h5cc" # C preprocessing flags for HDF5 (default is unset) # #HDF5_CPPFLAGS="-I/usr/local/include/hdf5" # C flags for HDF5 (default is unset) # #HDF5_CFLAGS="-std=c99" # Fortran flags for HDF5 (default is unset) # #HDF5_FCFLAGS="-I/usr/local/include/hdf5" # Link flags for HDF5 (default is unset) # #HDF5_LDFLAGS="" # Library flags for HDF5 (default is unset) # #HDF5_LIBS="-L/usr/local/lib/hdf5 -lhdf5 -lhdf5_hl" # ------------------------------ # # NetCDF # Website: # Trigger and install prefix for NetCDF (default is unset) # #with_netcdf="/usr/local" # C preprocessing flags for NetCDF (default is unset) # #NETCDF_CPPFLAGS="-I/usr/local/include/netcdf" # C flags for NetCDF (default is unset) # #NETCDF_CFLAGS="-std=c99" # Fortran flags for NetCDF (default is unset) # #NETCDF_FCFLAGS="-I/usr/local/include/netcdf" # Link flags for NetCDF (default is unset) # #NETCDF_LDFLAGS="" # Library flags for NetCDF (default is unset) # #NETCDF_LIBS="-L/usr/local/lib/netcdf -lnetcdf" # Enable Netcdf mode in Abinit (use netcdf as default I/O library, # change the default values of accesswff) # #enable_netcdf_default="no" # ------------------------------ # # NetCDF-Fortran # Website: # Trigger and install prefix for NetCDF-Fortran (default is unset) # #with_netcdf_fortran="/usr/local" # C preprocessing flags for NetCDF-Fortran (default is unset) # #NETCDF_FORTRAN_CPPFLAGS="-I/usr/local/include/netcdf_fortran" # C flags for NetCDF-Fortran (default is unset) # #NETCDF_FORTRAN_CFLAGS="" # Fortran flags for NetCDF-Fortran (default is unset) # #NETCDF_FORTRAN_FCFLAGS="-I/usr/local/include/netcdf_fortran" # Link flags for NetCDF-Fortran (default is unset) # #NETCDF_FORTRAN_LDFLAGS="" # Library flags for NetCDF-Fortran (default is unset) # #NETCDF_FORTRAN_LIBS="-L/usr/local/lib/netcdf_fortran -lnetcdff" # ------------------------------ # # Enable LibPSML support (default is no) # #with_libpsml="yes" # Fortran flags for the LibPSML library (default is unset) # #LIBPSML_FCFLAGS="-I/usr/local/libpsml-1.1.7/include" # Link flags for the LibPSML library (default is unset) # #LIBPSML_LDFLAGS="" # Library flags for LibPSML (default is unset) # #LIBPSML_LIBS="-L/usr/local/libpsml-1.1.7/lib -lpsml" # ------------------------------ # # TRIQS DMFT library (for CTQMC and rotationally invariant interaction) # Website: # Trigger and install prefix for TRIQS (default is unset) # #with_triqs="/usr/local" # Allow Fortran programs to call Python # #enable_python_invocation="yes" # Select the 1.4 version of TRIQS # #enable_triqs_v1_4="yes" # Select the 2.0 version of TRIQS # #enable_triqs_v2_0="yes" # C++ flags for the TRIQS library (default is unset) # #TRIQS_CXXFLAGS="-I/usr/local/triqs/include -I/usr/local/cthyb/src/c++" # Fortran flags for the TRIQS library (default is unset) # #TRIQS_FCFLAGS="-I/usr/local/triqs/include -I/usr/local/cthyb/src/c++" # Link flags for the TRIQS library (default is unset) # #TRIQS_LDFLAGS="-L/usr/local/triqs/lib -ltriqs -lcthyb_c" # Link flags for the TRIQS library (default is unset) # #TRIQS_LIBS="-L/usr/local/triqs/lib -ltriqs -lcthyb_c" # ------------------------------ # # Enable Wannier90 support (default is no) # #with_wannier90="yes" # Fortran flags for the Wannier90 library (default is unset) # #WANNIER90_FCFLAGS="-I/usr/local/include/wannier90" # Link flags for the Wannier90 library (default is unset) # #WANNIER90_LDFLAGS="" # Link flags for the Wannier90 library (default is unset) # #WANNIER90_LIBS="-L${HOME}/lib/wannier90 -lwannier90" # 
------------------------------ # # Enable XMLF90 support (default is no) # #with_xmlf90="yes" # Fortran flags for the XMLF90 library (default is unset) # #XMLF90_FCFLAGS="-I/usr/local/include" # Link flags for the LibPSML library (default is unset) # #XMLF90_LDFLAGS="" # Link flags for the LibPSML library (default is unset) # #XMLF90_LIBS="-L/usr/local/lib -lxmlf90" # -------------------------------------------------------------------------- # # Profiling and performance analysis # # -------------------------------------------------------------------------- # # Enable internal Abinit timer (default is yes) # #enable_timer="no" # Enable PAPI support (default is no) # #with_papi="yes" # C preprocessor flags for the PAPI library (default is unset) # #PAPI_CPPFLAGS="-I/usr/local/include/papi" # C flags for the PAPI library (default is unset) # #PAPI_CFLAGS="-I/usr/local/include/papi" # Fortran flags for the PAPI library (default is unset) # #PAPI_FCFLAGS="-I/usr/local/include/papi" # Link flags for the PAPI library (default is unset) # #PAPI_LDFLAGS="" # Library flags for the PAPI library (default is unset) # #PAPI_LIBS="-L/usr/local/lib/papi -lpapi" # -------------------------------------------------------------------------- # # Fallbacks # # -------------------------------------------------------------------------- # # Fallbacks are external packages pre-built together for Abinit, in order to # partly compensate for the absence of required external packages. If used, # they will provide lower reliability, performance and scalability than the # proper installation of the full-fledged corresponding packages. In any case, # they should not be used in production. # # Fallbacks are enabled when the "with_fallbacks" option below points to # an existing and readable directory. If this is not the case, failed # detections of external packages will cause configure to abort. # # Example: if you set with_fallbacks="/path/to/fallbacks", the # build system of Abinit will look for: # # * executables in /path/to/fallbacks/bin/ # * C headers and Fortran modules in /path/to/fallbacks/include/ # * libraries in /path/to/fallbacks/lib/ # # when it cannot detect properly installed external packages. # # Please note that this mechanism will work only if you have built and # installed the fallbacks you need beforehand. You can have a look at # the following helper script after running configure for more # information: # # ~abinit-builddir/fallbacks/build-abinit-fallbacks.sh # # Comprehensive documentation about how to use external packages properly # and build fallbacks can be found on the Abinit Wiki, at: # # # # Prefix for already installed fallbacks (default is unset) # #with_fallbacks="/opt/abinit-dev/fallbacks/gnu/4.9/mpi" # -------------------------------------------------------------------------- # # Build workflow # # -------------------------------------------------------------------------- # # Note: if you do not know what the following is about, just leave it unset. 
# Use external ABINIT Common (default is no) # #with_abinit_common="yes" # Fortran flags for ABINIT Common (default is unset) # #ABINIT_COMMON_FCFLAGS="-I/usr/local/include/abinit_common" # Link flags for ABINIT Common (default is unset) # #ABINIT_COMMON_LDFLAGS="" # Library flags for ABINIT Common (default is unset) # #ABINIT_COMMON_LIBS="-L${HOME}/abinit/lib -labinit_common" # ------------------------------ # # Use external LibPAW (default is no) # #with_libpaw="yes" # C preprocessing flags for LibPAW (default is unset) # #LIBPAW_CPPFLAGS="-I/usr/local/include/libpaw" # C flags for LibPAW (default is unset) # #LIBPAW_CFLAGS="-std=c99" # Fortran flags for LibPAW (default is unset) # #LIBPAW_FCFLAGS="-I/usr/local/include/libpaw" # Link flags for LibPAW (default is unset) # #LIBPAW_LDFLAGS="" # Library flags for LibPAW (default is unset) # #LIBPAW_LIBS="-L${HOME}/abinit_shared/lib -lpaw" # -------------------------------------------------------------------------- # # Developer options # # -------------------------------------------------------------------------- # # Note: all the following options are disabled by default because they # enable the build of highly experimental code (i.e. they have # the complementary default values to those displayed below) # # Uncomment the following lines at your own risks # Enable BSE unpacking (gmatteo) # #enable_bse_unpacked="yes" # Enable optimized cRPA build for ifort <=17 (routerovitch) # #enable_crpa_optim="yes" # Enable exports (pouillon) # #enable_exports="yes" # Enable double precision for GW calculations (gmatteo) # #enable_gw_dpc="yes" # Use tetrahedrons from Libtetra (torrent) # #enable_libtetra="yes" # Enable lotf (mancini) # #enable_lotf="yes" # Enable memory profiling (waroquiers) # #enable_memory_profiling="yes" # Enable OpenMP (gmatteo, torrent) # #enable_openmp="yes" # Enable ZDOTC and ZDOTU bugfix (gmatteo) # #enable_zdot_bugfix="yes" # -------------------------------------------------------------------------- # # Maintainer options # # -------------------------------------------------------------------------- # # Fortran compiler vendor to be used by the build system (default is unset) # # Note: do not use if you do not know what it is about # #with_fc_vendor="dummy" # Fortran compiler version to be used by the build system (default is unset) # # Note: do not use if you do not know what it is about # #with_fc_version="0.0" # Build debugging instructions present in the source code # #enable_source_debug="yes" Copy the content of the example in myconf.ac9, then run: ../configure --with-config-file="myconf.ac9" If everything goes smoothly, you should obtain the following summary: ============================================================================== === Final remarks === ============================================================================== Core build parameters --------------------- * C compiler : gnu version 5.3 * Fortran compiler : gnu version 5.3 * architecture : intel xeon (64 bits) * debugging : basic * optimizations : standard * OpenMP enabled : no (collapse: ignored) * MPI enabled : yes (flavor: auto) * MPI in-place : no * MPI-IO enabled : yes * GPU enabled : no (flavor: none) * LibXML2 enabled : no * LibPSML enabled : no * XMLF90 enabled : no * HDF5 enabled : yes (MPI support: yes) * NetCDF enabled : yes (MPI support: yes) * NetCDF-F enabled : yes (MPI support: yes) * FFT flavor : fftw3 (libs: user-defined) * LINALG flavor : openblas (libs: user-defined) * Build workflow : monolith 0 deprecated options have been 
used:. Configuration complete. You may now type "make" to build Abinit. (or "make -j<n>", where <n> is the number of available processors) Important Please take your time to read carefully the final summary and make sure you are getting what you expect. A lot of typos or configuration errors can be easily spotted at this level. You might then find useful to have a look at other examples available in this page. Additional configuration files for clusters can be found in the abiconfig package. The configure script has generated several Makefiles required by make as well as the config.h include file with all the pre-processing options that will be used to build ABINIT. This file is included in every ABINIT source file and it defines the features that will be activated or deactivated at compilation-time depending on the libraries available on your machine. Let’s have a look at a selected portion of config.h: /* Define to 1 if you have a working MPI installation. */ #define HAVE_MPI 1 /* Define to 1 if you have a MPI-1 implementation (obsolete, broken). */ /* #undef HAVE_MPI1 */ /* Define to 1 if you have a MPI-2 implementation. */ #define HAVE_MPI2 1 /* Define to 1 if you want MPI I/O support. */ #define HAVE_MPI_IO 1 /* Define to 1 if you have a parallel NetCDF library. */ /* #undef HAVE_NETCDF_MPI */ This file tells us that - we are building ABINIT with MPI support - we have a library implementing the MPI2 specifications - our MPI implementation supports parallel MPI-IO. Note that this does not mean that netcdf supports MPI-IO. In this example, indeed, HAVE_NETCDF_MPI is undefined and this means the library does not have parallel-IO capabilities. Of course, end users are mainly concerned with the final summary reported by the configure script to understand whether a particular feature has been activated or not but more advanced users may find the content of config.h valuable to understand what’s going on. Now we can finally compile the package with e.g. make -j2. If the compilation completes successfully (🙌), you should end up with a bunch of executables inside src/98_main. Note, however, that the fact that the compilation completed successfully does not necessarily imply that the executables will work as expected as there are many different things that can go wrong at runtime. First of all, let’s try to execute: abinit --version Tip If this is a parallel build, you may need to use mpirun -n 1 abinit --version even for a sequential run as certain MPI libraries are not able to bootstrap the MPI library without mpirun (mpiexec). On some clusters with Slurm, the syadmin may ask you to use srun instead of mpirun. To get the summary of options activated during the build, run abinit with the -b option (or --build if you prefer the verbose version) ./src/98_main/abinit -b If the executable does not crash (🙌), you may want to execute make test_fast to run some basic tests. If something goes wrong when executing the binary or when running the tests, checkout the Troubleshooting section for possible solutions. Finally, you may want to execute the runtests.py python script in the tests directory in order to validate the build before running production calculations: cd tests ../../tests/runtests.py v1 -j4 As usual, use: ../../tests/runtests.py --help to list the available options. A more detailed discussion is given in this page. Dynamic libraries and ldd¶ Since we decided to compile with dynamic linking, the external libraries are not included in the final executables. 
Actually, the libraries will be loaded by the Operating System (OS) at runtime when we execute the binary. The OS will search for dynamic libraries using the list of directories specified in $LD_LIBRARY_PATH ( $DYLD_LIBRARY_PATH for MacOs). A typical mistake is to execute abinit with a wrong $LD_LIBRARY_PATH that is either empty or different from the one used when compiling the code (if it’s different and it works, I assume you know what you are doing so you should not be reading this section!) On Linux, one can use the ldd tool to print the shared objects (shared libraries) required by each program or shared object specified on the command line: ldd src/98_main/abinit linux-vdso.so.1 (0x00007fffbe7a4000) libopenblas.so.0 => /home/gmatteo/local/lib/libopenblas.so.0 (0x00007fc892155000) libnetcdff.so.7 => /home/gmatteo/local/lib/libnetcdff.so.7 (0x00007fc891ede000) libnetcdf.so.15 => /home/gmatteo/local/lib/libnetcdf.so.15 (0x00007fc891b62000) libhdf5_hl.so.200 => /home/gmatteo/local/lib/libhdf5_hl.so.200 (0x00007fc89193c000) libhdf5.so.200 => /home/gmatteo/local/lib/libhdf5.so.200 (0x00007fc891199000) libz.so.1 => /lib64/libz.so.1 (0x00007fc890f74000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fc890d70000) libgfortran.so.3 => /lib64/libgfortran.so.3 (0x00007fc890a43000) libm.so.6 => /lib64/libm.so.6 (0x00007fc890741000) libmpifort.so.12 => /home/gmatteo/local/lib/libmpifort.so.12 (0x00007fc89050a000) libmpi.so.12 => /home/gmatteo/local/lib/libmpi.so.12 (0x00007fc88ffb9000) libquadmath.so.0 => /lib64/libquadmath.so.0 (0x00007fc88fd7a000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fc88fb63000) libc.so.6 => /lib64/libc.so.6 (0x00007fc88f7a1000) ... <snip> As expected, our executable uses the openblas, netcdf, hdf5, mpi libraries installed in $HOME/local/lib plus other basic libs coming from lib64(e.g. libgfortran) added by the compiler. Tip On macOS, replace ldd with otool and the syntax: otool -L abinit If you see entries like: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib (compatibility version 1.0.0, current version 1.0.0) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib (compatibility version 1.0.0, current version 1.0.0) it means that you are linking against macOS VECLIB. In this case, make sure to use --enable-zdot-bugfix="yes" when configuring the package otherwise the code will crash at runtime due to ABI incompatibility (calling conventions for functions returning complex values). Did I tell you that macOS does not care about Fortran? If you wonder about the difference between API and ABI, please read this stackoverflow post. To understand why LD_LIBRARY_PATH is so important, let’s try to reset the value of this variable with unset LD_LIBRARY_PATH echo $LD_LIBRARY_PATH then rerun ldd (or otool) again. Do you understand what’s happening here? Why it’s not possible to execute abinit with an empty $LD_LIBRARY_PATH? How would you fix the problem? Troubleshooting¶ Problems can appear at different levels: - configuration time - compilation time - runtime i.e. when executing the code Configuration-time errors are usually due to misconfiguration of the environment, missing (hard) dependencies or critical problems in the software stack that will make configure abort. Unfortunately, the error message reported by configure is not always self-explanatory. 
To pinpoint the source of the problem you will need to search for clues in config.log, especially the error messages associated with the feature/library that is triggering the error. This is not as easy as it looks since configure sometimes performs multiple tests to detect your architecture and some of these tests are supposed to fail. As a consequence, not all the error messages reported in config.log are necessarily relevant. Even if you find the test that makes configure abort, the error message may be obscure and difficult to decipher. In this case, you can ask for help on the forum but remember to provide enough info on your architecture, the compilation options and, most importantly, a copy of config.log. Without this file, indeed, it is almost impossible to understand what's going on. An example will help. Let's assume we are compiling on a cluster using modules provided by our sysadmin. More specifically, there is an openmpi_intel2013_sp1.1.106 module that is supposed to provide the openmpi implementation of the MPI library compiled with a particular version of the intel compiler (remember what we said about using the same version of the compiler). Obviously we need to load the modules before running configure in order to set up our environment so we issue:

module load openmpi_intel2013_sp1.1.106

The module seems to work as no error message is printed to the terminal and which mpicc shows that the compiler has been added to $PATH. At this point we try to configure ABINIT with:

with_mpi_prefix="${MPI_HOME}"

where $MPI_HOME is an environment variable set by module load (use e.g. env | grep MPI). Unfortunately, the configure script aborts at the very beginning complaining that the C compiler does not work!

checking for gcc... /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc
checking for C compiler default output file name...
configure: error: in `/home/gmatteo/abinit/build':
configure: error: C compiler cannot create executables
See `config.log' for more details.

Let's analyze the output of configure. The line:

checking for gcc... /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc

indicates that configure was able to find mpicc in ${MPI_HOME}/bin. Then an internal test is executed to make sure the wrapper can compile a rather simple C program, but the test fails and configure aborts immediately with the pretty explanatory message:

configure: error: C compiler cannot create executables
See `config.log' for more details.

If we want to understand why configure failed, we have to open config.log in the editor and search for error messages towards the end of the log file. For example one can search for the string "C compiler cannot create executables".
Immediately above this line, we find the following section: config.log configure:12104: checking whether the C compiler works' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_modify_xrc_rcv_qp@IBVERBS_1.1' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_open_xrc_domain@IBVERBS_1.1' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_unreg_xrc_rcv_qp@IBVERBS_1.1' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_query_xrc_rcv_qp@IBVERBS_1.1' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_create_xrc_rcv_qp@IBVERBS_1.1' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_create_xrc_srq@IBVERBS_1.1' /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_close_xrc_domain@IBVERBS_1.1' configure:12130: $? = 1 configure:12168: result: no configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "ABINIT" | #define PACKAGE_TARNAME "abinit" | #define PACKAGE_VERSION "9.1.2" | #define PACKAGE_STRING "ABINIT 9.1.2" | #define PACKAGE_BUGREPORT " | #define PACKAGE_URL "" | #define PACKAGE "abinit" | #define VERSION "9.1.2" | #define ABINIT_VERSION "9.1.2" | #define ABINIT_VERSION_MAJOR "9" | #define ABINIT_VERSION_MINOR "1" | #define ABINIT_VERSION_MICRO "2" | #define ABINIT_VERSION_BUILD "20200824" | #define ABINIT_VERSION_BASE "9.1" | #define HAVE_OS_LINUX 1 | /* end confdefs.h. */ | | int | main () | { | | ; | return 0; | } The line configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c >&5 tells us that configure tried to compile a C file named conftest.c and that the return value stored in the $?shell variable is non-zero thus indicating failure: configure:12130: $? = 1 configure:12168: result: no The failing program (the C main after the line “configure: failed program was:”) is a rather simple piece of code and our mpicc compiler is not able to compile it! If we look more carefully at the lines after the invocation of mpicc, we see lots of undefined references to functions of the libibverbs library: This looks like some mess in the system configuration and not necessarily a problem in the ABINIT build system. Perhaps there have been changes to the environment, maybe a system upgrade or the module is simply broken. In this case you should send the config.log to the sysadmin so that he/she can fix the problem or just use another more recent module. Obviously, one can encounter cases in which modules are properly configured yet the configure script aborts because it does not know how to deal with your software stack. In both cases, config.log is key to pinpoint the problem and sometimes you will find that the problem is rather simple to solve. For instance, you are using a Fortran module files produced by gfortran while trying to compile with the intel compiler or perhaps you are trying to use modules produced by a different version of the same compiler. Perhaps you forgot to add the include directory required by an external library and the compiler cannot find the include file or maybe there is a typo in the configuration options. The take-home message is that several mistakes can be detected by just inspecting the log messages reported in configure.log if you know how to search for them. 
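For instance, here is a minimal shell sketch of such a search (the strings and the amount of context below are just the ones relevant to this example; adapt them to the messages you actually get on your machine):

grep -n "C compiler cannot create executables" config.log
# print some context around the compiler invocation that failed
grep -n -B 10 -A 30 "failed program was" config.log | less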
Compilation-time errors are usually due to syntax errors, portability issues or Fortran constructs that are not supported by that particular version of the compiler. In the first two cases, please report the problem on the forum. In the latter case, you will need a more recent version of the compiler. Sometimes the compilation aborts with an internal compiler error that should be considered a bug in the compiler rather than an error in the ABINIT source code. Decreasing the optimization level when compiling the particular routine that triggers the error (use -O1 or even -O0 for the most problematic cases) may solve the problem; otherwise, try a more recent version of the compiler. If you have made non-trivial changes in the code (modifications in the datatypes/interfaces), run make clean and recompile. Runtime errors are more difficult to fix as they may require the use of a debugger and some basic understanding of Linux signals. Here we focus on two common scenarios: SIGILL and SIGSEGV. If the code raises the SIGILL signal, it means that the CPU attempted to execute an instruction it didn't understand. Very likely, your executables/libraries have been compiled for the wrong architecture. This may happen on clusters when the CPU family available on the frontend differs from the one available on the compute node and aggressive optimization options (-O3, -march, -xHost, etc.) are used. Removing the optimization options and using the much safer -O2 level may help. Alternatively, one can configure and compile the source directly on the compute node or use compilation options compatible both with the frontend and the compute node (ask your sysadmin for details). Warning Never ever run calculations on CPUs belonging to different families unless you know what you are doing. Many MPI codes assume reproducibility at the binary level: on different MPI processes the same set of bits in input should produce the same set of bits in output. If you are running on a heterogeneous cluster, select the queue with the same CPU family and make sure the code has been compiled with options that are compatible with the compute node. Segmentation faults (SIGSEGV) are usually due to bugs in the code but they may also be triggered by non-portable code or misconfiguration of the software stack. When reporting this kind of problem on the forum, please add an input file so that developers can try to reproduce the problem. Keep in mind, however, that the problem may not be reproducible on other architectures. The ideal solution would be to run the code under the control of the debugger, use the backtrace to locate the line of code where the segmentation fault occurs and then attach the backtrace to your issue on the forum. How to run gdb Using the debugger in sequential mode is really simple. First of all, make sure the code has been compiled with the -g option to generate source-level debug information. To use the gdb GNU debugger, perform the following operations: Load the executable in the GNU debugger using the syntax:

gdb path_to_abinit_executable

Run the code with the run command and pass the input file as argument:

(gdb) run t02.in

Wait for the error (e.g. SIGSEGV), then print the backtrace with:

(gdb) bt

PS: avoid debugging code compiled with -O3 or -Ofast as the backtrace may not be reliable. Sometimes, even -O2 (default) is not reliable and you have to resort to print statements and bisection to bracket the problematic piece of code.
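As a concrete illustration, here is a minimal sketch of such a debugging session. It assumes the myconf.ac9 configuration file used earlier in this tutorial and an input file named t02.in; adapt both to your case:

# Set with_debug_flavor="enhanced" in myconf.ac9 (see the template above)
# to add debug symbols and disable optimizations, then rebuild:
../configure --with-config-file="myconf.ac9"
make clean && make -j4

# Run the failing input under gdb and save the backtrace for your forum post:
gdb ./src/98_main/abinit
(gdb) run t02.in
(gdb) bt
(gdb) quit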
How to compile ABINIT on a cluster with the intel toolchain and modules¶

On intel-based clusters, we suggest compiling ABINIT with the intel compilers (icc and ifort) and MKL in order to achieve better performance. The MKL library, indeed, provides highly-optimized implementations of BLAS, LAPACK, FFT, and SCALAPACK that can lead to a significant speedup while simplifying considerably the compilation process. As for MPI, intel provides its own implementation (Intel MPI) but it is also possible to employ openmpi or mpich provided these libraries have been compiled with the same intel compilers. In what follows, we assume a cluster in which scientific software is managed with modules and the EasyBuild framework. Before proceeding with the next steps, it is worth summarizing the most important module commands.

module commands To list the modules installed on the cluster, use:

module avail

The syntax to load the module MODULE_NAME is:

module load MODULE_NAME

while module list prints the list of modules currently loaded. To list all modules containing "string", use:

module spider string # requires LMOD with LUA

Finally, module show MODULE_NAME shows the commands in the module file (useful for debugging). For a more complete introduction to environment modules, please consult this page.

On my cluster, I can activate intel MPI by executing:

module load releases/2018b
module load intel/2018b
module load iimpi/2018b

to load the 2018b intel MPI EasyBuild toolchain. On your cluster, you may need to load different modules but the effect at the level of the shell environment should be the same. More specifically, mpiifort is now in PATH (note how mpiifort wraps intel ifort):

mpiifort -v
mpiifort for the Intel(R) MPI Library 2018 Update 3 for Linux*
Copyright(C) 2003-2018, Intel Corporation. All rights reserved.
ifort version 18.0.3

the directories with the libraries required by the compiler/MPI have been added to LD_LIBRARY_PATH while CPATH stores the locations to search for include files. Last but not least, the environment should now define intel-specific variables whose name starts with I_:

$ env | grep I_
I_MPI_ROOT=/opt/cecisw/arch/easybuild/2018b/software/impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30

Since I_MPI_ROOT points to the installation directory of intel MPI, we can use this environment variable to tell configure how to locate our MPI installation:

with_mpi="${I_MPI_ROOT}"
FC="mpiifort"   # Use intel wrappers. Important!
CC="mpiicc"     # See warning below
CXX="mpiicpc"
# with_optim_flavor="aggressive"
# FCFLAGS="-g -O2"

Optionally, you can use with_optim_flavor="aggressive" to let configure select compilation options tuned for performance or set the options explicitly via FCFLAGS.

Warning Intel MPI installs two sets of MPI wrappers: (mpiicc, mpiicpc, mpiifort) and (mpicc, mpicxx, mpif90), which use the Intel compilers and the GNU compilers, respectively. Use the -show option (e.g. mpif90 -show) to display the underlying compiler. As expected,

$ mpif90 -v
mpif90 for the Intel(R) MPI Library 2018 Update 3 for Linux*
COLLECT_GCC=gfortran
<snip>
Thread model: posix
gcc version 7.3.0 (GCC)

shows that mpif90 wraps GNU gfortran. Unless you really need to use GNU compilers, we strongly suggest the wrappers based on the Intel compilers (mpiicc, mpiicpc, mpiifort).

If we run configure with these options, we should see a section at the beginning in which the build system is testing basic capabilities of the Fortran compiler. If configure stops at this level it means there's a severe problem with your toolchain.
============================================================================== === Fortran support === ============================================================================== checking for mpiifort... /opt/cecisw/arch/easybuild/2018b/software/impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30/bin64/mpiifort checking whether we are using the GNU Fortran compiler... no checking whether mpiifort accepts -g... yes checking which type of Fortran compiler we have... intel 18.0 Then we have a section in which configure tests the MPI implementation: Multicore architecture support ============================================================================== === Multicore architecture support === ============================================================================== checking whether to enable OpenMP support... no checking whether to enable MPI... yes checking how MPI parameters have been set... yon checking whether the MPI C compiler is set... yes checking whether the MPI C++ compiler is set... yes checking whether the MPI Fortran compiler is set... yes checking for MPI C preprocessing flags... checking for MPI C flags... checking for MPI C++ flags... checking for MPI Fortran flags... checking for MPI linker flags... checking for MPI library flags... checking whether the MPI C API works... yes checking whether the MPI C environment works... yes checking whether the MPI C++ API works... yes checking whether the MPI C++ environment works... yes checking whether the MPI Fortran API works... yes checking whether the MPI Fortran environment works... yes checking whether to build MPI I/O code... auto checking which level of MPI is supported by the Fortran compiler... 2 configure: forcing MPI-2 standard level support checking whether the MPI library supports MPI_INTEGER16... yes checking whether the MPI library supports MPI_CREATE_TYPE_STRUCT... yes checking whether the MPI library supports MPI_IBCAST (MPI3)... yes checking whether the MPI library supports MPI_IALLGATHER (MPI3)... yes checking whether the MPI library supports MPI_IALLTOALL (MPI3)... yes checking whether the MPI library supports MPI_IALLTOALLV (MPI3)... yes checking whether the MPI library supports MPI_IGATHERV (MPI3)... yes checking whether the MPI library supports MPI_IALLREDUCE (MPI3)... yes configure: configure: dumping all MPI parameters for diagnostics configure: ------------------------------------------ configure: configure: Configure options: configure: configure: * enable_mpi_inplace = '' configure: * enable_mpi_io = '' configure: * with_mpi = 'yes' configure: * with_mpi_level = '' configure: configure: Internal parameters configure: configure: * MPI enabled (required) : yes configure: * MPI C compiler is set (required) : yes configure: * MPI C compiler works (required) : yes configure: * MPI Fortran compiler is set (required) : yes configure: * MPI Fortran compiler works (required) : yes configure: * MPI environment usable (required) : yes configure: * MPI C++ compiler is set (optional) : yes configure: * MPI C++ compiler works (optional) : yes configure: * MPI-in-place enabled (optional) : no configure: * MPI-IO enabled (optional) : yes configure: * MPI configuration type (computed) : yon configure: * MPI Fortran level supported (detected) : 2 configure: * MPI_Get_library_version available (detected) : unknown configure: configure: All required parameters must be set to 'yes'. configure: If not, the configuration and/or the build with configure: MPI support will very likely fail. 
configure: checking whether to activate GPU support... no So far so good. Our compilers and MPI seem to work so we can proceed with the setup of the external libraries. On my cluster, module load intel/2018b has also defined the MKLROOT env variable env | grep MKL MKLROOT=/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl EBVERSIONIMKL=2018.3.222 that can be used in conjunction with the highly recommended mkl-link-line-advisor to link with MKL. On other clusters, you may need load an mkl module explicitly (or composerxe or parallel-studio-xe) Let’s now discuss how to configure ABINIT with MKL starting from the simplest cases: - BLAS and Lapack from MKL - FFT from MKL DFTI - no Scalapack - no OpenMP threads. These are the options I have to select in the mkl-link-line-advisor to enable this configuration with my software stack: The options should be self-explanatory. Perhaps the tricky part is Select interface layer where one should select 32-bit integer. This simply means that we are compiling and linking code in which default integer is 32-bits wide (default behaviour). Note how the threading layer is set to Sequential (no OpenMP threads) and how we chose to link with MKL libraries explicitly the get the full link line and compiler options. Now we can use these options in our configuration file: # BLAS/LAPACK with MKL with_linalg_flavor="mkl" LINALG_CPPFLAGS="-I${MKLROOT}/include" LINALG_FCFLAGS="-I${MKLROOT}/include" LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl" # FFT from MKL with_fft_flavor="dfti" FFT_CPPFLAGS="-I${MKLROOT}/include" FFT_FCFLAGS="-I${MKLROOT}/include" FFT_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl" Warning. If we run configure with these options, we should obtain the following output in the Linear algebra support section: Linear algebra support ============================================================================== === Linear algebra support === ============================================================================== checking for the requested linear algebra flavor... mkl checking for the serial linear algebra detection sequence... mkl checking for the MPI linear algebra detection sequence... mkl checking for the MPI acceleration linear algebra detection sequence... none checking how to detect linear algebra libraries... verify checking for BLAS support in the specified libraries... yes checking for AXPBY support in the BLAS libraries... yes checking for GEMM3M in the BLAS libraries... yes checking for mkl_imatcopy in the specified libraries... yes checking for mkl_omatcopy in the specified libraries... yes checking for mkl_omatadd in the specified libraries... yes checking for mkl_set/get_threads in the specified libraries... yes checking for LAPACK support in the specified libraries... yes checking for LAPACKE C API support in the specified libraries... no checking for PLASMA support in the specified libraries... no checking for BLACS support in the specified libraries... no checking for ELPA support in the specified libraries... no checking how linear algebra parameters have been set... env (flavor: kwd) checking for the actual linear algebra flavor... mkl checking for linear algebra C preprocessing flags... none checking for linear algebra C flags... none checking for linear algebra C++ flags... none checking for linear algebra Fortran flags... 
-I/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/include checking for linear algebra linker flags... none checking for linear algebra configure: WARNING: parallel linear algebra is not available Excellent, configure detected a working BLAS/Lapack installation, plus some MKL extensions (mkl_imatcopy etc). BLACS and Scalapack (parallel linear algebra) have not been detected but this is expected as we haven’t asked for these libraries in the mkl-link-line-advisor GUI. This is the section in which configure checks the presence of the FFT library (DFTI from MKL, goedecker means internal Fortran version). Optimized FFT support ============================================================================== === Optimized FFT support === ============================================================================== checking which FFT flavors to enable... dfti goedecker checking for FFT flavor... dfti checking for FFT C preprocessing flags... checking for FFT C flags... checking for FFT Fortran flags... checking for FFT linker flags... checking for FFT library flags... checking for the FFT flavor to try... dfti checking whether to enable DFTI... yes checking how DFTI parameters have been set... mkl checking for DFTI C preprocessing flags... none checking for DFTI C flags... none checking for DFTI Fortran flags... -I/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/include checking for DFTI linker flags... none checking for DFTI checking whether the DFTI library works... yes checking for the actual FFT flavor to use... dfti The line checking whether the DFTI library works... yes tells us that DFTI has been found and we can link against it although this does not necessarily mean that the final executable will work out of the box. Tip You may have noticed that it is also possible to use MKL with GNU gfortran but in this case you need to use a different set of libraries including the so-called compatibility layer that allows GCC code to call MKL routines. Also, MKL Scalapack requires either Intel MPI or MPICH2. Optional Exercise Compile ABINIT with BLAS/ScalaPack from MKL. Scalapack (or ELPA) may lead to a significant speedup when running GS calculations with large nband. See also the np_slk input variable. How to compile libxc, netcdf4/hdf5 with intel¶ At this point, one should check whether our cluster provides modules for libxc, netcdf-fortran, netcdf-c and hdf5 compiled with the same toolchain. Use module spider netcdf or module keyword netcdf to find the modules (if any). Hopefully, you will find a pre-existent installation for netcdf and hdf5 (possibly with MPI-IO support) as these libraries are quite common on HPC centers. Load these modules to have nc-config and nf-config in your $PATH and then use the --prefix option to specify the installation directory as done in the previous examples. Unfortunately, libxc and hdf5 do not provide similar tools so you will have to find the installation directory for these libs and pass it to configure. Tip You may encounter problems with libxc as this library is rather domain-specific and not all the HPC centers install it. If your cluster does not provide libxc, it should not be that difficult to reuse the expertise acquired in this tutorial to build your version and then install the missing dependencies inside $HOME/local. 
Just remember to: - load the correct modules for MPI with the associated compilers before configuring - configure with CC=mpiicc and FC=mpiifort so that the intel compilers are used - install the libraries and prepend $HOME/local/lib to LD_LIBRARY_PATH - use the with_LIBNAME option in conjunction with $HOME/local/lib in the ac9 file. - run configure with the ac9 file. In the worst case scenario in which neither netcdf4/hdf5 nor libxc are installed, you may want to use the internal fallbacks. The procedure goes as follows. - Start to configure with a minimalistic set of options just for MPI and MKL (linalg and FFT) - The build system will detect that some hard dependencies are missing and will generate a build-abinit-fallbacks.sh script in the fallbacks directory. - Execute the script to build the missing dependencies using the toolchain specified in the initial configuration file - Finally, reconfigure ABINIT with the fallbacks. How to compile ABINIT with support for OpenMP threads¶ Tip For a quick introduction to MPI and OpenMP and a comparison between the two parallel programming paradigms, see this presentation. Compiling ABINIT with OpenMP is not that difficult as everything boils down to: - Using a threaded version for BLAS, LAPACK and FFTs - Passing enable_openmp=”yes” to the ABINIT configure script so that OpenMP is activated also at level of the ABINIT Fortran code. On the contrary, answering the questions: - When and why should I use OpenMP threads for my calculations? - How many threads should I use and what is the parallel speedup I should expect? is much more difficult as there are several factors that should be taken into account. Note To keep a long story short, one should use OpenMP threads when we start to trigger limitations or bottlenecks in the MPI implementation, especially at the level of the memory requirements or in terms of parallel scalability. These problems are usually observed in calculations with large natom, %mpw, nband. As a matter of fact, it does not make sense to compile ABINIT with OpenMP if your calculations are relatively small. Indeed, ABINIT is mainly designed with MPI-parallelism in mind. For instance, calculations done with a relatively large number of \kk-points will benefit more of MPI than OpenMP, especially if the number of MPI processes divides the number of \kk-points exactly. Even worse, do not compile the code with OpenMP support if you do not plan to use threads because the OpenMP version will have an additional overhead due to the creation of the threaded sections. Remember also that increasing the number of threads does not necessarily leads to faster calculations (the same is true for MPI processes). There’s always an optimal value for the number of threads (MPI processes) beyond which the parallel efficiency starts to deteriorate. Unfortunately, this value is strongly hardware and software dependent so you will need to benchmark the code before running production calculations. Last but not least, OpenMP threads are not necessarily Posix threads. Hence if you have a library that provides both Open and Posix-threads, link with the OpenMP version. After this necessary preamble, let’s discuss how to compile a threaded version. To activate OpenMP support in the Fortran routines of ABINIT, pass enable_openmp="yes" to the configure script via the configuration file. This will automatically activate the compilation option needed to enable OpenMP in the ABINIT source code (e..g. 
-fopenmp option for gfortran) and the CPP variable HAVE_OPENMP in config.h. Note that this option is just part of the story as a significant fraction of the wall-time is spent in the external BLAS/FFT routines so do not expect big speedups if you do not link against threaded libraries. If you are building your own software stack for BLAS/LAPACK and FFT, you will have to reconfigure with the correct options for the OpenMP version and then issue make and make install again to build the threaded version. Also note that some libraries may change. FFTW3, for example, ships the OpenMP version in libfftw3_omp (see the official documentation) hence the list of libraries in FFTW3_LIBS should be changed accordingly. Life is much easier if you are using intel MKL because in this case it is just a matter of selecting OpenMP threading as threading layer in the mkl-link-line-advisor interface and then pass these options to the ABINIT build system together with enable_openmp="yes". Important When using threaded libraries remember to set explicitly the number of threads with e.g. export OMP_NUM_THREADS=2 either in your bash_profile or in the submission script (or in both). By default, OpenMP uses all the available CPUs so it is very easy to overload the machine, especially if one uses threads in conjunction with MPI processes. When running threaded applications with MPI, we suggest to allocate a number of physical CPUs that is equal to the number of MPI processes times the number of OpenMP threads. Computational intensive applications such as DFT codes have less chance to be improved in performance from Hyper-Threading technology (usually referred to as number of logical CPUs). We also recommend to increase the stack size limit using e.g. ulimit -s unlimited if the sysadmin allows you to do so. To run the ABINIT test suite with e.g. two OpenMP threads, use the -o2 option of runtests.py
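For production runs, a hybrid MPI + OpenMP batch job could look like the sketch below. Everything in it (the scheduler directives, module name, input file, and core counts) is an assumption tied to the example cluster used in this tutorial and must be adapted to your machine:

#!/bin/bash
#SBATCH --ntasks=4              # 4 MPI processes
#SBATCH --cpus-per-task=2       # 2 OpenMP threads per MPI process
#SBATCH --time=01:00:00

# Load the same toolchain used to build ABINIT (module name is an assumption).
module load intel/2018b

ulimit -s unlimited                              # increase the stack size if allowed
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}    # never leave this to the default

# 4 MPI ranks x 2 threads = 8 physical cores in total.
mpirun -np ${SLURM_NTASKS} abinit run.abi > log 2> err

The key point is that the product of MPI ranks and OpenMP threads matches the number of physical cores you request, and that OMP_NUM_THREADS is set explicitly.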
https://docs.abinit.org/tutorial/compilation/
2022-05-16T22:43:21
CC-MAIN-2022-21
1652662512249.16
[]
docs.abinit.org
Specifies. typeof(XPCollection<T>) typeof(IList<T>) No
https://docs.devexpress.com/eXpressAppFramework/DevExpress.Persistent.BaseImpl.PermissionPolicy.PermissionPolicyUser._fields?p=netstandard
2022-05-16T23:23:38
CC-MAIN-2022-21
1652662512249.16
[]
docs.devexpress.com
Sets up the persistent context before and after a Grails operation is invoked.
clear() - Clear any pending changes.
destroy() - Called to finalize the persistent context.
disconnect() - Disconnects the persistence context.
flush() - Flushes any pending changes to the DB.
init() - Called to initialise the persistent context.
isOpen() - Checks whether the persistence context is open.
reconnect() - Reconnects the persistence context.
setReadOnly() - Sets the persistence context to read-only mode.
setReadWrite() - Sets the persistence context to read-write mode.
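As a rough usage sketch (the bean name "persistenceInterceptor" and the surrounding application context are assumptions for illustration, not part of this API page), the interceptor is typically driven around a unit of work like this:

import grails.persistence.support.PersistenceContextInterceptor

// Obtain the configured interceptor, e.g. from the Spring application context.
PersistenceContextInterceptor interceptor =
        applicationContext.getBean('persistenceInterceptor', PersistenceContextInterceptor)

interceptor.init()            // set up the persistence context
try {
    // ... work with GORM entities outside a normal request, e.g. in a background thread ...
    interceptor.flush()       // push pending changes to the DB
    interceptor.clear()       // drop anything still pending
} finally {
    interceptor.destroy()     // finalize the context
}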
https://docs.grails.org/4.0.10/api/grails/persistence/support/PersistenceContextInterceptor.html
2022-05-16T20:59:36
CC-MAIN-2022-21
1652662512249.16
[]
docs.grails.org
Advertising Queries
Advertising queries are reports you can run to get an overview of all advertisers, orders, line items or creatives that have been running in your chosen time period.
How to run an advertising query
You can apply multiple filters to get the data you want. For example: if you have added labels to advertisers, orders or line items, then you can filter by these labels to single out certain items in your reports.
Advertising query example - in this case for a line item
Metrics: Here are the metrics returned by a query, and what they mean.
Impression: An ad has been delivered by the Adnuntius adserver.
Average Auction Rank: The average rank of the ad impression. If more than one ad is shown inside an ad unit, this number is the ranking of this ad.
https://docs.adnuntius.com/adnuntius-advertising/admin-ui/reports/advertising-queries
2022-05-16T22:03:04
CC-MAIN-2022-21
1652662512249.16
[]
docs.adnuntius.com
By default, any authenticated account can access Zeppelin interpreter, credential, and configuration settings. When access control is enabled, unauthorized users can see the page heading, but no settings. There are two steps: defining roles, and specifying which roles have access to which settings. Prerequisite: Users and groups must be defined on all Zeppelin nodes and in the associated identity store. To enable access control for the Zeppelin interpreter, credential, or configuration pages, complete the following steps: Define a [roles]section in shiro.inicontents, and specify permissions for defined groups. The following example grants all permissions (" admin: [roles] admin = * In the [urls]section of the shiro.inicontents, uncomment the interpreter, configurations, or credential line(s) to enable access to the interpreter, configuration, or credential page(s), respectively. (If the [urls]section is not defined, add the section. Include the three /apilines listed in the following example.) The following example specifies access to interpreter, configurations, and credential settings for role "admin": [urls] /api/version = anon /api/interpreter/** = authc, roles[admin] /api/configurations/** = authc, roles[admin] /api/credential/** = authc, roles[admin] #/** = anon /** = authc To add more roles, separate role identifiers with commas inside the square brackets. Note: The sequence of lines in the [urls]section is important. The /api/versionline must be the first line in the [urls]section: /api/version = anon Next, specify the three /apilines in any order: /api/interpreter/** = authc, roles[admin] /api/configurations/** = authc, roles[admin] /api/credential/** = authc, roles[admin] The authcline must be last in the [urls]section: /** = authc When unauthorized users attempt to access the interpreter, configurations, or credential page, they see the page heading but not settings.
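For example, to also grant a second role (here a hypothetical "ops" group) access to all three settings pages, list both roles inside the brackets:

[roles]
admin = *
ops = *

[urls]
/api/version = anon
/api/interpreter/** = authc, roles[admin, ops]
/api/configurations/** = authc, roles[admin, ops]
/api/credential/** = authc, roles[admin, ops]
/** = authc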
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.3/bk_zeppelin-component-guide/content/config-access-control-configs.html
2022-05-16T21:36:03
CC-MAIN-2022-21
1652662512249.16
[]
docs.cloudera.com
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
install-config.yaml file for AWS
The AWS ELBs, security groups, S3 buckets, and nodes. If you deploy OKD to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OKD clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network.. For installations of a private OKD cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OKD": ...}' (1) publish: Internal (12).
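A minimal sketch of the overall shape of a private-cluster install-config.yaml for an existing VPC is shown below. Every value (domain, names, region, subnet IDs, CIDR, zone ID, secrets) is a placeholder, and the authoritative list of supported fields is the sample customization file in this documentation:

apiVersion: v1
baseDomain: example.com
metadata:
  name: my-private-cluster
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  aws:
    region: us-west-2
    subnets:                      # private subnets of the existing VPC
    - subnet-0123456789abcdef0
    - subnet-0123456789abcdef1
    hostedZone: Z3URY6TWQ91KVV    # optional: existing private Route 53 zone
publish: Internal                 # makes the cluster private
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA... user@example.com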
https://docs.okd.io/4.8/installing/installing_aws/installing-aws-private.html
2022-05-16T22:22:43
CC-MAIN-2022-21
1652662512249.16
[]
docs.okd.io
Silicon Labs Wi-SUN Documentation
Getting Started
Getting Started with Simplicity Studio 5 and the Gecko SDK - Describes downloading development tools and the Gecko SDK, which includes the Silicon Labs Wi-SUN SDK. Introduces the Simplicity Studio 5 interface.
QSG181: Silicon Labs Wi-SUN Quick-Start Guide - Describes how to get started with Wi-SUN development using the Silicon Labs Wi-SUN SDK and Simplicity Studio 5 with a compatible wireless starter kit (WSTK).
Developing with Wi-SUN
UG495: Silicon Labs Wi-SUN Developer's Guide - Reference for those developing applications using the Silicon Labs Wi-SUN SDK. The guide covers guidelines to develop an application on top of the Silicon Labs Wi-SUN stack. The purpose of this document is to fill in the gaps between the Silicon Labs Wi-SUN Field Area Network (FAN) API reference, Gecko Platform references, and documentation for the target EFR32xG part.
AN1332: Silicon Labs Wi-SUN Network Setup and Configuration - Describes the test environment and methods for testing Wi-SUN network performance. The results are intended to provide guidance on design practices and principles as well as expected field performance results.
UG162: Simplicity Commander Reference Guide - Describes how and when to use Simplicity Commander's Command-Line Interface.
AN1330: Silicon Labs Wi-SUN Mesh Network Performance - Describes the test environment and methods for testing Wi-SUN network performance. The results are intended to provide guidance on design practices and principles as well as expected field performance results.
https://docs.silabs.com/wisun/1.2/wisun-stack-api/
2022-05-16T21:39:34
CC-MAIN-2022-21
1652662512249.16
[]
docs.silabs.com
GetProject
Returns the details about one project. You must already know the project name. To retrieve a list of projects in your account, use ListProjects.
Request Syntax
GET /projects/project HTTP/1.1
URI Request Parameters
The request uses the following URI parameters.
project: The name or ARN of the project that you want to see the details of.
Request Body
The request does not have a request body.
Response Syntax
HTTP/1.1 200
Content-type: application/json
{ "project": { "activeExperimentCount": number, "activeLaunchCount": number, "arn": "string", "createdTime": number, "dataDelivery": { "cloudWatchLogs": { "logGroup": "string" }, "s3Destination": { "bucket": "string", "prefix": "string" } }, "description": "string", "experimentCount": number, "featureCount": number, "lastUpdatedTime": number, "launchCount": number, "name": "string", "status": "string", "tags": { "string" : "string" } } }
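A quick way to exercise this operation is through an SDK. The sketch below uses Python and boto3; the region and the project name "my-project" are placeholders, and credentials are assumed to be configured in the environment:

import boto3

# CloudWatch Evidently client; the region is an assumption for illustration.
client = boto3.client("evidently", region_name="us-east-1")

# GetProject maps to get_project; pass the project name or ARN.
response = client.get_project(project="my-project")

project = response["project"]
print(project["name"], project["status"], project.get("featureCount"))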
https://docs.aws.amazon.com/cloudwatchevidently/latest/APIReference/API_GetProject.html
2022-05-16T23:11:57
CC-MAIN-2022-21
1652662512249.16
[]
docs.aws.amazon.com
Code: peptide
Peptides can be entered using one or three letter amino acid abbreviations. A text file containing sequences should contain only one type of sequence (only one or only three lettered sequences but not both). Each sequence must be a single continuous line in the text file, without spaces.
Abbreviations used:
Example
PPPALPPKKR
APTMLPPASDFA
ProProProAlaLeuProProLysLysArg
AlaProThrMetProProProLeuProPro
PPPALPPKKR
AlaProThrMetProProProLeuProPro
ProProProAlaLeuProProLysLysArg
AlaProThrMetPPPLPP
In addition to the amino acids listed above, a custom amino acid dictionary can be defined. The custom_aminoacids.dict file is stored in the .chemaxon directory (UNIX) or the user's chemaxon directory (MS Windows). The usual format of the dictionary file is:
molName=L-Alanine Ala A [CX4H3]C@HX4H1C=O |wD:1.1,(3.85,-1.33,;2.31,-1.33,;1.54,-2.67,;1.54,,;)| 3 4
molName=L-Cysteine Cys C [NX3]C@@HH1C=O |wD:1.0,(1.54,-2.67,;2.31,-1.33,;3.85,-1.33,;4.62,-2.67,;1.54,,;)| 1 5 4
where the corresponding columns are: The name and the coordinates are not obligatory fields. The columns should be separated by tab characters.
Note: To describe an aromatic custom amino acid, both the aromatic and the Kekule form should be in the custom_aminoacids.dict file with the same short and long names.
See also Peptide import and export options
DNA/RNA sequences can be entered using one letter nucleic acid abbreviations. Each sequence must be a single continuous line in the text file, without spaces.
Abbreviations used:
Code: dna, rna
ACGTACGT
ACCCCGTGGGT
A-C-G-T-A-C-G-T
A-C-C-C-C-G-T-G-G-G-T
dA-dC-dG-dT-dA-dC-dG-dT
dA-dC-dC-dC-dC-dG-dT-dG-dG-dG-dT
acgtacgt
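Returning to the peptide abbreviations: for the standard residues the three letter and one letter forms map one to one. A short Python sketch of that mapping, checked against the example sequence above, looks like this:

# Standard three-letter to one-letter amino acid abbreviations.
THREE_TO_ONE = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
}

def three_to_one(seq: str) -> str:
    """Convert a three-letter sequence such as 'ProProPro...' to one-letter form."""
    if len(seq) % 3:
        raise ValueError("three-letter sequences must have a length divisible by 3")
    return "".join(THREE_TO_ONE[seq[i:i + 3]] for i in range(0, len(seq), 3))

print(three_to_one("ProProProAlaLeuProProLysLysArg"))  # -> PPPALPPKKR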
https://docs.chemaxon.com/display/lts-fermium/sequences-peptide-dna-rna.md
2022-05-16T22:13:24
CC-MAIN-2022-21
1652662512249.16
[]
docs.chemaxon.com
Configure Jupyter Notebook in a Customized Engine Image CDSW's base engine images v8 and later come with Jupyter Notebook pre-installed. If you create a customized engine image by extending this base image, Jupyter Notebook will still be installed on this customized engine image. However, CDSW will not automatically list Jupyter Notebook in the dropdown list of editors on the Launch New Session page in projects that are configured to use this customized engine image. You must use the following steps to configure the custom engine image to use Jupyter Notebook. - Log in to the Cloudera Data Science Workbench web UI as a site administrator. - Click . - Under Engine Images, click the Edit button for the customized engine image that you want to configure for Jupyter Notebook. - Click New Editor. - Name: Enter Jupyter Notebook. This is the name that appears in the dropdown menu for Editors when you start a new session. - Command: Enter the command to start Jupyter Notebook. /usr/local/bin/jupyter-notebook --no-browser --ip=127.0.0.1 --port=${CDSW_APP_PORT} --NotebookApp.token= --NotebookApp.allow_remote_access=True --log-level=ERROR - Click Save, then click Save again.
https://docs.cloudera.com/documentation/data-science-workbench/1-6-x/topics/cdsw_editors_jupyter_custom_engine.html
2022-05-16T22:05:17
CC-MAIN-2022-21
1652662512249.16
[]
docs.cloudera.com
module order Change the order of the processing modules in the darkroom using presets. When processing an image, the active modules are applied in a specific order, which is shown in the right-hand panel of the darkroom view. This module provides information about the current ordering of the processing modules in the pixelpipe. The single parameter “current order” can take on the following values: - v3.0 - This is the default module order for scene-referred workflow. - legacy - This module order is used for the legacy display-referred workflow. You will also see this order displayed for any images you previously edited in versions prior to darktable 3.0. - custom - Since darktable 3.0, it is possible to change the order in which modules are applied in the pixelpipe by clicking on the module header while holding Ctrl+Shift and dragging it into a new position. If you have changed the order of any modules, this parameter will show the value “custom”. Note: changing the order of modules in the pixelpipe is not a cosmetic change to the presentation of the GUI – it has real consequences to how the image will be processed. Please do not change the order of the modules unless you have a specific technical reason and understand why this is required from an image processing perspective. It is possible to reset the module order back to the v3.0 default order by clicking on the reset parameters icon on the module order header. You can also use the presets menu to change the order to either v3.0 (default) or legacy. For full details of the default module orders, please refer to the default module order section.
https://docs.darktable.org/usermanual/3.6/en/module-reference/utility-modules/darkroom/module-order/
2022-05-16T22:29:13
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
Dealing with errors in workflows# Sometimes it can happen that you're building a nice workflow, but when you try to execute it, it fails. There are many reasons why workflows executions may fail (some more or less mysterious), for example when a node is not configured correctly or a third-party service that you're trying to connect to is not working properly. But don't panic. We will show you some ways in which you can troubleshoot the issue, so you can get your workflow up and running as soon as possible. Checking failed workflows# When one of your workflows fails, it's helpful to check the execution log by clicking on Executions in the left-side panel. The executions log shows you a list of the latest execution time, status, mode, and running time of your saved workflows. To investigate a specific workflow from the list, click on the folder icon on the row of the respective workflow. This will open the workflow in read-only mode, where you can see the execution of each node. This representation can help you identify at what point the workflow ran into issues. Catching erroring workflows# To catch failed workflows, create a separate Error Workflow with the Error Trigger node, which gets executed if the main execution fails. Then, you can take further actions by connecting other nodes, for example sending notifications via email or Slack about the failed workflow and its errors. To receive error messages for a failed workflow, you need to select the option Error Workflow in the Workflow Settings of the respective workflow. The only difference between a regular workflow and an Error Workflow is that the latter contains an Error Trigger node. Make sure to create this node before you set a workflow as Error Workflow. Error workflows - You don't need to activate workflows that use the Error Workflow node. - A workflow that uses the Error Trigger node uses itself as the error workflow. - The Error Trigger node is designed to get triggered only when the monitored workflow gets executed automatically. This means you can't test this (to see the result of) an error workflow while executing the monitored workflow manually. - You can set the same Error Workflow for multiple workflows. Exercise# In the previous chapters, you've built several small workflows. Now, pick one of them that you want to monitor. Create an Error Workflow that sends a message to a communication platform (for example, Slack, Discord, Telegram, or even email) if that workflow fails. Don't forget to set this Error Workflow in the settings of the monitored workflow. Show me the solution The workflow for this exercise looks like this: To check the configuration of the nodes, you can copy-paste the JSON code of the workflow: Throwing exceptions in workflows# Another way of troubleshooting workflows is to include a Stop and Error node in your workflow. This node throws an error, which can be set to one of two error types: an error message or an error object. The error message returns a custom message about the error, while the error object returns the type of error. The Stop and Error node can only be added as the last node in a workflow. When to throw errors Throwing exceptions with the Stop and Error node is useful for verifying the data (or assumptions about the data) from a node and returning custom error messages. 
If you are working with data from a third-party service, you may come across problems such as: wrongly formatted JSON output, data with the wrong type (for example, numeric data that has a non-numeric value), missing values, or errors from remote servers. Though this kind of invalid data might not cause the workflow to fail right away, it could cause problems later on, and then it can become difficult to track the source error. This is why it is better to throw an error at the time you know there might be a problem.
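As a point of reference for the error workflow described earlier, its JSON skeleton is small. The node type identifiers below are assumptions that may vary between n8n versions, and the node parameters are left empty because they are normally filled in through the editor rather than by hand:

{
  "name": "Error notification workflow",
  "nodes": [
    {
      "name": "Error Trigger",
      "type": "n8n-nodes-base.errorTrigger",
      "typeVersion": 1,
      "position": [250, 300],
      "parameters": {}
    },
    {
      "name": "Slack",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 1,
      "position": [500, 300],
      "parameters": {}
    }
  ],
  "connections": {
    "Error Trigger": {
      "main": [[{ "node": "Slack", "type": "main", "index": 0 }]]
    }
  }
}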
https://docs.n8n.io/courses/level-two/chapter-4/
2022-05-16T22:01:28
CC-MAIN-2022-21
1652662512249.16
[]
docs.n8n.io
The rawhtml plugin
The rawhtml plugin allows inserting raw HTML code in the page. The code is included as-is at the frontend.
This plugin can be used for example for:
- prototyping extra markup quickly
- including "embed codes" in a page, in case the oembeditem plugin does not support it
- including jQuery snippets in a page:
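The snippet stored in the plugin is ordinary markup. For instance, a small hypothetical jQuery widget embed (the CDN URL and element id are only placeholders) could be pasted in as-is:

<!-- Example content for a raw HTML item; the URL and selector are placeholders. -->
<div id="ticker">Loading...</div>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
  $(function () {
    $("#ticker").text("Hello from a raw HTML block!");
  });
</script>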
https://django-fluent-contents.readthedocs.io/en/latest/plugins/rawhtml.html
2022-05-16T22:55:35
CC-MAIN-2022-21
1652662512249.16
[]
django-fluent-contents.readthedocs.io
Catalog files¶ This document is a formal description of the XML catalog format. A catalog contains meta-data for a collection of feeds. Catalogs make it easier to find feeds for specific applications. Syntax¶ Catalog files have the following syntax ( ?follows optional items, *means zero-or-more, order of elements is not important, and extension elements can appear anywhere as long as they use a different namespace): <?xml version='1.0'?> <catalog xmlns=' <interface uri='...' xmlns=' <name>...</name> <summary>...</summary> <description>...</description> ? <homepage>...</homepage> ? <category type='...' ? >...</category> * <needs-terminal/> ? <icon type='...' href='...'/> * <entry-point command='...' binary-name='...' ? /> * </interface> * </catalog> The syntax within <interface> elements is identical to that of feeds. Each <interface> element represents a feed and contains a copy of that feed's body. However, <implementation>, <group>, <feed>, <feed-for> and <replaced-by> elements are omitted. They should instead be taken from the original feed, which can be downloaded from the specified uri when required. Digital signatures¶ When a catalog is downloaded from the web, it must contain a digital signature. A catalog. This is identical to the signature format used by feeds. Generating¶ Catalog files are automatically generated by 0repo. You can also manually generate a catalog from a set of feeds downloaded to a local directory: 0install run --command=0publish feeds/*.xml --catalog=catalog.xml --xmlsign Note A catalog generated like this points to the locations the feeds originally came from, not the local XML files on your disk. Usage¶ Catalog files are currently only used by Zero Install for Windows. You can search for feeds in catalogs like this: 0install catalog search KEYWORD See the command-line interface documentation for more commands. Short names¶ Catalogs allow you to use short names on the command-line instead of entering full feed URIs. Short names are either equal to the application <name> as listed in the catalog (spaces replaced with dashes) or the application's binary-name specified in an <entry-point>. For example, instead of 0install run you can use: 0install run vlc-media-player(application name) or 0install run vlc(executable file name) GUI¶ The main GUI of Zero Install for Windows displays a list of available applications populated by one or more catalogs. The default catalog can be extended with or replaced by custom catalogs in the Catalog tab of the Options window. Well-known catalogs¶ - - repository of common tools, libraries and runtime environments
https://docs.0install.net/specifications/catalog/
2022-05-16T21:49:31
CC-MAIN-2022-21
1652662512249.16
[]
docs.0install.net
theme, themes/hugo-darktable-docs-pdf-theme.
https://docs.darktable.org/usermanual/3.6/en/special-topics/translating/
2022-05-16T22:34:09
CC-MAIN-2022-21
1652662512249.16
[]
docs.darktable.org
Git LFS Deep Dive
In April 2019, Francisco Javier López hosted a Deep Dive (GitLab team members only) on the GitLab Git LFS implementation to share domain-specific knowledge with anyone who may work in this part of the codebase in the future. You can find the recording on YouTube, and the slides on Google Slides and in PDF. This deep dive was accurate as of GitLab 11.10, and while specific details may have changed, it should still serve as a good introduction.
Including LFS blobs in project archives
Introduced in GitLab 13.5. The following diagram illustrates how GitLab resolves LFS files for project archives:
- calls are random values until this Workhorse issue is resolved.
https://docs.gitlab.com/13.12/ee/development/lfs.html
2022-05-16T22:21:14
CC-MAIN-2022-21
1652662512249.16
[]
docs.gitlab.com
This page refers to the view_labelparameter that is part of a join. view_labelcan also be used as part of an Explore, described on the view_label(for Explores) parameter documentation page. view_labelcan also be used as part of a dimension, measure, or filter, described on the view_label(for fields) parameter documentation page. Usage join: view_name_2 { view_label: "desired label" } } Definition view_label changes the way that a group of fields from a joined view are labeled in the field picker. You can use view_label to group that view’s fields under the name of a different view: You can use view_label when you need more than one view for modeling purposes, but those views represent the same entity to your business users. In the example above, you have an Explore called order_items with two joined views: order_facts and orders. You may want those views to retain separate names for modeling purposes. However, it might make things simpler for your users if they both appear as Orders in the UI. If you do not explicitly add a view_label to a join, the view_label defaults to the name of the join. To change the names of the fields themselves, you can use the label parameter. Examples Make the customer_facts view appear to be part of the Customer view in the field picker: Make the product_facts view appear to be part of the Product Info view in the field picker: Common challenges view_label has no effect other than changing the field picker appearance When you change the view_label of a join, only the field picker is affected. The way that fields should be referenced in the LookML remains unchanged. Use proper capitalization when combining multiple views via view_label If you want a joined view to be merged with another view in the field picker, make sure that the capitalization used in view_label is correct. The capitalization should match how the view name appears in the field picker. For example: product_info will appear in the field picker as Product Info; each word is capitalized, and underscores and changed to spaces. For this reason, we used view_label: 'Product Info' instead of view_label: 'product_info'. Things to know There are several ways to relabel a joined view LookML has several ways to rename a joined view, all of which have different effects on the way that you write LookML. view_label is not appropriate for all use cases. view_label affects the Explore’s joined views This parameter is similar to view_label (for Explores) but affects the Explore’s joined views instead of the base view. Unlike label (for views), this parameter only affects the view when displayed in that Explore.
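To make the earlier examples concrete, here is roughly what the customer_facts case looks like in LookML; the join condition and field names are assumptions for illustration, only the view_label line matters here:

explore: order_items {
  join: customer_facts {
    sql_on: ${order_items.customer_id} = ${customer_facts.customer_id} ;;
    relationship: many_to_one
    view_label: "Customer"
  }
}

With this in place, the fields from customer_facts are listed under "Customer" in the field picker, while LookML references such as ${customer_facts.lifetime_orders} are unchanged.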
https://docs.looker.com/reference/explore-params/view_label-for-join
2022-05-16T22:02:20
CC-MAIN-2022-21
1652662512249.16
[]
docs.looker.com
The ECCO Ocean and Sea-Ice State Estimate¶ What is the ECCO Central Production State Estimate?¶ The Estimating the Circulation and Climate of the Ocean (ECCO) Central Production state estimate is a reconstruction of the three-dimensional time-varying ocean and sea-ice state. Currently in Version 4 Release 3, the state estimate covers the period Jan 1, 1992 to Dec 31, 2015. The state estimate is provided on an approximately 1-degree horizontal grid (cells range from ca. 20 to 110 km in length) and 50 vertical levels of varying thickness. The ECCO CP state estimate has two defining features: (1) it reproduces a large set of remote sensing and in-situ observational data within their prior quantified uncertainties and (2) the dynamical evolution of its ocean circulation, hydrography, and sea-ice through time perfectly satisfies the laws of physics and thermodynamics. The state estimate is the solution of a free-running ocean and sea-ice general circulation model and consequently can be used to assess budgets of quantities like heat, salt and vorticity. ECCO Version 4 Release 3 is the most recent edition of the global ocean state estimate and estimation system described by Forget et al. (2015b, 2016). A brief synopsis describing Release 3 can be found here: A high-level analysis of the state estimate can be found here: Relation to other ocean reanalyses¶ ECCO state estimates share many similarities with conventional ocean reanalyses but differ in several key respects. Both are ocean reconstructions that use observational data to fit an ocean model to the data so that the model agrees with the data in a statistical sense. Ocean reanalyses are constructed by directly adjusting the ocean model’s state to reduce its misfit to the data. Furthermore, information contained in the data is only explored forward in time. In contrast, ECCO state estimates are constructed by identifying a set of ocean model initial conditions, parameters, and atmospheric boundary conditions such that a free-running simulation of the ocean model reproduces the observations as a result of the governing equations of motion. These equations also provide a means for propagating information contained in the data back in time (“upstream” of when/where observations have been made). Therefore, while both ocean reanalyses and ECCO state estimates reproduce observations of ocean variability, only ECCO state estimates provide an explanation for the underlying physical causes and mechanisms responsible for bringing them into existence (e.g., Stammer et al. 2017). Conservation properties of ECCO state estimates¶ By design, ECCO state estimates perfectly satisfy the laws of physics and thermodynamics and therefore conserve heat, salt, volume, and momentum (Wunsch and Heimbach, 2007, 2013). Indeed, it is because of these conservation properties that ECCO state estimates are used to investigate the origins of ocean heat, salt, mass, sea-ice, and regional sea level variability (e.g., Wunsch et al., 2007; Köhl and Stammer, 2008; Piecuch and Ponte 2011; Fenty and Heimbach, 2013b; Wunsch and Heimbach 2013, 2014; Fukumori and Wang 2013; Buckley et al. 2014, 2015; Forget and Ponte, 2015, Fenty et al., 2017; Piecuch et al. 2017). How is the ECCO Central Production State Estimate Made?¶ The ECCO ocean reanalysis system is a mature, state-of-the-art data tool for synthesizing a diverse Earth System observations, including satellite and in-situ data, into a complete and physically-consistent description of Earth’s global ocean and sea-ice state. 
Ocean observations, even those from satellites with global coverage, are still sparse in both space and time, relative to the inherent scales of ocean variability. The ECCO reanalysis system is able to reconstruct the global ocean and sea-ice state by synthesizing hundreds of millions of sparse and heterogeneous ocean and sea-ice satellite and in-situ observations with an ocean and sea-ice general circulation model. Through iterative high-dimensional nonlinear optimization using the adjoint of the ocean and sea-ice model, the ECCO reanalysis system identifies a particular solution to the system of equations describing ocean and sea-ice dynamics and thermodynamics that reproduces a set of constraining observations in a least-squares sense. By simultaneously integrating numerous diverse and heterogeneous data streams into the dynamically-consistent framework of the physical model we make optimal use of the data. Users of the ECCO reanalysis are not only provided a comprehensive description of the Earth's changing ocean and sea-ice states but also information about the underlying physical processes responsible for driving those changes. For a list of the input data used to constrain the ECCO Version 4 Release 3 state estimate, see the ECCO v4r3 release documentation.
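Once the state estimate output has been downloaded, a first look from Python is straightforward. In the sketch below the directory layout, file name, and variable choice are placeholders; the tutorial's ecco_v4_py helper package is optional and not required for this step:

import xarray as xr

# Open one granule of ECCO v4r3 output; the path and filename are placeholders.
ds = xr.open_dataset("ECCO_V4r3/nctiles_monthly/THETA/THETA_2010.nc")

print(ds.data_vars)          # variables stored in the file
theta = ds["THETA"]          # potential temperature, deg C
print(theta.dims, theta.shape)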
https://ecco-v4-python-tutorial.readthedocs.io/intro.html
2022-05-16T21:52:01
CC-MAIN-2022-21
1652662512249.16
[]
ecco-v4-python-tutorial.readthedocs.io
Migration Table of Contents The @ChangeLog annotation has been deprecated in favour of the @ChangeUnit. For more information check this section # Introduction A migration is composed by multiple smaller pieces called ChangeUnits, which are processed in order by the Mongock runner. ChangeUnits are the unit of migration. These refer to the annotated classes where developers write migration logic/scripts. All classes with the @ChangeUnit annotation will be scanned by Mongock to execute the migration. A migration is constituted by an ordered list of ChangeUnits. Each of the ChangeUnits represent a unit of migration, which has the following implications: - Each ChangeUnit is wrapped in a transaction:. - When transactions are possible(transactional environment), Mongock uses the mechanism provided by the database. - On the other hand, in non-transactional environments, Mongock will try to provide an artificial transactional atmosphere by rolling back the failed change manually. - In targeted operations, such as undo, upgrade, etc., the ChangeUnit is the unit of the operation. For example, when performing an undooperation, it needs to specify the ChangeUnitId until which all the ChangeUnits are reverted(inclusive). - A ChangeUnit has only one migration method, which is marked with the @Execution annotation, and a rollback method, annotated with @RollbackExecution. # ChangeUnit methods Every class marked as @ChangeUnit will be marked as a migration class and can contain methods annotated as follow: @Execution: The main migration method(Mandatory). @RollbackExecution: This method basically reverts the changes made by the @Executionmethod. It's mandatory and highly recommended to properly implement it. It can be left empty if developers don't think is required in some scenarios. It will be triggered in the two following situations: - When the @Executionmethod fails in a non-transactional environment. - In recovery operation like undo. @BeforeExecution: Optional method that will be executed before the actual migration, meaning this that it won't be part of the transactional and executed in non-transactional context. It's useful to perform DDL operations in database where they are not allowed inside a transaction, like MongoDB, or as preparation for the actual migration. This method is treated and tracked in the database history like the @Execution method, meaning this that in case of failure, it will force the migration to be aborted, tracked in the database as failed and Mongock will run it again in the next execution. - @RollbackBeforeExecution: Similar to the @RollbackExecutionfor the @Executionmethod. It reverts back the changes made by the @BeforeExecutionmethod. It's only mandatory when the method @BeforeExecutionis present. # ChangeUnit attributes Multiple attributes can be passed to ChangeUnits to configure these. 
The following table represents the list of attributes that you can set for preparing your migration: # ChangeUnit example ChangeUnits accept dependency injections directly in the method and at constructor level @ChangeUnit(id="myMigrationChangeUnitId", order = "1", author = "mongock_test", systemVersion = "1")public class MyMigrationChangeUnit {private final MongoTemplate template;public MyMigrationChangeUnit(MongoTemplate template) {this.template = template;}//Note this method / annotation is Optional@BeforeExecutionpublic void before() {mongoTemplate.createCollection("clients");}//Note this method / annotation is Optional@RollbackBeforeExecutionpublic void rollbackBefore() {mongoTemplate.dropCollection("clients");}@Executionpublic void migrationMethod() {getClientDocuments().stream().forEach(clientDocument -> mongoTemplate.save(clientDocument, CLIENTS_COLLECTION_NAME));}@RollbackExecutionpublic void rollback() {mongoTemplate.deleteMany(new Document());}} # Best practices Use the Operation classes in favour of persisted objects in your ChangeUnits Although Mongock provides a powerful mechanism that allows you to inject any dependency you wish to your ChangeUnits, these are considered the source of truth and should treated like static resources, once executed shouldn't be changed. With this in mind, imagine the scenario you have the class Clientthat represents a table in your database. You create a ChangeUnit which uses the field nameof the client. One month later, you realise the field is not needed anymore and decide to remove it. If you remove it, the first ChangeUnit's code won't compile. This leaves you with two options: either remove/update the first ChangeUnit or keep the unneeded field name. Neither of which is a good option. An example for MongoDB would be to use MongoTemplate in favour of using Repository classes directly to perform the migrations. High Availability Considerations: In a distributed environment where multiple nodes of the application are running, there are a few considerations when building migrations with Mongock: Backwards compatible ChangeUnits While the migration process is taking place, the old version of the software is likely to still be running. During this time, it can happen that the old version of the software is dealing with the new version of the data. Scenarios where the data is a mix between old and new versions could also occur. This means the software must still work regardless of the status of the database. It can be a detriment to High Availability if ChangeUnits are non-backward-compatible ChangeUnits. 2-stage approach There are certain update operations that can leave code and data in an inconsistent state whilst performing a change. In such scenarios, we recommend to perform the change in two independent deployments. The first one: only provides additions and is compatible with the current deployed code. At this stage, the code would work with the old structure as well as the next change that will be applied. At this point, if the migration was executed, it affects the database but not the code allowing services to be running because the migration didn't produce a breaking change. The next step is required to ensure the new refactored code is also deployed. Once this is done, we have the first part of the data migration done(we only have to remove what's not needed anymore) and the code is able to work with the actual version of the data and the next migration that will be applying. 
The last stage is to do the new deployment with the data migration(which is compatible with the current code deployed) together with the code reflecting the change. Once again, there are chances that the data migration is done but the service itself(code) doesn't. This is not a problem as the code deployed is also compatible with the new version of the data. Light ChangeUnits Try to wrap your migration in relatively light ChangeUnits. The concept of light is not universal, but the time to execute a ChangeUnit shouldn't mean a risk to the application's startup. For example, when using Kubernetes in a transactional environment, if a ChangeUnit can potentially take longer than the Kubernetes initial delay, the services will proably fall into a infinite loop. If there is no ChangeUnit that puts this in risk, the worse case scenario is that the service will be re-started by the Kubernetes agents as it needs more time to acomplish the entire migration, but eventually the migration will finalise. Try to enforce idempotency in your ChangeUnits (for non-transactional environment). In these cases, a ChangeUnit can be interrupted at any time. Mongock will execute again in the next execution. Although you have the rollback feature, in non-transactional environments it's not guaranteed that it's executed correctly. ChangeUnit reduces its execution time in every iteration (for non-transactional environment). A ChangeUnit can be interrupted at any time. This means an specific ChangeUnit needs to be re-run. In the undesired scenario where the ChangeUnit's execution time is greater than the interruption time(could be Kubernetes initial delay), that ChangeUnit won't be ever finished. So the ChangeUnit needs to be developed in such a way that every iteration reduces its execution time, so eventually, after some iterations, the ChangeUnit finished. ChangeLog.
https://docs.mongock.io/v5/migration/index.html
2022-05-16T22:08:01
CC-MAIN-2022-21
1652662512249.16
[]
docs.mongock.io
Configuration - Live Chat The main purpose of the WebChat module configuration is to configure the appearance of your Live Chat on your website (or mobile app). Further you can define the availability of your chat to for example your opening hours. CustomizeCustomize ChatChat Use our visualized editor to add a chat title, welcome description, chat buttons and the input field hint. Chat buttons can trigger Flows. StyleStyle Change the chat's according your corporate colors and position the chat circle either bottom right or bottom left. Click Advanced Configuration and reference to a custom CSS URL to add further customizations. Advanced SettingsAdvanced Settings Through Show advanced settings, following options appear: AvailabilityAvailability In Availability you find three standard options to define at what times the chat shall be visible to your customers. - Always - chat 24/7 visible, customers can text you anytime - During Open Hours - set your open hours, the chat will be visible only then - An Agent Is Online An Agent Is OnlineAn Agent Is Online Through the Go online button on the top right of the Chatvisor platform, agents can go online or offline. Agents that set their state to "online" can be specifically targeted with incoming messages using Routing Rules.
https://docs.chatvisor.com/docs/config23_css_module-configuration-live-chat/
2020-09-18T20:42:34
CC-MAIN-2020-40
1600400188841.7
[array(['/img/newdocsimg/css-configuration_module-configuration_live-chat-customize.png', 'alt-text'], dtype=object) array(['/img/newdocsimg/css-configuration_module-configuration_live-chat-editor.png', 'alt-text'], dtype=object) array(['/img/newdocsimg/css-configuration_module-configuration_live-chat-availability.png', 'alt-text'], dtype=object) array(['/img/newdocsimg/css-configuration_module-configuration-live-chat-availability-agent-online.png', 'alt-text'], dtype=object) ]
docs.chatvisor.com
Clothesline Installation Neutral Bay 2089 NSW
How can the Brabantia Wall Fix Clothes Airer be beneficial for your home in Neutral Bay 2089 Lower North Shore NSW? This is the epitome of a space saving clothes airer. It functions like a rotary laundry line, capable of handling a full washing load at a time, without occupying too much space, especially when you only have a narrow or tight area to devote to its installation.
You can get the Brabantia Wall Fix Clothes Airer at Lifestyle Clotheslines, your leading laundry line installer, supplier, and retailer in Neutral Bay 2089 Lower North Shore NSW.
The Brabantia Wall Fix Clothes Airer has a clever and streamlined design that allows it to be installed even in the smallest part of your wall:
- no need to set up and disassemble another clothes airer
- fixed onto the wall
- always ready for use
- very easy to fold away
- less hassle, a quicker drying task, more time for other chores
The Brabantia Wall Fix Clothes Airer measures 1795mm wide, 1090mm high and 1820mm deep. Its outer lines are 120cm in length, but all in all, it can provide as much as 24 metres of hanging space. It can fit:
- inside the bathroom as a secondary clothesline unit
- in a small laundry area during days when you need to multitask
- on a spare wall in your narrow garage when the weather is not pleasant
- on the balcony or patio area for easy access
No worries about weather elements that may cause premature deterioration of its parts, because the Brabantia Wall Fix Clothes Airer is specifically designed to withstand corrosion, and even the harsh Aussie sun. And how should you deal with rain, moisture, as well as bat poo and bird droppings? The Brabantia Wall Fix Clothes Airer can be purchased with a waterproof cover for complete protection against various external elements.
Aim for a faster, more efficient, and more convenient clothes drying task in your home in Neutral Bay 2089 Lower North Shore NSW by purchasing the Brabantia Wall Fix Clothes Airer from Lifestyle Clotheslines.
Clothesline Services in the Neutral Bay area include:
- Rotary clothesline installation
- Hills Hoist and clothes hoist installs
- Removal of old clothesline
- Core Hole Drilling service
- Insurance & Storm damaged quotes
- Rewiring service
https://docs.lifestyleclotheslines.com.au/article/1641-clothesline-installation-neutral-bay-2089-nsw
2020-09-18T19:23:53
CC-MAIN-2020-40
1600400188841.7
[]
docs.lifestyleclotheslines.com.au
en:admin:users_administration:mass_user_account_deletion - created 2017/09/14 13:37 by admin (current)
https://docs.openeclass.org/en/admin/users_administration/mass_user_account_deletion?do=diff&rev2%5B0%5D=&rev2%5B1%5D=1505385441&difftype=sidebyside
2020-09-18T20:55:07
CC-MAIN-2020-40
1600400188841.7
[]
docs.openeclass.org