content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
The agent layer¶ This section describes the agent layer and gives you enough information to implement your own distributed system without going too much into detail. For that, you should also read the section about the RPC layer. Overview¶ You can think of agents as small, independent programs running in parallel. Each agent waits for input (e.g., incoming network messages), processes the input and creates, based on its internal state and the input, some output (like outgoing network messages). You can also imagine them as being like normal objects that call other object’s methods. But instead of calling these methods directly, they do remote procedure calls (RPC) via a network connection. In theory, that means that every agent has a little server with an event loop that waits for incoming messages and dispatches them to the corresponding method calls. Using this model, you would quickly run out of resources with hundreds or thousands of interconnected agents. For this reason, agents are clustered in containers. A container provides the network server and event loop which all agents within the container share. Agents are uniquely identified by the container’s address and an ID (which is unique within a container), for example: tcp://localhost:5555/42. The following image illustrates this: If Agent C wants to send a message to Agent A, its container connects to A’s container. Agent C can now send a message to Agent A. If Agent C now wanted to send a message to Agent B, it would simply reuse the same connection: As you can see in the figure above, containers also have a clock, but you can ignore that for the moment. We’ll come back to it later. Components of a distributed system in aiomas¶ Agent: You implement your business logic in subclasses of aiomas.Agent. Agents can be reactive or proactive. Reactive agents only react to incoming messages. That means, they simply expose some methods that other agents can call. Proactive agents actively perform one ore more tasks, i.e., calling other agent’s methods. An agent can be both, proactive and reactive (that just means that your agent class exposes some methods and has one or more tasks running). Container: All agents live in a container. The agent container implements everything networking related (e.g., a shared RPC server) so that the agent base class can be as light-weight as possible. It also defines the codec used for message (de)serialization and provides a clock for agents. Codec: Codecs define how messages to other agents get serialized to byte strings that can be sent over the network. The base codecs can only serialize the most common object types (like numbers, strings, lists or dicts) but you can extend them with serializers for custom object types. The Codecs section explain all this in detail. Clock: Every container provides a clock for agents. Clocks are important for operations with a timeout (like sleep()). The default clock is a real-time clock synchronized to your system’s time. However, if you want to integrate your MAS with a simulation, you may want to let the time pass faster then real-time (in order to decrease the duration of your simulation). For that use case, aiomas provides a clock that can be synchronized with external sources. All clocks provide functions to get the current time, sleep for some time or execute a task after a given timeout. If you use these function instead of the once asyncio provides, you can easily switch between different kinds of clocks. The Clocks section provides more details and examples. Don’t worry if you feel a bit confused now. I’ll explore all of this with small, intuitive examples. Hello World: A single, proactive agent¶ In our first example, we’ll create a very simple agent which repeatedly prints “Hello, World!”: >>> import aiomas >>> >>> class HelloWorld(aiomas.Agent): ... def __init__(self, container, name): ... # We must pass a ref. to the container to "aiomas.Agent": ... super().__init__(container) ... self.name = name # Our agent's name ... ... async def run(self): ... # This method defines the task that our agent will perform. ... # It's usually called "run()" but you can name it as wou want. ... print(self.name, 'says:') ... clock = self.container.clock ... for i in range(3): ... await clock.sleep(0.1) ... print('Hello, World!') Agents should be a subclass of Agent. This base class needs a reference to the container the agents live in, so you must forward a container argument to it if you override __init__(). Our agent also defines a task run() which prints “Hello, World!” three times. The task also uses the container’s clock to sleep for a small amout of time between each print. The task run() can either be started automatically in the agent’s __init__() or manually after the agent has been instantiated. In our example, we will do the latter. The clock (see clocks) exposes various time related functions similar to those that asyncio offers, but you can easily exchange the default real-time clock of a container with another one (e.g., one where time passes faster than real-time, which is very useful in simulations). Now lets see how we can instantiate and run our agent: >>> # Containers need to be started via a factory function: >>> container = aiomas.Container.create(('localhost', 5555)) >>> >>> # Now we can instantiate an agent an start its task: >>> agent = HelloWorld(container, 'Monty') >>> aiomas.run(until=agent.run()) Monty says: Hello, World! Hello, World! Hello, World! >>> container.shutdown() # Close all connections and shutdown the server In order to run the agent, you need to start a Container first. The container will create an RPC server and bind it to the specified address. The function run() is just a wrapper for loop = asyncio.get_event_loop(); loop.run_until_complete(task). These are the very basics auf aiomas’ agent module. In the next example you’ll learn how an agent can call another agent’s methods. Calling other agent’s methods¶ The purpose of multi-agent systems is having multiple agents calling each other’s methods. Let’s see how we do this. For the sake of clearness, we’ll create two different agent types in this example where Caller calls a method of Callee: >>> import asyncio >>> import aiomas >>> >>> class Callee(aiomas.Agent): ... # This agent does not need to override "__init__()". ... ... # "expose"d methods can be called by other agents: ... @aiomas.expose ... def spam(self, times): ... """Return a lot of spam.""" ... return 'spam' * times >>> >>> >>> class Caller(aiomas.Agent): ... ... async def run(self, callee_addr): ... print(self, 'connecting to', callee_addr) ... # Ask the container to make a connection to the other agent: ... callee = await self.container.connect(callee_addr) ... print(self, 'connected to', callee) ... # "callee" is a proxy to the other agent. It allows us to call ... # the exposed methods: ... result = await callee.spam(3) ... print(self, 'got', result) >>> >>> >>> container = aiomas.Container.create(('localhost', 5555)) >>> callee = Callee(container) >>> caller = Caller(container) >>> aiomas.run(until=caller.run(callee.addr)) Caller('tcp://localhost:5555/1') connecting to tcp://localhost:5555/0 Caller('tcp://localhost:5555/1') connected to CalleeProxy('tcp://localhost:5555/0') Caller('tcp://localhost:5555/1') got spamspamspam >>> container.shutdown() The agent Callee exposes its method spam() via the @aiomas.expose decorator and thus allows other agents to call this method. The arguments and return values of exposed methods need to be serializable. Exposed methods can be normal functions or coroutines. The Caller agent does not expose any methods, but defines a task run() which receives the address of the remote agent. It can connect to that agent via the container’s connect() method. This is a coroutine, so you need to await its result. It’s return value is a proxy object to the remote agent. Proxies represent a remote object and provide access to exposed attributes (like functions) of that object. In the example above, we use the proxy to call the spam() function. Since this involves sending messages to the remote agent, you always need to use await with remote method calls. Distributing agents over multiple containers¶ One container can house a (theoretically) unlimited number of agents. As long as your agents spent most of the time waiting for network IO, there’s no need to use more than one container. If you notice, that the Python process with your program fully utilizes its CPU core (remember, pure Python only uses one core), its time to spawn sub-processes with its own container to actually parallelize your application. The aiomas.subproc module provides some helpers for this use case. Running multiple agent containers in a single process might only be helpful for demonstration or debugging purposes. In the latter case, you should also take a look at the aiomas.local_queue transport. You can replace normal TCP sockets with it and gain a deterministic order of outgoing and incoming messages between multiple containers within a single process.
https://aiomas.readthedocs.io/en/latest/guides/agent.html
2018-11-12T22:59:07
CC-MAIN-2018-47
1542039741151.56
[]
aiomas.readthedocs.io
Visual display policy settings The Visual Display section contains policy settings for controlling the quality of images sent from virtual desktops to the user device. Preferred color depth for simple graphicsPreferredVisual quality This setting specifies the desired visual quality for images displayed on the user device. By default, this setting is Medium. To specify the quality of images, choose one of the following options: - Low - Recommended for bandwidth-constrained networks where visual quality can be sacrificed for interactivity -.
https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/policies/reference/ica-policy-settings/visual-display-policy-settings.html
2018-11-12T23:32:47
CC-MAIN-2018-47
1542039741151.56
[]
docs.citrix.com
When you start the information center, you are presented with a window, which can be divided into three functional parts. Across the top is a toolbar. The toolbar will provide you with quick access to most of KInfoCenter’s features like get help on the current module and a help menu. Along the left hand side, is a column with a filter field at the top. This is a where you choose which module to investigate. To navigate through the various KCM modules, left click on the module in the tree view. You can also use the arrow keys to scroll though the KCM's and pressing Enter will select the module. The module will now appear of the main panel of the KInfoCenter window. Some items within the tree view are categories, you can left click or again press Enter to expand and collapsed these items. This will show the modules under the category. You can right click on the module listing to show the following options: : Collapses the tree to show only top level modules and categories. : Expands the tree to show modules. : This will clear any filter you have applied on the module listing via the search box The main panel shows you the system information about the selected module.
https://docs.kde.org/stable5/en/kde-workspace/kinfocenter/information-center-screen.html
2016-08-24T01:31:15
CC-MAIN-2016-36
1471982290752.48
[array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object) array(['kinfocenter.png', 'The KInfoCenter Screen'], dtype=object)]
docs.kde.org
See also There are 5 general mechanisms for creating arrays: This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or record arrays. Both of those are covered in their own sections. In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. >>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]]) ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An examples illustrates much better than a verbal description: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. This is presumably the most common case of large array creation. The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats. Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) HDF5: PyTables FITS: PyFITS Others? xxx Examples of formats that cannot be read directly but for which it is not hard to convert are libraries like PIL (able to read and write many image formats such as jpg, png, etc). Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ascii files can be read using the io package in scipy. There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that read the data, one can wrap that library with a variety of techniques (see xxx) though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++. There are libraries that can be used to generate arrays for special purposes and it isn’t possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal)
http://docs.scipy.org/doc/numpy-1.3.x/user/basics.creation.html
2016-08-24T01:34:02
CC-MAIN-2016-36
1471982290752.48
[]
docs.scipy.org
JForm::loadFieldFieldType Description Proxy for . Description:JForm::loadFieldType [Edit Descripton] protected function loadFieldType ( $type $new=true ) - Returns mixed object on success, false otherwise. - Defined on line 1484 of libraries/joomla/form/form.php - Since See also JForm::loadFieldType source code on BitBucket Class JForm Subpackage Form - Other versions of JForm::loadFieldType SeeAlso:JForm::loadFieldType [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/API17:JForm::loadFieldType
2016-08-24T01:32:16
CC-MAIN-2016-36
1471982290752.48
[]
docs.joomla.org
Components Weblinks Categories Links - Components Weblinks Links Edit This screen is accessed from the back-end Joomla! administrator panel. It is used to add or edit weblink categories.. The unique ID number automatically assigned to this item by Joomla!. This number cannot be changed. -. Specify a different layout than the one supplied by the component view or template overrides. -. Category Permissions -
https://docs.joomla.org/Help25:Components_Weblinks_Categories_Edit
2016-08-24T01:32:27
CC-MAIN-2016-36
1471982290752.48
[]
docs.joomla.org
Ticket #125 (closed defect: fixed) FInger UI is not usable on 2.8" screen Description The finger scroller is barely workable. The buttons on the right are too small for finger taps. Please adjust the sizes. Change History comment:2 Changed 6 years ago by sean_mosko@… - Status changed from new to assigned I will bring dave a device next week. He will start (now) to enlarge the scroll area and use 3 buttons instead of 4. comment:3 Changed 6 years ago by davewu01@… - Status changed from assigned to closed - Resolution set to fixed Fixed, please check: for new image files. comment:4 Changed 6 years ago by alphaone@… - Status changed from closed to reopened - Resolution fixed deleted Needs to be fixed in code before this bug can be closed. comment:5 Changed 6 years ago by jluebbe@… - Status changed from reopened to new - Owner changed from davewu01@… to mickey@… comment:6 Changed 6 years ago by alphaone@…. Note: See TracTickets for help on using tickets. I will work with the designers on this first.
http://docs.openmoko.org/trac/ticket/125
2013-05-18T20:57:47
CC-MAIN-2013-20
1368696382851
[]
docs.openmoko.org
Is the Solution working Is the Solution working Now that the hard work is done, complete your installation by running a saved search and check that the views are populated correctly. Validate that you installed the Solution successfully Run the setup saved searches: - Log into your Splunk Search head. - To open the App, choose "WebSphere" in your Home screen. - From the App main menu select View > Saved Searches > setup_dropdown. - From the App main menu select View > Saved Searches > setup_log - If you have installed the Solution correctly you will see a visual representation of the data you are collecting from your WAS environment, with your Deployment Manager, node agents, and app servers listed. - Restart your Splunk search head and indexers. When you run the searches you create the cell_node_server.csv and cell_node_server_log.csv files from your serverindex.xml and server.xml data that was indexed by Splunk. This search populates the filters that show Host, Cell, Node, and Server data on most views. Use the menu options to easily navigate the WAS data in your environment. To validate data integrity look at: - Component Inventory overview: This view verifies that basic config XML data is being collected and indexed by Splunk. Set the Time range to "All Time" to gather an accurate representation of the data. - Errors view in the Operations section: Regardless of whether errors are present or not in the specified time range, the Host, Cell, Node, and Server menu dropdowns should populate with your environment. Toggle through the views to ensure that the menus are refreshed and populated correctly. Selecting a Host refreshes the Cell dropdown and populates it with the correct Cell names. Selecting Cell refreshes the Node dropdown and so on. - LogEventType=W in the Troublshooting section: Loads all SystemOut.logand SystemErr.logwith event type W and ensures JVM log data is being indexed in the websphere index. - Server availability in the Operations section: If you have JMX enabled you should be able to see which of your app servers are running. It also establishes whether JMX data is being collected. - Alternatively you can check the JVM Runtime view in the Performance' section, however some of this data may not be indexed and it is determined by your WAS PMI settings in the WAS Admin Console. This documentation applies to the following versions of WAS: 2.0 , 2.0.1 View the Article History for its revisions.
http://docs.splunk.com/Documentation/WAS/latest/InstallGuide/IstheSolutionworking
2013-05-18T20:56:23
CC-MAIN-2013-20
1368696382851
[]
docs.splunk.com
Florence is the Renaissance city of all Renaissance cities. Sitting in the cradle of the Tuscan region, Firenze is the locale that probably best encapsulates all of which Italy is known. It’s a delightful mix of old world and modern Italy, with its cobblestone streets juxtaposed with industry and high fashion. There is more art than you’ll know what to do with and the food and culture is spectacular. Take a poll of people who have been to Italy, and I think 8/10 will say Florence is their favorite. There’s a reason for that, it’s an enchanting, captivating place. the dog days are over… The first order of business when you get to Florence is to see The Galleria dell’Accademia di Firenze. Backing up a bit, as I mentioned, Florence was the birthplace/capital of the Renaissance. If you know anything about the Renaissance or its art (or at least the names of all the ninja turtles), you know that Michelangelo is its most revered son. And while Michelangelo is probably most well known for the Sistine Chapel, he actually didn’t care for his paintings as much as his sculpture. And thus, that brings us to David. Michelangelo’s other widely renowned masterpiece is the gargantuan 17 foot statue of the biblical hero (i.e. David and Goliath). He stands right in the center of the Galleria dell’Accademia…the focal point of the museum. But…I actually think Michelangelo’s other works are worth a long examination as well. Everyone goes for David, but I promise you will leave with a greater appreciation of the art form as a whole. It’s incredible what that man was able to do with a chisel. Check out the unfinished prisoners and a possibly not Michelangelo produced Pieta (other authentic versions live in the Duomo of the Cathedral Santa Mario del Fiore –which hold your horses, you’ll read about in a minute– and St. Peter’s Basilica in Vatican City). David himself is a bit skewed in his proportions depending on where you stand. His right hand is gigantic when you look at its placement on his thigh. The best part of the status are the anterior and external jugular veins in his neck. The detail is incredible. It’s really worth taking a guided tour here to learn more about the history of each work and of the man. I love that Michelangelo just hated everyone, but it was likely a projection of the discomfort he felt within himself. He’s often a more interesting subject than his works…a piece of work, if I must. Walking towards the old town, Florence is very easy to navigate with a grid-like plan. Via ricasoli will take you directly to gothic style Cathedral Santa Maria del Fiore, home of the famous Duomo of Florence. Brunelleschi’s red brick cupula is one of the most recognizable features in the city and is an engineering marvel that has stood the test of time and weather and war since the 1200/1300s. A couple of guys you may have heard of contributed to its planning: Donatello and Leonardo da Vinci. If you’re keeping track, Raphael is the only ninja turtle not involved with this structure. Across from this cathedral is the Baptistery of St. John. This peculiar octagonal building is built in the Romanesque style and was erected in the 1000s. And it still stands out, with its very stylish mosaic detail and the large bronze ‘gates of paradise.’ However, the gates of paradise are just replicas, as the real ones are housed in the Museo dell’Opera del Duomo. Fun fact, I’ve seen two of the other three replicas in the world: Grace Cathedral in San Francisco up on Knob Hill and Nelson Atkins Museum of Art in Kansas City. Now I just have to visit the Kazan Cathedral in St. Petersburg, Russia and I’ll have gates of paradise bingo! If you have time, I’m sure a climb up Giotto’s tower of the cathedral will afford you an incredible view of the city. But, continue on your way down Via dei Calzaiuoli until you hit Piazza della Signoria, which is the city center. This plaza is flanked by cafes and shops all around, but at the far end lies the town hall of Palazzo Vecchio with its looming tower and replica of David standing guard. There’s also many statues holding court at the Loggia dei Lanzi including works from Donatello and Cellini. At the opposite end is another famous Florentine art house, the house of Gucci. Founded in 1920s by Guccio Gucci, this Italian luxury brand has ascended the ranks of high fashion and become a household name. There’s a museum and a cool as hell shop with items that can only be found here. You can’t talk about Florence without talking about fashion. Some of the most lusted after brands in the world started in this area: Gucci, Salvatore Ferragamo, Emilio Pucci and Roberto Cavalli, and they continue to reign supreme. Art begets art, and having such a creative world around no doubt helped these designers find inspiration around every corner. Continuing your walk in the old town, you’ll reach the famed Ponte Vecchio bridge. This patchwork quilt of a structure crosses over the Arno River. This latest iteration is from the 1300s, but previous versions were damaged by flooding. Nowadays, there are lots of jewelry vendors along the bridge although it does appear to be a little sketchy. By this point I was dying of hunger, but had to hold out until I could hoof it back to the Mercato di San Lorenzo near the center of town. When you enter, go straight to Da Nerbone. You will probably smell it before you see it. This is one of the finest sandwiches I’ve ever eaten in my life. Full disclosure: I am not a sandwich person. I don’t really like them, unless there’s pork of some sort of them (BLT is my favorite type). Here there is a sandwich that meets and meats this criteria: the porchetta. You will read reviews telling you to try the lampredotto tripe sandwich which is traditional. You should have the porchette. Crispy, crunchy, fatty…it’s perfection. The bun could be heated up a little more, and so I don’t think it knocks a good Keller-esque BLT from its #1 position in best sandwiches, but it came real close. It’s also one of the cheapest things you can find here. Before succumbing to a food coma, I had to find some an afternoon cappuccino to wake me up. News Cafe is killing it on Instagram due to their very talented barista/owner Massemo, who can draw pictures of the Duomo in your foam. That’s secondary, the coffee is really good, which, first things first yada yada yada. To top off an already art filled bonanza of a day, you have to visit Uffizi Gallery. Before going, you should read up on the prominent omnipotent Medici family that ruled over Italy during the 1500s. They killed it in the bank game and were incredibly powerful. When the dynasty died out, their art collection was donated to the city, and here you have the Uffizi Gallery, one of the largest art museums in the world. You’ve got Raphael, Caravaggio, da Vinci, El Greco, Rembrandt, Titian. So many masterpieces! Of course, everyone has to see the Birth of Venus by Botticelli. After strolling around the museum for a few hours, your legs will hate you, but your mind will alight in all of the beautiful things it has seen today. Treat your stomach with a steak florentine at Il Portale. These huge T-bones can feed a few hungry people, and are best paired with a red from the Tuscan region…and lucky you, you just happen to be there. Mangia! Mangia! One thought on “24 hours in florence” Pingback: Touchdown in Kan-SAS CITY | Traveling docs
https://traveling-docs.com/2018/08/21/24-hours-in-florence/
2021-05-06T02:57:17
CC-MAIN-2021-21
1620243988725.79
[array(['https://travelingdocs.files.wordpress.com/2018/06/img_2337.jpg?w=676', 'IMG_2337.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2015.jpg?w=676', 'IMG_2015.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2034.jpg?w=676', 'IMG_2034.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2027.jpg?w=676', 'IMG_2027.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2057.jpg?w=676', 'IMG_2057.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2084.jpg?w=676', 'IMG_2084.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2118.jpg?w=676', 'IMG_2118.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2091.jpg?w=676', 'IMG_2091.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2102.jpg?w=676', 'IMG_2102.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2381.jpg?w=676', 'IMG_2381.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2139.jpg?w=676', 'IMG_2139.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2198.jpg?w=676', 'IMG_2198.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2152.jpg?w=676', 'IMG_2152.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2218.jpg?w=676', 'IMG_2218.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2417.jpg?w=676', 'IMG_2417.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2321.jpg?w=676', 'IMG_2321.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2349.jpg?w=676', 'IMG_2349.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2293.jpg?w=676', 'IMG_2293.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2018/06/img_2411.jpg?w=676', 'IMG_2411.jpg'], dtype=object) ]
traveling-docs.com
Welcome to the Directa24 API v1 Documentation page. Here you will find all the details you need to review your current integration. This documentation is meant only to support old integrations and should not be integrated anymore. For the latest Deposits Documentation click here. We work as a bridge between you and your customer's local payment methods such as banks, e-wallets, credit cards among others. With only one integration, you have access to the most popular payment methods in the emerging markets. All communications should be made using POST method and exclusively through registered IPs on both sides. Only HTTPS requests are allowed Credentials must be included in all communications. Both parties must use an HMAC-SHA-256 (RFC 2104) code to verify the integrity of the information received from each side. Your account is associated with two different environments; Production and Staging (STG)/. You will find your API credentials in the Merchant Panel by going to Settings -> API Access. Notice that the credentials between our Staging environment and our Production environment are different. A Staging environment is available for integration development and testing which simulates the requests and transactions available in the platform. On test environment no transactions will actually be processed. You can create deposits in Staging using the same payment methods than in Production, the most important payment methods are available for the simulation. During integration, on the Staging system you can approve or cancel the deposits yourself, and receive the notifications to test the full flow.
https://docs.directa24.com/v/v1/
2021-05-06T05:20:51
CC-MAIN-2021-21
1620243988725.79
[]
docs.directa24.com
Broadcasts The Broadcasts API gives you a read-only interface to PredictHQ’s Live TV Events data. The API returns broadcasts; a broadcast represents a physical event broadcasted on a television network, at a specific date, time, and location. Search Broadcasts Use the parameters described below to search and filter all broadcasts that are available to your account. Visibility Window Please note that you will not receive an error when requesting a date range and/or location that is outside of your subscription's broadcast visibility window. Instead, your visibility window will be automatically applied to your results. Your plan’s visibility window is shown in your plan summary. Result Limit Please note the number of results returned will be limited to your subscription's result limit. If more results exist, the overflow field will be set to true to indicate the count number has been capped to your pagination limit. Your plan’s pagination limits are shown in your plan summary. Parameters Broadcast Fields Below are the fields returned by the Broadcasts endpoint. Please note that these are not the fields used for filtering – refer to the Search Broadcasts section to discover which parameters can be used to filter broadcasts. JSON Schemas are available for the Broadcasts endpoint and for a single broadcast:
https://docs.predicthq.com/resources/broadcasts/
2021-05-06T02:51:08
CC-MAIN-2021-21
1620243988725.79
[]
docs.predicthq.com
GridView.MasterRowExpanding Event Enables you to control whether particular detail clones can be displayed. Namespace: DevExpress.XtraGrid.Views.Grid Assembly: DevExpress.XtraGrid.v19.1.dll Declaration [DXCategory("MasterDetail")] public event MasterRowCanExpandEventHandler MasterRowExpanding <DXCategory("MasterDetail")> Public Event MasterRowExpanding As MasterRowCanExpandEventHandler Event Data The MasterRowExpanding event's data class is MasterRowCanExpandEventArgs. The following properties provide information specific to this event: Remarks The MasterRowExpanding event is raised when before expanding a master row or before switching between detail clones. The master row can be identified using the CustomMasterRowEventArgs.RowHandle parameter. The CustomMasterRowEventArgs.RelationIndex parameter identifies the detail clone which is about to be opened. If you need to prevent the detail from being displayed, set the MasterRowCanExpandEventArgs.Allow parameter to false. Please refer to the Master-Detail Relationships topic for additional information.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraGrid.Views.Grid.GridView.MasterRowExpanding?v=19.1
2021-05-06T02:53:22
CC-MAIN-2021-21
1620243988725.79
[]
docs.devexpress.com
SYS.DataCheck.System abstract persistent class SYS.DataCheck.System extends %Library.Persistent SQL Table Name: SYS_DataCheck.SystemDataCheck Overview The DataCheck system provides a mechanism to compare the state of data on two systems and determine whether they are the matched or discrepant. It is assumed that the global states on both systems may be in transition and the system provides a mechansim to recheck discrepant ranges. One of the two systems must be defined as the destination. All control and output from the system is provided through the destination system. Typically, the destination system is the destination of shadowing or a non-primary mirror member, while the source system is the source of shadowing or another mirror member. See Destination for detail on creating and configuring a DataCheck system. The system operates in units called queries. The query specifies a database, an initial global reference, a target global reference, and a number of nodes. The query is sent by the destination system to the source system. Both systems calculate an answer by traversing N global nodes starting with the initial global reference, and hashing the keys/value pairs. The destination compares the query answers from both systems and records the results in a set of RangeList objects. Queries are created automatically on the destination system based on its settings. The destination system has a Workflow object to specify a strategy for checking globals. The Workflow consists of a number of phases to begin a check, recheck discrepancies, etc. The settings that define what globals to check are stored on the Destination in the RunParameters object. This class, System, is an abstract class containing elements common to both Destination and Source systems. Property Inventory Method Inventory Properties Methods Returns a reason code by reference to describe why the system is in its current state (particularly the Stopping and Stopped states on the destination system). Reason codes are not provided for all states. The reason will be $$$ReasonUnknown if a specific reason is not provided or not available. The reason code on the destination of DataCheck can be used to determine whether the system has stopped due to a Workflow stop phase, a user-requested stop or an error. Upon successful return from the Start() method on the destination system, its state will be changed to $$$StateStarting. On any subsequent call to this method, the caller can determine whether the system has reached a workflow stop phase by checking the reason code for $$$StateReasonWorkflowStop. While there is no guarantee that a reason will be provided when the system is stopped in all cases, it is guaranteed that if the system stops due to a workflow stop phase, the reason will be set to $$$StateReasonWorkflowStop. See StateReason for possible reason codes. Indexes Inherited Members Inherited Methods - %AddToSaveSet() - %AddToSyncSet() - %BMEBuilt() - DetermineClass() - () Storage Storage Model: CacheStorage (SYS.DataCheck.System)
https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?&LIBRARY=%25SYS&CLASSNAME=SYS.DataCheck.System
2021-05-06T04:58:39
CC-MAIN-2021-21
1620243988725.79
[]
docs.intersystems.com
. 1 © MongoDB, Inc 2008-present. MongoDB, Mongo, and the leaf logo are registered trademarks of MongoDB, Inc.
https://docs.mongodb.com/v4.4/reference/command/ping/
2021-05-06T05:07:03
CC-MAIN-2021-21
1620243988725.79
[]
docs.mongodb.com
class ExprBuilder This class builds an if expression. Marks the expression as being always used. Sets a closure to use as tests. The default one tests if the value is true. Tests if the value is a string. Tests if the value is null. Tests if the value is empty. Tests if the value is an array. Tests if the value is in an array. Tests if the value is not in an array. Transforms variables of any type into an array. Sets the closure to run if the test pass. Sets a closure returning an empty array. Sets a closure marking the value as invalid at validation time. if you want to add the value of the node in your message just use a %s placeholder. Sets a closure unsetting this key of the array at validation time. Returns the related node. Builds the expressions. © 2004–2017 Fabien Potencier Licensed under the MIT License.
https://docs.w3cub.com/symfony~4.1/symfony/component/config/definition/builder/exprbuilder
2021-05-06T04:12:59
CC-MAIN-2021-21
1620243988725.79
[]
docs.w3cub.com
St. Louis has one of the most recognizable features of any location in the world, the St. Louis arch. There’s so much more to see here though, with charming neighborhoods and stunning architecture, cultural character and museums and one of the biggest parks in the US. The best thing is, a lot of attractions are free here. Which cannot be said about most places. Most importantly, it is home to the best frozen custard in the US, nay, the world, Ted Drewes. I love Ted Drewes. I dream of Ted Drewes. Marry me, Ted Drewes. One taste of the Big Apple concrete will have you saying, meet me in St. Louis. I’ve only ever driven into St. Louis, and usually like to make it a long weekend, so typically I’ve gotten in around brunch time on Friday. The perfect place to start your trip is Kayak’s coffee. Just a stone’s throw from Washington University in St. Louis and Forest Park, there’s really good salads here. I know, I’m not usually a salad person, but the Northwoods Salad is delish. They have sandwiches, smoothies, quiche, french toast, everything you could want before heading to Forest Park. someone’s excited about the art museum Forest Park is home to an abundance of attractions and the zoo, which is free (!). This park is gigantic, larger than Central Park. The Saint Louis art museum, Science Center and Missouri History museum lie within its limits, and it seems like every notable event happens here, too. Shakespeare in the Park, Hot Balloon race and LouFest are held here. take me to church Driving from west to east, after you leave the park, head a few more blocks east to the Cathedral Basilica of Saint Louis. I’m not religious, but this is a site to behold. This cathedral is one of the most beautiful I’ve ever seen, and I’ve been all over Europe. The mosaic work on the ceiling is incredible. If you want to see another beautiful church, make sure you check out St. Francis on the campus of Saint Louis University, also gorgeous gothic architecture. There’s a pretty famous brewery from St. Louis, you may have heard of it…Schafly. Ha, you thought I was going to say Anheuser-Busch, but I’ll get to them. Schafly is a craft brewer that puts out of the best Pumpkin ales you can find. They also have a great restaurant that serves American fare (try the ribs) which goes great with their ales. I usually stay in Ladue or Clayton with friends, but after dinner, don’t miss an opportunity to go out in Clayton with the college kids. It’s laid back, but you’ll definitely hear the question, “Which high school did you go to?” at least 5 times during the night. So STL. The next morning, you’re going to want to start your day at Nadoz, a bistro/cafe with delicious paninis and crepes. This restaurant in Richmond Heights sits right across from the Saint Louis Galleria, an upscale mall that will make any serious shopper salivate. Head to Soulard Farmer’s Market to see the locals in one of the oldest markets in the country, established in 1779. Soulard also is known for its nightlife, especially around Mardi Gras, so tuck that in your mental Rolodex if so inclined. Anheuser-Busch, home of Budweiser, is adjacent to this area, and it’s worth a visit for the FREE tour, the Clydesdales, and the slice of Americana that this pop icon occupies. You’ll learn a lot about their brewing process and get a free sample at the end if you’re of age. Since you are so close to downtown, you gotta go to the Arch. First you will pass Busch stadium, and if you’re able, definitely catch a Cardinals game here sometime, it’s a beautiful park. At the base of the Arch is the gateway museum that has a lot of interesting facts about Lewis & Clark’s journey. The main event is taking the sideways elevator to the 630 ft top of the arch to the observation area. It’s a deal at $10 when compared to other observation decks (I’m looking at you, Willis Tower and Space Needle). If it’s a windy day, expect there to be a little swaying; if it’s a clear day, enjoy the vast views across the Mississippi. Stop by Union Station for a brief look at the architecture of the world’s former busiest and largest train station. Now it serves as a hotel and event center, as seen outfitted for a wedding in this picture. Spend the rest of your afternoon finding your inner kid at the City Museum, which is more like a playground of repurposed architectural and industrial materials for all ages, complete with a Ferris Wheel on its roof. It’s terrifying to be up that high. Unwind from your busy day at a restaurant on the Hill, an area of town that is known for its Italian food. Brazie’s is a personal favorite, but there’s so many choices. You gotta get cannolis or tiramisu for dessert, it’s mandatory. Consider having a night cap in the West End, a populated night spot with lounges and the Sub Zero Vodka Bar. For your last morning in town, head to Delmar Boulevard, and Blueberry Hill for good, old-fashioned diner food with a pop culture decor scheme. Like we’re talking metal lunch boxes and pez dispensers. Stroll along this street to see record shops and thrift stores and the like. And don’t think I’ve forgotten. It’s time. Ted Drewes awaits you. There’s a reason I waited until the end, and that’s because if you had it your first day, you wouldn’t want to do anything else. They are known for their concretes, custard with mix ins. The Big Apple has pieces of apple pie smushed into it. Need I say more? If you are driving back somewhere, take some Imo’s for the road. Cracker crust pizza and toasted raviolis will clog your arteries in the best of ways. Love St. Louis. Food/drink to try: Toasted Ravioli, STL style ribs, TED DREWES, Budweiser, Schafly Sports: St. Louis Blues (NHL), Cardinals (MLB) Music: Meet Me in St. Louis, Nelly, St. Lunatics/Chingy Suggested Souvenirs: something from the Arch, hell, bring back some Imo’s with you 2 thoughts on “3 days in St. Louis” Pingback: take me down to the Vatican City | Traveling docs Pingback: i heart hartford | Traveling docs
https://traveling-docs.com/2017/10/13/3-days-in-st-louis/
2021-05-06T04:11:47
CC-MAIN-2021-21
1620243988725.79
[array(['https://travelingdocs.files.wordpress.com/2017/10/img_6364.jpg?w=676', 'IMG_6364.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5304.jpg?w=3264', 'IMG_5304.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5283.jpg?w=3264', 'IMG_5283.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_52931.jpg?w=3264', 'IMG_5293.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5225.jpg?w=3264', 'IMG_5225.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5212-e1505012407274.jpg?w=676', 'img_5212-e1505012386721.jpg'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5252.jpg?w=676', 'IMG_5252.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5255.jpg?w=676', 'IMG_5255.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5223.jpg?w=3264', 'IMG_5223.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5322.jpg?w=676', 'IMG_5322.JPG'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/09/img_5230-e1505017744324.jpg?w=676', 'img_5230-e1505017721632.jpg'], dtype=object) ]
traveling-docs.com
A newer version of this page is available. Switch to the current version. BaseEdit.IsPrintingMode Property Gets or sets whether the editor is in printing mode. Namespace: DevExpress.Xpf.Editors Assembly: DevExpress.Xpf.Core.v19.2.dll Declaration Property Value Remarks If you create templates used to render grid elements when it is printed and use DevExpress Data Editors, set an editor's IsPrintingMode property to true to correctly calculate its bounds and margins in a printing document. For an example, please see 'Print Templates' in the DXGrid control's demo. See Also Feedback
https://docs.devexpress.com/WPF/DevExpress.Xpf.Editors.BaseEdit.IsPrintingMode?v=19.2
2021-05-06T03:22:51
CC-MAIN-2021-21
1620243988725.79
[]
docs.devexpress.com
Certificate Handling Guidelines These guidelines are relevant to maintainers of packages which utilize smart cards for loading certificate or private key. Its purpose is to bring a consistency in smart card handling on the OS; for background and motivation see the current status of PKCS#11 in Fedora. How to specify a certificate or private key stored in a smart card or HSM In April 2015, RFC7512 defined a 'PKCS#11 URI' as a standard way to identify objects stored in smart cards or HSMs. That form should be understood by programs when specified in place of a certificate file. For non-interactive applications which get information on the command line or configuration file, there should not be a separate configuration option to load keys and certificates stored in smart cards, the same option accepting files, should additionally accept PKCS#11 URIs. How to specify a specific PKCS#11 provider module for the certificate or key Packages which can potentially use PKCS#11 tokens SHOULD automatically use the tokens which are present in the system’s p11-kit configuration, rather than needing to have a PKCS#11 provider explicitly specified. See the PKCS#11 packaging page for more information.
https://docs.fedoraproject.org/sk/packaging-guidelines/SSLCertificateHandling/
2021-05-06T03:36:54
CC-MAIN-2021-21
1620243988725.79
[]
docs.fedoraproject.org
ROS 2 Documentation: Foxy Installation Building ROS 2 on Ubuntu Linux Installing ROS 2 on Ubuntu Linux Installing ROS 2 via Debian Packages Building ROS 2 on macOS Installing ROS 2 on macOS Building ROS 2 on Windows Installing ROS 2 on Windows Building ROS 2 on Fedora Linux Installing the latest ROS 2 development Maintaining a source checkout of ROS 2 Pre-release Testing Installing DDS implementations Installing Connext security plugins Installing University or Evaluation versions of RTI Connext DDS Working with Eclipse Cyclone DDS Tutorials Configuring your ROS 2 environment Introducing turtlesim and rqt Understanding ROS 2 nodes Understanding ROS 2 topics Understanding ROS 2 services Understanding ROS 2 parameters Understanding ROS 2 actions Using rqt_console Creating a launch file Recording and playing back data Creating an action Writing an action server and client (C++) Writing an action server and client (Python) Launching/monitoring multiple nodes with Launch Composing multiple nodes in a single process Using colcon to build packages ROS 2 Topic Statistics Tutorial (C++) Using Fast DDS Discovery Server as discovery protocol [community-contributed] Implement a custom memory allocator ROS2 on IBM Cloud Kubernetes [community-contributed] Eclipse Oxygen with ROS 2 and rviz2 [community-contributed] Building ROS 2 on Linux with Eclipse Oxygen [community-contributed] Building realtime Linux for ROS 2 [community-contributed] Use quality-of-service settings to handle lossy networks Management of nodes with managed lifecycles Efficient intra-process communication Recording and playback of topic data with rosbag using the ROS 1 bridge Using tf2 with ROS 2 Using URDF with robot_state_publisher Real-time programming in ROS 2 Trying the dummy robot demo Logging and logger configuration demo Guides Installation troubleshooting Developing a ROS 2 package ament_cmake user documentation Migrating launch files from ROS 1 to ROS 2 ROS 2 package with bloom Using Python Packages with ROS 2 Porting RQt plugins to Windows Running ROS 2 nodes in Docker [community-contributed] ROS 2 Package Maintainer Guide Building RQt from source Building RQt from source on macOS Building RQt from source on Windows 10 Concepts About the build system About internal ROS 2 interfaces About ROS 2 middleware implementations ROS 2 Client Interfaces (Client Libraries) Contributing ROS 2 developer guide Code style and language versions Quality guide: ensuring code quality Migration guide from ROS 1 Python migration guide from ROS 1 ROSCon Content Distributions ROS 2 alpha releases (Aug 2015 - Oct 2016) Beta 1 (codename ‘Asphalt’; December 2016) Beta 2 (codename ‘r2b2’; July 2017) Beta 3 (codename ‘r2b3’; September 2017)) Features Status Roadmap Feature Ideas Project Governance ROS 2 Technical Steering Committee Charter Marketing Materials Related Projects Intel ROS 2 Projects Glossary ROS 2 Documentation: Foxy » Related Projects Edit on GitHub Related Projects ¶ Intel ROS 2 Projects Other Versions v: foxy Releases Foxy (latest) Eloquent (EOL) Dashing Crystal (EOL) In Development Rolling Galactic
https://docs.ros.org/en/foxy/Related-Projects.html
2021-05-06T04:16:00
CC-MAIN-2021-21
1620243988725.79
[]
docs.ros.org
Introduction¶ If you are reading this user guide for the first time it is strongly recommended that you read the user guide fully before experimenting with your own data files. Much of the content has supplementary links to the reference documentation; you will not need to follow these links in order to understand the guide but they may serve as a useful reference for future exploration. Since later pages depend on earlier ones, try reading this user guide sequentially using the next and previous links.
https://scitools-iris.readthedocs.io/en/latest/userguide/index.html
2021-05-06T02:49:58
CC-MAIN-2021-21
1620243988725.79
[]
scitools-iris.readthedocs.io
9.3.3 Specifying Stim Fluid, Materials, and Material Quantities In the previous version of the stim job object, materials had to be repeated for each stage of a stimulation job. Because stimulation jobs in current industry practice now can contain 60 or more stages, a new, more efficient approach was needed. A stim job object can now contain a catalog of materials for that particular job (see Figure 9.3-1 and Figure 9.3.3-1 ); any stage of the stimulation job can reference material(s) in the catalog, thereby eliminating the need to repeat the materials for each stage. The stim fluid object is the fluid (without the proppant) that is carrying the proppant downhole. Proppants as a concept refers to the materials left in the well or consumed in the process of making the stimulation. The stim material catalog object contains a set of stim material objects representing additives and proppants that are used during a stimulation job. These materials can be used as a catalog by referencing the material from stim material quantity using a UUID instead of repeating the material properties. Figure 9.3.3-2 shows a UML diagram of individual materials. For proppants, additional details can be provided (if available). For example, properties related to ISO 13503.2 properties (e.g., density, solubility, apparent density) and crush tests, and ISO 13503.6 point data (e.g., conductivity, permeability and stress) can describe useful details of the proppant material properties. The stim fluid object uses the stim material quantity objects to get the “material reference” string, which is the UUID of stim material quantity. Each material type has a UUID as an attribute; and references are always that UUID.
http://docs.energistics.org/WITSML/WITSML_TOPICS/WITSML-000-131-0-C-sv2000.html
2021-05-06T03:00:32
CC-MAIN-2021-21
1620243988725.79
[array(['WITSML_IMAGES/WITSML-000-067-0-sv2000.png', None], dtype=object) array(['WITSML_IMAGES/WITSML-000-068-0-sv2000.png', None], dtype=object)]
docs.energistics.org
You can reply to a Bugsnag error email to update the status of an error, or collaborate on finding a fix. Reply to a Bugsnag error email to add a comment on the error in the Bugsnag dashboard. You can also perform certain actions by replying to an email with a command. The following commands are supported by Bugsnag: @ignore- Ignore the error. @fix- Fix the error. @reopen- Reopen the error. @delete- Delete the error. @issue- Create an issue for the error in your issue tracking system. You should ensure you have one configured in your notification plugins first. Note: Email commands and comments are not available for Bugsnag On-premise.
https://docs.bugsnag.com/product/email/email-commands/
2021-05-06T03:08:53
CC-MAIN-2021-21
1620243988725.79
[]
docs.bugsnag.com
Kirkbymoorside Town Council Agenda The Town Meeting is for all of the Citizens of Kirkbymoorside * Press are welcome and anyone may attend * Published on 6 May 2011 for a Town Meeting of the Parish of Kirkbymoorside to be held at 7.30 pm on Tuesday 17 May in the Methodist Hall, Kirkbymoorside - Welcome: The Chairman - Minutes of the Annual Parish Meeting, 28 April 2009. - Chairman’s Report. - Reports from voluntary organisations - Parish Plans: A short Presentation from Ms Maggie Farey from Rural Action Yorkshire - Wainds Field Development: A Short Presentation from Mr Roy Boardman, Trilandium Homes - Issues and concerns - An opportunity for electors to bring to the meeting’s attention issues and concerns about Kirkbymoorside. ………………………………… Councillor Gaynor De Barr Town Mayor, Chairman of the Town Council 4th May 2011
https://docs.kirkbymoorsidetowncouncil.gov.uk/doku.php/agendatownmeeting2011-05-17
2021-05-06T03:58:50
CC-MAIN-2021-21
1620243988725.79
[]
docs.kirkbymoorsidetowncouncil.gov.uk
Upgrade a Replica Set to 3.4¶ - Starting in version 3.4.21, MongoDB 3.4-series removes support for Ubuntu 16.04 POWER/PPC64LE. -. Preparedness¶¶ 3.4 features.¶ At this point, you can run the 3.4 binaries without the 3.4 features that are incompatible with 3.2. To enable these 3.4 features, set the feature compatibility version to 3.4..4. - To upgrade a sharded cluster, see Upgrade a Sharded Cluster to 3.4.
https://docs.mongodb.com/v4.4/release-notes/3.4-upgrade-replica-set/
2021-05-06T05:24:19
CC-MAIN-2021-21
1620243988725.79
[]
docs.mongodb.com
Because we on this blog are from all over the United States, from the west to the East, the Midwest to the South, we decided to create a series of “hometown hits,” where we cover different neighborhoods from one of our many home regions. With the devastating rains and flooding of Hurricane Harvey temporarily obscuring the true landscape of Houston, Texas, we wanted to shine a spotlight on this wonderful metropolis. And what a metropolis it is, the fourth most populated city in the US, the largest city in Texas–and we all know everything is bigger in Texas right? Nowhere is that more true than in this city that sits on Galveston Bay; there are big buildings, big hospitals, big space exploration and big hearts in abundance here. One of us on this blog was raised and has roots in this fantastic city, so we hope that underscoring a vibrant neighborhood will act as a reminder of the Houston that is and will rise again. And while this disaster may be a temporary setback, and the rebuilding efforts will take time, this city and its people will unite, they will persist and they will come back stronger, and better than before. photo from visithoustontexas.com Montrose has to be one of the most interesting neighborhoods in all of Texas, if not all the world. They say keep Austin weird, but it also applies here. They call this Bohemian area the “heart of Houston” and it does exemplify the diversity this city has to offer. Montrose became a hub of the counterculture movement in the 60s, however in the past 20 years it has become more and more high end without losing its charm. There are artists, antique and thrift shops, musicians, communes, spendy boutiques, a large LGBT faction, tattoo parlors, upmarket mansions, hipsters, and bars and restaurants galore. Literally everything you need to have a good time is in this one area. It’s supremely colorful and ecclectic, both in its people and in the buildings/homes. Close to the museum district, this area also boasts several notable collections at The Menil and at the Rothko Chapel (features works by Mark Rothko and Philip Johnson). It is also close to all the universities in town, attracting all comers, who are all welcome. Standouts include: Barnaby’s: if you look up the definition of a cozy cafe, this is what you’ll find Black Labrador Pub: literally the cutest British pub, human sized chess board! Chapultepec Lupita: 24 hour Mexican with dozens (plural) of tequila selections. El Real Tex Mex: the famous neon marquee of the old Tower movie theater sits near the very recognizable Westheimer and Montrose at the heart of this district Indika: modern Indian cuisine in a expansive setting (it’s walls are tikka masala color) La Mexicana: cozy, homey Mexican establishment that’s been around for 30 years Les Ba’Get: modern fusion Vietnamese brick and mortar of a beloved food truck Niko Niko’s: it’s all greek to me, especially when delicious and served out of an old gas station (I have a thing for those kind of places: Joe’s KC and Vinsetta’s Garage) Riel: sophisticated global cuisine that is literally all over the map…but it works Ramen Tatsu-Ya: yeah I know it’s from Austin, but it’s ramen and I like it The Dunlavy: the most picturesque views of Buffalo Bayou from floor to ceiling windows in their dining room Torchy’s Tacos: also from Austin, their tacos are amazing Underbelly: this butchery showcases the diverse multicultural flavors of this city, often with fish sauce (umami city) Everyone has always known the resiliency and strength of the people of Houston, but now more than ever, we’re seeing it. From the first responders to the good samaritans driving boats down to help the rescue efforts, we send our thoughts, prayers, and love to its citizens. If you are able to help, the American Red Cross and United Way are reputable charities that have a high donation value. Several celebrities such as JJ Watt and Kevin Hart also have youcaring and crowdrise pages, respectively, where you can donate as well. Other ways to help include donating care packages and blood, and supporting anyone who may be feeling helpless or upset over the situation. Don’t mess with Texas, cause we’ve all got their back.
https://traveling-docs.com/2017/09/01/hometown-hits-houston/
2021-05-06T03:50:14
CC-MAIN-2021-21
1620243988725.79
[array(['https://mohawkaustin.com/_/made/_/remote/https_res.cloudinary.com/hjbfjdt7p/image/upload/c_fill,w_470/v1504063563/qtyjj9xc4gpx9ptwvsee_256_256_80_s_c1_c_t_0_0_1.jpg', 'Image result for houston strong'], dtype=object) array(['https://travelingdocs.files.wordpress.com/2017/08/img_4534.jpg?w=676', 'IMG_4534.JPG'], dtype=object) ]
traveling-docs.com
Add the Extension First step is always to add the extension to your development environment. To do this use the tutorial located here. AIR SDKAIR SDK This ANE currently requires at least AIR 33+. This is required in order to support versions of Android > 9.0 (API 28). We always recommend using the most recent build with AIR especially for mobile development where the OS changes rapidly. extension is required by this extension. You must include this extension in your application. The Core extension ANE requires can use of certain aspects of the Google Play Services if supplied, mainly around using the advertising identifier to attribute links correctly. The client library is available as a series of ANEs that you add into your applications packaging options. Each separate ANE provides a component from the Play Services client library and are used by different ANEs. These client libraries aren't packaged with this ANE as they are used by multiple ANEs and separating them will avoid conflicts, allowing you to use multiple ANEs in the one application. If you wish to use the expanded functionality add the following Google Play Services: You can access the Google Play Services client library extensions here:. There are several additions to the manifest required for the Branch extension. In order to correctly gather install referrer information you need to add the com.google.android.finsky.permission.BIND_GET_INSTALL_REFERRER_SERVICE permission : KeysKeys Most importantly you need to add the Branch keys for your application as meta data tags to the application node in your manifest: The io.branch.sdk.TestMode flag can be used to set your application into test mode from the manifest. You can override this in your AS3 code however this option is here if you wish to use this method. Deep link schemeDeep link scheme If you are adding a deep link / custom url scheme to be able to launch your application from a url such as myapp:// then you need to add the scheme to the main activity for your application: ExampleExample MultiDex ApplicationsMultiDex Applications If you have a large application and are supporting Android 4.x then you will need to ensure you enable your application to correctly support MultiDex to allow the application to be broken up into smaller dex packages. This is enabled by default with releases of AIR v25+, except in the Android 4.x case where you need to change the manifest additions for the application tag to match the following and use the MultiDexApplication. Using AndroidXUsing AndroidX This will require the addition of the androidx.multidex extension which contains the androidx.multidex.MultiDexApplication implementation. iOSiOS The Branch SDK requires a few additions to the Info plist and Entitlements section of your application to correctly configure your application. These are listed in the sections below. Note: As of version 3.1 the extension now packages the Branch iOS framework code internal to the application. You no longer need to add the Framework. KeysKeys Most importantly you need to add the Branch keys for your application to the InfoAdditions node: You should replace the entries above with the values from the Branch dashboard for your application. Configure associated domainsConfigure associated domains Add your link domains from your Branch dashboard, these should be added to the Entitlements node, for example: Deep link schemeDeep link scheme If you are adding a deep link / custom url scheme to be able to launch your application from a url such as myapp:// then you need to add the scheme to the info additions as below:
https://docs.airnativeextensions.com/docs/branch/add-the-extension/
2021-05-06T02:51:06
CC-MAIN-2021-21
1620243988725.79
[]
docs.airnativeextensions.com
Setting up Foundation data for custom applications Applications use Foundation data as a source of information about the people in your company and their attributes, such as organization, location, geography, and other shared characteristics that you might need to categorize. The Foundation data can be used to drive business processes and rules. It consists of common data elements, such as people, organization, locations, categorizations, and geography, that can be used by multiple applications in an organization to satisfy different requirements. For example, if you use BMC Helix Innovation Studio to create an onboarding application and a service desk application for your organization, you can use the people and location data elements in both of these applications for different purposes. You do not need to re-create this data multiple times. The following image shows the different actions that you can take to set up your Foundation data in BMC Helix Platform: Related topic Leveraging Remedy ITSM Foundation data
https://docs.bmc.com/docs/helixplatform/setting-up-foundation-data-for-custom-applications-851870819.html
2021-05-06T04:08:25
CC-MAIN-2021-21
1620243988725.79
[array(['/docs/helixplatform/files/851870819/955517813/1/1600842500310/Setting+up+Foundation+data+process+diagram.png', 'Setting up Foundation data process diagram'], dtype=object) ]
docs.bmc.com
What if I haven’t Received My License Key in the Email? - If you haven’t received an email with your license key, head over to Freemius and log in (or reset the password) to get into your account. - Select the desired subscription option and download the MPG plugin premium version. - After manual installation, you may need to go to the MPG plugin > Account > “Sync License” > “Activate Pro” in order to make it work fully.
https://docs.mpgwp.com/article/27-what-if-i-haven-t-received-my-license-key-in-email
2021-05-06T04:04:30
CC-MAIN-2021-21
1620243988725.79
[]
docs.mpgwp.com
Watch the 7.3 Webinar On-Demand This new release brings updates to Universal Storage, query optimization, and usability that you won’t want to miss. Find the desired pod name ( <operator-pod-name>). The <operator-deployment-name> in the following command is the value of metadata.name in the deployment.yaml file. kubectl get pods | grep <operator-deployment-name> Run the following command to view the Operator logs. kubectl logs <operator-pod-name>
https://docs.singlestore.com/v7.3/reference/memsql-operator-reference/view-the-operator-logs/
2021-05-06T04:16:40
CC-MAIN-2021-21
1620243988725.79
[]
docs.singlestore.com
Construct a new completion handler for a widget. Make a completion request. If completion request fails, reset model and fail silently. Receive completion items from provider. Receive a completion reply from the connector. Updates model with text state and current cursor position. The completer widget managed by the handler. The data connector used to populate completion requests. The only method of this connector that will ever be called is fetch, so it is acceptable for the other methods to be simple functions that return rejected promises. The data connector used to populate completion requests. The only method of this connector that will ever be called is fetch, so it is acceptable for the other methods to be simple functions that return rejected promises. The editor used by the completion handler. The editor used by the completion handler. Get whether the completion handler is disposed. Dispose of the resources used by the handler. Get the state of the text editor at the given position. Invoke the handler and launch a completer. Handle a completion selected signal from the completion widget. Handle invoke-request messages. Handle selection changed signal from an editor. If a sub-class reimplements this method, then that class must either call its super method or it must take responsibility for adding and removing the completer completable class to the editor host node. Despite the fact that the editor widget adds a class whenever there is a primary selection, this method checks independently for two reasons: jp-mod-has-primary-selectionto filter out any editors that have a selection means the semantic meaning of jp-mod-completer-enabledis obscured because there may be cases where the enabled class is added even though the completer is not available. Handle a text changed signal from an editor. Handle a visibility change signal from a completer widget. Process a message sent to the completion handler. A completion handler for editors.
https://jupyterlab.readthedocs.io/en/stable/api/classes/completer.completionhandler-1.html
2021-05-06T04:06:27
CC-MAIN-2021-21
1620243988725.79
[]
jupyterlab.readthedocs.io
Staged underwriting is a manual review process that Assembly conducts on your pay-out users (people you are sending money to) using the user verification information collected during the User Verification process. The staged underwriting process is based on a number of underwriting thresholds determined by Assembly. The thresholds are set based on the risk profile of your platform/marketplace. During a manual review, Assembly conducts various verification methods including, but not limited to : Validation of email/domain existence Cross-validation of social media links and other personally identifiable information (PII) If at any point during manual review, we are unable to verify the identity of a pay-out user (person you are sending money to), Assembly will proceed with requesting further verification document/s to establish identity prior to the release of funds. Passing as much information as possible when your first onboard a user will help create a strong user profile on your platform, and also help Assembly verify your users quickly, with as little friction as needed.
https://docs.assemblypayments.com/en/articles/2037222-staged-underwriting
2021-05-06T02:51:27
CC-MAIN-2021-21
1620243988725.79
[]
docs.assemblypayments.com
Overview Assembly uses a fraud prevention tool that delivers real-time, multi-tiered protection, allowing us to create and deploy fraud rules specific to your industry, geography, as well as tailored rules specific to your business. Please visit our website or contact our sales team for information on our fraud management services. Depending on your pricing model, Assembly delivers a fully managed online fraud service with a team of fraud experts to protect your business from emerging threats like Card-Not-Present fraud, shill bidding and account replication/takeover. How they work Our fraud analysts combine the analysis of your business model, type of industry, and historical data to deploy the most effective rules aimed to detect fraud by looking at customer data and purchase behaviour on every single card transaction. “Fraud rules” contain as a set of classifiers that, when met, return one of three possible responses: Accept The transaction is deemed low risk and is accepted. Challenge The transaction is exhibiting fraudulent behaviour and is placed into a queue to be manually reviewed by our fraud specialists. Deny The transaction is deemed high risk, as it exhibits known fraudulent behaviour and/or risky data elements. Rule Classifiers Assembly uses over one hundred different rule classifiers to develop the best protection for your business. Rule classifiers include, but are not limited to: Geographical Profiling Classifiers that look at discrepancies between the customer’s location versus other data. Transactional Behaviour How both your buyers and sellers interact during the transaction life cycle. Network intelligence and predictive fraud risk scoring based on data that has been gathered on a particular email address. IP Profiling Similar to the email risk score, a risk score is also applied on an IP address. Device ID Matching Matching a Device ID against historical data along with a global consortium database.
https://docs.assemblypayments.com/en/articles/2037239-what-are-fraud-prevention-rules
2021-05-06T03:52:58
CC-MAIN-2021-21
1620243988725.79
[]
docs.assemblypayments.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Deletes the specified container. Before you make a DeleteContainer request, delete any objects in the container or in any folders in the container. You can delete only empty containers. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DeleteContainerAsync. Namespace: Amazon.MediaStore Assembly: AWSSDK.MediaStore.dll Version: 3.x.y.z Container for the necessary parameters to execute the DeleteContainer service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/MediaStore/MMediaStoreDeleteContainerDeleteContainerRequest.html
2018-08-14T17:54:03
CC-MAIN-2018-34
1534221209216.31
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Returns an array of celebrities recognized in the input image. For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide. RecognizeCelebrities returns the 100 largest faces in the image. It lists recognized celebrities in the CelebrityFaces array and unrecognized faces in the UnrecognizedFaces array. RecognizeCelebrities doesn't return celebrities whose faces are not amongst the largest 100 faces in the image. For each celebrity recognized, the RecognizeCelebrities returns a Celebrity object. The Celebrity object contains the celebrity name, ID, URL links to additional information, match confidence, and a ComparedFace object that you can use to locate the celebrity's face on the image. Rekognition does not retain information about which images a celebrity has been recognized in. Your application must store this information and use the Celebrity ID property as a unique identifier for the celebrity. If you don't store the celebrity name or additional information URLs returned by RecognizeCelebrities, you will need the ID to identify the celebrity in a call to the operation. You pass the imput. For an example, see Recognizing Celebrities in an Image in the Amazon Rekognition Developer Guide. This operation requires permissions to perform the rekognition:RecognizeCelebrities operation. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to RecognizeCelebritiesAsync. Namespace: Amazon.Rekognition Assembly: AWSSDK.Rekognition.dll Version: 3.x.y.z Container for the necessary parameters to execute the RecognizeCelebrities service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Rekognition/MRekognitionRecognizeCelebritiesRecognizeCelebritiesRequest.html
2018-08-14T17:54:15
CC-MAIN-2018-34
1534221209216.31
[]
docs.aws.amazon.com
Data Type Formatting Functions Topics Data type formatting functions provide an easy way to convert values from one data type to another. For each of these functions, the first argument is always the value to be formatted and the second argument contains the template for the new format. Amazon Redshift supports several data type formatting functions.
https://docs.aws.amazon.com/redshift/latest/dg/r_Data_type_formatting.html
2018-08-14T17:57:57
CC-MAIN-2018-34
1534221209216.31
[]
docs.aws.amazon.com
What is the service period of invoices with recurring. If set, the subscription end date forces the service period end of the invoice. With respect to the invoice line items, the start date and end date set as the invoice run period usually represent the service period for recurring items..
https://docs.juston.com/en/jo_faq_service_period/
2018-08-14T17:23:51
CC-MAIN-2018-34
1534221209216.31
[]
docs.juston.com
An Act to amend 59.69 (5) (b), 59.69 (5) (e) 2. and 59.69 (5) (e) 6.; and to create 59.69 (5) (g) of the statutes; Relating to: the method used by a county clerk to notify town clerks of certain county zoning actions. (FE) Amendment Histories Bill Text (PDF: ) Fiscal Estimates AB457 ROCP for Committee on Urban and Local Affairs (PDF: ) Wisconsin Ethics Commission information 2015 Senate Bill 313 - Enacted into law
http://docs.legis.wisconsin.gov/2015/proposals/ab457
2018-08-14T17:44:54
CC-MAIN-2018-34
1534221209216.31
[]
docs.legis.wisconsin.gov
Settings ↑ Back to Top WooCommerce > Settings >Products > Downloadable Products contains several downloadable product options. The most important option is the file download method: You have three choices: - Redirect only – When users download a file, their link redirects to the file. - Force download – File downloads are forced, using PHP. - for your store. To ensure files are protected from direct linking, Force Download can be used. Files will be served by PHP. However, if your files are large, or the server is underpowered, you may experience timeouts during download. In this case, you need to either look at your server or use the redirect method. If your server supports it, use X-Accel-Redirect/X-Sendfile; it’s the most reliable method because the file is served directly to the customer, and gives you the best performance. Files are also protected by an .htaccess file, making it secure. The next option is Must be logged in to download files. This only allows logged-in users to download. Guest checkout would need to be disabled. Creating downloadable products ↑ Back to Top Downloadable Simple Products To get started: - Create a simple product the regular way but tick the Downloadable checkbox: After ticking this box, extra options appear: 2. Upload a file and click Insert to set up each downloadable file URL. Match the URL of the product to the URL of your site. (i.e., if you have a www in your site URL, then make sure that is in the file URL.) – In version 2.0+, you can enter one per line – In prior versions, you could only set up one file per product. If you need to add multiple files in older version, ‘zip’ the files into one package. 3. Enter the Download Limit (optional). Once the user hits this limit, they can no longer download the file. 4. Enter the Download Expiry. If you define a number of days, download links expire after that. 5. Select the Download Type from the dropdown. 6. Save. As soon as you change an uploaded file or upload a new file, the Download Expiry and Download Limit are reset because it’s technically a new file. This is the intended behavior.. Order process ↑ Back to Top The order process for downloadable products is: - User adds product to their cart. - Users checks out and pays. - After payment, several things can happen depending on your setup: - If items in the order are all downloadable + virtual, the order will complete. - If items are physical and downloadable/virtual, the order will be processing until you change it. - Once complete, or if the option “Grant access to downloadable products after payment” is enabled, the user can: - Be granted download permission - See download links on the order received page - See download links in their email notification - See download links on their ‘My Account’ page if logged in Users can then download files. Managing orders with downloadable line items If you edit/view an order with downloadable products, the downloadable products meta box contains user permissions: By editing this panel, you can modify a user’s permissions or revoke access to files. You can also grant access to new downloads. FAQ ↑ Back to Top Why does WooCommerce link to the URL of the file?. Can I use cloud storage to store my files and downloads? Most definitely! WooCommerce only needs an external URL that points to your digital download file. If it is a valid external download URL, then WooCommerce works perfectly. There is no further validation.
https://docs.woocommerce.com/document/digitaldownloadable-product-handling/
2016-10-20T21:27:50
CC-MAIN-2016-44
1476988717954.1
[]
docs.woocommerce.com
Table of Contents Product Index So now you know Genesis 2 Female has all new surface names, and even though you bought Victoria 4 for Genesis 2 Female… You have to hand change all of the surface settings to use that huge library of Victoria 4 Mat files! Bummer Dude! But wait! There's a way now to do all of the work in a couple of clicks! This script will read the .DS, .DSA and .PZ2 files and do all of the work for you. No more going cross eyed and getting carpal tunnel clicking all around your folders, and checking settings back and!
http://docs.daz3d.com/doku.php/public/read_me/index/17102/start
2016-10-20T21:29:02
CC-MAIN-2016-44
1476988717954.1
[]
docs.daz3d.com
If you think you have found a bug in Janino, proceed as follows: If you experience problems with downloading, installing or using Janino, turn to the JANINO mailing list. Please issue feature requests on the JANINO mailing list. I appreciate your feedback. Let me know how you want to utilize Janino, if you find it useful, or why you cannot use it for your project.!
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=137199642
2014-03-07T09:37:09
CC-MAIN-2014-10
1393999640501
[]
docs.codehaus.org
To be able to run PHP analyses with Sonar, you will need: - A Java Runtime Environment (version > 1.5) - A database (optional if you're evaluating the plugin) - Sonar installed with its PHP plugin - All the PHP environment (PHP and tools that are required by the plugin) Install a Java Runtime Environment (JRE) Check the different supported Java virtual machines (see "Supported platforms > Java") that you can install. To be sure you have correctly installed a JVM, open a terminal a run "java -version": you should see the version of Java you've just installed. Install a database (optional if you're evaluating the plugin) Check the different supported databases (see "Supported platforms > Database") that you can install..
http://docs.codehaus.org/pages/viewpage.action?pageId=228178402
2014-03-07T09:34:39
CC-MAIN-2014-10
1393999640501
[]
docs.codehaus.org
Targets Default Target Gant's Build Script Gant and the Ant Optional Tasks Return Codes Ant Task Clean Target Set Maven Target Set Ivy Tool Gant is currently at version 1.1.1,.1.1. An Ubuntu/Debian package of Gant 1.0.2 for Groovy 1.5 is available: Deb.
http://docs.codehaus.org/pages/viewpage.action?pageId=74776983
2014-09-15T04:08:14
CC-MAIN-2014-41
1410657104119.19
[]
docs.codehaus.org
Improving Artifact Conflict Resolution NOTE: This page describes the best of my knowledge on this topic. To be absolutely certain about my facts (in the Context, etc), I'd need to verify by looking at the source code for maven-artifact and maven-artifact-manager. Context Currently, Maven supports the resolution of artifact versions by way of nearest-wins. That is, for any set of dependencies that share the same groupId:artifactId:typeclassifier in a particular artifact closure, the one declared nearest to the current project in the dependency tree will be selected for use. This can lead to subtle effects, where upgrading a particular version of one dependency in your POM can lead to a transitive dependency being declared nearer than previously, and therefore changing the version used for that transitive dependency. While this approach to conflict resolution is probably at least as good as any at solving the problem generally, it's important to recognize that certain situations may call for very different conflict resolution methods. The following scenarios describe times when specific conflict resolution techniques may be used to address specific problems. Testing and Production Builds In this scenario, users will likely want to know if there is any disagreement on dependency versions. Any disagreement could be used to spot-check the full application for features that may express incompatibility with the chosen version of the artifact. In some cases, these builds might be allowed to proceed with simple warnings about version conflicts. In others (particularly when the application is thought to be ready for a production rollout), it would be highly desirable to have the build fail if a conflict is detected, since any conflicts should have been resolved manually in the POM during testing or earlier. Alternatively, it's possible that a testing team wouldn't want to adjust upstream dependency declarations, and wouldn't want to impact the purity of the application POM's dependency specification. In these cases, they want the dependency list to converge on a set of "blessed" versions without allowing the build system any leeway to decide (with a potential to decide incorrectly) on its own. Nightly / Integration Builds Nightly and integration-style builds will likely take place on a CI server, and should sometimes incorporate a build of all the latest artifacts which match the specified dependency list. This way it's possible to report on possible migration/upgrade paths for POMs in the build that are not using the latest matching version. Corporate Policy-compliance Builds Many companies keep tight control over which external dependencies they allow for the software they produce. This can take many forms, from auditing and approving individual artifacts as they are added to a corporate repository, to auditing builds during testing, etc. In these situations, it might be useful to impose a conflict resolution technique to enforce the usage of a blessed set of artifact versions in a given build. This would allow other groups in the company to vet each artifact version before putting it on the blessed list. In this case, non-compliance would likely result in a broken build. Out-and-out Version Overrides In some special cases, it may be useful to override the version resolution process altogether, and instead use some alternate artifact version. One such scenario might involve generating migration reports for each of a set of projects. These reports might let developers know what sorts of problems they'd be likely to confront if they tried to move their application from its existing dependency-version set to the latest-and-greatest set. To make these more useful, it might be good to restrict the version override to a single dependency at a time, to isolate the errors related to that dependency upgrade. Problem Maven's artifact-handling components have the current version conflict resolution strategy hard-coded in place. The current strategy has to be componentized and abstracted to an interface so that we can switch strategies. Beyond this, we need to provide a mechanism for configuring the desired conflict resolution technique. For most purposes, it might be best to configure a Maven instance to use a particular technique, assuming that testing users, pre-production users, and CI systems will always look for the same type of conflict resolution technique.
http://docs.codehaus.org/display/MAVEN/Conflict+Resolution
2014-09-15T04:17:34
CC-MAIN-2014-41
1410657104119.19
[]
docs.codehaus.org
Container Platform API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks. Currently, OpenShift Container Platform webhooks only support the analogous versions of the push event for each of the Git-based source code management systems (SCMs). All other event types are ignored. When the push events are processed, the OpenShift Container Platform master host confirms if the branch reference inside the event matches the branch reference in the corresponding BuildConfig. If so, it then checks out the exact commit reference noted in the webhook event Create a BuildConfig from a GitHub repository."
https://docs.openshift.com/container-platform/4.1/builds/triggering-builds-build-hooks.html
2020-10-23T20:57:43
CC-MAIN-2020-45
1603107865665.7
[]
docs.openshift.com
Account Preferences Account preferences help you customize your Sentry experience. Manage your account by selecting "User settings" from the dropdown under your organization’s name. On this page, you can control the frequency of your email notifications, change your primary email, and update your security settings. Security Settings Security Settings include options to reset a password, sign out of all devices, and enable two-factor authentication. Two-Factor Authentication After setting up your two-factor authentication codes, click "View Codes" to download, print, or copy your codes to a secure location. If you cannot receive two-factor authentication codes in the future, such as if you lose your device, use your recovery codes to access your account. Clicking "Sign out of all devices" will end your sessions with any device logged in to Sentry using your account. Notifications Sentry emails you a notification when an issue state changes, a release deploys, or there's a spike in your quota. Fine-tune notifications per project by navigating to Workflow Notifications. For more details about Notifications, see our full documentation on Alerts & Notifications. Email Routing controls the email address to which notifications are sent for each project. These notifications default to the email address provided when you set up your Sentry account. You can route emails to an alternative email address when, for example, you want to notify only the team associated with a specific project. Weekly Reports Sentry generates Weekly Reports per project and sends these reports once a week on Mondays. For example: The email address used to log into your Sentry account is, by default, your primary email address. Add an alternative email address in the Add Secondary Emails section. After verifying your secondary email, you can set it to become your primary email. Close Account Closing your Sentry account automatically removes all data associated with your account after a 24-hour waiting period to prevent accidental cancellation. If your account is the sole owner of an organization, this organization will be deleted. Organizations with multiple owners will remain unchanged. For details about termination of service, see Term, Termination, and Effect of Termination.
https://docs.sentry.io/product/accounts/user-settings/
2020-10-23T22:24:40
CC-MAIN-2020-45
1603107865665.7
[]
docs.sentry.io
TOPICS× Supported DRM systems Browser TVSDK supports multiple digital rights management (DRM) systems using the W3C specification for encrypted media extensions (EME). DRM features are made available to the application through a unified API and workflow. For more information about the list of supported DRM systems, see the Content protection features in Supported features. To make full use of the information here, learn about Multi-DRM Workflows in the Multi-DRM Workflows guide.
https://docs.adobe.com/content/help/en/primetime/programming/browser-tvsdk-2-4/content-protection/t-psdk-browser-tvsdk-2_4-drm-support.html
2020-10-23T23:10:42
CC-MAIN-2020-45
1603107865665.7
[]
docs.adobe.com
TOPICS× Audience filters for reporting Audience filters (or audiences) are groups of visitors who share a specific characteristic or set of characteristics. Use audience filters to specify the audiences used for reporting. You can select an audience and compare its performance to the overall traffic. You might want to understand whether your winners were different for the various traffic sources, when compared to general traffic. This helps you discover audiences that should be potentially targeted to different content. One winner does not fit all traffic in many cases. For example, visitors who arrive at your page from a certain search engine might be one audience. Other audiences might be based on gender, age, location, registration status, purchase history, or just about any other detail you can collect about your visitors. Use audience filters to divide visitor traffic and compare experience performance for each traffic segment. When planning to use audience filters for an activity, consider the following guidelines: - Visitors can be in multiple audiences. If there are two audiences set up (for example, "new visitors" and "visitors from Google"), and a person meets both criteria, then this visitor is counted and tracked in both audiences. As a result, the sum of the visitors in the audiences does not match the number of visitors in an activity. - Set up audiences before launching the activity. Audience data cannot be retrieved retroactively. If you do not configure audience filters before you start the activity, then decide to use them after the activity has run for a while, you will not collect the data for the time that has already passed. - Begin with two to four audiences. Focus on basic information, such as the traffic source. - Rename audiences as needed. You can rename an audience without affecting the data to make the audience name more meaningful for the results being collected, even if the activity is active. - Enter precise values. Audience filter values are case-sensitive. For example, if you are using an audience that filters on cities, you should use an "OR" condition to include possible spelling and capitalization variations, such as "Vienna," "vienna," "wien," and "Wien." - Audiences created from the Audiences list are reusable. Audiences created as part of an activity cannot be reused. The following sections provide more information about setting up and reporting on audiences:
https://docs.adobe.com/content/help/en/target/using/audiences/managing-audience-filters.html
2020-10-23T22:39:47
CC-MAIN-2020-45
1603107865665.7
[]
docs.adobe.com
How to use automatic TCP/IP addressing without a DHCP server).. More Information Important Follow the steps in this section carefully. Serious problems might occur if you modify the registry incorrectly. Before you modify it, back up the registry for restoration in case problems occur... DCHP server, it assigns itself an IP address after generating an error message. The computer then broadcasts four discover messages, and after every 5 minutes it repeats the whole procedure until a DHCP server comes on line. A message is then generated stating that communications have been re-established with the DHCP Server.
https://docs.microsoft.com/uk-ua/windows-server/troubleshoot/how-to-use-automatic-tcpip-addressing-without-a-dh?branch=pr-en-us-959
2020-10-23T22:49:08
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
Inserting and deleting rows and columns, moving ranges of cells¶ Inserting rows and columns¶ You can insert rows or columns using the relevant worksheet methods: The default is one row or column. For example to insert a row at 7 (before the existing row 7): >>> ws.insert_rows(7) Moving ranges of cells¶ You can also move ranges of cells within a worksheet: >>> ws.move_range("D4:F10", rows=-1, cols=2) This will move the cells in the range D4:F10 up one row, and right two columns. The cells will overwrite any existing cells. If cells contain formulae you can let openpyxl translate these for you, but as this is not always what you want it is disabled by default. Also only the formulae in the cells themselves will be translated. References to the cells from other cells or defined names will not be updated; you can use the Parsing Formulas translator to do this: >>> ws.move_range("G4:H10", rows=1, cols=1, translate=True) This will move the relative references in formulae in the range by one row and one column.
https://openpyxl.readthedocs.io/en/3.0/editing_worksheets.html
2020-10-23T22:18:56
CC-MAIN-2020-45
1603107865665.7
[]
openpyxl.readthedocs.io
debug – Print statements during execution¶ Synopsis¶ - This module prints statements during execution and can be useful for debugging variables or expressions without necessarily halting the playbook. - Useful for debugging together with the ‘when:’ directive. - This module is also supported for Windows targets. See Also¶ See also - assert – Asserts given expressions are true - The official documentation on the assert module. - fail – Fail with custom message - The official documentation on the fail module. Examples¶ #.
https://docs.ansible.com/ansible/2.8/modules/debug_module.html
2020-10-23T21:51:21
CC-MAIN-2020-45
1603107865665.7
[]
docs.ansible.com
Overview of Protected Configuration You can use protected configuration to encrypt sensitive information, including user names and passwords, database connection strings, and encryption keys, in a Web application configuration file such as the Web.config file. Encrypting configuration information can improve the security of your application by making it difficult for an attacker to gain access to the sensitive information even if the attacker gains access to your configuration file. For example, an unencrypted configuration file might contain a section specifying connection strings used to connect to a database, as shown in the following example: <configuration> <connectionStrings> <add name="SampleSqlServer" connectionString="Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind;" /> </connectionStrings> </configuration> A configuration file that encrypts the connection string values using protected configuration does not show the connection strings in clear text, but instead stores them in encrypted form, as shown in the following example: <configuration> XO/zmmy3sR0iOJoF4ooxkFxwelVYpT0riwP2mYpR3FU+r6BPfvsqb384pohivkyNY7ifxJkUb+URiGLoaj+XHym//fmCclAcveKlba6vKrcbqhEjsnY2F522yaTHcc1+wXUWqif7rSIPhc0+MT1hB1SZjd8dmPgtZUyzcL51DoChy+hZ4vLzE= </CipherValue> </CipherData> </EncryptedData> </connectionStrings> When the page is requested, the .NET Framework decrypts the connection string information and makes it available to your application. Note You cannot use protected configuration to encrypt the configProtectedData section of a configuration file. You also cannot use protected configuration to encrypt the configuration sections that do not employ a section handler or sections that are part of the managed cryptography configuration. The following is a list of configuration sections that cannot be encrypted using protected configuration: processModel, runtime, mscorlib, startup, system.runtime.remoting, configProtectedData, satelliteassemblies, cryptographySettings, cryptoNameMapping, and cryptoClasses. It is recommended that you use other means of encrypting sensitive information, such as the ASP.NET Set Registry console application (Aspnet_setreg.exe) tool, to protect sensitive information in these configuration sections. For information on the ASP.NET Set Registry console application (Aspnet_setreg.exe), see article Q329290, "How to use the ASP.NET utility to encrypt credentials and session state connection strings," in the Microsoft Knowledge Base at the Microsoft support Web site. Working with Protected Configuration You manage protected configuration using the ASP.NET IIS Registration tool (Aspnet_regiis.exe) or the protected configuration classes in the System.Configuration namespace. The Aspnet_regiis.exe tool (located in the %SystemRoot%\Microsoft.NET\Framework\versionNumber folder) includes options for encrypting and decrypting sections of a Web.config file, creating or deleting key containers, exporting and importing key container information, and managing access to a key container. Encryption and decryption of the contents of a Web.config file is performed using a ProtectedConfigurationProvider class. The following list describes the protected configuration providers included in the .NET Framework: DpapiProtectedConfigurationProvider. Uses the Windows Data Protection API (DPAPI) to encrypt and decrypt data. RsaProtectedConfigurationProvider. Uses the RSA encryption algorithm to encrypt and decrypt data.. You can specify which ProtectedConfigurationProvider you want to use by configuring it in your application's Web.config file, or you can use one of the ProtectedConfigurationProvider instances configured in the Machine.config file. For more information, see Specifying a Protected Configuration Provider. Once you have specified which provider to use, you can encrypt or decrypt the contents of the Web.config file for your application. For more information, see Encrypting and Decrypting Configuration Sections. Note As a best practice when securing your Web applications, it is important that you always keep your application server up to date with the latest security patches for Microsoft Windows and Internet Information Services (IIS), as well as any security patches for Microsoft SQL Server or other membership data sources. For detailed information about best practices for writing secure code and securing applications, see the book "Writing Secure Code" by Michael Howard and David LeBlanc, and see the guidance provided on the Microsoft Patterns and Practices Web site. See Also Tasks Walkthrough: Encrypting Configuration Information Using Protected Configuration Other Resources Encrypting Configuration Information Using Protected Configuration
https://docs.microsoft.com/en-us/previous-versions/aspnet/hh8x3tas%28v%3Dvs.100%29
2020-10-23T23:13:23
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
Examples¶ OpenCV Edge Detection¶ This example does edge detection using OpenCV. This is our canonical starter demo. If you haven't used Pachyderm before, start here. We'll get you started running Pachyderm locally in just a few minutes and processing sample log lines. Word Count (Map/Reduce)¶ Word count is basically the "hello world" of distributed computation. This example is great for benchmarking in distributed deployments on large swaths of text data. Periodic Ingress from a Database¶ This example pipeline executes a query periodically against a MongoDB database outside of Pachyderm. The results of the query are stored in a corresponding output repository. This repository could be used to drive additional pipeline stages periodically based on the results of the query. Periodic Ingress from MongoDB Lazy Shuffle pipeline¶ This example demonstrates how lazy shuffle pipeline i.e. a pipeline that shuffles, combines files without downloading/uploading can be created. These types of pipelines are useful for intermediate processing step that aggregates or rearranges data from one or many sources. Variant Calling and Joint Genotyping with GATK¶ This example illustrates the use of GATK in Pachyderm for Germline variant calling and joint genotyping. Each stage of this GATK best practice pipeline can be scaled individually and is automatically triggered as data flows into the top of the pipeline. The example follows this tutorial from GATK, which includes more details about the various stages. Pachyderm Pipelines¶ This section lists all the examples that you can run with various Pachyderm pipelines and special features, such as transactions. Joins¶ A join is a special type of pipeline that enables you to perform data operations on files with a specific naming pattern. Matching files by name pattern Spouts¶ A spout is a special type of pipeline that you can use to ingest streaming data and perform such operations as sorting, filtering, and other. Transactions¶ Pachyderm transactions enable you to execute multiple Pachyderm operations simultaneously. Use Transactions with Hyperparameter Tuning err_cmd¶ The err_cmd parameter in a Pachyderm pipeline enables you to specified actions for failed datums. When you do not need all the datums to be successful for each run of your pipeline, you can configure this parameter to skip them and mark the job run as successful. Skip Failed Datums in Your Pipeline Machine Learning¶ Iris flower classification with R, Python, or Julia¶ The "hello world" of machine learning implemented in Pachyderm. You can deploy this pipeline using R, Python, or Julia components, where the pipeline includes the training of a SVM, LDA, Decision Tree, or Random Forest model and the subsequent utilization of that model to perform inferences. R, Python, or Julia - Iris flower classification Sentiment analysis with Neon¶ This example implements the machine learning template pipeline discussed in this blog post. It trains and utilizes a neural network (implemented in Python using Nervana Neon) to infer the sentiment of movie reviews based on data from IMDB. Neon - Sentiment Analysis pix2pix with TensorFlow¶ If you haven't seen pix2pix, check out this great demo. In this example, we implement the training and image translation of the pix2pix model in Pachyderm, so you can generate cat images from edge drawings, day time photos from night time photos, etc. Recurrent Neural Network with Tensorflow¶ Based on this Tensorflow example, this pipeline generates a new Game of Thrones script using a model trained on existing Game of Thrones scripts. Tensorflow - Recurrent Neural Network Distributed Hyperparameter Tuning¶ This example demonstrates how you can evaluate a model or function in a distributed manner on multiple sets of parameters. In this particular case, we will evaluate many machine learning models, each configured uses different sets of parameters (aka hyperparameters), and we will output only the best performing model or models. Spark Example¶ This example demonstrates integration of Spark with Pachyderm by launching a Spark job on an existing cluster from within a Pachyderm Job. The job uses configuration info that is versioned within Pachyderm, and stores it's reduced result back into a Pachyderm output repo, maintaining full provenance and version history within Pachyderm, while taking advantage of Spark for computation.
https://docs.pachyderm.com/latest/examples/examples/
2020-10-23T21:25:18
CC-MAIN-2020-45
1603107865665.7
[]
docs.pachyderm.com
Enable Auto Spell Corrector A search for "moniter" on most search clients will fetch zero results. But if a user, on realizing his or her mistake, looks up "monitor" next then SearchUnify can infer that "moniter" is "monitor" misspelled. Once a deduction has been made, SearchUnify will automatically correct each instance of "moniter" in subsequent sessions. Detect Incorrect Queries But the mystery is: How does a search client distinguish a correct query from a misspelled one? For straightforward terms that are misspelled, such as "komputer" and "queuee", there are standard dictionaries to take care of them. A much bigger and pertinent problem for businesses is to find a search engine that will identify and correct the jargon used in their field. For instance, oAuth and 256-byte TLI for an organization. SearchUnify offers a solution. Once set up, Enable Auto Spell Correcter rummages through select content fields to learn jargon and offer relevant suggestions when an employee or customer misspells a query. Sticking with the example, if all the products developed or sold by an organization are in Database (content source) > Inventory (content type) > Products (content field), then a SearchUnify admin can connect Enable Auto Spell Correcter with Products. That's all it's needed for Did You Mean to recognize that 256-bit TLI is incorrect and fetch five extra search results on the first page with the correct query, 256-byte TLI. Expand Dictionary of In-House Terminology To the admins relief, once a connection has been established, Enable Auto Spell Corrector keeps expanding and updating the dictionary with each crawl of the content source. Turn On "Enable Auto Spell Correcter" - Select a search client. - Toggle Enable Auto Spell Correcter to turn it on. - Click to launch a window where you can select content sources. - Select the content source, content type, and content fields and click Add. Last updated: Friday, September 25, 2020
https://docs.searchunify.com/Content/Search-Tuning/Enable-Auto-Spell-Corrector.htm
2020-10-23T21:00:24
CC-MAIN-2020-45
1603107865665.7
[]
docs.searchunify.com
On the 28th of September, we'll release our new Navigation Bar! Isn't it pretty? Thanks to this new navigation bar: - Your users will be able to find your announcements faster than ever: following your feedback we moved the notification bell to a more prominent space. - The famous red number will now reflect the number of unread announcements, while undone userlane guides will be marked with a blue dot (see screenshot above). - As soon as the search function is released (yes we've been very busy lately!), your users will also be able to search through your Userlane content thanks to the search tab. Isn't this exciting?! If you have any questions, please reach out to us: [email protected]
https://docs.userlane.com/en/articles/4456708-release-note-new-assistant-navigation
2020-10-23T21:16:45
CC-MAIN-2020-45
1603107865665.7
[array(['https://downloads.intercomcdn.com/i/o/247080399/e54ccf5d8c8307cd0cd26d18/userlane+navigation+bar.png', None], dtype=object) ]
docs.userlane.com
Let's display your Customer data in the table using Google Sheets as a resource. There are two ways how you can do it: Using Templates Select the Template tab and select Google Sheet as a resource. Follow the Google Sheet authorization process to be signed up on the account. Then, export the file with your Customer's data and drag-and-drop Customer as a table. From scratch Drag-and-drop a table from the list of components Сustomize from the table menu Add Data Source from Data , choose Resource -> Google Sheets and Collection -> Customers Click Save If you need to change the order and the list of columns, drag and drop them or disable accordingly: Finally, let's set data types in the columns we're having displayed now. (For details on data types, (aka Fields) refer to Fields.). Because a field is a property of a column, to display field settings you need to click the corresponding column. It would be most natural to have set Image for photo. For convenience, you might also want user statuses to be colour coded:
https://docs.jetadmin.io/getting-started/quickstart/build-a-page-with-components
2020-10-23T21:47:45
CC-MAIN-2020-45
1603107865665.7
[]
docs.jetadmin.io
# Adding Parameters Follow along in the Terminal cd examples/tutorial python 03_parameterized_etl_flow.py In the last tutorial we refactored the Aircraft ETL script into a Prefect Flow. However, the extract_live_data Task has been hard coded to pull aircraft data only within a particular area, in this case a 200 KM radius surrounding Dulles International Airport: @task def extract_live_data(): # Get the live aircraft vector data around Dulles airport dulles_airport_position = aclib.Position(lat=38.9519444444, long=-77.4480555556) area_surrounding_dulles = aclib.bounding_box(dulles_airport_position, radius_km=200) print("fetching live aircraft data...") raw_aircraft_data = aclib.fetch_live_aircraft_data(area=area_surrounding_dulles) return raw_aircraft_data It would be ideal to allow fetching data from a wide variety of areas, not just around a single airport. One approach would be to allow extract_live_data to take lat and long parameters. However, we can go a step further: it turns out that we already have airport position information in our reference data that we can leverage! Let's refactor our Python function to take a user-specified airport along with the reference data: @task def extract_live_data(airport, radius, ref_data): # Get the live aircraft vector data around the given airport (or none) area = None if airport: airport_data = ref_data.airports[airport] airport_position = aclib.Position( lat=float(airport_data["latitude"]), long=float(airport_data["longitude"]) ) area = aclib.bounding_box(airport_position, radius) print("fetching live aircraft data...") raw_aircraft_data = aclib.fetch_live_aircraft_data(area=area) return raw_aircraft_data (In case you're curious, area=None will fetch live data for all known aircraft, regardless of the area it's in) How might we make use of these function parameters within a Prefect Flow? By using prefect.Parameter: from prefect import Parameter # ...task definitions...) Just like Tasks, Parameters are not evaluated until flow.run() is called, using default values if provided, or overridden values passed into run(): # Run the Flow with default airport=IAD & radius=200 flow.run() # ...default radius and a different airport! flow.run(airport="DCA") Lastly, take note that our execution graph has changed -- fetching live data now depends on obtaining the reference data: Up Next! What happens when a task fails? And how can we customize actions taken when things go wrong?
https://docs.prefect.io/core/tutorial/03-parameterized-flow.html
2020-10-23T21:45:05
CC-MAIN-2020-45
1603107865665.7
[array(['/prefect-tutorial-etl-parameterized-dataflow.png', 'Graph ETL'], dtype=object) ]
docs.prefect.io
How SecureNative works In order to integrate with your application, SecureNative uses the concept of agents. the agent is a lightweight component (a regular dependency package) that is installed into your application and allows you to send us events, capture and inspect your application requests. You can install the agent using your favorite package manager depending on the language that you use for your web server. The agent is very lightweight and doesn't impact your application’s performance, the events are submitted asynchronously and resilient to network failures. After we will receive an event from your application we will automatically trigger security flows that are associated with an event. Typical request flowTypical request flow There are multiple types of agents that are available: First you need to install the SecureNative agents: - Install the SecureNative JavaScript agent - Install the SecureNative SDK/Agent on into your application web server - Configure AppId and API keys for your app How Agent WorksHow Agent Works When you install SecureNative micro-agent on your application. The agent will loads first before and performs dynamic instrumentation of your application. SecureNative performs dynamic instrumentation of your code in run-time, that includes all requests and user operations looking for attacks and vulnerabilities and blocks the attack. Predefined and custom Security Flows are triggered, to give an additional level of protection to the business logic of your application. How request flow worksHow request flow works When a website loads, the js agent collects indicators from your browser device, we use those indicators to uniquely identify every visitor and create a device fingerprint. In addition to using our SDK, we require you to report to us events (see the documentation for the complete events list). Events are operations that are requested by the user to perform/ already completed such as: login, logout, signup, profile update, etc. We use events to learn more about your user and build a behavior profile: Events acts as triggers that we use to run security flows, if we detect an anomaly behavior, we would automatically trigger a webhook into your application, this allows you to take action to protect your user. We also expose a verify endpoint which you may call before every sensitive operation, we analyze the data that we collect and anomaly behavior and will return to you a risk score with security triggers. Communication with SecureNativeCommunication with SecureNative SecureNative agents send requests to SecureNative cloud via a secure HTTPS connection. You are in charge of what data you want to send to us, as more data you send the better we can leverage it and provide better results. The events that you send are automatically cached locally and SecureNative SDK/ Agent will insure that they are delivered securely in batches to the SecureNative cloud. The SDK/Agent will try to communicate with api.securenative.com/collector:443 endpoint, please make sure that your application and environment are allowing outgoing traffic to that endpoint. What happens if SecureNative has a downtime?What happens if SecureNative has a downtime? No worries here, your application performance will not be impacted, our SDK will automatically resend us the events ensuring that events will eventually get to our servers. All events are asynchronously delivered and even if we have downtime, your application will not be impacted, once network issues are resolved everything will continue to run normally.
https://docs.securenative.com/docs/welcome/how-securenative-works/
2020-10-23T21:38:59
CC-MAIN-2020-45
1603107865665.7
[]
docs.securenative.com
After you sign up for integrato.io, you can browse through different integration options or use tools and resources to create a flow for an integration. See Introduction to integrator.io for general information about flows and integrations. Note: This article applies to the legacy user interface (UI). For information on navigating the current UI, see Navigate integrator.io. Contents Account management The following menu displays in the top right corner of your account. Click the avatar icon to find your subscription details and profile information. - If you are an account owner, you can click My account to see the activity of the users in your account and transfer ownership of your account to another account owner. For more information, see Manage accounts. - If you are a user of another account, you can click My profile to modify your user information. Note: The options you find in your account or profile vary based on your role and permissions. See Invite users to your account and manage their roles for more information. Top navigation menu The top navigation menu allows you to navigate to various locations throughout your integrator.io account. Home: The homepage is where you will find your integration tiles and the navigation bar with all of the tools, resources, integration options, and knowledge base articles. Tools & Resources: This menu allows you to access the most commonly used components of integrator.io. - Tools: This section contains the Flow Builder and Data Loader. - Flow Builder: Use this to create flows for an integration. Use flows to export data out of one application and import it into another application. An example of a flow is exporting Order (and Buyer) data from eBay into NetSuite to fulfill the order and also to track sales etc. You can also use flows to create multiple exports and imports that can be linked together. For more information, see Navigate Flow Builder. - Data Loader: Upload a CSV file to a REST + JSON application. For more information, see Data Loader Tool in integrator.io. - Resources: The Resources section allows you to access Exports, Imports, Connections, Scripts, Agents, Stacks, API Tokens, and the Recycle Bin. - Connections: See all of the connections you’ve created or set up a new one. You can also perform certain actions like, edit or clone a connection. Connections are used to store credentials, along with other access information for an application or system. For example, the login credentials to access a NetSuite account would be stored in a connection on integrator.io. For more information, see Connections. - Exports: See all of the exports you’ve set up or create a new one. You can also perform certain actions like, edit or clone an export. Exports are used to extract or receive data from an application or system. An example would be exporting inventory data from NetSuite. For more information, see Exports. - Imports: See all of the imports you’ve set up or create a new one. You can also perform certain actions like, edit or clone an import. Imports are used to insert data into an application. An example would be updating inventory data for items you sell on eBay. For more information, see Imports. - Agents: You can use agents to access a database that is behind a firewall without whitelisting IP addresses. - API Tokens: Get tokens to access integrator.io’s API. - Recycle Bin: Any resource you delete will be moved to the Recycle Bin. Resources are things like, connections and exports. After 30 days, any resources in the Recycle Bin will be deleted. Marketplace: Find ways that integrator.io can quickly resolve integration problems between common platforms. For more information, see Quickstart integration templates. Support: Find solutions to problems with the following options: - Submit a ticket: Request assistance by submitting a ticket to our support team. - Knowledge Base: Find articles with solutions to your problems, guides for how to use tools, and answers to your questions. - What’s new: See our release notes to find out about changes and new features. - University: Become an integration expert with interactive online tutorials. For more information, see Celigo University. See our community forum for more information on integrator.io. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360043136411-Navigate-integrator-io-legacy-UI-
2020-10-23T22:30:17
CC-MAIN-2020-45
1603107865665.7
[array(['/hc/article_attachments/360064813691/legacy-avatar.png', None], dtype=object) array(['/hc/article_attachments/360056457291/topnav.png', 'topnav.png'], dtype=object) ]
docs.celigo.com
Fedora ELN project ≤ 8.
https://docs.fedoraproject.org/ar/eln/
2020-10-23T22:49:32
CC-MAIN-2020-45
1603107865665.7
[]
docs.fedoraproject.org
Bad sectors This section is intended to explain how to analyze bad storage space on a storage device. Contents Softwares gsmartcontrol You can use gsmartcontrol for analyse general condition of the hard drive or SSD. It uses the smartmontools service to display the health status of the storage device. This software have GUI and a lot informations are given. Gsmartcontrol will not display the number of bad sectors, but the SMART of the hard disk will indicate a problem with storage sectors. He is rarely mistaken, but a false positive is possible. Use badblocks to perform an additional test. badblocks You can use badblocks tools with fdisk for analyse bad sector of the storage device. Follow this guide: * sudo fdisk -l (spot storage device at analyse, for example /dev/sda) * sudo badblocks -v /dev/sda This command display in stdout bad sectors. If not sector display (for example, 250059, 250059095 etc), is that there is not bad sector.
https://docs.kaisen-linux.org/index.php/Bad_sectors
2020-10-23T21:45:46
CC-MAIN-2020-45
1603107865665.7
[]
docs.kaisen-linux.org
Resultra comes with a number of tracking templates called “factory templates”. Generally, these templates are fairly basic, but they allow you to quickly get started with different types of tracking and demonstrate many of Resultra's capabilities. You can use these templates directly or further customize and extend them for your own purposes. You can also try these templates on Resultra's demo site. To the extent that trackers can be heavily customized, much of a tracker's functionality comes from its template. In this regard, it is helpful to have documentation and/or instructions for the templates themselves. Below is a list of factory templates currently provided with Resultra. Click on the name of a specific template for more information.
https://docs.resultra.org/doku.php?id=factory-templates
2020-10-23T21:12:59
CC-MAIN-2020-45
1603107865665.7
[]
docs.resultra.org
DataKeeper allows the user to choose the compression level associated with each mirror. Enabling compression can result in improved replication performance, especially across slower networks. A compression level setting of 3-5 represents a good balance between CPU usage and network efficiency based on the system, network and workload. Feedback Thanks for your feedback. Post your comment on this topic.
http://docs.us.sios.com/dkce/8.6.4/en/topic/compression
2020-10-23T22:18:27
CC-MAIN-2020-45
1603107865665.7
[]
docs.us.sios.com
This topic summarizes characteristics that apply to Virtual SAN, as well as its clusters and datastores. Virtual SAN provides numerous benefits to your environment. Table 1. Virtual SAN Features Supported Features Description Shared storage support Virtual SAN supports VMware features that require shared storage, such as HA, vMotion, and DRS. For example, if a host becomes overloaded, DRS can migrate virtual machines to other hosts in the cluster. Just a Bunch Of Disks (JBOD) Virtual SAN supports JBOD for use in a blade server environment. If your cluster contains blade servers, you can extend the capacity of the datastore with JBOD storage that is connected to the blade servers. On-disk format Virtual SAN 6.6 supports on-disk virtual file format 5.0, that provides highly scalable snapshot and clone management support per Virtual SAN cluster. For information about the number of virtual machine snapshots and clones supported per Virtual SAN cluster, see the Configuration Maximums documentation. All-flash and hybrid configurations Virtual SAN can be configured for all-flash or hybrid cluster. Fault domains Virtual SAN supports configuring fault domains to protect hosts from rack or chassis failure when the Virtual SAN cluster spans across multiple racks or blade server chassis in a data center. Stretched cluster Virtual SAN supports stretched clusters that span across two geographic locations. Virtual SAN health service Virtual SAN health service includes preconfigured health check tests to monitor, troubleshoot, diagnose the cause of cluster component problems, and identify any potential risk. Virtual SAN performance service Virtual SAN performance service includes statistical charts used to monitor IOPS, throughput, latency, and congestion. You can monitor performance of a Virtual SAN cluster, host, disk group, disk, and VMs. Integration with vSphere storage features Virtual SAN integrates with vSphere data management features traditionally used with VMFS and NFS storage. These features include snapshots, linked clones, vSphere Replication, and vSphere APIs for Data Protection. Virtual Machine Storage Policies Virtual SAN works with VM storage policies to support a VM-centric approach to storage management. If you do not assign a storage policy to the virtual machine during deployment, the Virtual SAN Default Storage Policy is automatically assigned to the VM. Rapid provisioning Virtual SAN enables rapid provisioning of storage in the vCenter Server® during virtual machine creation and deployment operations. Parent topic: Virtual SAN Concepts
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-DC8E1129-E58E-4684-948C-66E28F5C35CB.html
2020-10-23T22:16:48
CC-MAIN-2020-45
1603107865665.7
[]
docs.vmware.com
You are viewing documentation for Kubernetes version: v1.18 Kubernetes v1.18 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. Using NodeLocal DNSCache in Kubernetes clusters Kubernetes v1.18 [stable]). Motivation service can be upgraded to TCP. TCP conntrack entries will be removed on connection close in contrast with UDP entries that have to timeout (default nf_conntrack_udp_timeout. IP in the 169.254.20.0/16 space or any other IP address that can be guaranteed to not collide with any existing IP. This document uses 169.254.20.10 as an example. This feature can be enabled using the following steps: Prepare a manifest similar to the sample nodelocaldns.yamland save it as nodelocaldns.yaml..
https://v1-18.docs.kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/
2020-10-23T21:53:42
CC-MAIN-2020-45
1603107865665.7
[]
v1-18.docs.kubernetes.io
Advanced Configuration (Hibernate Cache Module) Advanced Configuration (Hibernate Cache Module) After you set up Hibernate to work with the GemFire module, GemFire will run automatically with preconfigured settings. Depending on your topology, you may want to change these settings. GemFire has these default settings: - this section. Configuration Flowchart The following flowchart illustrates some of the decisions you might consider before making configuration changes. If the action says "no change req.", then no configuration change is required to run GemFire in that mode. All other actions are cross-referenced below the flowchart. For more information about the common GemFire topologies, refer to Common GemFire Topologies. For general information on how to make configuration changes, refer to "Changing GemFire's Default Configuration". - Use a Client/Server configuration: See Changing the GemFire Topology for more information. - Use a replicated region for server: Change the server region to "REPLICATE_HEAP_LRU" as described in Changing the Client/Server Region Attributes. - Use a partitioned region for peer: Change the peer region to "PARTITION" as described in Changing the Peer-to-peer Region Attributes. - Use a CACHING_PROXY region for client: Change the client region to "CACHING_PROXY" as described in Changing the Client/Server Region Attributes. - Use GemFire Locators for peer discovery: See Using GemFire Locators with a Peer-to-peer Configuration for more information. - Use GemFire Locators for client/server discovery: See Using GemFire Locators with a Client/Server Configuration for more information. Common GemFire Topologies Before configuring the GemFire module, you must consider which basic topology is suited for your usage. The configuration process is slightly different for each topology. For general information about GemFire topologies, refer to Topologies and Communication. - Peer-to-Peer Configuration. In a peer-to-peer configuration, each GemFire instance within a Hibernate JVM need fast access to all data. This configuration is also the simplest one to set up and does not require any external processes. By default, the GemFire module will operate in a peer-to-peer configuration. - Client/Server Configuration. In a client/server configuration, the Hibernate JVM operates as a GemFire client, which must communicate with one or more GemFire servers to acquire data. A client/server configuration is useful when you want to separate the Hibernate JVM from the cached data. In this configuration, you can reduce the memory consumption of the Hibernate process since data is stored in separate GemFire server processes. For instruction on running GemFire in a client/server configuration, refer to Changing the GemFire Topology.
http://gemfire82.docs.pivotal.io/docs-gemfire/latest/tools_modules/hibernate_cache/advanced_config.html
2018-04-19T11:19:47
CC-MAIN-2018-17
1524125936914.5
[]
gemfire82.docs.pivotal.io
Event ID 2 — Remote Procedure Call (RPC) Load Balancing Service Applies To: Windows Server 2008 R2 .jpg) The Remote Procedure Call (RPC) Load Balancing service is used with Terminal Services Gateway server. The administrator can configure the RPC Load Balancing service using the Terminal Server administrative tool. Event Details Resolve Report the internal error to Microsoft An internal error occurred in the RPC/HTTP Load Balancing Coordinator. There is not enough information in the event message to provide a recommendation for resolution of the problem. Ensure that the RPC/HTTP Load Balancing servers can communicate and that they did not lose connectivity temporarily. If the servers are communicating and you continue to receive this error, report the internal error to Microsoft. Note the details in the event message, and then contact Microsoft Customer Service and Support (CSS). For information about how to contact CSS, see Enterprise Support ().
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd337674(v=ws.10)
2018-04-19T11:46:04
CC-MAIN-2018-17
1524125936914.5
[array(['images%5cee406008.red(ws.10', None], dtype=object)]
docs.microsoft.com
traces¶ A Python library for unevenly-spaced time series analysis. Why?¶ Taking measurements at irregular intervals is common, but most tools are primarily designed for evenly-spaced measurements. Also, in the real world, time series have missing observations or you may have multiple series with different frequencies: it’s can be useful to model these as unevenly-spaced. Traces aims to make it simple to write readable code to: - Munge. Read, write, and manipulate unevenly-spaced time series data - Explore. Perform basic analyses of unevenly-spaced time series data without making an awkward / lossy transformation to evenly-spaced representations - GTFO. Gracefully transform unevenly-spaced times series data to evenly-spaced representations Traces was designed by the team at Datascope based on several practical applications in different domains, because it turns out unevenly-spaced data is actually pretty great, particularly for sensor data analysis. Quickstart: using traces¶ To see a basic use of traces, let’s look at these data from a light switch, also known as Big Data from the Internet of Things. The main object in traces is a TimeSeries, which you create just like a dictionary, adding the five measurements at 6:00am, 7:45:56am, etc. >>> time_series = traces.TimeSeries() >>> time_series[datetime(2042, 2, 1, 6, 0, 0)] = 0 # 6:00:00am >>> time_series[datetime(2042, 2, 1, 7, 45, 56)] = 1 # 7:45:56am >>> time_series[datetime(2042, 2, 1, 8, 51, 42)] = 0 # 8:51:42am >>> time_series[datetime(2042, 2, 1, 12, 3, 56)] = 1 # 12:03:56am >>> time_series[datetime(2042, 2, 1, 12, 7, 13)] = 0 # 12:07:13am What if you want to know if the light was on at 11am? Unlike a python dictionary, you can look up the value at any time even if it’s not one of the measurement times. >>> time_series[datetime(2042, 2, 1, 11, 0, 0)] # 11:00am 0 The distribution function gives you the fraction of time that the TimeSeries is in each state. >>> time_series.distribution( >>> start=datetime(2042, 2, 1, 6, 0, 0), # 6:00am >>> end=datetime(2042, 2, 1, 13, 0, 0) # 1:00pm >>> ) Histogram({0: 0.8355952380952381, 1: 0.16440476190476191}) The light was on about 16% of the time between 6am and 1pm. Adding more data…¶ Now let’s get a little more complicated and look at the sensor readings from forty lights in a house. How many lights are on throughout the day? The merge function takes the forty individual TimeSeries and efficiently merges them into one TimeSeries where the each value is a list of all lights. >>> trace_list = [... list of forty traces.TimeSeries ...] >>> count = traces.TimeSeries.merge(trace_list, operation=sum) We also applied a sum operation to the list of states to get the TimeSeries of the number of lights that are on. How many lights are on in the building on average during business hours, from 8am to 6pm? >>> histogram = count.distribution( >>> start=datetime(2042, 2, 1, 8, 0, 0), # 8:00am >>> end=datetime(2042, 2, 1, 12 + 6, 0, 0) # 6:00pm >>> ) >>> histogram.median() 17 The distribution function returns a Histogram that can be used to get summary metrics such as the mean or quantiles. It’s flexible¶ The measurements points (keys) in a TimeSeries can be in any units as long as they can be ordered. The values can be anything. For example, you can use a TimeSeries to keep track the contents of a grocery basket by the number of minutes within a shopping trip. >>> time_series = traces.TimeSeries() >>> time_series[1.2] = {'broccoli'} >>> time_series[1.7] = {'broccoli', 'apple'} >>> time_series[2.2] = {'apple'} # puts broccoli back >>> time_series[3.5] = {'apple', 'beets'} # mmm, beets To learn more, check the examples and the detailed reference. More info¶ Contributing¶ Contributions are welcome and greatly appreciated! Please visit the repository for more info.
http://traces.readthedocs.io/en/latest/
2018-04-19T11:55:46
CC-MAIN-2018-17
1524125936914.5
[]
traces.readthedocs.io
Offline Usage¶ In some corporate environments servers do not have Internet access. You can still use Rally in such environments and this page summarizes all information that you need to get started. Installation and Configuration¶ We provide a special offline installation package. Please refer to the offline installation guide for detailed instructions. After the installation you can just follow the normal configuration procedure. Command Line Usage¶ Rally will automatically detect upon startup that no Internet connection is available and print the following warning: [WARNING] No Internet connection detected. Automatic download of track data sets etc. is disabled. It detects this by trying to connect to github.com. If you want to disable this probing you can explicitly specify --offline. Using tracks¶ A Rally track describes a benchmarking scenario. You can either write your own tracks or use the tracks that Rally provides out of the box. In the former case, Rally will work just fine in an offline environment. In the latter case, Rally would normally download the track and its associated data from the Internet. If you want to use one of Rally’s standard tracks in offline mode, you need to download all relevant files first on a machine that has Internet access and copy it to the target machine(s). Use the download script to download all data for a track on a machine that has access to the Internet. Example: # downloads the script from Github curl -O chmod u+x download.sh # download all data for the geonames track ./download.sh geonames This will download all data for the geonames track and create a tar file rally-track-data-geonames.tar in the current directory. Copy this file to the home directory of the user which will execute Rally on the target machine (e.g. /home/rally-user). On the target machine, run: cd ~ tar -xf rally-track-data-geonames.tar The download script does not require a Rally installation on the machine with Internet access but assumes that git and curl are available. After you’ve copied the data, you can list the available tracks with esrally list tracks. If a track shows up in this list, it just means that the track description is available locally but not necessarily all data files. Using cars¶ Note You can skip this section if you use Rally only as a load generator. If you have Rally configure and start Elasticsearch then you also need the out-of-the-box configurations available. Run the following command on a machine with Internet access: git clone ~/.rally/benchmarks/teams/default tar -C ~ -czf rally-teams.tar.gz .rally/benchmarks/teams/default Copy that file to the target machine(s) and run on the target machine: cd ~ tar -xzf rally-teams.tar.gz After you’ve copied the data, you can list the available tracks with esrally list cars.
http://esrally.readthedocs.io/en/latest/offline.html
2018-04-19T11:41:03
CC-MAIN-2018-17
1524125936914.5
[]
esrally.readthedocs.io
ServerTraceEventSet Class The ServerTraceEventSet object represents a set server trace events. Namespace: Microsoft.SqlServer.Management.Smo Assembly: Microsoft.SqlServer.Smo (in Microsoft.SqlServer.Smo.dll) Syntax 'Declaration Public NotInheritable Class ServerTraceEventSet _ Inherits EventSetBase 'Usage Dim instance As ServerTraceEventSet public sealed class ServerTraceEventSet : EventSetBase public ref class ServerTraceEventSet sealed : public EventSetBase [<SealedAttribute>] type ServerTraceEventSet = class inherit EventSetBase end public final class ServerTraceEventSet extends EventSetBase Remarks. Examples Inheritance Hierarchy System.Object Microsoft.SqlServer.Management.Smo.EventSetBase Microsoft.SqlServer.Management.Smo.ServerTraceEventSet Thread Safety Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms220275(v=sql.100)
2018-04-19T12:39:54
CC-MAIN-2018-17
1524125936914.5
[]
docs.microsoft.com
Downloading the Action Log To download the action log to your machine: - Go to Tools & Settings > Action Log (in the Plesk group). - In the Log files section, select the time period using the drop-down boxes, and click Download. The dialog window will open, prompting you to select the location for the downloaded log file to be saved to. - Select the location, and click Save. Leave your feedback on this topic here If you have questions or need support, please visit the Plesk forum or contact your hosting provider. The comments below are for feedback on the documentation only. No timely answers or help will be provided.
https://docs.plesk.com/en-US/onyx/administrator-guide/statistics-and-monitoring/action-logs/downloading-the-action-log.59225/
2018-04-19T11:57:43
CC-MAIN-2018-17
1524125936914.5
[]
docs.plesk.com
Creating Mozilla POT files. After extracting the en-US l10n files, you can run the following command: moz2po -P l10n/en-US pot This creates a set of POT ( -P) files in the pot directory from the Mozilla files in l10n/en. If you have existing translations (Mozilla related or other Babelzilla files) and you wish to convert them to PO for future translation then the following generic instructions will work: moz2po -t en-US af-ZA af-ZA_pofiles This will combine the untranslated template en-US files from en-US combine them with your existing translations in af-ZA and output PO files to af-ZA_pofiles. moz2po -t l10n/fr l10n/xh po/xh For those who are not English fluent you can do the same with another languages. In this case msgid will contain the French text from l10n/fr. This is useful for translating where the translators other languages is not English but French, Spanish or Portuguese. Please make sure that the source languages i.e. the msgid language is fully translated as against en-US. tracks the outstanding features which would allow complete localisation of Mozilla including; all help, start pages, rdf files, etc. It also tracks some bugs. Accesskeys don’t yet work in .properties files and in several cases where the Mozilla .dtd files don’t follow the normal conventions, for example in security/manager/chrome/pippki/pref-ssl.dtd.po. You might also want to check the files mentioned in this Mozilla bug 329444 where mistakes in the DTD-definitions cause problems in the matching of accelerators with the text. You might want to give special attention to the following files since it contains customisations that are not really translations. Also, all width, height and size specifications need to be edited with feedback from testing the translated interfaces. There are some constructed strings in the Mozilla code which we can’t do much about. Take good care to read the localisation notes. For an example, see mail/chrome/messenger/downloadheaders.dtd.po. In that specific file, the localisation note from the DTD file is lost, so take good care of those. The file extension of the original Mozilla file is required to tell the Toolkit how to do the conversion. Therefore, a file like foo.dtd must be named foo.dtd.po in order to po2moz to recognise it as a DTD file.
http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/moz2po.html?id=toolkit/moz2po
2018-04-19T11:47:31
CC-MAIN-2018-17
1524125936914.5
[]
docs.translatehouse.org
Snapshots that you can restore You can perform the following to restore data on NAS shares: Hot snapshot restore A Hot snapshot restore is an on-demand restore of the server data that resides on Phoenix CloudCache. Hot snapshots are point-in-time images of the backup data stored on Phoenix CloudCache. Such a restore operation continues until Phoenix restores the data to the specified location. To locate Hot snapshots, expand Hot in the Restore Data window. Warm snapshot restore A Warm snapshot restore is restoring of point-in-time images of the backup data dating back to 90 days in time data is considered as thawed and is available for restore. After thawing the data, Phoenix sends an email informing you about the availability of your data and about the duration for which your data remains available. After you receive this email, you can restore > NAS. The displays the list of NAS servers. - Click the NAS server link and select the NAS share that you want to restore. - Click Restore. - In the left pane, expand Cold, and select the snapshot to defreeze. - Click Defreeze. Note: This data stays in the thawed state for the duration mentioned in the email. This duration is fixed and cannot be changed. Ensure that you restore your thawed data within the sepcified duration.
https://docs.druva.com/Phoenix/030_Configure_Phoenix_for_Backup/040_Back_up_and_restore_NAS_shares/030_Restore_NAS_shares/020Snapshots_that_you_can_restore
2018-06-17T23:51:24
CC-MAIN-2018-26
1529267859904.56
[array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/cross.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/39353/Restore_FS_Hot.png?revision=1&size=bestfit&width=732&height=479', None], dtype=object) array(['https://docs.druva.com/@api/deki/files/39356/Restore_FS_Warm.png?revision=1&size=bestfit&width=730&height=476', None], dtype=object) array(['https://docs.druva.com/@api/deki/files/39355/Restore_FS_Cold.png?revision=1&size=bestfit&width=738&height=490', None], dtype=object) array(['https://docs.druva.com/@api/deki/files/39354/Restore_FS_Thawed.png?revision=1&size=bestfit&width=739&height=488', None], dtype=object) ]
docs.druva.com
DataWeave Reference Documentation The DataWeave Language is a powerful template engine that allows you to transform data to and from any kind of format (XML, CSV, JSON, Pojos, Maps, etc). Let us start off with some examples to demonstrate the prowess of Dataweave as a data transformation tool. String manipulation This example shows how easy it is to work with strings in DataWeave. Group by … This example shows how easy it is to group by a given criteria and then transform that result to match the expected output. Document Structure DataWeave files are divided into two main sections: The Header, which defines directives (optional) The Body, which describes the output structure The two sections are delimited by a separator, which is not required if no header is present. The separator consists of three dashes: "---" Below is a taste of what a .dwl file looks like. This code describes a conversion: Header The DataWeave header contains the directives, which define high level information about your transformation. The structure of the Header is a sequence of lines, each with its own Directives. The Header is terminated with '---'. Through directives you can define: DataWeave version Input types and sources Output type Namespaces to import into your transform Constants that can be referenced throughout the body Functions that can be called throughout the body All directives are declared on the header section of your DataWeave document and act upon the entire scope of it. Directives are a mechanism to declare variables and constants and namespace aliases which need to be referenced in the Document. They are also needed to declare the type of the output of your transform. In Anypoint Studio, you can optionally use them to declare additional inputs. You rarely need them for this as any data arriving in the incoming Mule Message is already implicitly recognized as an input. Version Directive Through this directive, you specify the version of the DataWeave syntax that is used to interpret the transformation. %dw 1.0 Namespace Directive This directive associates an alias with its subsequent URI. The directive is relevant only when either the input or the output is of type XML. Input Directive Inputs are declared by assigning a name and a content type. You may define as many input directives as you want. You can then refer to them (or their child elements) in any part of the DataWeave body through the names defined in the directive. %input payload application/xml Valid types are: application/json application/xml application/java application/csv application/dw text/json text/xml text/csv CSV Input Parsing When defining an input of type CSV, there are a few optional parameters you can add to the input directive to customize how the data is parsed. These are not defined in the DataWeave script but on the Mule XML code, in the Transform Message XML element. In Anypoint Studio there are two ways to achieve this. You can either manually add the attributes to the project’s XML, or do it through the graphical interface, by selecting the element from the tree view in the input section and clicking the gear icon. See Using DataWeave in Studio for more details. Output Directive The Output Directive specifies what the output type is in a transformation, which is specified using content/type. Only one output can be specified, the structure of this output is then defined in the DataWeave body. %output application/xml Valid types are: application/json application/xml application/java application/csv application/dw text/json text/xml text/csv Skip Null On Whenever the output is of XML or JSON type and has null values in its elements or attributes, you can specify whether this generates an outbound message that contains fields with "null" values, or if these fields are ignored entirely. This can be set through an attribute in the output directive named skipNullOn, which can be set to three different values: elements, attributes, or everywhere. %output application/xml skipNullOn="everywhere" When set to: elements: A key:value pair with a null value is ignored. attributes: An XML attribute with a null value is skipped. everywhere: Apply this rule to both elements and attributes. Define Constant Directive You can define a constant in the header, you can then reference it (or its child elements, if any exist) in the DataWeave When you write your DataWeave file, you define an expression that generates one of the data types listed above. Simple Literal Types Literals can be of the following types: String : Double quoted ("Hello") or Single quoted ('Hello') Boolean : Literals true or false Number : Decimal and Integer values are supported (ex: 2.0) Dates : IS0-8601 enclosed by "|" (ex:|2003-10-01T23:57:59Z|) Regex : Regex expression enclosed by "/" (ex:/(\d+)-(\d+)/) This is a String literal expression Streaming DataWeave supports streaming a large payload. No configuration is necessary in the DataWeave code itself, but other components need to be set up for this to work. See DataWeave Streaming. Example Transformation Suppose you want to transform an XML document to JSON, appending extra content to the output. DataWeave Canonical Model As covered above, DataWeave uses three basic data types: Objects, Arrays, and Simple Types, the execution of a DataWeave transformation always produces one of these three types of data. In essence, the body of every DataWeave transformation is a single expression that defines the structure and contents of one such element (which can be an Object, Array, or Simple Literal). This expression can be built using any of the following elements: Objects Arrays Simple literals Variable and Constant references A DataWeave transformation can be as simple as the definition of a single element from the list above. Even a simple literal 'Hello world' is a valid DataWeave transformation. Simple literal. If you declare input directives in your DataWeave’s header, regardless of its type (XML, JSON, Java), any execution that references these inputs produces, as stated before, an Object, an Array, or a Simple literal. When you understand the structure of these data types, expressed in the syntax of DataWeave expressions, you effectively understand DataWeave. In Anypoint Studio, if you ever need to visualize the canonical DataWeave model of your data to get a better reference, set the output type of your transform to application/dw. Your transform then outputs your data as a DataWeave expression, which resembles a JSON object. See the example below. Literal Expressions These correspond to the three different data-types: Simple, Object, and Array. Simple Literal Object Literal Array Literal Variables Input Variables Input directives allow you to make any number of input sources available in global variables, which can then be referenced in any part of the Transform’s body. To reference one of these, you can just call it by the name you defined in the directive. Remember that the Transform is itself an expression, so the entire body of the transform could be written as a simple variable reference to the input document. Consider the example below, which transforms an incoming JSON document into XML, and where the output XML structure mimics the input JSON structure. literal Scoped to Array literal Scoped to Object literal Invalid Reference outside of Scope Selectors Value Selector Expressions The complex structure of Objects and Arrays can be navigated using Selector Expressions. Each selector expression returns either an object, an array, or a simple type. A selector always operates within a given context, which can be a reference to a variable, an object literal, an array literal, or the invocation of a function. As DataWeave processes a selector, a new context is set for further selectors, so you can navigate through the complex structures of arrays and objects by using chains of selectors, who’s depth is limited only by the depth of the current context. There are 4 types of selector expression: Single Value selector .<key-name> Multi Value selector .*<key-name> Descendants Selector ..<key-name> Indexed Selector [<index>] Applying the Single level Explicit Selector, the Descendants Selector, or the Indexed Selector returns the value of the key:value pair that matches the expression. Note: Each of these selector expressions supports having a '?' appended at the end of the chain. This changes the expression into a query that checks upon the existence of the key. The return type in this case is a boolean true or false. Single Value selector This selector returns the first value whose key matches the expression, that is, payload.name, which returns the value whose key matches name. Multi Value selector This selector returns an array with all the values whose key matches the expression. Indexed Selector This selector can be applied to String literals, Arrays and Objects. In the case of Objects, the value of the key:value pair found at the index is returned. The index is zero based. If the index is bigger or equal to 0, it starts counting from the beginning. If the index is negative, it starts counting from the end where -1 is the last element. When using the Index Selector with a String, the string is broken down into an array, where each character is an index. Range selector Range selectors limit the output to only the elements specified by the range on that specific order. This selector allows you to slice an array or even invert it. Attribute Selector Expressions In order to query for the attributes on an Object, the syntax .@<key-name> is used. If you just use .@ (without <key-name>) it returns an object containing each key:value pair in it. Applying Selectors to Arrays When the context for selection is an Array, the result is always an Array. Each element on the context Array is queried for matching key:value pairs. In each case, only the value of the key:value pair is returned. Descendants Selector This selector is applied to the context using the form ..<field-name> and retrieves the values of all matching key:value pairs in the sub-tree under the current context. Regardless of the hierarchical structure these fields are organized in, they are all placed at the same level in the output. Selectors modifiers There are two selectors modifiers: ? and !. The question mark returns true or false whether the keys are present on the structures. The exclamation mark evaluates the selection and fails if any key is not present. Key Present Returns true if the specified key is present in the object. In the example above, if a 'name' key does exist in the input, it returns true. This operation also works with attributes: You can also use this validation operation as part of a filter: The example above selects key:value pairs with value "Mariano" ⇒ {name: Mariano, name: Mariano} Assert Present Returns an exception if any of the specified keys are not found. Reference Elements From an Incoming Mule Message Often, you want to use the different elements from the Mule Message that arrive at the DataWeave Transformer in your transform. The following sections show you how to reference each of these. The Payload of a Mule Message You can take the Payload of the Mule message that reaches the DataWeave transformer and use it in your transform body. You can also refer to sub elements of the payload through the dot syntax payload.user. You can optionally also define the payload as an input directive in the header, although this isn’t required. Inbound Properties from a Mule Message You can take Inbound Properties from the Mule message that arrives to the DataWeave transformer and use them in your transform body. To refer to one of these, simply call it through the matching Mule Expression Language (MEL) expression. In MEL, there are two supported syntaxes to call for an inbound property: inboundProperties.name inboundProperties[’name’] You can optionally also define the inbound property as a constant directive in the header, although this isn’t required. %var inUname = inboundProperties['userName'] Outbound Properties from a Mule Message You can take any Outbound Properties in the Mule message that arrives to the DataWeave transformer and use it in your transform body. To refer to it, simply call it through the matching Mule Expression Language (MEL) expression. In MEL, there are two supported syntaxes to call an outbound property: outboundProperties.name outboundProperties[’name’] You can optionally also define the outbound property as a constant directive in the header, although this isn’t required. %var outUname = outboundProperties['userName'] Flow Variables from a Mule Message You can take any Flow Variable in the Mule message that arrives at the DataWeave transformer and use it in your transform body. To refer to it, simply call it through the matching Mule Expression Language (MEL) expression. In MEL, there are two supported syntaxes to call a flow variable: flowVars.name flowVars[’name’] You can optionally also define the flow variable as a constant directive in the header, although this isn’t required. %var uname = flowVars['userName'] Operators Map Using Map on an Array Returns an array that is the result of applying a transformation function (lambda) to each of the elements. The lambda is invoked with two parameters: index and the value. If these parameters are not named, the index is defined by default as $$ and the value as $. In the following example, custom names are defined for the index and value parameters of the map operation, and then both are used to construct the returned value. In this case, value is defined as firstName and its index in the array is defined as position. Using Map on an Object Returns an array with the values that result out of applying a transformation function (lambda) to each of the values in the object. The keys of the original object are all ignored by this operation and the object is treated as an array. To have access to the keys, you can use the operation mapObject instead. The lambda is invoked with two parameters: index and the value. If these parameters are not named, the index is defined by default as $$ and the value as $. The index refers to the position of a key:value pair when the object is treated as an array. In the example above, as key and value are not defined, they’re identified by the placeholders $$ and $. For each key:value pair in the input, an object is created and placed in an array of objects. Each of these objects contains two properties: one of these directly uses the value, the other multiplies this value by a constant that is defined as a directive in the header. The mapping below performs exactly the same transform, but it defines custom names for the properties of the operation, instead of using $ and $$. Here, position is defined as referring to the array index, and money to the value in that index. Map Object Similar to Map, but instead of processing only the values of an object, it processes both keys and values, and instead of returning an array with the results of processing these values through the lambda, it returns an object with the key:value pairs that result from processing both key and value of the object through the lambda. The lambda is invoked with two parameters: key and the value. If these parameters are not named, the key is defined by default as $$ and the value as $. In the example above, as key and value are not defined, they’re identified by the placeholders $$ and $. For each key:value pair in the input, the key is preserved and the value becomes an object with two properties: one of these is the original value, the other is the result of multiplying this value by a constant that is defined as a directive in the header. The mapping below performs exactly the same transform, but it defines custom names for the properties of the operation, instead of using $ and $$. Here, 'category' is defined as referring to the original key in the object, and 'money' to the value in that key. Pluck Pluck is useful for mapping an object into an array. Pluck is an alternate mapping mechanism to mapObject. Like mapObject, pluck executes a lambda over every key:value pair in its processed object, but instead of returning an object, it returns an array, which may be built from either the values or the keys in the object. The lambda is invoked with two parameters: key and the value. If these parameters are not named, the key is defined by default as $$ and the value as $. Filter Using Filter on an Object Returns an object with the key:value pairs that pass the acceptance criteria defined in the lambda. If these parameters are not named, the key is defined by default as $$ and the value as $. Remove Using Remove on an Object When running it on an object, it returns another object where the specified keys are removed. The above example removes the key value pair that contains the key 'aa' from {aa: "a", bb: "b"} ⇒ {bb: "b"} When or or. AND The expression and (in lower case) can be used to link multiple conditions, its use means that all of the linked conditions must evaluate to true for the expression as a whole to evaluate to true. In the example above, currency is "EUR", unless the payload has BOTH conditions met. OR The expression or (in lowercase) can be used to link multiple conditions. Its use means that either one or all of the linked conditions must evaluate to true for the expression as a whole to evaluate to true. In the example above, currency is "EUR", only when one of the conditions evaluates to true. Concat The concat operator is defined using two plus signs. You must have spaces on both sides of them. Using Concat on an Object Returns the resulting object of concatenating two existing objects. The example above concatenates object {aa: a} and {cc: c} in a single one ⇒ {aa: a , cc: c} Using Concat on an Array When using arrays, it returns the resulting array of concatenating two existing arrays. IS Evaluates if a condition validates to true and returns a boolean value. Conditions may include and and or operators. Contains Evaluates if an array or list contains in at least one of its indexes a value that validateso true and returns a boolean value. You can search for a literal value, or match a regex too. Using Contains on an Array You can evaluate if any value in an array matches a given condition: Using Contains on a String You can also use contains to evaluate a substring from a larger string: Instead of searching for a literal substring, you can also match it agains a regular expression: AS (Type Coercion) Coerce the given value to the specified type. DataWeave by default attempts to convert the type of a value before failing, so using this operator to convert is sometimes not required but still recommended. Coerce to string Any simple types can be coerced to string. If formatting is required (such as for a number or date) the format schema property can be used. Date and number format schemas are based on Java DateTimeFormatter and DecimalFormat. Coerce to number A string can be coerced to number. If the given number has a specific format the schema property can be used. Any format pattern accepted by DecimalFormat is allowed. Coerce to date Date types can be coerced from string or number. Any format pattern accepted by DateTimeFormatter is allowed. Type Of Returns the type of a provided element (eg: '":string"' , '":number"' ) Flatten If you have an array of arrays, this function can flatten it into a single simple array. Zip and Unzip If you have two or more separate lists, the zip function can be used to merge them together into a single list of consecutive n-tuples. Imagine two input lists each being one side of a zipper: similar to the interlocking teeth of a zipper, the zip function interdigitates each element from each input list, one element at a time. Here is another example of the zip function with more than two input lists. Unzip works similarly to zip except that the input is a single list consisting of two or more embedded lists of elements. Size Of Returns the number of elements in an array (or anything that can be converted to an array) Reduce Applies a reduction to the array. The lambda is invoked with two parameters: the accumulator ($$) and the value ($). Unless specified, the accumulator by default takes the first value of the array. In some cases, you may want to not use the first element of the array as the initial value of the accumulator. To set the accumulator to be something else, you must define this in a lambda. Join By Merges an array into a single string value, using the provided string as a separator between elements. Split By Performs the opposite operation as Join By. It splits a string into an array of separate elements, looking for instances of the provided string and using it as a separator. Order By Returns the provided array ordered according to the value returned by the lambda. The lambda is invoked with two parameters: index and the value. If these parameters are not named, the index is defined by default as $$ and the value as $. Group By Partitions an array into a Object that contains Arrays, according to the discriminator lambda you define. The lambda is invoked with two parameters: index and the value. If these parameters are not named, the index is defined by default as $$ and the value as $. Distinct By Returns only unique values from an array that may have duplicates. The lambda is invoked with two parameters: index and value. If these parameters are not defined, the index is defined by default as $$ and the value as $. Replace Replaces a section of a string for another, in accordance to a regular expression, and returns a modified string. Match Match a string against a regular expression. Match returns an array that contains the entire matching expression, followed by all of the capture groups that match the provided regex. In the example above, we see that the search regular expression describes an email address. It contains two capture groups, what’s before and what’s after the @. The result is an array of three elements: the first is the whole email address, the second matches one of the capture groups, the third matches the other one. Scan Returns an array with all of the matches in the given string. Each match is returned as an array that contains the complete match, as well as any capture groups there may be in your regular expression. In the example above, we see that the search regular expression describes an email address. It contains two capture groups, what’s before and what’s after the @. The result is an array with two matches, as there are two email addresses in the input string. Each of these matches is an array of three elements, the first is the whole email address, the second matches one of the capture groups, the third matches the other one. Similar Evaluates if two values are similar, regardless of their type. For example, the string "1234" and the number 1234 aren’t equal, but they are recognized as similar. Capitalize Returns the provided string with every word starting with a capital letter and no underscores. Basic Math Operations Date Time Operations Global MEL Functions Your DataWeave code can call any function you define as a global Mule Expression Language (MEL) function, as long as it is correctly defined in the Mule Project where your Transform Message element sits. Refer to Using DataWeave in Studio. Object Type ⇒ ':object' Objects are represented as a collection of key:value pairs. Object: { 'Key' : Value } Key : 'Qualified Name' @('Qualified Name'= Value,…) Qualified Name: 'namespace prefix#name' where the 'namespace prefix#' part is optional Name: String that represents the name. Special Types of Objects Single Value Objects If an Object has only one key:value pair, the enclosing curly brackets { } are not required: Conditional Elements Objects can define conditional key:value pairs based on a conditional expression. This example outputs an additional field called "extension" only when the fileSystem property is present in payload (this field may contain any value, not just "true"). If absent: Dynamic Elements Dynamic elements allow you to add the result of an expression as key:value pairs of an object. It is important to note that the expression between the parentheses should return an array of objects. All of objects in that array get merged together. They are also merged with the contained object. So the output looks like this: String Type ⇒ ':string' A string can be defined by the use of double quotes or single quotes. String interpolation String interpolation allows you to embed variables or expressions directly in a' Date time is the conjunction of 'Date' + 'Time'. Local timezone to use. Period Type ⇒ ':period' Specifies a period of time. Examples |PT9M| ⇒ 9 minutes , |P1Y| ⇒ 1 Year Accessors Expressions Type ⇒ ':regex' Regular Expressions are defined between /. For example /(\d+)/ for represents multiple numerical digits from 0-9. These may be used as arguments in certain operations that act upon strings, like Matches or Replace, or on operations that act upon objects and arrays, such as filters. as a Hint for Developers In Anypoint Studio, it’s easy to view metadata that describes the input and output data of every building block you’re using. When defining a custom type for a particular input or output of your transform, this is represented in the DataWeave transformer’s metadata. Exposing metadata helps you understand what it is you’re integrating to in order to build up the rest of a system, as it lets you know what you need to provide and what you can expect in advance. Java Class Java developers use the 'class' metadata key as hint for what class needs to be created and sent in. If this is not explicitly defined, DataWeave tries to infer from the context or it assigns it the default values: java.util.HashMap for objects java.util.ArrayList for lists The above code defines your type as an instance of 'com.anypoint.df.pojo.User'. Defining Types For Type Coercion Format The metadata 'format' key is used for formatting numbers and dates. In Anypoint Studio, you can define several more values, like separators, quote characters and escape characters. See Using DataWeave in Studio. Functions and Lambdas There are two types of directives you can use to define a function, through %var (as with variables) using a lambda, or through %function. Lambdas Lambdas can be used inside operators such as map, mapObject, etc. or they can be assigned to a variable. When using lambdas with an operator, they can be either named or anonymous. Functions You can declare functions in the Header and these can be invoked at any point in the Body. You refer to them as you do to any variable or constant: using the form $<function-name>() passing an expression as argument. The result of the expression that is passed as an argument is used in the execution of the function body. Existing Functions Expressions that Call
https://docs.mulesoft.com/mule-user-guide/v/3.8/dataweave-reference-documentation
2018-06-18T00:23:19
CC-MAIN-2018-26
1529267859904.56
[]
docs.mulesoft.com
- API > - Public API Resources > - Measurements and Alerts Measurements and Alerts¶ Note Groups and projects are synonymous terms. Your {GROUP-ID} is the same as your project id. For existing groups, your group/project id remains the same. This page uses the more familiar term group when referring to descriptions. The endpoint remains as stated in the document.
https://docs.opsmanager.mongodb.com/current/reference/api/nav/measurements-and-alerts/
2018-06-18T00:11:26
CC-MAIN-2018-26
1529267859904.56
[]
docs.opsmanager.mongodb.com
Account Updater Spreedly’s Account Updater allows you to always keep your customers’ card details up-to-date. When your customers’ credit card numbers expire or are updated, Account Updater protects you from the lost revenue, involuntary churn and decreased customer satisfaction associated with outdated payment information. No additional development is required on your side since cards are updated behind the scenes. How Account Updater Works Once you opt-in, Spreedly will submit all retained, non-test, Visa, MasterCard, and Discover cards in your Spreedly vault for updating twice monthly, on the 1st and 15th of every month. Only Visa, MasterCard, and Discover cards in the USA, Canada and parts of the UK are eligible for updating at this time. This restriction is subject to the discretion of the card networks. If other countries become eligible, we’ll annouce it to our customers. Opting-in to Account Updater happens at the organization level - all environments will be affected. By turning on the Account Updater feature, you confirm that all of your cards will be sent to the card brands for updates twice monthly, but you will only be charged for those cards that are actually updated. Notifications and Tracking We will send two e-mail notifications when your cards are updated. Notifications will be sent to all active users associated with the organization. The first email is notice that the card update process has started. Example: The second email is notice that card updates have completed and contains aggregate information about the updates that were performed. Example: To track which of your payment methods were affected by Account Updater, use the list transactions API and search for the following successful transaction types: ReplacePaymentMethod- The number and/or expiration date has been updated. (Note: You will find the new monthand yearin the JSON response here). InvalidReplacePaymentMethod- We received either a new card number or expiration date that was invalid. The payment method was not updated, but a fee is still created since Spreedly still incurs a cost in this scenario. ContactCardHolder- Contact the cardholder for a new number and/or expiration date. ClosePaymentMethod- The account is no longer open and should no longer be used. The storage_state property on the payment method will be changed to closed and the number will be redacted. Please visit the Help Center for a detailed walkthrough on pricing, opting-in, and opting-out.
https://docs.spreedly.com/guides/account-updater/
2018-06-17T23:33:29
CC-MAIN-2018-26
1529267859904.56
[array(['/assets/images/account-updater/cards-sent.png', None], dtype=object) array(['/assets/images/account-updater/cards-updated.png', None], dtype=object) ]
docs.spreedly.com
Creating a Cluster With Earlier AMI Versions of Amazon EMR Amazon EMR 2.x and 3.x releases are referenced by AMI version. With Amazon EMR release 4.0.0 and later, releases are referenced by release version, using a release label such as emr-5.11.0. This change is most apparent when you create a cluster using the AWS CLI or programmatically. When you use the AWS CLI to create a cluster using an AMI release version, use the --ami-version option, for example, --ami-version 3.11.0. Many options, features, and applications introduced in Amazon EMR 4.0.0 and later are not available when you specify an --ami-version. For more information, see create-cluster in the AWS CLI Command Reference. The following example AWS CLI command launches a cluster using an AMI version."] Programmatically, all Amazon EMR release versions use the RunJobFlowRequest action in the EMR API to create clusters. The following example Java code creates a cluster using AMI release version 3.11.0. RunJobFlowRequest request = new RunJobFlowRequest() .withName("AmiVersion Cluster") .withAmiVersion("3"); The following RunJobFlowRequest call uses a release label instead: RunJobFlowRequest request = new RunJobFlowRequest() .withName("ReleaseLabel Cluster") .withReleaseLabel("emr-5"); Configuring Cluster Size When your cluster runs, Hadoop determines the number of mapper and reducer tasks needed to process the data. Larger clusters should have more tasks for better resource use and shorter processing time. Typically, anwide. task node failures and continues cluster execution even if a task node becomes unavailable. Amazon EMR automatically provisions additional task nodes to replace those that fail. You can have a different number of task nodes for each cluster step. You can also add a step to a running cluster to modify the number of task nodes. Because all steps are guaranteed to run sequentially by default, you can specify the number of running task nodes for any step.
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-3x-create.html
2018-06-18T00:21:42
CC-MAIN-2018-26
1529267859904.56
[]
docs.aws.amazon.com
Error getting tags : error 404Error getting tags : error 404 libURLFollowHttpRedirects <true|false> libURLFollowHttpRedirects (the cFollowRedirects of me) When set to true (the default), libUrl will respond to 301, 302, and 307 responses by trying to get the url listed in the Location header IF the original request was a GET (i.e not a post). It will respond to 303 responses by trying to GET the redirected url whatever the original request method. When set to false, no attempt is made to get the redirected url and the result will return "error" followed by the status code and message (e.g. error 302 found) (This is different from previous behavior whereby libUrl always attempted to GET 301 and 302 redirects whatever the original request method, but didn't handle other responses.)
http://docs.runrev.com/Command/libURLFollowHttpRedirects
2018-06-18T00:21:32
CC-MAIN-2018-26
1529267859904.56
[]
docs.runrev.com
View the results for a survey You can view the responses for one survey definition. Survey results are stored on the Metric Result [asmt_metric_result] table. Before you beginRole required: survey_admin or survey_reader Procedure Navigate to Survey > and calculated values of interest to advanced survey administrators. Because the Metric Result table is also used by the assessment feature, many field names are not clear in the context of surveys. Related TasksView results for all surveysView a survey scorecardRelated ReferenceMetric result fields
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/survey-administration/task/t_ViewResultsForASpecificSurvey.html
2018-06-17T23:34:59
CC-MAIN-2018-26
1529267859904.56
[]
docs.servicenow.com
A primary account owner can invite anyone to the collaboration even if the required user is not registered in Jelastic. In this case, this newly added member of the shared account will be registered automatically, after confirming the invitation. This user will be included to the default sign up group, stated in JCA > Billing > Groups > Signup default column (e.g. trial or beta). Nevertheless, it is recommended to create a separate group for these users and provide them with custom quotas, in order to ensure a sufficient level of accessibility for joint development. For example, you can set a trial group for them with no trial period limits, but at the same time restrict creation of environments on their accounts (let them work only with the primary account of collaboration). Let’s see how to configure this: 1. Create a new group as it is described in the Add new user group section of the Groups document. 2. Navigate to the Quotas tab and change the values of the following limitations: - account.trialperiod - this quota defines the amount of days a user’s account remains active until deactivation. It is recommended to state its value to a large number in order to lift limits, e.g. 1000000. - environment.maxcount - state value of this quota to 0 (zero) in order to disallow users to create their own environments. In such a way, users of this group will be provided with a kind of “eternal” trial period without the ability to create their own environments. They can only work with the existing shared ones, or create new environments only on the primary billing account (if primary user has allowed that). 3. Then, navigate to Billing > Groups and choose the newly created group from the list. 4. Click the Use for Collaboration button at the panel. After that, the chosen group will be marked with a green tick in the Collaboration default column, as it is shown above.
https://ops-docs.jelastic.com/collaboration-user-group
2018-06-18T00:21:30
CC-MAIN-2018-26
1529267859904.56
[array(['https://download.jelastic.com/index.php/apps/gallery/ajax/image.php?file=8bfe76c532583fbfb4eab0886326ab31%2Fdefault%20group.png', None], dtype=object) ]
ops-docs.jelastic.com
Atomic Zero-Width Assertions The metacharacters described in the following table do not cause the engine to advance through the string or consume characters. They simply cause a match to succeed or fail depending on the current position in the string. For instance, ^ specifies that the current position is at the beginning of a line or string. Thus, the regular expression ^FTP returns only those occurrences of the character string "FTP" that occur at the beginning of a line. See Also Regular Expression Language Elements | Regular Expression Options
https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-1.1/h5181w5w(v=vs.71)
2018-06-18T01:07:57
CC-MAIN-2018-26
1529267859904.56
[]
docs.microsoft.com
Make a domain the default The default domain is the domain to which the system automatically assigns task and user records that are not already assigned to a domain. Before you beginRole required: admin Procedure Navigate to Domain Admin > Domains. Open the domain you want to be the default domain. For example, Default. Configure the form layout to add the Default field. Select the Default check box. Click Update. Note: If you do not set a default domain, then new tasks and user records are placed in the global domain. Related TasksManually manage the domain for particular records
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/company-and-domain-separation/task/t_MakeAnMSPDomainTheDefault.html
2018-06-17T23:42:36
CC-MAIN-2018-26
1529267859904.56
[]
docs.servicenow.com
Traffic Ops - Using¶ Health¶ The Health Table¶ The Health table is the default landing screen for Traffic Ops, it displays the status of the EDGE caches in a table form directly from Traffic Monitor (bypassing Traffic Stats), sorted by Mbps Out. The columns in this table are: - Profile: the Profile of this server or ALL, meaning this row shows data for multiple servers, and the row shows the sum of all values. - Host Name: the host name of the server or ALL, meaning this row shows data for multiple servers, and the row shows the sum of all values. - Edge Cache Group: the edge cache group short name or ALL, meaning this row shows data for multiple servers, and the row shows the sum of all values. - Healthy: indicates if this cache is healthy according to the Health Protocol. A row with ALL in any of the columns will always show a , this column is valid only for individual EDGE caches. - Admin: shows the administrative status of the server. - Connections: the number of connections this cache (or group of caches) has open ( ats.proxy.process.http.current_client_connectionsfrom ATS). - Mbps Out: the bandwidth being served out if this cache (or group of caches) Since the top line has ALL, ALL, ALL, it shows the total connections and bandwidth for all caches managed by this instance of Traffic Ops. Graph View¶ The Graph View shows a live view of the last 24 hours of bits per seconds served and open connections at the edge in a graph. This data is sourced from Traffic Stats. If there are 2 CDNs configured, this view will show the statistis for both, and the graphs are stacked. On the left-hand side, the totals and immediate values as well as the percentage of total possible capacity are displayed. This view is update every 10 seconds. Server Checks¶ The server checks page is intended to give an overview of the Servers managed by Traffic Control as well as their status. This data comes from Traffic Ops extensions. Daily Summary¶ Displays daily max gbps and bytes served for all CDNs. In order for the graphs to appear, the ‘daily_bw_url’ and ‘daily_served_url’ parameters need to be be created, assigned to the global profile, and have a value of a grafana graph. For more information on configuring grafana, see the Traffic Stats section. Server¶ This view shows a table of all the servers in Traffic Ops. The table columns show the most important details of the server. The IPAddrr column is clickable to launch an ssh:// link to this server. The icon will link to a Traffic Stats graph of this server for caches, and the will link to the server status pages for other server types. Delivery Service¶ The fields in the Delivery Service view are: Delivery Service Types¶ One of the most important settings when creating the delivery service is the selection of the delivery service type. This type determines the routing method and the primary storage for the delivery service. Federations¶ Federations allow for other (federated) CDNs (at a different ISP, MSO, etc) to add a list of resolvers and a CNAME to a delivery service Traffic Ops. When a request is made from one of federated CDN’s clients, Traffic Router will return the CNAME configured in the federation mapping. This allows the federated CDN to serve the content without the content provider changing the URL, or having to manage multiple URLs. Before adding a federation in the Traffic Ops UI, a user with the federations role needs to be created. This user will be assigned to the federation and will be able to add resolvers to the federation via the Traffic Ops Federation API. Header Rewrite Options and DSCP¶ Most header manipulation and per-delivery service configuration overrides are done using the ATS Header Rewrite Plugin. Traffic Control allows you to enter header rewrite rules to be applied at the edge and at the mid level. The syntax used in Traffic Ops is the same as the one described in the ATS documentation, except for some special strings that will get replaced: The deliveryservice screen also allows you to set the DSCP value of traffic sent to the client. This setting also results in a header_rewrite rule to be generated and applied to at the edge. Note The DSCP setting in the UI is only for setting traffic towards the client, and gets applied after the initial TCP handshake is complete, and the HTTP request is received (before that the cache can’t determine what deliveryservice this request is for, and what DSCP to apply), so the DSCP feature can not be used for security settings - the TCP SYN-ACK is not going to be DSCP marked. Token Based Authentication¶ Token based authentication or signed URLs is implemented using the Traffic Server url_sig plugin. To sign a URL at the signing portal take the full URL, without any query string, and add on a query string with the following parameters: - Client IP address The client IP address that this signature is valid for. C=<client IP address> - Expiration The Expiration time (seconds since epoch) of this signature. E=<expiration time in secs since unix epoch> - Algorithm The Algorithm used to create the signature. Only 1 (HMAC_SHA1) and 2 (HMAC_MD5) are supported at this time A=<algorithm number> - Key index Index of the key used. This is the index of the key in the configuration file on the cache. The set of keys is a shared secret between the signing portal and the edge caches. There is one set of keys per reverse proxy domain (fqdn). K=<key index used> - Parts Parts to use for the signature, always excluding the scheme (http://). parts0 = fqdn, parts1..x is the directory parts of the path, if there are more parts to the path than letters in the parts param, the last one is repeated for those. Examples:1: use fqdn and all of URl path 0110: use part1 and part 2 of path only 01: use everything except the fqdn P=<parts string (0's and 1's)> - Signature The signature over the parts + the query string up to and including “S=”. S=<signature> Parent Selection¶ Parameters in the Edge (child) profile that influence this feature: Parameters in the Mid (parent) profile that influence this feature: Qstring Handling¶ Delivery services have a Query String Handling option that, when set to ignore, will automatically add a regex remap to that delivery service’s config. There may be times this is not preferred, or there may be requirements for one delivery service or server(s) to behave differently. When this is required, the psel.qstring_handling parameter can be set in either the delivery service profile or the server profile, but it is important to note that the server profile will override ALL delivery services assigned to servers with this profile parameter. If the parameter is not set for the server profile but is present for the Delivery Service profile, this will override the setting in the delivery service. A value of “ignore” will not result in the addition of regex remap configuration. Multi Site Origin¶ Note The configuration of this feature changed significantly between ATS version 5 and >= 6. Some configuration in Traffic Control is different as well. This documentation assumes ATS 6 or higher. See Configure Multi Site Origin for more details. Normally, the mid servers are not aware of any redundancy at the origin layer. With Multi Site Origin enabled this changes - Traffic Server (and Traffic Ops) are now made aware of the fact there are multiple origins, and can be configured to do more advanced failover and loadbalancing actions. A prerequisite for MSO to work is that the multiple origin sites serve identical content with identical paths, and both are configured to serve the same origin hostname as is configured in the deliveryservice Origin Server Base URL field. See the Apache Traffic Server docs for more information on that cache’s implementation. With This feature enabled, origin servers (or origin server VIP names for a site) are going to be entered as servers in to the Traiffic Ops UI. Server type is “ORG”. Parameters in the mid profile that influence this feature: Parameters in the deliveryservice profile that influence this feature: see Configure Multi Site Origin for a quick how to on this feature. Regex Remap Expression¶ The regex remap expression allows to to use a regex and resulting match group(s) in order to modify the request URIs that are sent to origin. For example: ^/original/(.*) Note If Query String Handling is set to 2 Drop at edge, then you will not be allowed to save a regex remap expression, as dropping query strings actually relies on a regex remap of its own. However, if there is a need to both drop query strings and remap request URIs, this can be accomplished by setting Query String Handling to 1 Do not use in cache key, but pass up to origin, and then using a custom regex remap expression to do the necessary remapping, while simultaneously dropping query strings. The following example will capture the original request URI up to, but not including, the query string and then forward to a remapped URI: ^/([^?]*).* Delivery Service Regexp¶ This table defines how requests are matched to the delivery service. There are 3 type of entries possible here: The Order entry defines the order in which the regular expressions get evaluated. To support CNAMES from domains outside of the Traffic Control top level DNS domain, enter multiple HOST_REGEXP lines. - Example: - Example foo. Note In most cases is is sufficient to have just one entry in this table that has a HOST_REGEXP Type, and Order 0. For the movies delivery service in the Kabletown CDN, the entry is simply single HOST_REGEXP set to .*\.movies\..*. This will match every url that has a hostname that ends with movies.cdn1.kabletown.net, since cdn1.kabletown.net is the Kabletown CDN’s DNS domain. Static DNS Entries¶ Static DNS entries allow you to create other names under the delivery service domain. You can enter any valid hostname, and create a CNAME, A or AAAA record for it by clicking the Static DNS button at the bottom of the delivery service details screen. Server Assignments¶ Click the Server Assignments button at the bottom of the screen to assign servers to this delivery service. Servers can be selected by drilling down in a tree, starting at the profile, then the cache group, and then the individual servers. Traffic Router will only route traffic for this delivery service to servers that are assigned to it. The Coverage Zone File and ASN Table¶ The Coverage Zone File (CZF) should contain a cachegroup name to network prefix mapping in the form: { "coverageZones": { "cache-group-01": { "coordinates": { "latitude": 1.1, "longitude": 2.2 }, "network6": [ "1234:5678::/64", "1234:5679::/64" ], "network": [ "192.168.8.0/24", "192.168.9.0/24" ] }, "cache-group-02": { "coordinates": { "latitude": 3.3, "longitude": 4.4 }, "network6": [ "1234:567a::/64", "1234:567b::/64" ], "network": [ "192.168.4.0/24", "192.168.5.0/24" ] } } } The CZF is an input to the Traffic Control CDN, and as such does not get generated by Traffic Ops, but rather, it gets consumed by Traffic Router. Some popular IP management systems output a very similar file to the CZF but in stead of a cachegroup an ASN will be listed. Traffic Ops has the “Networks (ASNs)” view to aid with the conversion of files like that to a Traffic Control CZF file; this table is not used anywhere in Traffic Ops, but can be used to script the conversion using the API. The script that generates the CZF file is not part of Traffic Control, since it is different for each situation. Note The "coordinates" section is optional and may be used by Traffic Router for localization in the case of a CZF “hit” where the zone name does not map to a Cache Group name in Traffic Ops (i.e. Traffic Router will route to the closest Cache Group(s) geographically). The Deep Coverage Zone File¶ The Deep Coverage Zone File (DCZF) format is similar to the CZF format but adds a caches list under each deepCoverageZone: { "deepCoverageZones": { "location-01": { "coordinates": { "latitude": 5.5, "longitude": 6.6 }, "network6": [ "1234:5678::/64", "1234:5679::/64" ], "network": [ "192.168.8.0/24", "192.168.9.0/24" ], "caches": [ "edge-01", "edge-02" ] }, "location-02": { "coordinates": { "latitude": 7.7, "longitude": 8.8 }, "network6": [ "1234:567a::/64", "1234:567b::/64" ], "network": [ "192.168.4.0/24", "192.168.5.0/24" ], "caches": [ "edge-02", "edge-03" ] } } } Each entry in the caches list is the hostname of an edge cache registered in Traffic Ops which will be used for “deep” caching in that Deep Coverage Zone. Unlike a regular CZF, coverage zones in the DCZF do not map to a Cache Group in Traffic Ops, so currently the deep coverage zone name only needs to be unique. If the Traffic Router gets a DCZF “hit” for a requested Delivery Service that has Deep Caching enabled, the client will be routed to an available “deep” cache from that zone’s caches list. Note The "coordinates" section is optional. Parameters and Profiles¶ Parameters are shared between profiles if the set of { name, config_file, value } is the same. To change a value in one profile but not in others, the parameter has to be removed from the profile you want to change it in, and a new parameter entry has to be created (Add Parameter button at the bottom of the Parameters view), and assigned to that profile. It is easy to create new profiles from the Misc > Profiles view - just use the Add/Copy Profile button at the bottom of the profile view to copy an existing profile to a new one. Profiles can be exported from one system and imported to another using the profile view as well. It makes no sense for a parameter to not be assigned to a single profile - in that case it really has no function. To find parameters like that use the Parameters > Orphaned Parameters view. It is easy to create orphaned parameters by removing all profiles, or not assigning a profile directly after creating the parameter. See also Profile Parameters in the Configuring Traffic Ops section. Tools¶ Generate ISO¶ Generate ISO is a tool for building custom ISOs for building caches on remote hosts. Currently it only supports Centos 6, but if you’re brave and pure of heart you MIGHT be able to get it to work with other unix-like OS’s. The interface is mostly self explainatory as it’s got hints. When you click the Download ISO button the folling occurs (all paths relative to the top level of the directory specified in _osversions.cfg_): - Reads /etc/resolv.conf to get a list of nameservers. This is a rather ugly hack that is in place until we get a way of configuring it in the interface. - Writes a file in the ks_scripts/state.out that contains directory from _osversions.cfg_ and the mkisofs string that we’ll call later. - Writes a file in the ks_scripts/network.cfg that is a bunch of key=value pairs that set up networking. - Creates an MD5 hash of the password you specify and writes it to ks_scripts/password.cfg. Note that if you do not specify a password “Fred” is used. Also note that we have experienced some issues with webbrowsers autofilling that field. - Writes out a disk configuration file to ks_scripts/disk.cfg. - mkisofs is called against the directory configured in _osversions.cfg_ and an ISO is generated in memory and delivered to your webbrowser. You now have a customized ISO that can be used to install Red Hat and derivative Linux installations with some modifications to your ks.cfg file. Kickstart/Anaconda will mount the ISO at /mnt/stage2 during the install process (at least with 6). You can directly include the password file anywhere in your ks.cfg file (usually in the top) by doing %include /mnt/stage2/ks_scripts/password.cfg What we currently do is have 2 scripts, one to do hard drive configuration and one to do network configuration. Both are relatively specific to the environment they were created in, and both are probably wrong for other organizations, however they are currently living in the “misc” directory as examples of how to do things. We trigger those in a %pre section in ks.cfg and they will write config files to /tmp. We will then include those files in the appropriate places using %pre. For example this is a section of our ks.cfg file: %include /mnt/stage2/ks_scripts/packages.txt %pre python /mnt/stage2/ks_scripts/create_network_line.py bash /mnt/stage2/ks_scripts/drive_config.sh %end These two scripts will then run _before_ anaconda sets up it’s internal structures, then a bit further up in the ks.cfg file (outside of the %pre %end block) we do an %include /mnt/stage2/ks_scripts/password.cfg ... %include /tmp/network_line %include /tmp/drive_config ... This snarfs up the contents and inlines them. If you only have one kind of hardware on your CDN it is probably best to just put the drive config right in the ks.cfg. If you have simple networking needs (we use bonded interfaces in most, but not all locations and we have several types of hardware meaning different ethernet interface names at the OS level etc.) then something like this: #!/bin/bash source /mnt/stage2/ks_scripts/network.cfg echo "network --bootproto=static --activate --ipv6=$IPV6ADDR --ip=$IPADDR --netmask=$NETMASK --gateway=$GATEWAY --ipv6gateway=$GATEWAY --nameserver=$NAMESERVER --mtu=$MTU --hostname=$HOSTNAME" >> /tmp/network.cfg # Note that this is an example and may not work at all. You could also put this in the %pre section. Lots of ways to solve it. We have included the two scripts we use in the “misc” directory of the git repo: - kickstart_create_network_line.py - kickstart_drive_config.sh These scripts were written to support a very narrow set of expectations and environment and are almost certainly not suitable to just drop in, but they might provide a good starting point. Queue Updates and Snapshot CRConfig¶ When changing delivery services special care has to be taken so that Traffic Router will not send traffic to caches for delivery services that the cache doesn’t know about yet. In general, when adding delivery services, or adding servers to a delivery service, it is best to update the caches before updating Traffic Router and Traffic Monitor. When deleting delivery services, or deleting server assignments to delivery services, it is best to update Traffic Router and Traffic Monitor first and then the caches. Updating the cache configuration is done through the Queue Updates menu, and updating Traffic Monitor and Traffic Router config is done through the Snapshot CRConfig menu. Queue Updates¶ Every 15 minutes the caches should run a syncds to get all changes needed from Traffic Ops. The files that will be updated by the syncds job are: - records.config - remap.config - parent.config - cache.config - hosting.config - url_sig_(.*).config - hdr_rw_(.*).config - regex_revalidate.config - ip_allow.config A cache will only get updated when the update flag is set for it. To set the update flag, use the Queue Updates menu - here you can schedule updates for a whole CDN or a cache group: - Click Tools > Queue Updates. - Select the CDN to queueu uodates for, or All. - Select the cache group to queue updates for, or All - Click the Queue Updates button. - When the Queue Updates for this Server? (all) window opens, click OK. To schedule updates for just one cache, use the “Server Checks” page, and click the in the UPD column. The UPD column of Server Checks page will change show a when updates are pending for that cache. Snapshot CRConfig¶ Every 60 seconds Traffic Monitor will check with Traffic Ops to see if a new CRConfig snapshot exists; Traffic Monitor polls Traffic Ops for a new CRConfig, and Traffic Router polls Traffic Monitor for the same file. This is necessary to ensure that Traffic Monitor sees configuration changes first, which helps to ensure that the health and state of caches and delivery services propagates properly to Traffic Router. See Traffic Router Profile for more information on the CRConfig file. To create a new snapshot, use the Tools > Snapshot CRConfig menu: - Click Tools > Snapshot CRConfig. - Verify the selection of the correct CDN from the Choose CDN drop down and click Diff CRConfig. On initial selection of this, the CRConfig Diff window says the following: There is no existing CRConfig for [cdn] to diff against… Is this the first snapshot??? If you are not sure why you are getting this message, please do not proceed! To proceed writing the snapshot anyway click the ‘Write CRConfig’ button below. If there is an older version of the CRConfig, a window will pop up showing the differences between the active CRConfig and the CRConfig about to be written. - Click Write CRConfig. - When the This will push out a new CRConfig.json. Are you sure? window opens, click OK. - The Successfully wrote CRConfig.json! window opens, click OK. Invalidate Content¶ Invalidating content on the CDN is sometimes necessary when the origin was mis-configured and something is cached in the CDN that needs to be removed. Given the size of a typical Traffic Control CDN and the amount of content that can be cached in it, removing the content from all the caches may take a long time. To speed up content invalidation, Traffic Ops will not try to remove the content from the caches, but it makes the content inaccessible using the regex_revalidate ATS plugin. This forces a revalidation of the content, rather than a new get. Note This method forces a HTTP revalidation of the content, and not a new GET - the origin needs to support revalidation according to the HTTP/1.1 specification, and send a 200 OK or 304 Not Modified as applicable. To invalidate content: - Click Tools > Invalidate Content - Fill out the form fields: - Select the Delivery Service - Enter the Path Regex - this should be a PCRE compatible regular expression for the path to match for forcing the revalidation. Be careful to only match on the content you need to remove - revalidation is an expensive operation for many origins, and a simple /.*can cause an overload condition of the origin. - Enter the Time To Live - this is how long the revalidation rule will be active for. It usually makes sense to make this the same as the Cache-Controlheader from the origin which sets the object time to live in cache (by max-ageor Expires). Entering a longer TTL here will make the caches do unnecessary work. - Enter the Start Time - this is the start time when the revalidation rule will be made active. It is pre-populated with the current time, leave as is to schedule ASAP. - Click the Submit button. Manage DNSSEC Keys¶ In order to support DNSSEC in Traffic Router, Traffic Ops provides some actions for managing DNSSEC keys for a CDN and associated Delivery Services. DNSSEC Keys consist of a Key Signing Keys (KSK) which are used to sign other DNSKEY records as well as Zone Signing Keys (ZSK) which are used to sign other records. DNSSEC Keys are stored in Traffic Vault and should only be accessible to Traffic Ops. Other applications needing access to this data, such as Traffic Router, must use the Traffic Ops DNSSEC APIs to retrieve this information. - To Manage DNSSEC Keys: - Click Tools -> Manage DNSSEC Keys - Choose a CDN and click Manage DNSSEC Keys - If keys have not yet been generated for a CDN, this screen will be mostly blank with just the CDN and DNSSEC Active? fields being populated. - If keys have been generated for the CDN, the Manage DNSSEC Keys screen will show the TTL and Top Level Domain (TLD) KSK Expiration for the CDN as well as DS Record information which will need to be added to the parent zone of the TLD in order for DNSSEC to work. The Manage DNSSEC Keys screen also allows a user to perform the following actions: Activate/Deactivate DNSSEC for a CDN Fairly straight forward, this button set the dnssec.enabled param to either true or false on the Traffic Router profile for the CDN. The Activate/Deactivate option is only available if DNSSEC keys exist for CDN. In order to active DNSSEC for a CDN a user must first generate keys and then click the Active DNSSEC button. Generate Keys Generate Keys will generate DNSSEC keys for the CDN TLD as well as for each Delivery Service in the CDN. It is important to note that this button will create a new KSK for the TLD and, therefore, a new DS Record. Any time a new DS Record is created, it will need to be added to the parent zone of the TLD in order for DNSSEC to work properly. When a user clicks the Generate Keys button, they will be presented with a screen with the following fields: - CDN: This is not editable and displays the CDN for which keys will be generated - ZSK Expiration (Days): Sets how long (in days) the Zone Signing Key will be valid for the CDN and associated Delivery Services. The default is 30 days. - KSK Expiration (Days): Sets how long (in days) the Key Signing Key will be valid for the CDN and associated Delivery Services. The default is 365 days. - Effective Date (GMT): The time from which the new keys will be active. Traffic Router will use this value to determine when to start signing with the new keys and stop signing with the old keys. Once these fields have been correctly entered, a user can click Generate Keys. The user will be presented with a confirmation screen to help them understand the impact of generating the keys. If a user confirms, the keys will be generated and stored in Traffic Vault. Regenerate KSK Regenerate KSK will create a new Key Signing Key for the CDN TLD. A new DS Record will also be generated and need to be put into the parent zone in order for DNSSEC to work correctly. The Regenerate KSK button is only available if keys have already been generated for a CDN. The intent of the button is to provide a mechanism for generating a new KSK when a previous one expires or if necessary for other reasons such as a security breach. When a user goes to generate a new KSK they are presented with a screen with the following options: - CDN: This is not editable and displays the CDN for which keys will be generated - KSK Expiration (Days): Sets how long (in days) the Key Signing Key will be valid for the CDN and associated Delivery Services. The default is 365 days. - Effective Date (GMT): The time from which the new KSK and DS Record will be active. Since generating a new KSK will generate a new DS Record that needs to be added to the parent zone, it is very important to make sure that an effective date is chosen that allows for time to get the DS Record into the parent zone. Failure to get the new DS Record into the parent zone in time could result in DNSSEC errors when Traffic Router tries to sign responses. Once these fields have been correctly entered, a user can click Generate KSK. The user will be presented with a confirmation screen to help them understand the impact of generating the KSK. If a user confirms, the KSK will be generated and stored in Traffic Vault. Additionally, Traffic Ops also performs some systematic management of DNSSEC keys. This management is necessary to help keep keys in sync for Delivery Services in a CDN as well as to make sure keys do not expire without human intervention. Generation of keys for new Delivery Services If a new Delivery Service is created and added to a CDN that has DNSSEC enabled, Traffic Ops will create DNSSEC keys for the Delivery Service and store them in Traffic Vault. Regeneration of expiring keys for a Delivery Service Traffic Ops has a process, controlled by cron, to check for expired or expiring keys and re-generate them. The process runs at 5 minute intervals to check and see if keys are expired or close to expiring (withing 10 minutes by default). If keys are expired for a Delivery Service, traffic ops will regenerate new keys and store them in Traffic Vault. This process is the same for the CDN TLD ZSK, however Traffic Ops will not re-generate the CDN TLD KSK systematically. The reason is that when a KSK is regenerated for the CDN TLD then a new DS Record will also be created. The new DS Record needs to be added to the parent zone before Traffic Router attempts to sign with the new KSK in order for DNSSEC to work correctly. Therefore, management of the KSK needs to be a manual process.
https://traffic-control-cdn.readthedocs.io/en/latest/admin/traffic_ops/using.html
2018-06-17T23:51:08
CC-MAIN-2018-26
1529267859904.56
[]
traffic-control-cdn.readthedocs.io
The School Nutrition Foundation (SNF) Announces 2018 Josephine Martin National Policy Fellow! SNF is excited to announce that Christina Osborne from Fayetteville, N.C., has been named the 2018 Josephine Martin National Policy Fellow. Just TWO Months Left to Apply for Professional Development Scholarships Achieve your dreams of continuing education with the help of an education scholarship from SNF! The deadline to apply is April 2, 2018. Last Chance for Award Nominations! Recognize your colleague today for SNA’s Employee of the Year, Manager of the Year, and Director of the Year awards. Nominations are being accepted online until March 1, 2018. Learn More ANC 2018: Registration is Now Open! Registration for SNA’s 2018 Annual National Conference (ANC) is now open! ANC is your opportunity to unite with the school nutrition community for four days to share bright ideas, learn best practices and illuminate a bold vision for foodservice in K12 schools. Learn More Bonus Virtual Expo 2018—Get In On the Action! The future is here! SNA’s free 2018 Virtual Expo is now open through March 16! Register now to get in on the action happening this month with 33 booths open and a live chat with exhibitors next week on Tuesday, February 20. Increase Breakfast Participation; Promote NSBW! The National School Breakfast Week (NSBW) kick-off on March 5 is fast approaching! Celebrate with this year's emoji-inspired theme, “I Heart School Breakfast.” Learn More Your Vote Counts:. Learn More Register for SNA’s Best of #SNIC18 Webinar: Simple Tech Tools from Your Nerdy Best Friend Don’t miss next week’s webinar to learn more about the latest tech tools for school nutrition professionals—plus register for our March webinar to learn how other districts have operated Community Eligibility with ISPs below 60%. Learn More Use SNA’s Block Grant Calculator! SNA’s easy-to-use, printable Block Grant Calculator can you give you a clearer estimate of how much your local schools and students stand to lose. LAC attendees are encouraged to bring their eye-opening results with them to the conference. Learn More. Learn More LAC 2018 Resources SNA members are encouraged to visit the LAC Resources page to access tools to maximize your experience at LAC 2018 and make the most of your time in the nation’s capital. Learn More STEPS February Challenge: Spread Some Love! In honor of Valentine’s Day, spread good feeling all around you. Do something nice, call an old friend, write a card for a co-worker, or cook dinner for your spouse. When you do nice things for others, you feel better about yourself! Give yourself ten (10) points for every act of kindness or friendly gesture you do in February. At the end of the month, be sure to add up your points and enter them online—you could win a prize! Learn More Next Week: Best of #SNIC18: Simple Tech Tools from Your Nerdy Best Friend February 15, 2018 Conference Scholarships Deadline for ANC 2018 February 15-28, 2018 Online Voting Open for SNA’s Election February 20, 2018 Virtual Expo "Live" Day February 28, 2018 SN Showcase Poster Proposal Deadline March 1, 2018 SNA Member Awards Nominations Deadline March 14, 2018 Community Eligibility Series, Part 2: Making It Work With ISPs Below 60% Read the February 2018 Issue The best-laid plans often begin with strategy—the topic of the February 2018 issue of School Nutrition magazine. Use our detailed reasoning, provided guides and expansive glossary to plot a roadmap for achievable success within your school nutrition operation. Officials: Free Meals Program a Success in Bradford School District, The Bradford Era Leftover Food Bypasses Garbage to Fill Nutritional Goals, The Bismarck Tribune Lawmakers Push for Federal Nutrition Bill for Native Youth, The Associated Press Four Knox County Schools Honored by USDA in Health Initiative, WATE ABC News New Spartanburg Co. School Program Aims to Give More Students Free Breakfast, WMBF]]
http://docs.schoolnutrition.org/newsroom/snexpress/v17/snexpress-2018-02-14.html
2018-06-17T23:30:55
CC-MAIN-2018-26
1529267859904.56
[]
docs.schoolnutrition.org
Partners¶ CIViC partners are generally those organizations and individuals who are engaged in reciprocal collaboration that leads to substantial improvements of the CIViC codebase, application and/or curated content. CIViC is actively working with these organizations to advance our mission of improving clinical interpretation of variants for cancer. In some cases specific feature developments or research questions may be supported by co-funded grants. ClinGen Somatic Working Group¶ The mission of the ClinGen Cancer Somatic Workgroup (SWG) is to facilitate the development of data curation interfaces and standards for interpretation of somatic changes and their clinical actionability in order to enhance the usability, dissemination and implementation of cancer somatic changes in the ClinGen resource to improve healthcare. The Cancer Somatic Workgroup aims to collaborate with expert groups to develop processes that support accurate determination of clinical relevance of somatic changes for use by physicians, clinical laboratories, researchers, and guideline-developing groups. The Cancer Somatic Workgroup, under the leadership of Dr Subha Madhavan and Dr Shashikant Kulkarni, has adopted use of the CIViC curation platform for several of its task teams and Variant Curation Expert Panels (VCEP; in development). CIViC is working closely with SWG experts to develop their VCEP curation process and automate submission of somatic variant interpretations to ClinVar. Personalized Oncogenomics at BC Cancer Agency¶ The BC Cancer Agency’s Personalized Onco-Genomics (POG) program is a clinical research initiative that is embedding genomic sequencing into real-time treatment planning for BC patients with incurable cancers. Cancer is a complex biological process. The POG program categorizes the development of treatment strategies to block its growth, identify clinical trials that the patient may benefit from and potentially identify less toxic and more effective therapeutic options. The POG program, under the leadership of Dr Steven Jones and Dr Janessa Laskin, have partnered with CIViC to study the impact of including CIViC interpretations in our clinical decision making. Variant Interpretation for Cancer Consortium¶. The Variant Interpretation for Cancer Consortium (VICC) was founded to bring together the leading institutions that are independently developing comprehensive cancer variant interpretation databases. CIViC is a founding member of the VICC. Alissa Clinical Informatics Platform at Agilent Technologies¶ To widen the reach and visibility of CIViC, and to enable its use as a valuable resource to the community of diagnostic labs, a partnership was set up with Agilent Technologies. The CIViC database has been integrated into their Alissa Clinical Informatics Platform. This cloud-based, clinical-grade software platform is geared towards routine diagnostic labs and allows molecular pathologists to automate their variant filtration and classification workflow, and draft lab reports. As of version 5.0 of Bench Lab, labs can automate their variant assessment protocol to assess the molecular profile of a sample and automatically flag presence of prognostic, diagnostic and therapeutic evidence in the CIViC database. Users are able to restrict the CIViC search to relevant tumor and tissue type. Secondly, the variant review tools in the platform now allow users to review the CIViC content and assess and select relevant portions for inclusion into draft lab reports. Discussions are underway to support the submission of new interpretation, developed by Alissa users, back to the CIViC knowledgebase. VHL Variant Curation Expert Panel¶ The ClinGen VHL Variant Curation Expert Panel, under the leadership of Dr Raymond Kim, is an Expert Panel for VHL as part of the ClinGen Clinical Domain Working Groups. This panel consists of expert clinicians, clinical laboratory diagnosticians, and researchers working to implement VHL specific standards and protocols for assessing VHL variant pathogenicity. CIViC is working with the VHL Expert panel to develop these rules, catalogue published evidence for VHL variant pathogenicity, and submit exemplar variants to ClinVar for use in the ClinGen curation process.
https://civic.readthedocs.io/en/latest/about/partners.html
2020-05-25T13:42:06
CC-MAIN-2020-24
1590347388758.12
[]
civic.readthedocs.io
GBAR (Graph Backup And Restore), is an integrated tool for backing up and restoring the data and data dictionary (schema, loading jobs, and queries) of a single TigerGraph node. In Backup mode, it packs TigerGraph data and configuration information in a single file onto disk or a remote AWS S3 bucket. Multiple backup files can be archived. Later, you can use the Restore mode to rollback the system to any backup point. This tool can also be integrated easily with Linux cron to perform periodic backup jobs. The current version of GBAR is intended for restoring the same machine that was backed up. For help with cloning a database (i.e., backing up machine A and restoring the database to machine B), please contact [email protected] . SynopsisUsage: gbar backup [options] -t <backup_tag>gbar restore [options] <backup_tag>gbar configgbar listOptions:-h, --help Show this help message and exit-v Run with debug info dumped-vv Run with verbose debug info dumped-y Run without prompt-t BACKUP_TAG Tag for backup file, required on backup The -y option forces GBAR to skip interactive prompt questions by selecting the default answer. There is currently one interactive question: At the start of restore, GBAR will always ask if it is okay to stop and reset the TigerGraph services: (y/N)? The default answer is yes. Config For S3 configuration, the AWS access key and secret are not provided, then GBAR will use the attached IAM role. You can specify the number of parallel processes for backup and restore. If GSQL authentication is enabled, you must provide a username and password. Backup A backup archive is stored as several files in a folder, rather than as a single file. Distributed backup performance is improved. Restore To select a backup archive to restore, the full backup name must be specified. Restore asks fewer interactive questions than before: The user must provide a full archive name; there is no option to select the latest from a set of archives. GBAR restore does not estimate the the uncompressed data size and check whether there is sufficient disk space. gbar config GBAR Config must be run before using GBAR backup/restore functionality. GBAR Config will open the following configuration template interactively in a text editor. Using the comments as a guide, edit the configuration file to set the configuration parameters according to your own needs. Synopsis# Configure file for GBAR# you can specify storage method as either local or s3.# Assign True if you want to store backup files on local disk.# Assign False otherwise, in this case no need to set path.store_local: Falsepath: PATH_TO_BACKUP_REPOSITORY# Assign True if you want to store backup files on AWS S3.# Assign False otherwise, in this case no need to set AWS key and bucket.# AWS access key and secret is optional. If not specified, it will use# attached IAM role of the instance.store_s3: Falseaws_access_key_id:aws_secret_access_key:bucket: YOUR_BUCKET_NAME# The maximum timeout value to wait for core modules(GPE/GSE) on backup.# As a roughly estimated number,# GPE & GSE backup throughoutput is about 2GB in one minute on HDD.# You can set this value according to your gstore size.# Interval string could be with format 1h2m3s, means 1 hour 2 minutes 3 seconds,# or 200m means 200 minutes.# You can set to 0 for endless waiting.backup_core_timeout: 5h# The number of processes to be created during compressing backup archive.# Compressing in parallel can gain improved performance.# The same number of processes will be spawned for decompression on restore.compress_process_number: 8# Need to put gsql user/passwd here if gsql authentication is ongsql_user:gsql_passwd: If you do not wish to store the username and password in the config file, you can prepend the user login credentials, as environment variables, to the gbar command you wish to run. Leaving the config file's username and password fields blank will require you to manually prepend the login information to the gbar command, as seen below. $ GSQL_USERNAME=tigergraph GSQL_PASSWORD=tigergraph gbar backup -t daily gbar backup -t <backup_tag> The backup_tag acts like a filename prefix for the archive filename. The full name of the backup archive will be <backup_tag>-<timestamp>, which is a subfolder of the backup repository. If store_local is true, the folder is a local folder on every node in a cluster, to avoid massive data moving across nodes in a cluster. If store_s3 is true, every node will upload data located on the node to the s3 repository. Therefore, every node in a cluster needs access to Amazon S3. If IAM policy is used for authentication, every node in the cluster needs to be attached with the IAM policy. GBAR Backup performs a live backup, meaning that normal operations may continue while backup is in progress. When GBAR backup starts, it sends a request to gadmin , which then requests the GPE and GSE to create snapshots of their data. Per the request, the GPE and GSE store their data under GBAR’s own working directory. GBAR also directly contacts the Dictionary and obtains a dump of its system configuration information. In addition, GBAR records TigerGraph system version. Then, GBAR compresses each of these data and configuration information files in tgz format and stores them in the <backup_tag>-<timestamp> subfolder on each node. As the last step, GBAR copies that file to local storage or AWS S3, according to the Config settings, and removes all temporary files generated during backup. The current version of GBAR Backup takes snapshots quickly to make it very likely that all the components (GPE, GSE, and Dictionary) are in a consistent state, but it does not fully guarantee consistency. It’s highly recommended when issuing the backup command, no active data update is in progress. A no-write time period of about 5 seconds is sufficient. Backup does not save input message queues for REST++ or Kafka. gbar list This command lists all generated backup files in the storage place configured by the user. For each file, it shows the file’s full tag, file’s size in human readable format, and its creation time. gbar restore <archive_name> Restore is an offline operation, requiring the data services to be temporarily shut down. The user must specific the full archive name ( <backup_tag>-<timestamp> ) to be restored. When GBAR restore begins, it first searches for a backup archive exactly matching the archive_name supplied in the command line. Then it decompresses the backup files to a working directory. Next, GBAR will compare the TigerGraph system version in the backup archive with the current system's version, to make sure that backup archive is compatible with that current system. It will then shut down the TigerGraph servers (GSE, RESTPP, etc.) temporarily. Then, GBAR makes a copy of the current graph data, as a precaution. Next, GBAR copies the backup graph data into the GPE and GSE and notifies the Dictionary to load the configuration data. When these actions are all done, GBAR will restart the TigerGraph servers. The primary purpose of GBAR is to save snapshots of the data configuration of a TigerGraph system, so that in the future the same system can be rolled back (restored) to one of the saved states. A key assumption is that Backup and Restore are performed on the same machine, and that the file structure of the TigerGraph software has not changed. Specific requirements are listed below. Restore Requirements and Limitations Restore is supported if the TigerGraph system has had only minor version updates since the backup. TigerGraph version numbers have the format X.Y[.Z], where X is the major version number and Y is the minor version number. Restore is supported if the backup archive and the current system have the same major version number AND the current system has a minor version number that is greater than or equal to the backup archive minor version number. Backup archives from a 0.8.x system cannot be Restored to a 1.x system. Examples: Restore needs enough free space to accommodate both the old gstore and the gstore to be restored. The following example describes a real example, to show the actual commands, the expected output, and the amount of time and disk space used, for a given set of graph data. For this example, and Amazon EC2 instance was used, with the following specifications: Single instance with 32 vCPU + 244GB memory + 2TB HDD. Naturally, backup and restore time will vary depending on the hardware used. To run a daily backup, we tell GBAR to backup with the tag name daily . $ gbar backup -t daily[23:21:46] Retrieve TigerGraph system configuration[23:21:51] Start workgroup[23:21:59] Snapshot GPE/GSE data[23:33:50] Snapshot DICT data[23:33:50] Calc checksum[23:37:19] Compress backup data[23:46:43] Pack backup data[23:53:18] Put archive daily-20180607232159 to repo-local[23:53:19] Terminate workgroupBackup to daily-20180607232159 finished in 31m33s. The total backup process took about 31 minutes, and the generated archive is about 49 GB. Dumping the GPE + GSE data to disk took 12 minutes. Compressing the files took another 20 minutes. To restore from a backup archive, a full archive name needs to be provided, such as daily-20180607232159 . By default, restore will ask the user to approve to continue. If you want to pre-approve these actions, use the "-y" option. GBAR will make the default choice for you. $ gbar restore daily-20180607232159[23:57:06] Retrieve TigerGraph system configurationGBAR restore needs to reset TigerGraph system.Do you want to continue?(y/N):y[23:57:13] Start workgroup[23:57:22] Pull archive daily-20180607232159, round #1[23:57:57] Pull archive daily-20180607232159, round #2[00:01:00] Pull archive daily-20180607232159, round #3[00:01:00] Unpack cluster data[00:06:39] Decompress backup data[00:17:32] Verify checksum[00:18:30] gadmin stop gpe gse[00:18:36] Snapshot DICT data[00:18:36] Restore cluster data[00:18:36] Restore DICT data[00:18:36] gadmin reset[00:19:16] gadmin start[00:19:41] reinstall GSQL queries[00:19:42] recompiling loading jobs[00:20:01] Terminate workgroupRestore from daily-20180607232159 finished in 22m55s.Old gstore data saved under /home/tigergraph/tigergraph/gstore with suffix -20180608001836, you need to remove them manually. For our test, GBAR restore took about 23 minutes. Most of the time (20 minutes) was spent decompressing the backup archive. Note that after the restore is done, GBAR informs you were the pre-restore graph data (gstore) has been saved. After you have verified that the restore was successful, you may want to delete the old gstore files to free up disk space.
https://docs-beta.tigergraph.com/admin/admin-guide/system-management/backup-and-restore
2020-05-25T13:01:58
CC-MAIN-2020-24
1590347388758.12
[]
docs-beta.tigergraph.com
TOPICS× Partial ad-break insertion TVSDK provides a TV-like experience of being able to join in the middle of an ad, in live streams. The partial ad-break insertion feature allows you to mimic a TV-like experience where, if the client starts a live stream inside a mid-roll, it starts playing within that mid-roll. It is similar to switching to a TV channel and the commercials run seamlessly. For example, If a user joins in the middle of a 90 seconds ad break (three 30 seconds ads), 10 seconds into the second ad (that is, at 40 seconds into the ad break), the second ad is played for the remaining duration (20 seconds) followed by the third ad. Track Ad Ad trackers for the partially played ad (the second ad) are not triggered. In the example above, only the tracker for the third ad is triggered. Behavior with pre-roll The feature works when a pre-roll ad is played with live content. The stream plays from the live point after pre-roll ad ends. Ad break events is sent even if there are no complete ads in this ad break. An ad is considered partial ad, if is skipped for more than one second. For example, if a viewer watches an ad that they skipped for 800 ms, it is considered as a complete ad.
https://docs.adobe.com/content/help/en/primetime/programming/tvsdk-3x-ios-prog/advertising/ios-3x-partialad-break-insertion.html
2020-05-25T15:40:30
CC-MAIN-2020-24
1590347388758.12
[]
docs.adobe.com
Key concepts Overview addresses the following business goals: - Thousands of pre-built runbooks third-party IT service management applications - Upgrade or change IT management applications without redesigning workflows - Improve quality of service TrueSight Orchestration in containers Repository, Configuration Distribution Peer (CDP), and TrueSight Orchestration Content (includes adapters and modules) are now available in Docker containers.. TrueSight Orchestration uses Docker to run and host pre-built images. Before starting with TrueSight Orchestration in containers, it is assumed that you are familiar with the Docker platform and containers in general. For more information about Docker containers, see Docker documetation . Some of the considerations if you choose to install containerized platform are: - In the first release of containers, TrueSight Orchestration is providing images only for the repository, CDP, and content on an internal Apache Derby database. - Images are available on the BMC's Electronic Product Download website. - Images contain pre-installed components that you only need to deploy in your container environments. - Apart from the installation process, features and functions for platform components remains the same in the containerized or classic mode. - Containerized platform supports high availability. - Migration of platform to containers is not available in this release. - In this release, for multiple instances of the CDP, data replication is not supported. However, you can manually configure data to ensure same data is replicated across the instances. Use the following topics to understand more about TrueSight Orchestration in containers: How BMC provides value BMC provides a one-stop solution that is single suite of products that supports all your ITIL service support needs — from architecture to integration and implementation to support. Video overview The following BMC Communities videos provide a general overview of . This BMC Communities video (5:11) discusses element managers and their APIs and how to integrate and automate them using . The next BMC Communities video (2:58) introduces building workflows for element managers, automating them and exposing their APIs. The following BMC Communities video (3:26) describes triggering workflows based on element manager events and introduces adapters.
https://docs.bmc.com/docs/TruesightOrchestrationContent/201901/key-concepts-823452511.html
2020-05-25T14:53:37
CC-MAIN-2020-24
1590347388758.12
[]
docs.bmc.com
How to Add Images to Text in Layers Widgets This tutorial offers some insight into how to build content with the Layers Content Widget and achieve layouts that mix images and text. We are often asked why the textarea editor toolbars don’t have an image option. This editor toolbar is not the same as your WordPress editor – it cannot handle complex operations like connecting to your WordPress media library by default. Instead, your widget column has a Featured Media button in the options toolbar to add an image to the column. This ensures the content is well balanced, has appropriate spacing, and is responsive for mobile devices. Rather than pasting in one giant wall of text into a single column, this widget allows you to style each section of the content separately, down to single paragraphs if you like. But what if you want to add images inside the textarea? The only reason this should ever be necessary is if your widget does not have an options toolbar with a featured media button. Below we will explore both options – building your content with the Content widget, and workarounds for widgets without Featured Media options. Featured Media The Content Widget is designed to support balanced layouts for text and images, which can be configured in hundreds of ways using a combination of columns, column widths image and text alignment, and is your most powerful building tool. The following example demonstrates how a Content Widget is setup using a Masonry layout, with varying Image Layouts in the columns: Any image can also be a video, Soundcloud player, tweet or any oEmbed content by using the Video URL (oEmbed) field in the Featured Media drop-down. Learn more about the Content Widget, or browse our Tutorials for layout tips and customizations. Images in Widget Excerpts Not all widgets have a Featured Media button on the excerpts (they should!) , so as a workaround you can insert most basic HTML into the widget’s text or excerpt areas by using the Code view button on the editor toolbar, including image tags. The following will walk you through adding an image tag to your widget’s text area that aligns the image left and allows the text to wrap around it automatically. - Every Layers widget should have a Background option on the right-hand design bar where you can click to select an image. This opens the WordPress media library where you can upload or browse for an existing image. After selecting the image thumbnail, copy the URL from the right side, then click the X at top-right to close the media library - Add your text to your text area and format it as desired. - Click the </> button on the text area’s editor toolbar where you want to insert the image to switch to code view. This works just like the Text tab of the WordPress post editor. - Place your cursor at the start of the text, outside of the <p> tag - Enter a basic img tag, which looks like this: <img class="" src=""/> Simple right? - You need to set two main things in this tag, the class and the src . Place your cursor inside the quotes after the src attribute and paste your image URL - Next, place your cursor inside the quotes after the class attribute and enter any one of the following class names, which are provided by WordPress: - alignleft - alignright - aligncenter - alignnone - Cick the code button </> again to switch back to visual editing. - Your image will now appear inline with the content: There are many drawbacks to doing it this way, including a lack of padding on the image and little control over scaling or ensuring the image is responsive on mobiles. To get around this, you can setup the content in a draft post in WordPress first. This allows you to size the image and add padding if needed. To avoid having to reformat the text after it is pasted, be sure to highlight separate paragraphs and select the paragraph style in the editor. Click the Text view to copy and paste the content into your widget’s Code view. Featured Media vs Images in Content If your widget has a Featured Media option, just about any combination of text and image can be archived using the widget’s controls vs adding the image directly to the editor. Using the above example with the left-aligned cat and our text, here is how it looks using a Content widget instead: As you can see, you can still “wrap” text around images or place them virtually anywhere in the text by dividing them up into columns to gain better control over placement. To achieve this, we have four columns: - The first column is set to 12 of 12 to be full width and uses a featured image of the cat, left aligned with justified text. By default, the Image Left or Image Right layouts will balance your image and text equally, the same as having two 6 of 12 columns side by side. - The second column has only text and is also set to 12 of 12 with justified text. - The third is just justified text, set to 8 of 12. - The fourth is a 4 of 12 width with only a Featured Image, set to Image Bottom. Did you know? Our friends at Jetpack are doing some incredible work to improve the WordPress experience. Check out Jetpack and improve your site's security, speed and reliability.
https://docs.layerswp.com/doc/how-to-add-images-to-text-in-layers-content-widget/
2020-05-25T14:25:29
CC-MAIN-2020-24
1590347388758.12
[array(['https://docs.layerswp.com/wp-content/uploads/content-layout-magazine.jpg', 'content-layout-magazine'], dtype=object) array(['https://docs.layerswp.com/wp-content/uploads/content-imgalign.jpg', 'content-imgalign'], dtype=object) array(['https://docs.layerswp.com/wp-content/uploads/content-layout-columns.jpg', 'content-layout-columns'], dtype=object) array(['https://refer.wordpress.com/wp-content/uploads/2018/02/leaderboard-light.png', 'Jetpack Jetpack'], dtype=object) ]
docs.layerswp.com
API Reference ModulesModules The AWS Construct Library is organized into several modules. They are named like this: - aws-xxx: service package for the indicated service. This package will contain constructs to work with the given service. - aws-xxx¹: a little superscript 1 indicates that his package only contains CloudFormation Resources (for now). - aws-xxx-targets: integration package for the indicated service. This package will contain classes to connect the constructs in the "aws-xxx" package to other AWS services it can work with. - xxx: packages that don't start "aws-" are AWS CDK framework packages. Module ContentsModule Contents Modules contain the following types: - Constructs - All higher-level constructs in this library. - Other Types - All non-construct classes, interfaces, structs and enums that exist to support the constructs. - CloudFormation Resources - All constructs that map directly onto CloudFormation Resources. We recommend that you read the CloudFormation Resource and Property Type Reference for details on these resources. - CloudFormation Property Types - All structs that are used by the CloudFormation Resource constructs. Constructs take a set of (input) properties in their constructor; the set of properties (and which ones are required) can be seen on a construct's documentation page. The construct's documentation page also lists the available methods to call and the properties which can be used to retrieve information about the construct after it has been instantiated. Every type's page has a table at the top with links to language-specific documentation on the type.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-construct-library.html
2020-05-25T14:56:57
CC-MAIN-2020-24
1590347388758.12
[]
docs.aws.amazon.com
Technical overview Warning Editing the registry incorrectly can cause serious problems that may require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of the Registry Editor can be solved. Use the Registry Editor at your own risk. Be sure to back up the registry before you edit it. The RealTime Optimization Pack offers clear, crisp high-definition audio and video calls with Microsoft Skype for Business in an optimized architecture. Users can seamlessly participate in audio-video or audio-only calls to and from other: - Skype for Business users - Microsoft Lync users - Standards-based video desktop and conference room multipoint control unit (MCU) systems - Standalone IP phones compatible with Skype for Business All audio and video processing is offloaded from the Server to the end-user device or terminal. This optimizes the quality of the call with minimal impact on the server scalability. Key featuresKey features The Optimization Pack provides Citrix Virtual Apps and Desktops customers the following key features: - Optimizes Skype for Business audio and video calls on Windows, Mac, Chrome OS, and Linux devices by redirecting media processing to the user device. Our partner, Dell, supports Wyse ThinOS. - Co-developed with Microsoft, who developed and maintains the native Skype for Business client user interface. The advantage is that there is no UI hooking by the Citrix software. Users see the familiar native Skype for Business interface. - Compatible with Skype for Business Server 2019, Skype for Business Server 2015, Lync Server 2013, and Skype for Business Online (Office 365). - Enables call initiation from the Skype for Business dialpad, dial-in bar, contacts list, Conversation window, and Outlook or other Office application. - Supports all Skype for Business calling and conferencing scenarios. That includes audio and video calling, hold, transfer, call forking and redirection, active speaker conferencing, and simulcast video. - Compatible with Skype for Business protocols for networking, media encryption (SRTP/AES), firewall traversal (STUN/TURN/ICE), and bandwidth management. - Forwards device location information to the Skype for Business client, to support Emergency Services (for example, E911) and Location Based Routing (LBR). - Call Admission Control on the Skype for Business server improves the media quality in enterprise networks. It does so by tracking media bandwidth usage and denying calls that would overload the network by using too much bandwidth. - Call Admission Control works in all network configurations supported by Microsoft. That is, multiple regions, sites, links, routes, policies, and so forth. It works for both on-premises and remote endpoints. On remote endpoints, as with native Skype for Business clients running remotely, only internal portions of the media path are subject to the Call Admission Control bandwidth policies. - Support for Skype for Business calls when the Edge server is not reachable. In these cases, the Optimization Pack goes into fallback mode and audio and video processing occurs on the server. - Supports these audio codecs: SILK, G.771, G.722, G.722.1, G.722c, and RT-Audio. We don’t support the G.722 Stereo and the Siren low bandwidth codec. This support enables voice communications over a wide range of network environments, including the public internet and mobile networks. - Field-proven compatibility with a broad range of audio devices, conferencing bridges, gateways, and server and network-based recording solutions. For recommended products, see Citrix Ready Marketplace. - Simulcast video transmission (multiple concurrent video streams) to optimize the video quality on conference calls and Skype Meetings. - Uses hardware H.264 encoding on Windows devices that support AMD VCE or Intel Quick Sync, subject to compatibility. We recommend using the latest driver versions. - Supports RT-Video, H.264 UC, H.264 Scalable Video Coding (SVC), and H.264 Advanced Video Coding (AVC). Video call rates range from 128 kb/s to 2048 kb/s. All video is encoded at up to 30 fps (depending on the webcam used) and transmitted over RTP/UDP (preferred) or TCP. - Uses the hardware acceleration capabilities of USB Video Class (UVC) 1.1/1.5 H.264 hardware-encoding webcams with Windows and Linux devices (not including the Citrix Ready workspace hub). - The Optimization Pack takes advantage of the H.264 hardware encoding functionality of the Logitech C930e and C925e cameras on conference calls that use simulcast video. The hardware-encoding capability of these cameras is available when used with Windows and Linux devices. H.264 hardware encoding and decoding on Linux supporting AMD Video Coding Engine (VCE). The Hardware decoding on Linux is disabled by default. The current AMD driver OMX decoder in Linux, decodes video streams with high latency (up to 500 ms). You can enable the hardware decoding using the registry: HKEY_CURRENT_USER\Software\Citrix\HDXRTConnector\MediaEngine\ Name: DisableLinuxAMDH264HardwareDecoding Type: DWORD Data: 0 – enabled 1 or no value – disable - Supports a wide range of video resolutions, from 320x180 to 1920x1080, at up to 30 frames per second. - Supports most webcams, including built-in webcams on Windows devices and built-in webcams (Facetime cameras) on Mac devices. - Improves audio and video quality over lossy connections by enabling forward error correction (FEC). - In fallback mode, generic HDX RealTime (the Optimized-for-Speech codec) handles Echo Cancellation. Therefore, the RealTime Optimization Pack Echo Cancellation feature is automatically disabled and this option is grayed out under Settings in the Optimization Pack notification area icon. - When enabled by administrators, all audio and video calls made with the Optimization Pack inform the Skype for Business server infrastructure about the bandwidth usage. The calls follow all bandwidth policy constraints, including: - Limits audio and video bandwidth as required by the policies. - Downgrades video calls to audio only if the bandwidth for video is not available. A generic message displays. - Reroutes the call through the internet when the bandwidth on the corporate network is not available. A generic message displays. - Reroutes the call to voicemail when the bandwidth is not available anywhere. A generic message displays. - Reports the Call Admission Control bandwidth constraints to the Quality-of-Experience monitoring database. - Supports Quality of Service (QoS) by observing the audio and video port ranges configured on the Skype for Business server (see and). - Supports Differentiated Services Code Point (DSCP) marking for media packets. For Windows, distribute the QoS policies to the endpoints. For Linux, Chrome OS, and Mac OS X, there are Optimization Pack registry settings that must be applied in the user profile on the server. For more information, see Citrix Knowledge Base article. - Optimizes the Quality-of-Experience (QoE) through various techniques, including: - Adaptive jitter buffer - Packet loss concealment - Call rate adaptation - The Quality of Experience reports specifies the RealTime Optimization Pack mode (optimized or non-optimized).The endpoint operating system entry is added with a prefix, which specifies the optimized versus non-optimized state of the call. Optimized – HDXRTME: <OSversion> Example in a report: HDXRTME: Windows 10 Pro,Windows 10 Pro,No Service Pack WOW64 Unoptimized – HDXRTC: <OSversion> Example in a report: HDXRTC: Windows 7 Enterprise,Windows 7,SP1 WOW64 Supports Intel-based x86 Chromebook devices that can run Android apps (ARC++), used with the minimum version of the Citrix Workspace app 1809 for Android. Both the Workspace app and the RealTime Media Engine are released through the Play Store. For a list of supported Chromebooks and Chromeboxes (only listed as a stable channel), see Chrome OS Systems Supporting Android Apps. For Chromebook limitations, see Limitations. - If there is no RealTime Media Engine present on the user device, provides fallback to server-side media processing (Generic RealTime). - The RealTime Media Engine uses the Citrix Workspace app for Windows auto-update capability and policy controls. - Single download, single install bundle of Citrix Workspace app for Windows and the RealTime Media Engine. The single install bundle is ideal for first-time users on unmanaged devices. LimitationsLimitations When you deliver the Skype for Business client in a virtualized environment, there are a few feature differences. For more information, see Citrix Knowledge Base article. On Linux terminals, the RealTime Media Engine installer disables multimedia redirection in the Citrix Workspace app for Linux. Doing so avoids the Optimization Pack and the Citrix Workspace app for Linux/Unix getting into a conflict when accessing video devices. HDXRTME Users can enable multimedia redirection in the module.ini file. Enabling multimedia redirection allows the RealTime Media Engine and Citrix Workspace app for Linux to co-exist. Support for coexistence is included in the Citrix Workspace app for Linux 1810 or later with any version of the RealTime Media Engine. - If you have more than one camera connected to an endpoint and you want to use the second camera for videos or video previews, open Tools > Video Device Settings and select the camera and click OK. - The RealTime Media Engine doesn’t support these features on Chromebooks: - External USB webcams. - Camera encoding USB Video Class (UVC) 1.1. - Device enumeration and switching from Skype for Business settings. Only default devices are used. - G722.1C, RTAudio, and RTVideo codecs. - Human interface devices, auto gain control, and Call Admission Control. - In fallback mode, webcam and audio devices are not available because of limitations in the Citrix Workspace app for Android. - Simulcast support in multiparty video conference calls. - When using an HP T730 thin client with Windows 10 and a Logitech C925e web cam to make a video call, and then resizing the window to high definition resolution, a 30 fps video stream is sent as a 24 fps video stream. - If Skype for Business is running locally on your device, right-click the Skype for Business icon in the system tray and Exit the application. Interaction issues are likely when running Skype for Business locally while it is also running in the data center as a hosted application. - Microsoft does not support the Lync and Skype for Business basic client with the Optimization Pack. Workaround: Use the full version of Skype for Business. - The Optimization Pack does not support direct media connections to and from public switched telephone network (PSTN) gateways. There is an optional feature of Skype for Business, known as media bypass. For more information, see and Citrix Knowledge Center articles. If Skype for Business server administrators enable media bypass, PSTN calls involving Optimization Pack users automatically and transparently route media connections through the Mediation Server. This feature limitation doesn’t cause any user impact. Take this limitation in to account when planning network capacity. - When the Skype for Business client is delivered as a published application rather than as part of a full Windows desktop, desktop sharing is not supported. If you use desktop sharing, the server desktop is shared rather that the local desktop. Application sharing can be used to share other hosted applications during a Skype for Business call. The virtualized Skype for Business client cannot share applications running locally on the user device. - Client-side recording is not supported. Citrix recommends evaluating third-party server/network-based recording solutions. - Gallery view is not supported in multiparty calls. Active speaker view is used in Skype for Business multiparty calls using the Optimization Pack. - Panoramic webcams that deliver a 360-degree view of the meeting room are not supported. - We do not support optimized delivery in a double-hop Citrix Virtual Apps and Desktops-Citrix Workspace app scenario. Optimized delivery is redirection of media processing to the user device. - Web proxy limitations: - HTTP proxy authentication is not supported. Configure proxies using white lists to allow unauthenticated access to target Skype for Business servers (for example, Office 365 servers for cloud-based deployments). - Web Proxy Auto-Discovery Protocol (WPAD) and dynamic proxy detection are supported by using Windows endpoints only. Configure Linux and Mac endpoints using a static HTTP proxy address. On Linux terminals, the RealTime Media Engine installer disables multimedia redirection in the Citrix Workspace app for Linux for 64-bit applications. Thus, avoiding the Optimization Pack and the Citrix Workspace app for Linux/Unix getting into a conflict when accessing video devices. However, other unified communications applications cannot support Generic USB redirection when accessed on a Linux terminal that has the RealTime Media Engine installed. The RealTime Optimization Pack 2.8 with Citrix Workspace app for Linux 18.10 and above, supports multimedia redirection for all 32-bit applications. - The date and time strings on USB telephone devices that have display capabilities are not properly localized. - The Plantronics Clarity P340 audio device is not supported. - The Optimization Pack disables the use of hardware acceleration for the Logitech C920 camera on Windows. Support is provided for the C920 as a nonencoding camera. To enable hardware compression for the Logitech C920 on Windows, do the following: Replace the Logitech driver with the stock Microsoft driver. Create a registry setting that enables hardware acceleration with the C920. On 32-bit and 64-bit Windows: HKEY_CURRENT_USER\Software\Citrix\HDXRTConnector\MediaEngine Name: EnableC920Compression Type: DWORD Data: 1 (enables the hardware acceleration) and 0 or missing (disables hardware acceleration) Note: Logitech does not recommend the C920 for business use cases. We recommend the more modern Logitech cameras (C930E, C925E), which are compatible with standard Microsoft drivers. Considerations and recommendationsConsiderations and recommendations The inclusion of hardware acceleration for video increases the amount of data being sent if you deploy devices that support hardware acceleration for video. Ensure that you have sufficient bandwidth available among all endpoints or update your Skype for Business server media bandwidth policies accordingly. In Fallback mode, video quality might degrade to the point of failure on virtual desktops that have a single virtual CPU. Fallback mode is when the RealTime Media Engine is not available on the endpoint and audio and video processing occurs on the server. We recommend that you change the VDA configuration to have a minimum of two CPUs for users who might need Fallback mode. For more information, see Citrix Knowledge Base articles and. When attempting to make high-definition video calls from a home office, consider your user network bandwidth and ISP routing policies. If you observe pixelation of the video or problems with lip sync, adjust the Maximum Packet Size (MTU) on the NIC properties. Specify a lower value such as 900 to avoid situations where ISPs perform traffic shaping based on packet size. Various scenarios might not work properly when some conversation participants run 1.x versions of the Optimization Pack. For example, combining content sharing and audio and video conferencing. We recommend participants using older versions of the Optimization Pack upgrade to this version of the Optimization Pack. Users might see an error when calling or joining a session when they have multiple sessions running. We recommend running only one session. Old versions of graphics card drivers might impact the stability of the Optimization Pack. H.264 hardware encoding and decoding on Intel and AMD chipsets works most reliably when using the latest versions of graphics drivers. The drivers are available from the endpoint or chipset vendors. If an unsupported driver version is detected, the Optimization Pack might automatically disable these features. Bandwidth guidelines for virtualized Skype for BusinessBandwidth guidelines for virtualized Skype for Business In general, the bandwidth consumption when using the HDX RealTime Optimization Pack is consistent with non-virtualized Skype for Business. The HDX RealTime Media Engine supports the audio and video codecs that Skype for Business commonly uses, and obeys the bandwidth restrictions configured on the Skype for Business server. If the network has been provisioned for Skype for Business traffic, the Optimization Pack might not require more traffic engineering. For new or growing deployments, network bandwidth, and Quality of Service provisioning, follow the Microsoft guidelines for voice and video. These guidelines apply when client endpoints are the sources and destinations of real-time media traffic. Audio and video traffic in Optimized mode flows out-of-band from ICA. The only extra traffic generated by the Optimization Pack is from the: - Low bandwidth ICA virtual channel control interactions between the RealTime Connector on the VDA server and the RealTime Media Engine on the client endpoint. - Compressed logging data sent from the RealTime Media Engine to the RealTime Connector. This additional traffic amounts to under 25 Kbps of upstream ICA bandwidth and about 5 Kbps of ICA downstream bandwidth. This table summarizes different types, sources, and destinations of network traffic with HDX RealTime Optimization Pack: For the Microsoft bandwidth guidelines for Skype for Business, see. H.264 is the main video codec used by Skype for Business and the RealTime Optimization Pack. H.264 supports a wide range of video resolution and target bandwidth values. The Skype for Business bandwidth usage policies always constrain the bandwidth usage for video. In specific call scenarios, the actual bandwidth usage might be even lower. The usage depends on the current bandwidth availability and client endpoint capabilities. For the HD video resolution in peer-to-peer calls, we suggest 1 Mbps or more and for the VGA resolution, 400 Kbps or more. Conference calls might require more bandwidth to support HD video (we recommend 2 Mbps). The Optimization Pack also supports the legacy RT Video codec for interoperability scenarios with legacy versions of the Microsoft unified communication software. The bandwidth usage with RT Video is similar to H.264, but video resolutions using RT Video are limited to VGA or less. Audio codec usage depends on the call scenario. Because the Microsoft Skype for Business Audio-Video Conferencing Server doesn’t support SILK or RtAudio, these codecs are used only on point-to-point calls. Conference calls use G.722. SILK offers comparable audio quality to G.722 while consuming less bandwidth. In addition to the codecs used by the native Skype for Business client, the HDX RealTime Media Engine offers a super-wideband codec, G.722.1C. This codec offers superior audio quality when both parties on a point-to-point call are using the Optimization Pack. This codec consumes 48 Kbps of network bandwidth. The Optimization Pack 2.4 doesn’t support the ultra-low bandwidth Siren codec, which is the predecessor to G.722.1. The Optimization Pack does support G.722.1 for interoperability with third-party systems, although Skype for Business does not support G.722.1. The Optimization Pack automatically selects the best audio codec that all participants on the call support and fits within the available bandwidth. Typically: - A call between two Optimization Pack users uses the super-wideband G.722.1C codec at 48 Kbps and has good audio fidelity. - A conference call uses the wideband G.722 codec at 64 Kbps. That is, 159.6 Kbps with IP header, UDP, RTP, SRTP, and Forward Error Correction. - A call between an Optimization Pack user and a native Skype for Business client user uses the wideband SILK codec at 36 Kbps. That is, 100 Kbps with IP header, UDP, RTP, SRTP, and Forward Error Correction. - When an Optimization Pack user makes or receives a public switched telephone network (PSTN) call, one of the narrowband codecs is used: G.711 at 64 Kbps or narrowband RtAudio at 11.8 Kbps. Citrix Customer Experience Improvement Program (CEIP)Citrix Customer Experience Improvement Program (CEIP) The Citrix CEIP usage and analytics program is a voluntary data collection program designed to improve your product experience. After installing this version of the: - Configuration data. - All system and account identifiers are anonymized before being uploaded.. CEIP opt-out policies and the user interface (UI) The RealTime Connector defines the following registry entries controlling CEIP metrics: HKEY_LOCAL_MACHINE\Software\Citrix\HDXRTConnector\\ DWORD DisableCEIPMetrics When absent or set to 0, the user did not opt out of CEIP metrics collection. If present and set to nonzero, the user opted out of CEIP metrics collection. In the Settings dialog screen, the RealTime Connector adds a check box. Send anonymous usage metrics to Citrix The check box is hidden if the administrator disables CEIP metrics collection by setting DisableCEIPMetrics. Otherwise, it appears. The check box is checked if the OptOutOfCEIPMetrics registry value is absent or set to zero. The check box is clear if OptOutOfCEIPMetrics is present and set to nonzero. When the user changes the state of the check box, the RealTime Connector updates the registry setting and enables or disables CEIP metrics submission accordingly.
https://docs.citrix.com/en-us/hdx-optimization/current-release/overview.html
2020-05-25T14:52:19
CC-MAIN-2020-24
1590347388758.12
[]
docs.citrix.com
BlueCat DNS/DHCP BlueCat DNS/DHCP Server™ is a software solution that can be hosted on the Citrix SDX platform to deliver reliable, scalable and secure DNS and DHCP core network services without requiring additional management costs or data center space. Critical DNS services can be load balanced across multiple DNS nodes within a single system or across multiple SDX appliances without the need for additional hardware. Virtual instances of BlueCat DNS/DHCP Server™ can be hosted on SDX to provide a smarter way to connect mobile devices, applications, virtual environments and clouds. To learn more about BlueCat and Citrix, see. If you are an existing BlueCat customer, you can download software and documentation via the BlueCat support portal at. Provisioning a BlueCat DNS/DHCP InstanceProvisioning a BlueCat DNS/DHCP Instance You must download an XVA image from the Bluecat Customer Care, at. After you have downloaded the XVA image, upload it to the SDX appliance before you start provisioning the instance. Make sure that you are using Management Service build 118.7 or later on the SDX appliance. Management channel across 0/1 and 0/2 interfaces are supported on BlueCat DNS/DHCP VMs. For more information see Configuring channel from Management Service. Note: SR-IOV interfaces (1/x and 10/x) that are part of a channel do not appear in the list of interfaces because channels are not supported on a BlueCat DNS/DHCP instance. On the Configuration tab, navigate to BlueCat DNS/DHCP > Software Images. To upload an XVA image to the SDX appliance: - In the details pane, under XVA Files > Action, click Upload. - In the dialog box that appears, click Browse, and then select the XVA file that you want to upload. - Click Upload. The XVA file appears in the XVA Files pane. To provision a BlueCat DNS/DHCP instance: - On the Configuration tab, navigate to BlueCat DNS/DHCP > Instances. - In the details pane, click Add. The Provision BlueCat DNS/DHCP Server page opens. - In the Provision BlueCat DNS/DHCP wizard, follow the instructions on the screen. - Under Instance Creation, in the Name field, enter a name for the instance and select the uploaded image from the XVA File drop-down menu, then Click Next. Optionally, in the Domain Name field, enter a domain name for the instance. Note: The name should contain no spaces. Under Network Settings, from the Management Interface drop-down menu, select the interface through which to manage the instance, set the IP address and gateway for that interface. You can assign interfaces explicitly for high availability and service. Select the parameters and then click Next. Note: When assigning interfaces for management, high availability and service, make sure you assign the interfaces based on supported combination of interfaces: You can select the same interface for all three. You can select a different interface for all three. You can select the same interface for management and service, but select a different interface for high availability. Click Finish, and then click Close. The instance will be created, booted, and configured with the selected IP address. After you provision the instance, log on to the instance through SSH to complete the configuration. For details on how to configure the BlueCat DNS/DHCP Server or place it under the control of BlueCat Address Manager, see the appropriate BlueCat Administration Guide, available at. To modify the values of the parameters of a provisioned BlueCat DNS/DHCP Server instance, from the BlueCat DNS/DHCP Instances pane, select the instance that you want to modify, and then click Modify. In the Modify BlueCat DNS/DHCP wizard, modify the parameter settings. Note: If you modify any of the interface parameters or the name of the instance, the instance stops and restarts to put the changes into effect. Monitoring a BlueCat DNS/DHCP InstanceMonitoring a BlueCat DNS/DHCP Instance The SDX appliance collects statistics, such as the version of SDXTools running on the instance, of a BlueCat DNS/DHCP instance. To view the statistics related to a BlueCat DNS/DHCP instance: - Navigate to BlueCat DNS/DHCP > Instances. - In the details pane, click the arrow next to the name of the instance. Managing a BlueCat DNS/DHCP InstanceManaging a BlueCat DNS/DHCP Instance You can start, stop, restart, force stop, or force restart a BlueCat DNS/DHCP instance from the Management Service. On the Configuration tab, expand BlueCat DNS/DHCP. To start, stop, restart, force stop, or force restart a BlueCat DNS/DHCP instance: - Click Instances. In the details pane, select the instance on which you want to perform the operation, and then select one of the following options: - Start - Shut Down - Reboot - Force Shutdown - Force Reboot - In the Confirm message box, click Yes. Upgrading the SDXTools File for a BlueCat DNS/DHCP InstanceUpgrading the SDXTools File for a BlueCat DNS/DHCP Instance SDXTools, a daemon running on the third-party instance, is used for communication between the Management Service and the third-party instance. Upgrading SDXTools involves uploading the file to the SDX appliance, and then upgrading SDXTools after selecting an instance. You can upload an SDXTools file from a client computer to the SDX appliance. To upload an SDXTools file: - In the navigation pane, expand Management Service, and then click SDXTools Files. - In the details pane, from the Action list, select Upload. - In the Upload SDXTools Files dialog box, click Browse, navigate to the folder that contains the file, and then double-click the file. - Click Upload. To upgrade SDXTools: On the Configuration tab, expand BlueCat DNS/DHCP. - Click Instances. - In the details pane, select an instance. - From the Action list, select Upgrade SDXTools. - In the Upgrade SDXTools dialog box, select a file, click OK, and then click Close. Rediscovering a BlueCat DNS/DHCP InstanceRediscovering a BlueCat DNS/DHCP Instance You can rediscover an instance to view the latest state and configuration of an instance. During rediscovery, the Management Service fetches the configuration. By default, the Management Service schedules instances for rediscovery of all instances once every 30 minutes. On the Configuration tab, expand BlueCat DNS/DHCP. - Click Instances. - In the details pane, select the instance that you want to rediscover, and from the Action list, click Rediscover. - In the Confirm message box, click Yes.
https://docs.citrix.com/en-us/sdx/11-1/third-party-virtual-machines/third-party-bluecat.html
2020-05-25T15:28:37
CC-MAIN-2020-24
1590347388758.12
[]
docs.citrix.com
2019-03-21 Release New HTML5 Plugin for VideoJS version 7.x and Brightcove Player version 6.x VideoJS v7.x and Brightcove Player v6.x HTML5 Plugin Note: This plugin replaces our older VideoJS v5.x HTML5 plugin and Brightcove Player v5.x HTML5 plugin, and any customers using the older plugin are recommended to migrate over to this plugin as soon as possible. A new plugin is available to integrate your VideoJS version 7.x and Brightcove Player version 6.x: Pulse Plugin for VideoJS and Brightcove Player. Documentation Releases This release includes the following documentation updates:
https://docs.videoplaza.com/oadtech/relnotes/2019/2019-03-21.html
2020-05-25T14:09:44
CC-MAIN-2020-24
1590347388758.12
[]
docs.videoplaza.com
Best Practices¶ This section has a selection of things that other teams have found to be good things to keep in mind to build robot code that works consistently, and to eliminate possible failures. If you have things to add to this section, feel free to submit a pull request! Make sure you’re running the latest version of RobotPy!¶ Seriously. We try to fix bugs as we find them, and if you haven’t updated recently, check to see if you’re out of date! This is particularly true during build season. Don’t use the print statement/logger excessively¶ Printing output can easily take up a large proportion of your robot code CPU usage if you do it often enough. Try to limit the amount of things that you print, and your robot will perform better. Instead, you may want to use this pattern to only print once every half second (or whatever arbitrary period): # Put this in robotInit self.printTimer = wpilib.Timer() self.printTimer.start() .. # Put this where you want to print if self.printTimer.hasPeriodPassed(0.5): self.logger.info("Something happened") Remember, during a competition you can’t actually see the output of Netconsole (it gets blocked by the field network), so there’s not much point in using these except for diagnostics off the field. In a competition, disable it. Don’t die during the competition!¶ If you’ve done any amount of programming in python, you’ll notice that it’s really easy to crash your robot code – all you need to do is mistype something and BOOM you’re done. When python encounters errors (or components such as WPILib or HAL), then what happens is an exception is raised. There’s a lot of things that can cause your program to crash, and generally the best way to make sure that it doesn’t crash is test your code. RobotPy provides some great tools to allow you to simulate your code, and to write unit tests that make sure your code actually works. Whenever you deploy your code using pyfrc, it tries to run your robot code’s tests – and this is to try and prevent you from uploading code that will fail on the robot. However, invariably even with all of the testing you do, something will go wrong during that really critical match, and your code will crash. No fun. Luckily, there’s a good technique you can use to help prevent that! What you need to do is set up a generic exception handler that will catch exceptions, and then if you detect that the FMS is attached (which is only true when you’re in an actual match), just continue on instead of crashing the code. Note Most of the time when you write code, you never want to create generic exception handlers, but you should try to catch specific exceptions. However, this is a special case and we actually do want to catch all exceptions. Here’s what I mean: try: # some code goes here except: if not self.isFmsAttached(): raise What this does is run some code, and if an exception occurs in that code block, and the FMS is connected, then execution just continues and hopefully everything continues working. However (and this is important), if the FMS is not attached (like in a practice match), then the raise keyword tells python to raise the exception anyways, which will most likely crash your robot. But this is good in practice mode – if your driver station is attached, the error and a stack trace should show up in the driver station log, so you can debug the problem. Now, a naive implementation would just put all of your code inside of a single exception handler – but that’s a bad idea. What we’re trying to do is make sure that failures in a single part of your robot don’t cause the rest of your robot code to not function. What we generally try to do is put each logical piece of code in the main robot loop (teleopPeriodic) in its own exception handler, so that failures are localized to specific subsystems of the robot. With these thoughts in mind, here’s an example of what I mean: def teleopPeriodic(self): try: if self.joystick.getTrigger(): self.arm.raise_arm() except: if not self.isFmsAttached(): raise try: if self.joystick.getRawButton(2): self.ball_intake.() except: if not self.isFmsAttached(): raise # and so on... try: self.robot_drive.arcadeDrive(self.joystick) except: if not self.isFmsAttached(): raise Note In particular, I always recommend making sure that the call to your robot’s drive function is in it’s own exception handler, so even if everything else in the robot dies, at least you can still drive around. Consider using a robot framework¶ If you’re creating anything more than a simple robot, you may find it easier to use a robot framework to help you organize your code and take care of some of the boring details for you. While frameworks sometimes have a learning curve associated with them, once you learn how they work you will find that they can save you a lot of effort and prevent you from making certain kinds of mistakes. See our documentation on Robot Code Frameworks
https://robotpy.readthedocs.io/en/2020.1.1/guide/guidelines.html
2020-05-25T15:05:23
CC-MAIN-2020-24
1590347388758.12
[]
robotpy.readthedocs.io
#include <v8.h> Summary of a garbage collection cycle. See |TraceEpilogue| on how the summary is reported. Definition at line 7865 of file v8.h. Memory retained by the embedder through the |EmbedderHeapTracer| mechanism in bytes. Definition at line 7876 of file v8.h. Time spent managing the retained memory in milliseconds. This can e.g. include the time tracing through objects in the embedder. Definition at line 7870 of file v8.h.
https://v8docs.nodesource.com/node-14.1/de/de7/structv8_1_1_embedder_heap_tracer_1_1_trace_summary.html
2020-05-25T13:27:16
CC-MAIN-2020-24
1590347388758.12
[]
v8docs.nodesource.com
Below is a list of the Array Mesh settings that can be modified for greater control over the duplication process. Most of these settings are fully interactive, letting you freely experiment with advanced multiple stage creations. LightBox > Array Mesh Presets The Array Mesh Presets button opens LightBox to the Array Mesh tab. You can then double-click on a saved Array Mesh preset file to apply the settings to your current mesh. Open and Save Save allows you to save the current Array Mesh settings in a file. Open command of course loads any previously saved Array Mesh file and applies the corresponding settings to the current model. Note: An Array Mesh file does not contain the geometry that is being instanced, but rather the settings for the array itself. Array Mesh Array Mesh enables or disables Array Mesh mode for the current Tool or SubTool. When Array Mesh mode is first enabled, it creates a copy of the current model. This copy is positioned in the same location as the original model. If an Array Mesh already exists, disabling and enabling Array Mesh mode will simply hide/unhide any transformations that have been applied without changing any settings. This function allows you to temporally turn off the array so as to make isolated modifications to the original Mesh. TransPose The TransPose switch allows you to use the TransPose system to manipulate your Array Mesh interactively. When TransPose is enabled, switching to Move, Scale or Rotate will turn on the TransPose Action Line and let you use it to modify the Offset, Scale and Rotate values for the Array Mesh. (The X, Y, and Z Amount sliders.) TransPose mode with an Array Mesh also lets you interactively set the pivot point for the transformations. To change the pivot, simply click and drag the yellow circle located at the start of the TransPose line. The pivot is always freely manipulated relative to the camera working plane. For accurate placement, it is advised to switch to an orthographic view and carefully choose the desired point of view before moving the pivot indicator. Upon changing the pivot point, the Action Line will automatically be repositioned to fit the new pivot location. Lock Position, Lock Size Lock Position and Lock Size prevent the position and/or size of the existing Array Mesh instances from being changed. By default, transformations are applied to the initial model and the instances then move or scale accordingly. By activating these locks, the size and position of the existing instances won’t change. These locks affect all stages associated with the array. Switch XY, Switch XZ, Switch YZ Switch XY, Switch XZ and Switch YZ transform the current axis orientation, based upon the current working plane from which you are viewing the model. These functions are useful when you want to apply transformations that may not be in the desired direction relative to the world axis. Transform Stage The Transform Stage slider lets you navigate between the different Array Mesh stages. To create a new stage, use the Append New or Insert New functions. When an Array Mesh is first created, this slider will be greyed out because there are no additional stages to choose from. Please refer to the Array Mesh Stages section below for more information about stages. Append New Append New creates a new stage after all existing stages in the list. So if you have four stages and are currently at the first, this button will create a 5th stage. Insert New Insert New creates a new stage immediately after the currently selected stage. So if you have four stages and are currently at the first, this button will create a new stage 2 with the remaining stages each incrementing by one number. With this function, you can insert a new stage in between two existing stages. Reset Reset sets all parameters for the currently selected stage back to their default values. Delete removes the currently selected stage. If that is the only existing stage then the Array Mesh is deleted and all the settings are returned to their default values. Copy, Paste The Copy and Paste functions let you copy the settings from the current Array Mesh stage and paste them into another stage or even to another Array Mesh. Repeat The Repeat slider defines the number of instance that will be created from the current model. This value always includes the original model, so to create a single copy the slider must be set to 2. Chain Chain makes the next stage start at the end of the previous one. This allows you to generate advanced curve structures using a single instanced mesh across multiple stages. When enabled, the Chain function turns off the Alignment and Pattern functions. Smooth The Smooth slider applies a smooth transition between each stage. Align to Path Align to Path changes the orientation of all instances to follow the array path. To change the orientation of each instanced mesh along the path, you can change the axis orientation modifier in the Align to Path button. Align to Axis Align to Axis orients each instance with the world axis rather than along the array path. To change the orientation of the instanced meshes to use another axis, click the desired modifier in the Align to Axis button Pattern Start, Pattern Length, Pattern On, Pattern Off The Pattern Start, Pattern Length, Pattern On and Pattern Off sliders define when each instance of the Array Mesh starts and how many are visible (On) or invisible (Off). The first object is always visible, even if you set Pattern Start to a value other than 1. However, in this case selecting another SubTool will cause the first instance of the previous SubTool to disappear since it’s no longer the active instance. This is similar to how SubTool visibility works, where the selected SubTool must always be displayed even if it is set to “Off”. X Mirror, Y Mirror, Z Mirror X Mirror, Y Mirror and Z Mirror apply a mirror transformation to the Array Mesh, based on the chosen axis. Mirroring can be individually set for each Stage. X Align, Y Align, Z Align X Align, Y Align and Z Align apply a positive or negative offset to the axis of transformation, making the various alignments easier. Offset The Offset mode works in association with the X, Y and Z Amount sliders and curves. When enabled, modifying the sliders will increase the distance of the copies from the source. The Offset value is the distance between the source and the final instance generated by the current stage. Modifying the curve will affect the acceleration or deceleration of distance between copies along the length of the array. The curve is interactive and any manipulation will provide real-time visual feedback. When the TransPose mode is enabled, manipulating the TransPose line in Move mode will interactively change the Offset values. Scale The Scale mode works in association with the X, Y and Z Amount sliders and curves. When enabled, modifying the sliders will increase the scale of the copies relative to the source. The Scale value is the size of the source relative to the final copy being generated by the current stage. Modifying the curve will affect the acceleration or deceleration of the scale between copies along the length of the array. The curve is interactive and any manipulation will provide real-time visual feedback. When the TransPose mode is enabled, manipulating the TransPose line in Scale mode will interactively change the Scale values. Rotate The Rotate mode works in association with the X, Y, and Z Amount slider and curves. When enabled, modifying the sliders will adjust the orientation of the copies relative to the source. The Rotate value is the angle of the source relative to the final copy being generated by the current stage. Modifying the curve will affect the acceleration or deceleration of the rotation between copies along the length of the array. The curve is interactive and any manipulation will provide real-time visual feedback. When the TransPose mode is enabled, manipulating the TransPose line in Rotate mode will interactively change the Scale values. Pivot Pivot mode works in association with the X, Y and Z Amount slider and curves. When enabled, modifying the sliders will change the position of the pivot point used by the different transformations (Offset, Scale, Rotate). Modifying the curve has no impact on the pivot location. When the TransPose mode is enabled, being in TransPose Move mode and dragging the yellow circle found at the source position will interactively change the Pivot values. Please refer to the TransPose and Pivot section of the documentation (above) for more information about the pivot. X, Y, Z Amount and X, Y, Z Profile These sliders and profile curves work in conjunction with the Offset, Rotate, Scale and Pivot modes. Please refer to these sections just above for more information. Convert to NanoMesh Convert to NanoMesh transforms each Array Mesh to a NanoMesh structure, creating a separate placement polygon for each instance. Please refer to the Array Mesh with NanoMesh section above and to the NanoMesh documentation for more information about NanoMesh manipulation and creation. Make Mesh Make Mesh converts the Array Mesh into real (non-instanced) geometry. After conversion, the resulting model can be freely edited with any ZBrush sculpting and modeling tools. Extrude Extrude converts the actual Array Mesh results to a new mesh and generates between each former instance, based upon its PolyGroups. In order to perform this function, the Array Mesh objects must share PolyGrouping on their opposite sides. When Extrude is turned on, the Make Mesh function will look at this PolyGrouping and create bridges between the same PolyGrouped areas. This function is useful when creating environment items like stairs or organic models like snakes where you want the gaps generated in between the repeats to be filled. If your instance repeats are close to each other, ZBrush will fuse them. If this is an undesired result, change the Repeat Value of the array to add more space between each instance and then click Make Mesh again. When using cylindrical arrays, the Close function will attempt to fuse the final instance of the array to the start, creating a contiguous circle. Angle The Angle slider works with Extrude when it generates bridging geometry on the Array. This slider will look at the surface normal of the corresponding PolyGrouped faces. Changing the Angle slider may fix bridging problems but can also generate undesirable results. Adjust this setting only if the default values don’t work well.
http://docs.pixologic.com/reference-guide/tool/polymesh/array-mesh/
2017-11-17T19:24:21
CC-MAIN-2017-47
1510934803906.12
[]
docs.pixologic.com
Possible Tasks¶ Here are a few ideas for improving mahotas. New Features¶ Small Improvements¶ - something like the overlayfunction from pymorph (or even just copy it over and adapt it to mahotas style). - H-maxima transform (again, pymorph can provide a basis) - entropy thresholding Internals¶ These can be very complex as they require an understanding of the inner workings of mahotas, but that does appeal to a certain personality. special case 1-D convolution on C-Arrays in C++. The idea is that you can write a tight inner loop in one dimension: void multiply(floating* r, const floating* f, const floating a, const int n, const int r_step, const int f_step) { for (int i = 0; i != n; ++i) { *r += a * *f; r += r_step; f += f_step; } } to implement: r[row] += a* f[row+offset] and you can call this with all the different values of a and offset that make up your filter. This would be useful for Guassian filtering.
http://mahotas.readthedocs.io/en/latest/tasks.html
2017-11-17T19:13:05
CC-MAIN-2017-47
1510934803906.12
[]
mahotas.readthedocs.io
Core Data¶ Scaphold's core data platform allows you to easily define complex data models that are instantly deployed to a production GraphQL API backed by a highly available SQL cluster. Core Data provides a great set of tools for powering core application logic. Every type in your schema that implements the Node interface is backed by core data. That means every type that implements Node maps to a single table and can be related to any other core data types via Connection fields. Each core data type X also receives a createX, updateX, and deleteX mutation as well as various ways to read the data with getX, allX, and through connections..
https://docs.scaphold.io/coredata/
2017-11-17T19:30:06
CC-MAIN-2017-47
1510934803906.12
[]
docs.scaphold.io
A. This Processor may create or initialize a Connection Pool in a method that uses the @OnScheduled annotation. However, because communications problems may prevent connections from being established or cause connections to be terminated, connections themselves are not created at this point. Rather, the connections are created or leased from the pool in the onTrigger method. The onTrigger method.
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_developer-guide/content/ingress.html
2017-11-17T19:18:58
CC-MAIN-2017-47
1510934803906.12
[]
docs.hortonworks.com
You must set up a system and a root account to administer vRealize Log Insight. vRealize Log Insight Root User vRealize Log Insight currently uses the root user account as the service user. No other user is created. Unless you set the root password property during deployment, the default root password is blank. You must change the root password when you log in to the vRealize Log Insight console for the first time. SSH is disabled until the default root password is set. The root password must meet the following requirements. Must be at least eight characters long Must contain at least one uppercase letter, one lowercase letter, one digit, and one special character Must not repeat the same character four times vRealize Log Insight Admin User When you start the vRealize Log Insight virtual appliance for the first time, vRealize Log Insight creates the admin user account for its Web user interface. The default password for admin is blank. You must change the admin password in the Web user interface during the initial configuration of vRealize Log Insight. Active Directory Support vRealize Log Insight supports integration with Active Directory. When configured, vRealize Log Insight can authenticate or authorize a user against Active Directory. See Enable User Authentication Through Active Directory. Privileges Assigned to Default Users The vRealize Log Insight service user has root privileges. The Web user interface admin user has the administrator privileges only to the vRealize Log Insight web user interface.
https://docs.vmware.com/en/vRealize-Log-Insight/4.5/com.vmware.log-insight.administration.doc/GUID-0E687B33-8E1B-4ECE-B4CD-4D9C63AD50D0.html
2017-11-17T19:58:25
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com
You must use the Orchestrator client to configure the SOAP plug-in. Configuration WorkflowsThe Configuration workflow category contains workflows that allow you to manage SOAP hosts. Add a SOAP HostYou can run a workflow to add a SOAP host and configure the host connection parameters. Configure Kerberos AuthenticationYou can use Kerberos authentication when you add a host. Parent topic: Using the SOAP Plug-In
https://docs.vmware.com/en/vRealize-Orchestrator/6.0/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-1E24E97D-0FF1-4A7B-957F-59E793C58626.html
2017-11-17T19:58:20
CC-MAIN-2017-47
1510934803906.12
[]
docs.vmware.com