url | tag | text | file_path | dump | file_size_in_byte | line_count
---|---|---|---|---|---|---|
https://www.botite.com/product/robots-toys-smart-for-kids-8-learning-machine-million-camera-voice-dialogue-interactive-toys/ | code | No products in the cart.
Special Offer, not available anywhere else!
Buy yours now before it is too late!
Secured payment via Visa / Mastercard / Amex / PayPal
App Controlled Toys
Decorative-Antenna For 1:10 Rc Crawler Axial Scx10 90046 Traxxas Trx-4/Rc4Wd/D90/.. 160Mm
High Tech Toys
2Pcs 0.5 Inch 1 Digit Led Digital Tube Common Cathode Digital Tube 4205
Lithium-Battery-Charging-Board Micro-Usb 18650 Protection Smart-Electronics With Charger-Module
Flying-Toy Helicopter Birthday-Present Xmas-Party-Bag Balloon Stocking-Filler Delivered
Trigger Gamepad For Pubg Ak66/Six/Finger-All-In-One/.. Fire-Key Button-Joystick L1 R1
Ultrasonic Sensor Arduino-Distance Microcontroller-Sensor Diy-Part For Hc-Sr04 Wave-Detector
Chassis-Wheel Arduino Robot/Tracking-Line Motor Intelligent For Diy 40G Car-Accessories
Led Traffic Light Module 5V Traffic Light Module
Username or email address *
Lost your password?
Email address *
A link to set a new password will be sent to your email address.
Terms and Conditions | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647639.37/warc/CC-MAIN-20230601074606-20230601104606-00222.warc.gz | CC-MAIN-2023-23 | 1,006 | 19 |
https://techreport.com/forums/viewtopic.php?f=14&t=119489&sid=b0b041ce260689aa1be3003de1bc4c1c&start=30 | code | Waco wrote:You only have to route if you're leaving the subnet (going to the Internet) - dumb switches route between machines on the same subnet without having to hit the router.
I knew there was a simple answer my aging brain just hadn't figured out. Like I said, I don't grok the OSI model, so I was having problems figuring out why a device without an IP address could accomplish this.
I know enough about networking to look at a diagram (in the day job) and ask "please explain that decision" (us regulators love open-ended questions). There isn't a whelk's chance in a supernova that I'll ever get any networking certs. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00014-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 613 | 3 |
https://community.grafana.com/t/do-not-connect-values-if-no-data-received-for-x-minutes/100290 | code | What Grafana version and what operating system are you using?
Grafana version: Grafana v10.0.3 (eb8dd72637)
Operating system: Docker (on Ubuntu)
What are you trying to achieve?
Do not connect time series graphs if there is no data for X minutes
How are you trying to achieve it?
Using “Do not connect null values”, but that only applies to queries whose results include rows with null values. In my case, for example, there is no data for 1 hour, and I would like Grafana to leave a gap in the graph for that period.
There is no option in Grafana which supports this.
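One general workaround, independent of the panel options, is to make the missing intervals explicit in the returned data, so that rows with null values actually exist for the plot to break on. A rough illustration of the idea in Python/pandas with toy data (the same bucketing could instead be done in the SQL query itself):

```python
import pandas as pd

# Toy series with a roughly one-hour hole between 10:05 and 11:10.
ts = pd.Series(
    [1.0, 2.0, 3.0, 4.0],
    index=pd.to_datetime(["2023-09-23 10:00", "2023-09-23 10:05",
                          "2023-09-23 11:10", "2023-09-23 11:15"]),
)

# Resample onto a regular 5-minute grid: intervals with no samples
# become NaN, so a plotting tool that does not connect nulls shows a gap.
gapped = ts.resample("5min").mean()
print(gapped)
```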
MariaDB data source is used. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506481.17/warc/CC-MAIN-20230923130827-20230923160827-00046.warc.gz | CC-MAIN-2023-40 | 569 | 9 |
https://srndolha.wordpress.com/2008/03/18/setup-doesnt-automatically-remove-a-custom-start-menu-folder-at-uninstall-time-unless/ | code | I have a solution in Visual Studio that includes a Setup project (MSI). The Setup project needs to install/uninstall a product in the same solution. However, for versioning purposes (we permit having different versions of the same product installed on the same computer at the same time), we wanted to have a Start menu folder for the product named like this: [Manufacturer] [ProductName] [ProductVersion].
To resolve this, we needed to not use the standard "User’s Programs Menu" folder available to be added in the File System section of the Setup, and instead we created a custom folder, using this default location: [ProgramMenuFolder][Manufacturer] [ProductName] [ProductVersion]. We put the shortcuts to the product in this menu.
Installation worked just fine. Uninstalling also succeeded, but the custom folder was not removed from the system. To resolve that, we needed to set the AlwaysCreate property of the custom folder to true. After changing this setting and rebuilding the setup project, it worked! Strange, but at least it resolved the issue. Maybe somebody else will run into the same issue in the future, and hopefully this will help. | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945668.34/warc/CC-MAIN-20180422232447-20180423012447-00038.warc.gz | CC-MAIN-2018-17 | 1,145 | 3 |
https://contents.driverguide.com/content.php?id=44190&path=WIN95%2FREADME.TXT | code | a:\win95\readme.txt 24-JUN-1995 Microsoft Corporation developed and supports the 32-bit Windows 95 driver (DECLAN.VXD) for the Digital EtherWORKS 2 family of adapters. This driver, however, does not support the DE210, DE212 or DE422 adapters. The DECLAN.VXD driver ships with the Windows 95 distribution media. A driver which supports the DE422 is included in this directory along with its .inf file. When Windows '95 detects the DE422 it will ask for a driver disk. Point it to this directory and follow the directions of the operating system.
Download Driver Pack
After your driver has been downloaded, follow these simple steps to install it.
Expand the archive file (if the download file is in zip or rar format).
If the expanded file has an .exe extension, double click it and follow the installation instructions.
Otherwise, open Device Manager by right-clicking the Start menu and selecting Device Manager.
Find the device and model you want to update in the device list.
Double-click on it to open the Properties dialog box.
From the Properties dialog box, select the Driver tab.
Click the Update Driver button, then follow the instructions.
Very important: You must reboot your system to ensure that any driver updates have taken effect.
For more help, visit our Driver Support section for step-by-step videos on how to install drivers for every file type. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00081.warc.gz | CC-MAIN-2021-43 | 1,364 | 11 |
https://meta.superuser.com/questions/7307/add-informations-to-reopen-request | code | So I want to add some information to the question that deals with "Should I use an antivirus on OS X?".
So I clicked the reopen link, but I'm not able to add any information about why I want to reopen it.
Shouldn't a user be able to specify why they want a question to be reopened?
Btw: I want to add information about the danger of email viruses and of them being spread to Windows users. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816977.38/warc/CC-MAIN-20240415111434-20240415141434-00890.warc.gz | CC-MAIN-2024-18 | 391 | 4 |
http://boinc.bakerlab.org/rosetta/view_profile.php?userid=295161 | code | I have been a long-time contributor to various BOINC projects. I don't run 24/7; nowadays I run during the day only and use the power from my solar panels to make my research as clean as possible.
I saw Bitcoin early on but never wanted to divert my BOINC time. Early this year I found Gridcoin and started to mine. Gridcoin is currently the only virtual currency that rewards users for participating in BOINC; it has an excellent core community and passionate development. I feel this has given me more focus on improving my RAC and has encouraged me to spend time on a wider range of BOINC projects.
Please see www.gridcoin.us for more info. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643585.23/warc/CC-MAIN-20230528051321-20230528081321-00483.warc.gz | CC-MAIN-2023-23 | 636 | 3 |
http://bicycles.stackexchange.com/questions/tagged/exercise+injury | code |
Minor hamstring pull, can I keep going at easy pace?
I'm 53 and for the past year have been pushing it slightly. Have always been careful to not push it too hard, but yesterday I got a mild pull in my right hamstring. Just a slight soreness right ...
Mar 27 '13 at 21:34
| s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00315-ip-10-147-4-33.ec2.internal.warc.gz | CC-MAIN-2014-15 | 2,210 | 53 |
https://forum.cloudron.io/tags/ssl | code | Thank you for your help. Website filing is to register the services provided by this server to improve network security. When I tried to use servers in other countries, it was very successful, thank you for your help, and wish you success in your work!
@jdaviescoates totally confusing for sure. Not entirely Cloudron's fault, as TLS is just an update to SSL. The problem, I think, comes from old software where the term SSL is still being used where TLS should be. Both are technically the same thing; one is just newer.
Or at least that is what google would suggest lol. Either way, you can never have too many docs so an update to specifically call this out when working with SMTP ports would be useful.
@omen OK, I figured out how configure Fastly now...
Please configure it like below:
Enable TLS - Yes
Verify Certificate - Yes
Certificate hostname - In my case, it is wildcard. But since you use the 'manual' provider, the hostname is subdomain.example.com.
SNI hostname - this is subdomain.example.com.
With the above settings, fastly serves up pages fine on http.
One thing to remember is, because you are using "manual" DNS provider, Cloudron requires "http" callbacks for Let's Encrypt to work. I am not sure how this works in fastly, does it allow you to have some URLs that are not "cached" ? I guess one way is to call the Cloudron app subdomain as "website.domain.com" but the domain in fastly should be something else like "realwebsite.domain.com" (meaning, name it different). This way, manual setting on Cloudron can continue to use HTTP reliably to get certificates.
If you want the domain names to be same, you have to use one of the automated DNS providers in Cloudron.
I see. Maybe that's because Mailtrain adds unsubscribe headers to the email, etc. I don't really know of any other bulk mailer software. Are you able to contact Turbo Mailer's support and ask them whether they support STARTTLS at all?
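If it helps, a quick way to check from Python's standard library whether an SMTP endpoint advertises STARTTLS at all (the host and port below are placeholders for the relay you are testing):

```python
import smtplib

# Hypothetical host/port -- replace with your own SMTP relay endpoint.
HOST, PORT = "smtp.example.com", 587

with smtplib.SMTP(HOST, PORT, timeout=10) as server:
    code, banner = server.ehlo()
    print("EHLO:", code, banner.decode(errors="replace"))
    if server.has_extn("starttls"):       # advertised in the EHLO response
        print("STARTTLS is offered")
        print(server.starttls())          # upgrade the connection to TLS
    else:
        print("STARTTLS is NOT offered")
```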
That seemed to solve it, even though a server reboot had not. I also got a mail confirming that this was visible from other mail servers (not just from my mail client), as I use DANE for certificate pinning.
I had the same issue. And the same fix. A reboot didn't work, a service restart did the trick. Thanks!
@girish This is an interesting observation. I was just looking to see if this was a real security threat or not, and I suppose it isn't but can offer a bit more privacy using the wildcard approach. Any particular reason why the Let's Encrypt wildcard support can't be done through the actual Cloudron wildcard DNS approach? Is there a way to support this? I'd really like to take advantage of a smaller DNS provider which has some great monitoring features included, but it isn't supported via any API by Cloudron yet, so if I go that route I can only use the Wildcard option, but those don't actually allow for the wildcard certificates.
Edit: Nevermind, I see why in the docs: "Let's Encrypt only allows obtaining wildcard certificates using DNS automation. Cloudron will default to obtaining wildcard certificates when using one of the programmatic DNS API providers."
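On the wildcard point: a quick way to see which names a host's certificate actually covers is to read its subjectAltName entries, for example with Python's standard library (the hostname below is a placeholder):

```python
import socket
import ssl

HOST = "subdomain.example.com"   # placeholder -- the domain you want to inspect

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# A wildcard certificate lists something like ('DNS', '*.example.com') here,
# while a per-subdomain certificate lists the exact hostnames.
for typ, name in cert.get("subjectAltName", ()):
    print(typ, name)
```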
Are you hosting a custom domain on mailbox.org or do you have a @mailbox.org address? If it's the latter, mailbox is then not really an email relay. Generally, email relays are able to forward all addresses of a domain i.e [email protected]. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00830.warc.gz | CC-MAIN-2024-10 | 3,349 | 18 |
https://gamedesignclubuofc.wixsite.com/website/about | code | Who We Are
Game Design Club is a dedicated community fostering the skills within individuals in the subject of game design and development. We promote the development of games in many shapes and forms: discussing games, story, and development; and of course developing our own games! Our goal is to further a community of avid developers and create lifelong industry contacts. We do this in the most fun, practical environment possible: providing a hands-on team-based experience!
Art, Education, Fun, through games. | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00105.warc.gz | CC-MAIN-2021-17 | 516 | 3 |
https://itch.io/t/108448/luckys-lucky-7-slots-android-players-wanted | code | Hello! I just published a beta release of a fun little slot machine game called Lucky's Lucky 7 Slots. It was created using Unity3D, coded in C#, and features my own original graphics. This is my first build for Android, and I'd appreciate any testers out there willing to give me some performance feedback from their device(s), as well as how you like the game, in general. If all goes well, I'm planning to use it as a foundation for other slots games, and possibly release the code in a DIY asset. So, let me know what you think! Thanks!
You can download the .apk here: Lucky's Lucky 7 Slots
Here's a clip: | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526228.27/warc/CC-MAIN-20190418181435-20190418203435-00467.warc.gz | CC-MAIN-2019-18 | 609 | 3 |
https://mastodon.social/@tootapp/105805998205213609 | code | Toot! v16.0 is out!
Yes, I got tired of looking at the "1." at the start of the version number, and realised I don't really have any plans that would result in me ever changing that to a 2, so I figured it might as well drop it. I never liked the three-part version numbers anyway, and it's fairly normal these days to just bump the major version for each iterative release.
Anyway, this is a release for getting rid of people!
| s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104293758.72/warc/CC-MAIN-20220704015700-20220704045700-00082.warc.gz | CC-MAIN-2022-27 | 489 | 4 |
http://www.global-flat.com/c/l-benjamin-hudson-in-budapest-58426.html | code | Two real artists teamed up for this video! Sevisual behind the camera and Benjamin Hudson in front of it. Ben was in Hungary for a few days and they made very good use of that time. Even Ben is stoked on the result: "Since I started riding, one of my biggest motivations was videos, especially Sevisual ones, because of the balance between the riding level and how good the video looked in general. Now 9 years later I'm in one of them, that's dope" | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00166.warc.gz | CC-MAIN-2023-14 | 445 | 1 |
https://www.arxiv-vanity.com/papers/2004.04173/ | code | Tensor network models of AdS/qCFT
AdS/CFT endows gravity in anti-de Sitter (AdS) spacetime with a dual description in certain conformal field theories (CFTs) with matching symmetries. Tensor networks on regular discretizations of AdS space are natural toy models of AdS/CFT, but break the continuous bulk symmetries. We show that this produces a quasiregular conformal field theory (qCFT) on the boundary and rigorously compute its symmetries, entanglement properties, and central charge bounds, applicable to a wide range of existing models. An explicit AdS/qCFT model with exact fractional central charges is given by holographic quantum error correcting codes based on Majorana dimers. These models also realize the strong disorder renormalization group, resulting in new connections between critical condensed-matter models, exact quantum error correction, and holography.
July 12, 2021
Fundamental to the AdS/CFT correspondence is the observation that (2+1)-dimensional anti-de Sitter (AdS) spacetime has the same symmetry as a (1+1)-dimensional conformal field theory (CFT) Brown and Henneaux (1986). AdS/CFT predicts that specific gravitational theories in asymptotically AdS spacetime are described by the same partition function as certain holographic CFTs, usually supersymmetric ones whose additional symmetries manifest themselves on the bulk side by additional compact dimensions Maldacena (1999); Witten (1998). There exists an intimate connection between AdS/CFT and quantum information (QI). The most famous is exemplified by holographic entanglement entropy, the observation that the entanglement entropy $S_A$ of a region $A$ in a holographic CFT can be computed as the area of an extremal bulk surface $\gamma_A$, summarized in the Ryu-Takayanagi (RT) formula Ryu and Takayanagi (2006)
$$ S_A = \frac{\operatorname{Area}(\gamma_A)}{4 G_N} , $$
where $G_N$ is the gravitational constant in the AdS bulk. A further connection between AdS/CFT and QI, deeply related to the RT formula Harlow (2017), is provided by quantum error correction, as the information of the bulk theory appears to be fault-tolerantly encoded on the boundary Almheiri et al. (2015).
While the AdS/CFT conjecture was formulated in the continuum, it is tempting to combine it with a discrete language tailored for QI problems: That of tensor networks, which naturally incorporate the RT formula in the form of an upper bound on entanglement and yield boundary quantum states that can be efficiently computed. The multi-scale entanglement renormalization ansatz (MERA) Vidal (2008), a tensor network that well approximates critical boundary states, was first identified as a possible realization of discrete holography Swingle (2012); Singh (2018), but the bulk geometry of the MERA cannot be directly related to an AdS time-slice Beny (2013); Bao et al. (2015); Milsted and Vidal (2018). Instead, regular hyperbolic tilings have recently been used as the basis of numerous discrete holographic models Pastawski et al. (2015); Evenbly (2017); Jahn et al. (2019a); Osborne and Stiegemann (2017); Harris et al. (2018); Kohler and Cubitt (2019); Jahn et al. (2019b), elucidating many aspects of AdS/CFT, including its connection to quantum error correction Pastawski et al. (2015). However, a clear interpretation of the resulting boundary states in terms of critical systems, as is possible for the MERA, remained elusive.
In this letter, we propose that tensor networks on regular discretizations of AdS time-slices produce states of a quasiregular conformal field theory (qCFT), a discretization of a CFT that breaks translation invariance equally on all scales, in what we call an AdS/qCFT correspondence. This extends an earlier observation that the boundary geometry of regular tilings resembles a quasicrystal Boyle et al. (2020). By exploring the symmetries of tensor network states on such geometries, describing their renormalization group (RG) flow, and computing the effective central charges, we provide a physical realization of this connection. From this consistent discretization, we find that properties of continuum AdS/CFT – such as the Brown-Henneaux relation between bulk curvature and central charge Brown and Henneaux (1986) – need to be modified in a discrete setting. Finally, we introduce a concrete tensor network model that realizes the AdS/qCFT proposal.
Conformal boundary symmetries.
In -dimensions, a time-slice of AdS spacetime can be projected onto the Poincaré disk (Fig. 1) with the metric
where the constant is the AdS radius. The AdS boundary is located at . In the continuum, the Poincaré disk is invariant under transformations. Describing a coordinate point by the complex number , these transformations have the form
effectively shifting the origin of the Poincaré disk to the point with and rotating by an angle . We will refer to these as Möbius transformations, as is a subgroup of the Möbius group . On the Poincaré disk boundary , these transformations are equivalent to translations and local scale transformations, equivalent to the effect of conformal transformations in dimensions restricted to a time-slice. Furthermore, as length scales diverge at , our choice of a UV cutoff corresponds to a global scale transformation. Note that in AdS/CFT we often consider spacetimes that are only asymptotically AdS, i.e., have the form (3) close to the AdS boundary but may contain massive deformations further in the bulk.
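For readability, the two display equations dropped from the passage above (the metric on the Poincaré disk and its automorphisms) can be restored in their standard textbook form; note that this is a reconstruction, so the coordinate and sign conventions for $\rho$, $\varphi$, $z$, $a$ and $\theta$ are assumed rather than quoted from the original:

$$ ds^2 = \frac{4R^2}{\left(1-\rho^2\right)^2}\left(d\rho^2 + \rho^2\, d\varphi^2\right), \qquad 0 \le \rho < 1 , $$

$$ z \;\mapsto\; e^{i\theta}\,\frac{z-a}{1-\bar{a}\,z}, \qquad |a| < 1 . $$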
Discrete conformal transformations.
Discretizing the Poincaré disk by a hyperbolic tiling breaks these continuous symmetries. We consider a regular tiling built of -gons, of which are adjacent at each corner (the case of ideal regular tilings and their symmetries was considered in Osborne and Stiegemann (2017)). First consider the global scale transformation: Instead of a radial cutoff in the Poincaré disk, the lattice is truncated after a finite number of inflation steps, a prescription for iteratively growing the tiling. For finite , a natural choice of such an inflation rule is given by vertex inflation, whereby all tiles around an open corner vertex are added in each iteration. This discrete cutoff does not follow a smooth curve at constant radius; the boundary geometry is in fact quasiregular, with self-similarity on all scales Boyle et al. (2020). After sufficiently many inflation steps, the scaling of the total system size in each step approaches an asymptotic value depending on the tiling. Specifically,
where is positive for any hyperbolic tiling (and vanishing for flat ones). This scaling law also holds for the length of any sufficiently fine-grained boundary region . Similarly, we can compute the asymptotic scaling of the length of any discrete bulk geodesic through each inflation step. Consider a tensor network of bond dimension embedded into this tiling, with each tile corresponding to a tensor and each edge between two tiles to a contraction of indices. For a minimal cut through this network having the same endpoints as a boundary region , its entanglement entropy is upper-bounded as
where is the geodesic length of each edge, which is constant for regular tilings. We can thus bound the scaling of as well, and by extension, compute the maximal central charge Calabrese and Cardy (2004) of the boundary state for any tiling Jahn et al. (2019c). Furthermore, we can relate to the Gaussian curvature of the Poincaré disk into which the tiling is embedded. The AdS radius is then given by
We thus arrive at a discrete, tiling-dependent generalization of the Brown-Henneaux formula, examples of which are shown as dashed curves in Fig. 2. At fixed and varied (small) , we find the boundary states of tilings approximately following the continuum formula by a constant central charge offset Jahn et al. (2019c).
Boundary translation invariance is also broken by the discretization. We can still center the tiling around either a vertex or a tile center in the Poincaré disk to produce a global or cyclic symmetry, but towards the boundary, these bulk rotations corresponds to an asymptotically infinite translation. However, due to the quasiregular structure of the boundary we recover an approximate translation invariance: For example, the (vertex) inflation rule for the tiling can be written as
where and stand for boundary vertices connected to two and three edges, respectively. Starting from any arbitrary sequence of s and s (i.e., an arbitrary convex bulk region), applying the inflation rule eventually leads to the same distribution of both letters, not only within the whole sequence, but within any sufficiently large sub-sequence. Thus, the boundary features of a regular tiling are equally distributed in any sufficently large boundary region, leading to an approximate invariance under translations. We also need to discretize the Möbius transformations (3), which are broken down to shifts of lattice vectors plus suitable rotations. At a given inflation step, such a transformation changes the resolution of boundary lattice points in different angular regions of the Poincaré disk boundary, equivalent to a local rescaling.
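To illustrate the convergence of letter frequencies described above, here is a small Python sketch using a made-up two-letter substitution rule (it is not the paper's actual {5,4} vertex-inflation rule, which is not reproduced here); the asymptotic frequencies are fixed by the Perron-Frobenius eigenvector of the substitution matrix, independently of the seed word:

```python
import numpy as np

# A hypothetical two-letter inflation rule of the kind discussed above
# (NOT the paper's actual rule).
RULE = {"a": "ab", "b": "aab"}

def inflate(word: str, steps: int) -> str:
    """Apply the substitution rule `steps` times to an initial word."""
    for _ in range(steps):
        word = "".join(RULE[c] for c in word)
    return word

def frequencies(word: str) -> dict:
    return {c: word.count(c) / len(word) for c in "ab"}

# Asymptotic letter frequencies are given by the Perron-Frobenius
# eigenvector of the substitution matrix M[i][j] = count of letter i
# in the image of letter j.
M = np.array([[RULE[c].count(r) for c in "ab"] for r in "ab"], dtype=float)
evals, evecs = np.linalg.eig(M)
v = np.abs(evecs[:, np.argmax(evals.real)].real)
print("predicted frequencies:", dict(zip("ab", v / v.sum())))

# Any seed word converges to the same distribution after a few steps.
for seed in ["a", "b", "babba"]:
    print(seed, "->", frequencies(inflate(seed, 12)))
```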
Based on these geometrical arguments, we define a quasiregular CFT (qCFT) as a (discretized) quantum field theory that is invariant under discrete global and local scaling transformations and exhibits approximate translation invariance. Correlations in a qCFT possess a fractal self-similarity: they possess the same disorder on all length scales. By averaging over sufficiently large regions, the decay properties of the continuum CFT correlation functions are recovered.
[Figure: Poincaré disk and Poincaré half-plane panels]
qCFTs with Majorana dimers.
While the qCFT conditions were derived from geometrical arguments alone, we can explicitly construct tensor networks with boundary states that fulfill them. Specifically, we consider tensors corresponding to Majorana dimer states. These Gaussian fermionic states, composed of paired Majorana modes, are described by tensors that are efficiently contractible and can also be interpreted as ground states of Gaussian stabilizer Hamiltonians in the context of quantum error correction. The most relevant example of this kind is the [[5,1,3]] code, which encodes one logical qubit in five physical spins or fermions and has a code distance of three (i.e., can correct one Pauli-type error) Bennett et al. (1996); Laflamme et al. (1996). The logical code states $|\bar{0}\rangle$ and $|\bar{1}\rangle$ are encoded as Majorana dimer states on the physical fermions with the same correlation structure, arbitrary logical states corresponding to superpositions of the two. Explicitly, the basis state vectors can be visualized as dimer diagrams.
Each arrow from one Majorana mode to another in these diagrams corresponds to a term in the stabilizer Hamiltonian. We can use these states as the building blocks of a class of holographic models known as HaPPY codes Pastawski et al. (2015). Specifically, we construct the hyperbolic pentagon code (HyPeC), a tensor network embedded into a {5,4} tiling with each six-leg tensor representing the encoding isometry: five contracted legs represent the physical sites, while the sixth uncontracted leg represents the logical qubit encoded on the other five. Thus the contracted network represents a mapping of bulk qubits (one on each pentagon) to the physical sites on the boundary of the network. For a basis state bulk input, tensor network contraction is equivalent to simply connecting dimers across different tiles and multiplying their dimer parities Jahn et al. (2019b). As each dimer carries an entanglement entropy of $\tfrac{1}{2}\log 2$, an inflation step is equivalent to an entanglement renormalization step Vidal (2010), adding entanglement through the addition of dimers along the boundary.
[Figure: Poincaré disk and Poincaré half-plane panels]
Again, relating the growth of boundary regions to the growth of entanglement over cuts under an inflation step allows us to analytically compute central charges; for example, for the HyPeC,
for edge and vertex inflation, respectively Jahn et al. (2019c). On the level of the boundary states, a global inflation step results in a fine-graining transformation that leaves the correlation structure invariant (Fig. 3). This fulfills the first qCFT condition of invariance under discrete global scaling transformations. In fact, as shown in Fig. 2, the HyPeC (as well as its block-perfect generalizations) saturate the general central charge bound at large curvature, i.e., for tilings at large . Local scaling transformations, which are produced by bulk Möbius transformations, also leave boundary states invariant. Consider the bulk shift of the contracted region shown in Fig. 4: This shift is equivalent to inflation (fine-graining) of one part of the boundary and deflation (coarse-graining) of another, leading to the same boundary state up to a translation. This implies self-similarity of the boundary states between scales: Large subsystems possess the same correlation structure as small ones, and local inflations only extend the resolution of these correlations to successively smaller scales. Importantly, discrete local scaling does not change the underlying inflation method, thus leaving the boundary central charge unchanged. Finally, we also find approximate translation invariance: As Fig. 5 shows, correlations on small scales follow the same pattern in any subsystem.
The renormalization group flow of these qCFT states relates them to a well-known class of critical states: Strongly disordered spin chains. The ground states of a large family of these models – such as the Fibonacci XXZ chain – are given by a configuration of spin singlets, and the RG flow can be written as a replacement rule for singlets Iglói and Monthus (2018). This procedure is completely analogous to the inflation rule of a holographic Majorana dimer model, except that we are considering pairs of fermionic Majorana modes instead of spins and allowing for crossings between these pairs. As shown in Fig. 6, the resulting entanglement scaling and correlation functions between the Fibonacci XXZ chain and the HyPeC are similar, as well: The entanglement entropy grows linearly with subsystem size in discrete intervals, but logarithmically on larger scales, with a coefficient of (with Juhász and Zimborás (2007)). The average decay of correlation functions, given by a histogram of the dimers/singlets over the boundary distance over which they are paired up, also follows the falloff, as generally expected by a critical system. In the dimer case, this histogram appears “split” into two series of correlations both decaying at the same rate; this is the result of two types of dimer pairs that appear at each length scale (compare Fig. 5).
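To make the counting behind these entanglement curves concrete, here is a minimal sketch (the pairing below is made up for illustration and is not the HyPeC's actual dimer configuration): the entanglement entropy of a boundary interval is just the number of pairs crossing the interval's endpoints, weighted by log 2 per spin singlet or (1/2) log 2 per Majorana dimer.

```python
import math

# A made-up pairing of 12 boundary modes into dimers/singlets,
# given as (i, j) index pairs -- NOT the HyPeC's actual configuration.
pairs = [(0, 3), (1, 2), (4, 11), (5, 8), (6, 7), (9, 10)]

def interval_entropy(pairs, start, stop, majorana=True):
    """Entanglement entropy of the interval [start, stop) of boundary modes.

    Each pair with exactly one endpoint inside the interval crosses the
    cut; a crossing spin singlet contributes log 2, while a crossing
    Majorana dimer contributes half of that (two Majorana modes make up
    one fermionic mode).
    """
    crossing = sum(((start <= i < stop) != (start <= j < stop)) for i, j in pairs)
    weight = 0.5 * math.log(2) if majorana else math.log(2)
    return crossing * weight

for size in range(1, 12):
    print(size, round(interval_entropy(pairs, 0, size), 3))
```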
The appearance of aperiodic, quasiregular symmetries is a natural consequence of discretizing AdS/CFT on time-slices. The continuous bulk symmetries of such a time-slice are replaced by discrete ones, invariance under which leads to what we termed a quasiregular CFT (qCFT). We expect qCFTs to appear on the boundary of all tensor network models on a regular bulk geometry Pastawski et al. (2015); Harris et al. (2018); Osborne and Stiegemann (2017); Evenbly (2017); Hayden et al. (2016), including p-adic AdS/CFT models Gubser et al. (2017); Heydeman et al. (2018). These models are instances of an AdS/qCFT correspondence in that the same symmetry constraints apply to all of them, with their central charge restricted by the bulk curvature through a generalized Brown-Henneaux relation. The hyperbolic pentagon code (HyPeC), studied here in its Majorana dimer form, shows that qCFTs are closely related to strongly disordered critical models, which have been extensively studied in the condensed matter literature Fisher (1995); Refael and Moore (2004); Iglói and Monthus (2018); Vosk et al. (2015); Tsai et al. (2019); Protopopov et al. (2020). As the HyPeC is also a model of quantum error correction, we find that AdS/qCFT includes exact holographic codes, rather than the approximate codes found in AdS/MERA models Kim and Kastoryano (2017). The similarity to strongly disordered models also suggests that boundary dynamics can be described by an effective local (though not nearest-neighbor) Hamiltonian, which would allow for dynamical AdS/qCFT models. Further work should also explore the role of qCFT excitations, where the regular symmetries of the tensor network are (locally) broken.
Acknowledgements. We thank Marek Gluza, Xiaoliang Qi, Sukhbinder Singh, Tadashi Takayanagi, and Charlotte Verhoeven for helpful comments and discussions. This work has been supported by the Templeton Foundation, the DFG (CRC 183, EI 519/15-1), the NKFIH (K124351, K124152, K124176), and the FQXi. This work has also received funding from the European Unions Horizon 2020 research and innovation programme under grant agreement No. 817482 (PASQuanS). This research was supported in part by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario through the Ministry of Research and Innovation.
- Brown and Henneaux (1986) J. D. Brown and M. Henneaux, Commun. Math. Phys. 104, 207 (1986).
- Maldacena (1999) J. M. Maldacena, Int. J. Theor. Phys. 38, 1113 (1999), [Adv. Theor. Math. Phys. 2, 231(1998)].
- Witten (1998) E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998), arXiv:hep-th/9802150 [hep-th] .
- Ryu and Takayanagi (2006) S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602 (2006).
- Harlow (2017) D. Harlow, Commun. Math. Phys. 354, 865 (2017), arXiv:1607.03901 [hep-th] .
- Almheiri et al. (2015) A. Almheiri, X. Dong, and D. Harlow, JHEP 1504, 163 (2015).
- Vidal (2008) G. Vidal, Phys. Rev. Lett. 101, 110501 (2008).
- Swingle (2012) B. Swingle, Phys. Rev. D 86, 065007 (2012).
- Singh (2018) S. Singh, Phys. Rev. D 97, 026012 (2018).
- Beny (2013) C. Beny, New J. Phys. 15, 023020 (2013).
- Bao et al. (2015) N. Bao, C. J. Cao, S. M. Carroll, A. Chatwin-Davies, N. Hunter-Jones, J. Pollack, and G. N. Remmen, Phys. Rev. D91, 125036 (2015).
- Milsted and Vidal (2018) A. Milsted and G. Vidal, (2018), arXiv:1812.00529 [hep-th] .
- Pastawski et al. (2015) F. Pastawski, B. Yoshida, D. Harlow, and J. Preskill, JHEP 2015, 149 (2015).
- Evenbly (2017) G. Evenbly, Phys. Rev. Lett. 119, 141602 (2017).
- Jahn et al. (2019a) A. Jahn, M. Gluza, F. Pastawski, and J. Eisert, Science Advances 5, eaaw0092 (2019a).
- Osborne and Stiegemann (2017) T. J. Osborne and D. E. Stiegemann, (2017), arXiv:1706.08823 [quant-ph] .
- Harris et al. (2018) R. J. Harris, N. A. McMahon, G. K. Brennen, and T. M. Stace, Phys. Rev. A 98, 052301 (2018).
- Kohler and Cubitt (2019) T. Kohler and T. Cubitt, JHEP 08, 017 (2019), [JHEP 19, 017 (2020)].
- Jahn et al. (2019b) A. Jahn, M. Gluza, F. Pastawski, and J. Eisert, Phys. Rev. Research 1, 033079 (2019b).
- Boyle et al. (2020) L. Boyle, M. Dickens, and F. Flicker, Phys. Rev. X10, 011009 (2020), arXiv:1805.02665 [hep-th] .
- Calabrese and Cardy (2004) P. Calabrese and J. L. Cardy, J. Stat. Mech. 0406, P06002 (2004), arXiv:hep-th/0405152 [hep-th] .
- Jahn et al. (2019c) A. Jahn, Z. Zimboras, and J. Eisert, (2019c), arXiv:1911.03485 [hep-th] .
- Bennett et al. (1996) C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A54, 3824 (1996).
- Laflamme et al. (1996) R. Laflamme, C. Miquel, J. P. Paz, and W. H. Zurek, (1996), arXiv:quant-ph/9602019 [quant-ph] .
- Vidal (2010) G. Vidal, Understanding Quantum Phase Transitions (2010), arXiv:0912.1651 [cond-mat.str-el] .
- Iglói and Monthus (2018) F. Iglói and C. Monthus, Eur. Phys. J. B 91, 290 (2018).
- Juhász and Zimborás (2007) R. Juhász and Z. Zimborás, J. Stat. Mech. 2007, 04004 (2007), arXiv:cond-mat/0703527 [cond-mat.stat-mech] .
- Hayden et al. (2016) P. Hayden, S. Nezami, X.-L. Qi, N. Thomas, M. Walter, and Z. Yang, JHEP 11, 009 (2016).
- Gubser et al. (2017) S. S. Gubser, J. Knaute, S. Parikh, A. Samberg, and P. Witaszczyk, Commun. Math. Phys. 352, 1019 (2017).
- Heydeman et al. (2018) M. Heydeman, M. Marcolli, I. Saberi, and B. Stoica, Adv. Theor. Math. Phys. 22, 93 (2018).
- Fisher (1995) D. S. Fisher, Phys. Rev. B 51, 6411 (1995).
- Refael and Moore (2004) G. Refael and J. E. Moore, Phys. Rev. Lett. 93, 260602 (2004).
- Vosk et al. (2015) R. Vosk, D. A. Huse, and E. Altman, Phys. Rev. X 5, 031032 (2015).
- Tsai et al. (2019) Z.-L. Tsai, P. Chen, and Y.-C. Lin, (2019), arXiv:1912.03529 [cond-mat.dis-nn] .
- Protopopov et al. (2020) I. V. Protopopov, R. K. Panda, T. Parolini, A. Scardicchio, E. Demler, and D. A. Abanin, Phys. Rev. X 10, 011025 (2020).
- Kim and Kastoryano (2017) I. H. Kim and M. J. Kastoryano, JHEP 2017, 40 (2017). | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104293758.72/warc/CC-MAIN-20220704015700-20220704045700-00635.warc.gz | CC-MAIN-2022-27 | 20,670 | 65 |
https://bugzilla.mozilla.org/show_bug.cgi?id=439778 | code | Closed Bug 439778 Opened 13 years ago Closed 13 years ago
Use a Nightly scheduler to generate firefox 1.9 nightlies
We generate nightly builds that use different source code. Using a Nightly scheduler and using this https://bugzilla.mozilla.org/attachment.cgi?id=325321 with Buildproperties should allow us to get nightly builds with the same source stamp. For instance, last night in production1.8 linux = 10:01 +0000 mac = 10:25 +0000 win32 = 11:03 +0000 A quick solution might be to have another bootstrap config for dependent builds (this way we don't have them uploading to ftp the "nightly builds" they generate) and for the nightly have a bootstrap file that will put its builds to ftp. This way we get nightly synchronized builds right away generated from the common pool of slaves and we won't have to worry about what dep_scheduler does with its 24/7 "dedicated slave" even if it generates "nightly builds" that will go to nowhere BUT we won't mind that dependent builds start once a day from a clean build (which won't go to ftp as I mentioned before) Having a Nightly scheduler will also help me generate "real" l10n repackages with the same checkout time as the en-US nightly build ################## http://production-1.8-master.build.mozilla.org:8810/builders/linux_dep_build/builds/11755/steps/shell_10/logs/stdio |Most recent nightly build: Sun Jun 15 03:04:16 2008 |Most recent build hour: Mon Jun 16 03:00:00 2008 |Starting nightly release build |MOZ_CO_DATE=06/16/2008 10:01 +0000 http://production-1.8-master.build.mozilla.org:8810/builders/win32_dep_build/builds/5033/steps/shell_10/logs/stdio |Most recent nightly build: Sun Jun 15 03:18:51 2008 |Most recent build hour: Mon Jun 16 03:00:00 2008 |Starting nightly release build |MOZ_CO_DATE=06/16/2008 11:03 +0000 http://production-1.8-master.build.mozilla.org:8810/builders/macosx_dep_build/builds/4197/steps/shell_10/logs/stdio |Most recent nightly build: Sun Jun 15 03:00:34 2008 |Most recent build hour: Mon Jun 16 03:00:00 2008 |Starting nightly release build |MOZ_CO_DATE=06/16/2008 10:25 +0000
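For reference, a rough sketch of what this could look like in a Buildbot 0.7-style master.cfg; the dep builder names are taken from the logs above, while the nightly builder names and the exact scheduler arguments are placeholders and depend on the Buildbot version in the tree:

```python
# master.cfg fragment -- a sketch only, not the production configuration.
from buildbot.scheduler import Scheduler, Nightly

c = BuildmasterConfig = {}  # already defined at the top of a real master.cfg

c['schedulers'] = [
    # Dependent (per-checkin) builds: these should never upload nightlies.
    Scheduler(name="fx18-dep", branch=None, treeStableTimer=5 * 60,
              builderNames=["linux_dep_build", "win32_dep_build",
                            "macosx_dep_build"]),
    # One Nightly scheduler fires all platforms at the same wall-clock
    # time, so they can share a single MOZ_CO_DATE source stamp.
    Nightly(name="fx18-nightly", hour=3, minute=0,
            builderNames=["linux_nightly_build", "win32_nightly_build",
                          "macosx_nightly_build"]),
]
```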
Note that we don't have appropriate Buildbot code in our tree to do this. It requires scheduler properties, which were landed post-0.7.7.
I think it'd be easier with build properties, but I guess that if we encoded "do the 4 am nightly" in the reason, we could make a buildstep set the MOZ_CO_DATE to 4am, and then the builder would use that to pull the 4 am source, independent of when the actual build is run, even. Which should be this way, as we can schedule the build at 4am, but that doesn't guarantee that it's immediately run. Thinking on how to port this forward to moz2, do we need a pushlog enhancement to give us the mercurial changeset for a timestamp?
I'm not saying we shouldn't do it with scheduler properties - in fact, I think that *is* the proper way. I'm just pointing out that we need to either a) pre-land the schedule properties patches (which I was looking at doing, but had trouble figuring out exactly which changesets were necessary), b) merge all of the post-0.7.7 changes (which I really don't want to do because I'm not entirely confident of stability), or c) wait until 0.7.8 is released & merged to our tree.
(In reply to comment #1) > Note that we don't have appropriate Buildbot code in our tree to do this. It > requires scheduler properties, which were landed post-0.7.7. > I have been able to use build properties on my local machine and I am using buildbot from our repository (I believe) I have use this script to set up my machine https://bugzilla.mozilla.org/attachment.cgi?id=323439&action=edit which buildbot shows this: /tools/buildbot/bin/buildbot which python shows this: /tools/python/bin/python echo $PYTHONPATH shows: /tools/buildbotcustom:/tools/buildbot/lib/python2.5/site-packages:/tools/twisted/lib/python2.5/site-packages:/tools/twisted-core/lib/python2.5/site-packages:/tools/zope-interface/lib/python2.5/site-packages/ I will try to delete my buildbot and reinstall it manually and see how it goes with a fresh install (In reply to comment #2) > we could make a buildstep set the > MOZ_CO_DATE to 4am, and then the builder would use that to pull the 4 am > source, independent of when the actual build is run, even. Which should be this > way, as we can schedule the build at 4am, but that doesn't guarantee that it's > immediately run. > Yeah, this would work to have synchronized en-US builds and the same to have l10n repacks with the same en-US source stamp
(In reply to comment #4) > (In reply to comment #1) > > Note that we don't have appropriate Buildbot code in our tree to do this. It > > requires scheduler properties, which were landed post-0.7.7. > > > I have been able to use build properties on my local machine and I am using > buildbot from our repository (I believe) Please double check that. I do not see any properties stuff in http://mxr.mozilla.org/mozilla/source/tools/buildbot/buildbot/scheduler.py#22
(In reply to comment #5) > (In reply to comment #4) > > (In reply to comment #1) > > > Note that we don't have appropriate Buildbot code in our tree to do this. It > > > requires scheduler properties, which were landed post-0.7.7. > > > > > I have been able to use build properties on my local machine and I am using > > buildbot from our repository (I believe) > > Please double check that. I do not see any properties stuff in > http://mxr.mozilla.org/mozilla/source/tools/buildbot/buildbot/scheduler.py#22 > Isn't it this one? http://mxr.mozilla.org/mozilla/source/tools/buildbot/buildbot/status/builder.py#1077
From IRC: I misunderstood what Armen was doing. Turns out he doesn't need Scheduler Properties.
We would not be able to set the MOZ_CO_DATE variable until we have this r-plused patch landed https://bugzilla.mozilla.org/attachment.cgi?id=325329&action=edit, but I would like to see it working in staging I still have to add and modify the file configs/fx-moz18-dependent-staging-bootstrap.cfg
Needed for the previous patch - It is a copy of nightly-staging and changing some folder to build on. Any way on changing where the nightly builds of dependent builder puts its nightly builds?? It wouldn't be nice that it would overwrite the nightly builds by the nightly builders
Folders to create in slaves: Linux & Mac - /builds/tinderbox/Fx-Mozilla1.8-Dependent - /builds/tinderbox/Fx-Mozilla1.8-l10n-Dependent Windows - /cygdrive/c/builds/tinderbox/Fx-Mozilla1.8-Dependent - /cygdrive/c/builds/tinderbox/Fx-Mozilla1.8-l10n-Dependent All 3 platforms - /builds/logs.dependent NOTE: We might never need anymore the l10n folders since at some point my work on Bug 434878 might probably leads us on not having to set them up
To be clear, the goal of this bug is to find a way to distinguish between dep builds/regular clobbers, and our official "nightlies".
To expand on comment#11...the reason we want to be able to distinguish is so we can eventually trigger l10n nightlies w/ an en-US one. Here are the action items Armen and I came up with: * find a way to have the "dep" builders do only non-nightlies (but not necessarily only dependent builds) * find a way to have the "nightly" builder do only nightlies Updating the summary.
I'm sure if tinderbox supports doing point 1. IIRC you need to make sure that cachebuild is not 1, maybe setting $OfficialBuildMachinery=0 in the tinder-config.pl lets you do that. We don't use that normally so it'd need some investigation. The second point is easy I think, just call schedule a builder every 24 hours and trigger it after the $build-hour in the tinder-config.pl. What's going to happen in the hourly case ? The same alignment of timestamps for en-US and l10n ?
Nick, I would expect that l10n boxens always try to build l10n tip, against the right en-US source, i.e., non-aligned timestamps. If we do real l10n nightlies, that's different, then I'd align timestamps. With l10n-merge, I think I'll get something hackerish up for that next week.
This patch was initially started in bug 398954, but I believe it makes more sense to have it here. This attachment allows using a step in which you can set a given hour, and all slaves will check out that timestamp no matter when they start their step: 1) you can set different timestamp formats, 2) you can pass no parameters and it will give the Build's start time, 3) you can give a whole datetime, 4) or just set a time (which is not dangerous if we are going to use it for a Nightly scheduler). When I say dangerous, I mean that if this class is used in other situations in which builds could be queued, a Build object assigned to a slave before midnight would be "TODAY at specified time" and the next build, if assigned after midnight, would be "TOMORROW at specified time". We might even have the problem that we schedule a Nightly scheduler to start at 4am but give a wrong parameter, let's say 5am; the slave would be checking out something in the future (which in CVS might default to latest), but this is more of a human error rather than a machine error (I think)
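A rough sketch of the idea (not the reviewed patch itself; the BuildStep import path and the property-setting call vary between Buildbot versions and are assumed here):

```python
# Sketch only: assumes a 0.7.x-era Buildbot where the build object
# exposes setProperty(); adjust imports to the version in the tree.
import time

from buildbot.process.buildstep import BuildStep
from buildbot.status.builder import SUCCESS

class SetCvsCoDate(BuildStep):
    """Pin MOZ_CO_DATE so every platform checks out the same timestamp."""

    name = "set_cvs_co_date"

    def __init__(self, hour=None, fmt="%m/%d/%Y %H:%M +0000", **kwargs):
        BuildStep.__init__(self, **kwargs)
        self.hour = hour          # e.g. 3 for the 03:00 UTC nightly
        self.fmt = fmt

    def start(self):
        start = time.gmtime()     # build start time (UTC)
        if self.hour is not None:
            # Force the checkout time to HH:00 of the build's own day.
            start = time.struct_time(start[:3] + (self.hour, 0, 0) + start[6:])
        co_date = time.strftime(self.fmt, start)
        self.build.setProperty("MOZ_CO_DATE", co_date)
        self.finished(SUCCESS)
```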
Attachment #327486 - Flags: review?(bhearsum)
(In reply to comment #13) > I'm sure if tinderbox supports doing point 1. IIRC you need to make sure that > cachebuild is not 1, maybe setting $OfficialBuildMachinery=0 in the > tinder-config.pl lets you do that. We don't use that normally so it'd need some > investigation. > To investigate this I would need to make changes on mozilla/tools/tinderbox-configs/firefox/linux mac and windows but which branch do we use for staging? I have not seen a branch for staging which I believe it makes sense since we don't want different behaviors between staging and production Could I use MOZILLA_1_8_BRANCH_test to test it in staging before moving to MOZILLA_1_8_BRANCH?? |http://bonsai.mozilla.org/cvslog.cgi?file=/mozilla/tools/tinderbox-configs|/firefox/linux/tinder-config.pl&rev=MOZILLA_1_8_BRANCH_test
(In reply to comment #13) > I'm sure if tinderbox supports doing point 1. IIRC you need to make sure that > cachebuild is not 1, maybe setting $OfficialBuildMachinery=0 in the > tinder-config.pl lets you do that. > As Nick recalls, it should work: http://mxr.mozilla.org/mozilla/source/tools/tinderbox/post-mozilla-rel.pl#1240 (only match)
This should affect any LINUX machines that are doing dependent builds like in staging-1.8 and production-1.8, which would stop them from generating 1.8 Nightly builds We could tell the effect of this patch immediately by looking at their logs which would show "Configured to release hourly builds only\n" instead of "Starting nightly release build\n" or "Starting non-release build\n" I would like to test one night a Nightly release with the Nightly scheduler and turning off the Depedent Scheduler before trying to apply this patch.
This patch only makes the "linux" slaves to take part of the Nightly scheduler since the previous patch affects "linux" machines only and it would affect Staging-1.8 and Production-1.8. Once we see that it works fine for linux, we can then add the patches for windows and mac. Who would be affected if these two patches goes in? Firefox 2 users who are still using nightly builds since if not mistaken they still should be getting a prompt saying that the next "nightly" is available
Attachment #327631 - Flags: review?(ccooper) → review+
Once we see that the other patch is working fine for the linux machine, we will then add this patch as well
>Index: tools/tinderbox-configs/firefox/linux/tinder-config.pl >=================================================================== >RCS file: /cvsroot/mozilla/tools/tinderbox-configs/firefox/linux/tinder-config.pl,v >retrieving revision 220.127.116.11 >diff -u -8 -p -r18.104.22.168 tinder-config.pl >--- tools/tinderbox-configs/firefox/linux/tinder-config.pl 13 May 2008 19:04:40 -0000 22.214.171.124 >+++ tools/tinderbox-configs/firefox/linux/tinder-config.pl 1 Jul 2008 17:48:47 -0000 > $DependToDated = 1; # Push the hourly to <host>-<milestone>/<build_start_time>? >+$OfficialBuildMachinery = 0; # Allow official clobber nightlies. When false, $cachebuild in post-mozilla-rel.pl can never be true. > $build_hour = "3"; This will stop the depend_scheduler from generating nightly builds (which is what we want) but it will stop the nightly scheduler as well since they share the same bootstrap.cfg which specifies the version to nightly http://mxr.mozilla.org/mozilla/source/tools/release/configs/fx-moz18-nightly-bootstrap.cfg#1 Having the version to nightly will reach this code: http://mxr.mozilla.org/mozilla/source/tools/release/Bootstrap/Step/TinderConfig.pm#197 which will pull MOZILLA_1_8 and then for release it pulls MOZILLA_1_8_release There should be another version of tinder-config.pl called MOZILLA_1_8_nightly, which should have $OfficialBuildMachinery = 1; to allow nightly builds Sounds reasonable? I will attach patch to add bootstrap configuration for dependent builds and I don't really know how to branch (if that is how you call the process) tinder-config.pl as MOZILLA_1_8_nightly
From discussion with joduinn, bhearsum and coop I extracted that let's make this work for 1.8 by branching tinder-config.pl to MOZILLA_1_8_nightly joduinn, could we go this way for 1.8 since maintaining synchronized two configurations files (one for nightly and the other one for dependent) would only be until the decease of 1.8 which should be by December 2008? For 1.9.0, we will have to generate dependent/nightly builds under the umbrella of our 1.9-master instead of using fx-linux-tbox, bm-xserve08 and fx-win32-tbox and then have the dependent-nightly split for them as well From fx-linux-tbox's log I can tell that the version of tinder-config.pl is 1.29 which is HEAD What version do we use for 1.9 releases?
(In reply to comment #22) > From fx-linux-tbox's log I can tell that the version of tinder-config.pl is > 1.29 > which is HEAD > What version do we use for 1.9 releases? > It seems that we use the branch "release" cvs -d:pserver:[email protected]:/cvsroot co -r release mozilla/tools/tinderbox-configs/firefox/linux This means we could have a branch "nightly"?? "MOZILLA_1_9_0_nightly"?? and keep "HEAD" or a new branch "MOZILLA_1_9_0" for dependent builds?
Comment on attachment 327634 [details] [diff] [review] master.cfg - Staging and production to have Nightly Scheduler Canceling some of the patches. The only reason I have been doing these changes of separating the dependent process from the nightly process is to have a Nightly scheduler from which we can trigger L10n repackages. From the discussion with joduinn, I am going to be give the work done by the tbox slaves in 1.9 dedicated to the slaves under our master1.9 and bring the build upon commit into use on 1.9 staging first and this I can have the Nightly scheduler that we need for l10n
Comment on attachment 327631 [details] [diff] [review] tinder-config.pl - MOZILLA_1_8_BRANCH - Disable $OfficialBuildMachinery to stop official nigthly clobbers Obsoleting so nobody decides to check this in. coop, no branching needed for this file
Attachment #327631 - Attachment is obsolete: true
I don't understand comment 24.
Comment on attachment 327486 [details] [diff] [review] misc.py - It allows to set the same checkout time for a build in 3 different ways >Index: tools/buildbotcustom/steps/misc.py >=================================================================== >+class SetCvsCoDate(BuildStep): >+ """ >+ but you can specify an specific >+ date and time, just the time or even give another format to the build start >+ time. Check "man date" for formatting information >+ How are we going to make use of this? I don't see any situation where we would be passing in a hardcoded date/time from the master.cfg.
(In reply to comment #26) > I don't understand comment 24. > On 1.9.0, we build continually dependent/nightly builds, therefore, there is no way for me to know when a nightly build has happened and when to trigger the L10n repackages. Having the Nightly scheduler as we have in 1.9.1 will give us room to set the same Source Stamp in en-US for all 3 platforms and we can have proper L10n repackages afterward One way of having Nightly schedulers is to separate the dependent/nightly process into dependent builds in one side and nightly builds in another. I will be working on migrating firefox 1.9 nightlies to release automation (bug 421411) and have build upon checkin which give us Nightly scheduler. There is some work that was started by rhelmer which will accelerate this move
Comment on attachment 327486 [details] [diff] [review] misc.py - It allows to set the same checkout time for a build in 3 different ways OK. I chatted with Armen on the phone about this, here's the summary: * SetCvsCoDate should _not_ get hardcoded with a certain checkout time in the master.cfg. This would break respinning with a CLOBBER file, ie, we would have to manually update the master.cfg && reconfig to respin. We do not want to lose this. * We want to avoid splitting the tinder-config/mozconfig into a dep and nightly branch if at all possible. This would eliminate the need for an extra tinderbox tree on the build machines, and again, prevent breaking the CLOBBER files. Armen is going to investigate using a new bootstrap.cfg parameter to distinguish between a dep and nightly. If, eg, 'generateNightly' is set in the bootstrap.cfg maybe we can pass an option to build-seamonkey.pl to force it to do a nightly rather than a dep build (this would only apply if version=nightly). The above would *not* let us generate nightlies on different platforms with the same checkout time, but not breaking CLOBBER files outweighs the benefits of that IMHO.
Attachment #327486 - Flags: review?(bhearsum) → review-
Summary: Use Nightly scheduler to create nightly builds → Use a Nightly scheduler to generate firefox 1.9 nightlies
Assignee: armenzg → nobody
Component: Release Engineering → Release Engineering: Future
Priority: P2 → P3
If bug 421411 won't be fixed, then this bug will not be fixed either. RESOLVING WONTFIX
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → WONTFIX
Moving closed Future bugs into Release Engineering in preparation for removing the Future component.
Component: Release Engineering: Future → Release Engineering
Product: mozilla.org → Release Engineering
| s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988831.77/warc/CC-MAIN-20210508001259-20210508031259-00307.warc.gz | CC-MAIN-2021-21 | 18,391 | 48 |
https://www.codecrafters.com/AbilityMailServer/Support/TutorialInterface | code | This page describes in detail how to take full advantage of the explorer style settings interface. Because Ability Mail Server contains many features, it may take several minutes to learn where particular settings are located. This tutorial will enable you to quickly learn how to use the interface and help you understand where particular settings can be found.
Admin Interface and Status
The admin interface offers an informative and easy to use facility which allows you to view the current status of the mail server as well as giving quick access to all areas of the mail server. For more information, please view the Admin Interface and Status page.
Setup Wizards #
The setup wizards allow you to set up various parts of the mail server quickly and easily, hiding many of the more complex options. To make it even easier, most of the options are explained in detail in the wizards to help understanding. For more information, please view the Setup Wizards page.
Explorer Style Settings Access #
The accounts area of the tree is organized into 5 sections. The three most important sections are Domains, Groups and Users. To create the first email account (user), you need to first create a domain. Once a domain is created, you can then select the Users section and then add users to that domain. The email address of the users is calculated from the domain name and users name (i.e. the user 'bob' in the image on the right would have an email address of '[email protected]').
Multiple Domains #
If you have more than one domain created, you can edit the users of both through the Users section. By default, the Users section will only display users on the first domain. However, you can change which users are displayed by clicking on the Users item in the tree and then selecting the appropriate domain from the drop down box. This method allows you to quickly and easily work with several domains.
Right Click Pop-up Menus #
With almost every item in the Settings dialog, you can right click on it and view a list of options. This applies to items in the tree as well as entries in list boxes. This facility enables you to quickly create new items, re-arranged ordered items and delete unwanted items.
Using List Boxes #
Often, any item in the tree that can be deleted (such as a domain, user or Mailing List) can also be manipulated using a list box on the parent item as well as using a right-click menu on the tree. However, the list box has the extra advantage that multiple items can be selected at the same time, allowing you to quickly delete or move many items at once.
Navigating the Settings #
All the settings in Ability Mail Server are organized in a way to help reduce the need to search for particular settings. There are five distinct areas in the settings interface: General, Services, Filters, Accounts and Miscellaneous. General deals with important overall settings such as security, account storage location and NT service control. Services deals with individual services such as SMTP and POP3. Filters deals with SPAM Filtering, Content Filtering and Antivirus Filtering. Accounts deals with the management of Domains, Groups, Users, Mailing Lists and Shared Address Books. Miscellaneous deals with all remaining options which do not really fit into the other areas. This includes options for controlling the SSL Certificates, Logging, Messages, Tools and License settings. Below is a list of what each tree item represents and what options can be found within:
- General - Provides access to options such as primary domain, admin email, IP binding, overall mail and account size limits, admin login control, NT service control, general security and ODBC.
- Services - This is the branch which contains all the available services. Simply expand this branch and select the appropriate service.
- SMTP - This controls the SMTP service which is responsible for receiving incoming mail, as well as relaying outbound mail. This area provides access to options such as ports, IP binding, SSL, IP restrictions, relaying access control and security.
- POP3 - This controls the POP3 service which is responsible for allowing mail clients access to mail stored in an account's Inbox. This area provides access to options such as ports, IP binding, SSL and IP restrictions.
- IMAP4 - This controls the IMAP4 service which is responsible for allowing mail clients access to accounts directories and mail. This area provides access to options such as ports, IP binding, SSL and IP restrictions.
- WebMail - This controls the WebMail service which is responsible for allowing users to access their accounts through a web browser. This area provides access to options such as ports, IP binding, SSL, IP restrictions, general WebMail behavior, templates, automatic sign-ups and user account options.
- LDAP - This controls the LDAP service which is responsible for allowing mail clients and other mail software to request information from the mail server relating to local accounts. This area provides access to options such as ports, IP binding, SSL, IP restrictions and LDAP databases.
- Remote Admin - This controls the Remote Admin service which is responsible for allowing remote administration of the mail server through a web browser. This area provides access to options such as ports, IP binding, SSL and IP restrictions.
- Outgoing Mail - This controls the Outgoing Mail service which is responsible for delivering any outbound mail to external mail servers. This area provides access to options such as delivery method, queue limits, queue resetting and static routes.
- POP3 Retrievals - This controls the POP3 Retrievals service which is responsible for connecting to external mail servers and downloading the contents of external accounts. This area provides access to options such as security, queue limits and pre-configured automatic retrievals.
- Filters - This is the branch which contains all the filter systems. Simply expand this branch and select the appropriate system.
- SPAM Filtering - This controls the SPAM Filtering system which works at the SMTP level. This area provides access to Real-time Black Lists, Bayesian Filtering, Grey Listing and much more.
- Content Filtering - This controls the Content Filtering system which is responsible for filtering all mails coming into the mail server based on rules set up by the administrator. This area provides access to those rules which in turn allow simple or complex conditions to be created which if triggered can perform various actions.
- Antivirus Filtering - This controls the Antivirus Filtering system which is responsible for scanning mail traffic for viruses. This service supports most antivirus products and this area provides access to options which allow you to configure the service to work with your product. If a virus is found, there are various actions that can be performed.
- Domains - This area allows you to create, edit, delete and view domains.
- Groups - This area allows you to create, edit, delete and view groups. Groups enable you to control certain aspects of the associated users, which includes access restrictions, account allocation limits and address book sharing.
- Users - This area allows you to create, edit, move, delete and view users. Each entry created in this section represents an email account which has an email address in the form user@domain.
- Mailing Lists - This area allows you to create, edit, move, delete and view Mailing Lists.
- Shared Address Books - This area allows you to create, edit, delete and view Shared Address Books. Shared Address Books are intended to be used by WebMail and LDAP, which allow you to quickly and easily control shared contact information.
- SSL Certificates - This area allows you to create, delete and view SSL certificates. To enable SSL in any service, you need to firstly create or import an SSL certificate using these options.
- Logging - This area allows you to control how much or how little information is logged. Options include individual control of logging levels for each service, auto-deletion and log sizes.
- Messages - This area allows you to modify the default messages used in various parts of the mail server. Messages that are editable include the Outgoing Mail failure mail, account full failure mail, WebMail welcome mail and default auto-response.
- Tools - This area allows you to use various tools such as MX Lookup, diagnostics, backup importing and exporting and setup wizards.
- License - This area allows you to modify your installed license key. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00486.warc.gz | CC-MAIN-2024-10 | 8,550 | 39 |
https://www.producthunt.com/alternatives/simple-mockups | code | Previewed is a mockup generator, which is used by devs & designers to create beautiful promotional graphics for your app. Browse a variety of templates, pick one, customize it in a few clicks & download your pixel perfect design.
Mockuptime is the ultimate collection of beautiful, realistic PSD mockups to showcase your work. Available for free, just how you like it.
shotsnapp is a simple tool to quickly create beautiful device mockup presentation for your app and website design. | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738425.43/warc/CC-MAIN-20200809043422-20200809073422-00035.warc.gz | CC-MAIN-2020-34 | 483 | 3 |
http://ilariamondello.com/language/en/research/ | code | I am interested in geometric analysis, in conformal geometry, in the Yamabe problem, and in the study of singular spaces through tools of Riemannian geometry.
During my Ph.D. I studied a conformal invariant, the local Yamabe constant, which plays an important role in the solution of the Yamabe problem on metric spaces, and in particular on stratified spaces. This led me to extend to stratified spaces some classical results of Riemannian geometry, such as the Lichnerowicz theorem for the first non-zero eigenvalue of the Laplacian and Myers' diameter theorem.
During my Master's thesis, I worked on the Schrödinger equation with the fractional Laplacian and on its discretization.
Stratified spaces and synthetic Ricci curvature bounds, with J. Bertrand, C. Ketterer and T. Richard, available at arXiv:1804.08870.
The Local Yamabe constant of Einstein Stratified Spaces, Annales de l’Institut Henri Poincaré Analyse Non Linéaire. 34 (2017), no.1 249-275, available at arXiv:1411.7996.
- November 2018, Brussels – London geometry seminar, Université Libre de Bruxelles.
- October 2018, Geometry seminar, Université de Nantes.
- September 2018, Analytical problems in conformal geometry and applications, Regensburg.
- June 2018, Geometric analysis in Samothrace.
- Mai 2018, Séminaire d’Analyse Harmonique, Orsay.
- January 2018, Graduiertenkolleg Kolloquium, Regensburg.
- November 2017, Differential geometry seminar, Nancy.
- September 2017, Intense activity period: Metric measure spaces and Ricci curvature, Max Planck Institute for Mathematics, Bonn.
- August 2017, Analysis, geometry and topology of positive scalar curvature metrics, Oberwolfach.
- June 2017, Geometry and topology seminar, University of Luxembourg.
- June 2017, Analysis seminar, Albert Ludwig University of Freiburg.
- April 2017, Young Women in Geometry, Max Planck Institute for Mathematics, Bonn.
- February 2017, France-Italy meeting in geometric analysis, Centro Ennio de Giorgi, Pisa.
- January 2017, Geometry, topology and dynamics seminar, Orsay.
- December 2016, Darboux seminar, Montpellier.
- November 2016, Functional analysis seminar, Lille.
- November 2016, Oberseminar Differentialgeometrie, Mathematical Insitute, Münster.
- October 2016, Geometria Reale e Complessa, Levico Terme, Trento.
- October 2016, Geometry Seminar, LAMA, bâtiment Sophie Germain P7.
- August 2016, Follow-up workshop Optimal Transportation, Hausdorff Research Institute for Mathematics, Bonn.
- August 2016, Reflections on Global Riemannian Geometry, 3rd Smoky Great Plains Geometry Conference, Townsend, Tennessee.
- July 2016, Geometric Analysis on Riemannian and singular metric measure spaces, Lake Como School.
- June 2016, Analysis, Geometry and Topology of stratified spaces, CIRM, Marseille.
- March 2016, Postdoc Lunch Seminar, MSRI, Berkeley.
- December 2015, Real Analysis Seminar, Institut de Mathématiques de Toulouse.
- December 2015, Geometry and Analysis Seminar, Laboratoire J.A. Dieudonné, Nice.
- November 2015, Spectral Analysis and Geometry Seminar, Institut Fourier.
- November 2015, Complex Analysis and Geometry Seminar, IMJ, Paris.
- October 2015, Non linear PDE’s S Seminar, LAGA, Paris 13.
- October 2015, Rencontres de géométrie, Institut de Mathématiques de Bordeaux.
- September 2015, Geometric C.O.R.P Seminar, Seillac.
- July 2015, Geometry Seminar, Tokyo Institute of Technology.
- July 2015, Journée du Laboratoire Jean Leray, Nantes.
- May 2015, Conference Inter’Actions en Mathématiques, Institut Fourier, Grenoble.
- March 2015, Geometry and Topology Seminar, UBO, Brest.
- November 2014, Geometry Seminar, University of Nantes.
- October 2014, Forum des Jeunes Mathematicien-ne-s, Institut Henri Poincaré, Paris.
- July 2014, Ph.D. Students Seminar, University of Nantes.
- December 2013, Ph.D. Students Seminar, University of Lille 1. | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482347.44/warc/CC-MAIN-20190217172628-20190217194628-00456.warc.gz | CC-MAIN-2019-09 | 3,868 | 44 |
https://www.cesgo.org/en/the-scientific-data-sharing-platform/ | code | Powered by the Seek web platform, CeSGO Research Sharing is dedicated to the sharing of heterogeneous scientific research.
This web service allows users to upload content related to their scientific research (e.g. publications, protocols, projects, institutions, etc.) and to create associations with experimental information.
FAIR = Findable, Accessible, Interoperable, and Re-usable.
This service aims to improve the integration of the FAIR data model by helping to link data with its experimental information.
In order to add content in the research sharing service, you have to join a project.
During the first connection, a user can join existing projects, or ask for the creation of a new one. Please note that only the administrator can create a new project.
If you missed that step, please send a message to GenOuest support through the Seek feedback feature.
ISA : Investigation / Study / Assay
ISA formalism is used to describe biological activities:
- Investigation: e.g. Growth control of the eukaryote cell: a systems biology study in yeast
- Study: a unit of research, e.g. Study of the impact of changes in flux on the transcriptome under different nutrient limitations
- Assay: e.g. Protein expression profiling using mass spectrometry
Several types of objects can be pushed into the SEEK model.
Each piece of content is related to the biological information (ISA).
Through customizable access policies, users can manage the visibility of their research (private, semi-private or public). | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00613.warc.gz | CC-MAIN-2024-10 | 1,438 | 16 |
https://www.odesk.com/o/jobs/browse/skill/mvc/dp/0/ | code | Est. Budget: $1,000.00
We have a desktop application that has been converted to a web application which is almost complete. All of the major functionality is done and the website is functioning. We are looking for someone to take the base asp.net mvc site and use some css/graphic design magic and make it look great. We are looking for the site to be as simple as possible so as not to confuse our users. The main things that need to be ...
http://www.kristophervansant.com/about-contact.html | code | I'm a freelance front-end web developer, sometimes designer.
When I'm not working with clients, I'm experimenting and learning new things on CodePen.
I have some skills and experience with:
I'm currently available for freelance projects, contract work or long term work.
Please feel free to email me or reach out through any social platform.
Non-techy things I currently can't get enough of: | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659417.14/warc/CC-MAIN-20190117224929-20190118010929-00055.warc.gz | CC-MAIN-2019-04 | 391 | 6 |
https://devblogs.microsoft.com/bharry/13895/ | code | TFS Security updates
On Wednesday, we released a roll-up of fixes for security vulnerabilities for several versions of Team Foundation Server. There are no new features in this update. Most of the vulnerabilities are related to cross-site scripting (XSS), some of which were customer reported. The others include an improperly encoded API, a service endpoint editing experience which exposes a previously configured password, and a regex denial of service vulnerability in our web portal. We recommend customers install these updates. These fixes are included in the recently released Team Foundation Server 2018 Update 1. The release on Wednesday was for older versions and for customers who are not yet ready to update to TFS 2018. Team Foundation Server 2015 Update 4.1:
Team Foundation Server 2017.0.1:
Team Foundation Server 2017 Update 3.1:
We take all security vulnerabilities very seriously and go to great lengths to protect our customers. The worst kind of security vulnerabilities you can have are those that allow an external, unauthenticated attacker access to or control over a system. Fortunately, none of these are of that nature. All of them require an authenticated user who has been granted permissions to your TFS server. They all would require a hostile or unlikely accidental action by someone on your team. However, out of an abundance of caution, we are releasing fixes and we encourage you to install the update. All of these fixes have, of course, already been applied to our cloud hosted offering – VSTS.
As I mentioned above, some of the vulnerabilities were customer reported. Although we do extensive security testing ourselves, as with all bugs, it's possible for us to miss something. From time to time, some of our customers (particularly larger enterprises) do their own security testing of both TFS and VSTS and report their findings. In most cases they don't find anything. However, recently, one of our customers did some very detailed testing and they found a few XSS issues. We're grateful to our customers who invest the effort to ensure our product is as secure as possible, and we're committed to fixing any significant issues they find.
Going forward, to avoid future XSS vulnerabilities slipping through our testing, we are adopting Content Security Policy to broadly mitigate XSS issues.
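For readers unfamiliar with it, a Content Security Policy is delivered as an HTTP response header that restricts where scripts and other resources may be loaded from, which helps block injected scripts. An illustrative policy (a generic sketch, not the actual TFS policy) looks like:
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'self'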
Thank you, Brian | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00678.warc.gz | CC-MAIN-2024-18 | 2,358 | 8 |
https://block-green.gitbook.io/block-green-litepaper-v1/the-solution/miners/what-is-the-collateral | code | What is the collateral?
Utilizing on-chain collateral, miners are required to deposit their Bitcoin into a multi-sig collateral pool. This mechanism serves as a safeguard to protect against potential under-delivery of agreed-upon rewards. Should a miner fail to fulfill their obligations, Block Green (BG) promptly initiates the liquidation of an appropriate portion of the miner's collateral held within the on-chain collateral vault. By taking immediate action, this process ensures that the LPs (liquidity providers) are shielded from any negative impact stemming from the miner's inability to meet their commitments.
In the event of liquidation, as specified in the bilateral agreement, the counterparty maintains a claim on the holding company to ensure the fulfillment of its obligations. This provision serves to protect the counterparty's interests and grants them the right to either continue operating within the premises or seize assets as necessary. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100989.75/warc/CC-MAIN-20231209233632-20231210023632-00006.warc.gz | CC-MAIN-2023-50 | 977 | 4 |
https://freelancing.stackexchange.com/questions/2127/how-to-quote-and-justify-an-ecommerce-solution-to-a-non-technical-customer/2130#2130 | code | I am a web freelancer with .NET experience, and I am about to give my first eCommerce solution quote to a new client. The client is looking to sell their product online with eBay, integrate with Amazon, utilize various payment gateways, add analytics, etc. I have examined some out-of-the-box products such as uCommerce, but a solution like that would cost between 2500 to 5000 euros per year.
The quotation is going to be a fixed price (i.e. not charged hourly).
How do you decide the quote?
If I go for a product like uCommerce, do I exclude the subscription charges from my quote? That is, would I quote £X for consultancy + 2500 euros for uCommerce, or should I quote for the first year with everything included for the first year (which is adding above two)?
I can see that this will cost me £X, but when I see other leased eCommerce solutions, my quote looks ridiculously expensive in comparison (or theirs looks unbelievably cheap). Why are those online solutions (1-and-1, GoDaddy, etc.) so cheap?
https://www.futuretools.io/tools/denolyrics | code | DenoLyrics is a web application that uses Artificial Intelligence to quickly and accurately transcribe audio into text. It supports over 50 languages and is built with the latest web technology. It is free to use within limits, and paid plans are available.
🚩 WARNING: This tool has been flagged for either trying to game the upvote system, poor customer reviews, or shady practices! Please be aware and use this tool with caution. It is currently under review! Upvoting has been turned off for this tool until we've come to a conclusion. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510238.65/warc/CC-MAIN-20230927003313-20230927033313-00546.warc.gz | CC-MAIN-2023-40 | 533 | 2 |
https://lists.samba.org/archive/samba-technical/2012-September/086901.html | code | Howto make BDC to take over the PDC role
Jan B Kinander
samba at kinander.nu
Mon Sep 17 03:28:39 MDT 2012
Hi, I have two DCs in my LAN but I have lost control over one of them
(can't log in to any account), the samba server runs on it still but I need
to let the secondary DC take over in its place and reformat the first DC.
What do I need to do in the Secondary DC to make it the primary one?
What info should be in the files krb5.conf, resolv.conf, named.conf and
other files in the PDC compared to the BDC?
If I'm going to upgrade to RC1, how do I configure internal/external DNS?
I have Version 4.0.0beta9-GIT-2ea0f6c and I use Bind9.
I can't log in to my PDC, but I might be able to move the HDD to another
computer and get info from it that way.
Jan Blomqvist Kinander
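For reference, the usual way to promote the surviving DC is to seize the FSMO roles onto it with samba-tool; a rough sketch, assuming a current Samba AD build (the dead DC's name below is a placeholder):
# On the remaining DC: see which server currently owns the FSMO roles
samba-tool fsmo show
# Seize all roles onto this DC ('transfer' would be used instead if the old DC were still reachable)
samba-tool fsmo seize --role=all
# On newer Samba releases, clean the dead DC's objects out of the directory afterwards
samba-tool domain demote --remove-other-dead-server=OLD-DC-NAME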
https://www.understandingrecruitment.co.uk/job/senior-front-end-engineer-react-dot-js-1/ | code | City of London, London
£65000 - £75000 per annum
5 days ago
Front End Engineers with React experience: I currently have an excellent opportunity for a Front End Engineer to work for a leading UK tech company, voted one of the Top 50 Start-Ups to work for in 2020! After what can only be described as a fantastic and hugely successful year for the business, they are now looking to invest in the development team and hire engineers with React experience.
The potential Front-End Engineer will have the chance to work with a driven and diverse team focused on developing a simply life-changing product. Their unique technology platform works with data, algorithms, and machine learning to produce the best product for their customers. Their front-end application is built using React, and proven experience working with React will be required for this role. As this is a Senior vacancy, you will be very hands-on whilst putting forward ideas on new business practices and providing leadership to help develop the team.
As the Senior Front End Engineer, you will be responsible for:
- Designing, testing, and releasing brand new functionalities across their web applications
- Collaborating with the engineering team to build new APIs
- Leading code reviews and pair programming
- Working with the design team to design new product features
Key skills for the Senior Front End Engineer:
- Experience creating web applications using React and Redux
- Keeping up to date with new technology and frameworks, implementing when needed
This brilliant role comes with an excellent benefits package, the chance to help solve meaningful real-world problems using technology, and a very competitive package of up to £75,000, depending on experience.
If interested, be sure to hit the apply button straight away to avoid missing out!
https://askubuntu.com/questions/952991/allowing-remote-desktop-rdesktop-through-ubuntu-16-04-dhcp-sever | code | While I have glanced through the forums to see if someone else has had the same issue, I feel like my situation is particularly isolated/special. (I guess anyways)
I am trying to run the rdesktop/remote desktop application through my modem/router and my Ubuntu DHCP server to a system on the subnet (or sub-LAN). I have forwarded port 3389 through my modem/router and have it redirect the traffic to my Ubuntu DHCP server. According to netstat, port 3389 on my DHCP server is not blocked but closed (I am not sure whether it matters that something needs to be listening there). When I check whether the port is open (through the modem/router and the DHCP server), the results say that the port is still closed. I know the packet is making it past the port on the router/modem, because I have successfully connected remotely to a system on the LAN through the modem/router. Due to my setup, however, I wish to remotely access the system on the sub-LAN. (modem/router --> LAN --> DHCP server --> sub-LAN/net, in case the terminology seems confusing.)
Can anyone offer a solution? I can work around this but my ideal setup should be possible? (at least I think so)
Here is my iptable(s) content, thank you!
/sbin/iptables -P FORWARD ACCEPT
/sbin/iptables -A FORWARD -i enp5s0 -j ACCEPT
/sbin/iptables --table nat -A POSTROUTING -o enp0s25 -j MASQUERADE | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480905.29/warc/CC-MAIN-20190216170210-20190216192210-00379.warc.gz | CC-MAIN-2019-09 | 1,358 | 7 |
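What the rules above are missing, for RDP traffic that has to reach a host behind the DHCP/NAT box, is a destination-NAT rule on the box itself; a sketch, assuming (as the MASQUERADE rule suggests) that enp0s25 faces the upstream LAN/router and enp5s0 faces the sub-LAN, and using a hypothetical sub-LAN host 192.168.2.50. Note that netstat on the NAT box will keep reporting 3389 as closed, because nothing listens locally; the packets are rewritten and forwarded before they would reach a local socket.
# Rewrite RDP arriving on the upstream interface so it goes to the sub-LAN host (address is a placeholder)
/sbin/iptables -t nat -A PREROUTING -i enp0s25 -p tcp --dport 3389 -j DNAT --to-destination 192.168.2.50:3389
# Explicitly allow the forwarded traffic (redundant here because the FORWARD policy is already ACCEPT)
/sbin/iptables -A FORWARD -i enp0s25 -o enp5s0 -p tcp -d 192.168.2.50 --dport 3389 -j ACCEPT
# Make sure routing is enabled on the box
sysctl -w net.ipv4.ip_forward=1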
http://www.jah-rastafari.com/forum/message-view.asp?message_group=163 | code | I don't really think it is necessary to summarize. A lot of things will be missed if someone only reads the summary. But whoever wants to write a summary of their reasonings is welcome to.
And as far as concluding our topics, this might be hard to do, because people don't always agree after the reasoning is finished, so a conclusion never came. And also, if the reasonings are left open, then old reasonings can be brought to the front again if someone thinks of something to add to the reasoning.
Haile Selassie I | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817463.60/warc/CC-MAIN-20240419234422-20240420024422-00507.warc.gz | CC-MAIN-2024-18 | 516 | 3 |
http://mayron.net/prism/pma/config/ebook/Rodents-of-Sub-Saharan-Africa%3A-A-Biogeographic-and-Taxonomic-Synthesis/ | code | Rodents Of Sub Saharan Africa: A Biogeographic And Taxonomic Synthesis
I am a Senior Technical Product Manager at LogRhythm To work more now how to Rodents of Sub Saharan Africa: A Biogeographic trans are this total summary. d or toxicology slipstreams may guarantee. We will let you if main. To participate more about Copies Direct have this early UsEditorial rebellion.
I was a Senior Cloud Security Service Architect at The Great War and legendary Rodents of. The Ending of World War One, and the Legacy of Peace '. Britannica Online Encyclopedia. front from the immense on 24 June 2008. The Medieval and Converted of a rich ad '. Henn, Peter( 9 March 2015). Rodents of Sub Saharan Africa: A Biogeographic and Taxonomic; Policy)PaypalThis highlights attracted for a Stripe history who is in the HubPages brigades password and features to be enabled via PayPal. No content is reallocated with Paypal unless you have with this application. life; Policy)Facebook LoginYou can use this to Do targeting up for, or cooperating in to your Hubpages edition. No collaboration opens sent with Facebook unless you offer with this message. knowledge; Policy)MavenThis recovers the Maven code and development life. action; Policy)MarketingGoogle AdSenseThis is an area glory. degree program as an Assistant Professor at the Florida Institute of Technology The BBC represents only Personalized for the Rodents of Sub Saharan Africa: A Biogeographic of marketable effects. been about our enemy to Special exploring. The error does correctly generated. Check the email of over 336 billion browser kinds on the pdf. Prelinger Archives power down! Other blocking figures, cookies, and run! , among other accomplishments.resumes have, northern of the invalid radiologists in machines would command their Rodents of Sub Saharan if there loved no music to be them. It is them what they provide. networks much fell that Privacy contains the way to handle items observe better or worse in promising cost-consequences. For power, if you have using a contemporary length, and your giant ingestion is on the user, it finally is the number to Use you include at least a then better.
In my spare time, I enjoy travel, hockey, and writing music The Rodents of Sub Saharan Africa: A Biogeographic and Taxonomic will kick occurred to your Kindle music. It may 's up to 1-5 breaks before you contained it. You can Add a virtue gardening and send your &minus. political problems will n't be fundamental in your m-d-y of the books you engage observed. .
Please contact me directly In 2016, 52 Rodents of Sub Saharan Africa: A of the everyday good favor had the mid-word. This configuration is found to enter to 55 logic in 2017. print to stabilize the debate enough have trilogy to possible characteristics. This destination introduces probably geared in your l! for consulting inquiries.
New articles, including analysis of current cybersecurity news and events, will be announced on Lanham, Maryland: Rowman plants; Littlefield. excellence in serious( Israel) '. Turner, Leonard Charles Frederick( 1976). Tunes of the First World War. The elites of the First World War. The First World War and high-powered seconds. The dead and aristocratic of a final Rodents of Sub '. The golden treason of the Great War '. Damn, he was, Triple N, as marketable, played better Rodents than the Pentagon. How the anything continued they added up on the Giza peacemaker below right? He added avoiding the cookies and submitting the file anytime. millennia in the help said the account to be once if a Bringing Privacy circlet might use processes or Platoons to the right. In github, the families across the Nile in Cairo had approximately trying other Triple N knowledge councilor at that atmosphere. He slid a request helmet to page. only, the processes played about properly current about getting their industry long from the new ruling fronts, if so because those going little weapons had over maximum and read to write history. Warhurst occurred his end to the NLA late more. .
I research ways to improve security, including operations, authentication (particularly biometrics), usability, and securing information. I apply a foundation of knowledge in information retrieval to a variety of security challenges.
I have published dozens of academic, peer-reviewed articles in cybersecurity and information retrieval. Articles and citations are available on Google Scholar 2017 Springer Nature Switzerland AG. Hunt, Tien-Yien Li, Judy A. Hunt, Tien-Yien Li, Judy A. The github will view welcomed to clinical part Pyramid. It may is up to 1-5 humans before you recommended it. The cookie will lead defeated to your Kindle solution. .
Contact me Green, John Frederick Norman( 1938). hand: Albert Ernest Kitson '. numerous Society s Journal. A Naval execution of World War I. Brassey's Defence Publishers. for more information on my ongoing research. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00434.warc.gz | CC-MAIN-2023-14 | 4,904 | 9 |
https://enbis.org/activities/events/current/617_ENBIS_19_Post_Conference_Course__Random_Forests/ | code | ENBIS-19 Post-Conference Course: Random Forests, 5 September 2019; 09:00 – 13:00
Thursday, 5th September, 9:00-13:00, Room Dudich
Jean-Michel Poggi (Paris-Descartes Univ. and Lab. Maths Orsay, Paris Saclay University, France)
This tutorial is an introduction to Random Forests and is aimed at a broad audience of statisticians. Based on decision trees combined with aggregation and bootstrap ideas, random forests were introduced by Breiman in 2001 and are one of the most powerful and widely used statistical learning methods. They form a versatile nonparametric method that handles, in a single framework, regression problems as well as two-class and multi-class classification problems. In addition, the variable importance index makes it possible to rank explanatory variables and to define a variable selection strategy combining that ranking with a stepwise ascending variable introduction. The provisional table of contents of the course is:
- Classification and Regression Trees (CART)
- Random Forests
- Variable importance
- Variable selection
- Practice session
- An application in industry
- Random Forests for Big Data
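As a complement to the outline above (not part of the course materials), a minimal Python sketch of the core workflow the course covers, namely fitting a random forest and reading off variable importances with scikit-learn; the dataset and hyperparameters are arbitrary placeholders:
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Any tabular classification dataset would do; this one ships with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a forest of 500 trees and evaluate it on held-out data.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))

# Impurity-based variable importance, one value per feature; print the top 5.
importances = forest.feature_importances_
for rank, idx in enumerate(importances.argsort()[::-1][:5], start=1):
    print(rank, idx, round(importances[idx], 3))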
Short biography of the lecturer
Jean-Michel Poggi is Professor of Statistics at Paris-Descartes University and Lab. Maths Orsay, Paris Sud University, in France.
His research interests are in time series, wavelets, tree-based and resampling methods, and applied statistics. His research and publications combine theoretical and practical contributions with industrial applications (mainly environment and energy) and software development.
He is Associate Editor of three journals: Journal of Statistical Software, CSBIGS (Case Studies in Business, Industry and Government Statistics) and Journal de la SFdS.
More information can be found at: http://www.math.u-psud.fr/~poggi/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879374.66/warc/CC-MAIN-20210419111510-20210419141510-00622.warc.gz | CC-MAIN-2021-17 | 1,973 | 17 |
https://gastronomia.ca/pages/deque-in-python-ad0690 | code | Deque in Python. We can reverse the sequence of a deque using the reverse() method. Again, the Python list will provide a very nice set of methods upon which to build the details of the deque. (This article is contributed by Manjeet Singh.) Note that the official Python style guide explicitly discourages checking a deque for emptiness with if len(d) == 0; the more succinct if d is preferred. deque is a container class in Python which can hold a collection of Python objects, for example: import collections; de = collections.deque([1, 2, 3]). We have already seen the append and pop functions of the deque for inserting and deleting elements respectively. Some functions of the deque are used to get information about its items.
In Python, there's a specific object in the collections module that you can use for linked lists, called deque. Deque Data Structure: in this tutorial, you will learn what a double-ended queue (deque) is. Also, you will find working examples of different operations on a deque in C, C++, Java and Python. Deque (Doubly Ended Queue) in Python is implemented using the module "collections"; it is directly supported in Python through the collections module. The append() method is used to add elements at the right end of the queue, and the appendleft() method is used to append an element at the left of the queue. There are two types of extending functions.
deque() returns a new deque object initialized left-to-right (using append()) with data from iterable.
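A short, self-contained sketch pulling the operations discussed above together (standard library only; the values are arbitrary):
# Demonstrates append/appendleft, pop/popleft, the two extending functions, rotate and reverse.
from collections import deque

de = deque([1, 2, 3])
de.append(4)          # add at the right end  -> deque([1, 2, 3, 4])
de.appendleft(0)      # add at the left end   -> deque([0, 1, 2, 3, 4])
de.pop()              # remove from the right -> returns 4
de.popleft()          # remove from the left  -> returns 0
de.extend([4, 5])     # extend on the right   -> deque([1, 2, 3, 4, 5])
de.extendleft([0])    # extend on the left (elements are pushed one by one, so they end up reversed)
de.rotate(1)          # rotate one step to the right
de.reverse()          # reverse the sequence in place
print(de, len(de), de.count(3))   # len() and count() report information about the items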
https://lists.reproducible-builds.org/pipermail/rb-general/2017-January/000298.html | code | [rb-general] SOURCE_PREFIX_MAP and Occam's Razor
infinity0 at debian.org
Sun Jan 22 11:03:00 CET 2017
> If the build-path was only put into the binaries UPON REQUEST, then
> the bulk of those packages would become reproducible.
>> I also don't fancy trying to convince all build tools in existence
>> to adopt the information-stripping approach.
> First, isn't the SOURCE_PREFIX_MAP an information-stripping approach
> itself? It strips out PART of the path in the object file. If you
> can't convince tool maintainers to strip information, you'll have a
> hard time getting uptake on SOURCE_PREFIX_MAP. Whereas if they are
> amenable to stripping SOME information, I am merely suggesting a
> simpler and more straightforward change: stripping out the ENTIRE path
> (by default), and using a standard command line option when that path
> is actually needed.
> Second, aren't the vast bulk of packages built by the GNU compiler
> tools? A small patch to gcc that would remove the build-path by
> default would reduce the "build-path" issue to a much smaller, more
> tractable number of packages.
> I was initially thinking that the compiler command-line option would
> have no argument, and if present would lead to the current behavior
> (object files contining the build path from the root). This would
> avoid any issue about command line options being stored in object
> files. But upon reflection, suppose it took an argument that was a
> relative path to the top of the source tree? E.g.
> gcc --record-build-path-from=../../
__FILE__, also -g is already "not the default". Of course you're welcome to follow up the rest of your suggestion with the GNU folks; but at this point I have spent so much time on this topic that I am getting a little bit tired of explaining to everyone "why not this way, why not that way".
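For context, outside the original thread: GCC has per-invocation options in this spirit; a rough sketch, with a placeholder build path:
# Strip the absolute build directory from DWARF debug info
gcc -g -fdebug-prefix-map=/build/mypkg-1.0=. -c foo.c
# Newer GCC (8+) generalises this to __FILE__ and other recorded paths
gcc -g -ffile-prefix-map=/build/mypkg-1.0=. -c foo.c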
>> Once this
>> SOURCE_PREFIX_MAP thing is done, it's done and everyone is happy.
> Well, that's true for any solution. Once it's solved, it's solved.
> But that begs the question of HOW to solve it.
> There doesn't seem to be a consensus on how to get it done. [..]
There's never a perfect consensus, we have rough consensus.
>> It was a similar situation with SOURCE_DATE_EPOCH, the "hardcode"
>> "keep-it-simple" people didn't see the point but now it's all fine
>> and people with opinions across the whole range of the spectrum can
>> mostly agree to it.
> The "hardcode" people didn't need to even look at SOURCE_DATE_EPOCH,
> since their tools already didn't encode a timestamp in the object
They still need SOURCE_DATE_EPOCH for builds that take a variable amount of time and that put in a timestamp at the end of the build.
> At Cygnus in the 1990s we made the GNU tools fully reproducible, even
> for cross-compilation. This required dealing with many byte-order
> issues and floating-point representations and such -- a set of issues
> that you don't have. We built the byte-order-independent BFD tools,
> and the new GNU linker, pretty much from scratch, for just that purpose.
Do you have this documented in detail somewhere? There is a difference between 100% reproducible and 92% reproducible. We are covering the final few %, so forgive me that I am not interested in people who claim their stuff is "reproducible" but actually do need things like SOURCE_DATE_EPOCH in corner cases that they didn't think about because they were too busy telling everyone how they set timestamps to 0 everywhere.
https://cybercm.tech/blog/2022/06/06/reinventing-retail-with-no-code-machine-learning-sales-forecasting-using-amazon-sagemaker-canvas/ | code | Reinventing retail with no-code machine learning: Sales forecasting using Amazon SageMaker Canvas
Retail businesses are data-driven—they analyze data to get insights about consumer behavior, understand shopping trends, make product recommendations, optimize websites, plan for inventory, and forecast sales.
A common approach for sales forecasting is to use historical sales data to predict future demand. Forecasting future demand is critical for planning and impacts inventory, logistics, and even marketing campaigns. Sales forecasting is generated at many levels such as product, sales channel (store, website, partner), warehouse, city, or country.
Sales managers and planners have domain expertise and knowledge of sales history, but lack data science and programming skills to create machine learning (ML) models to generate accurate sales forecasts. They need an intuitive, easy-to-use tool to create ML models without writing code.
To help achieve the agility and effectiveness that business analysts seek, we’ve introduced Amazon SageMaker Canvas, a no-code ML solution that helps companies accelerate delivery of ML solutions down to hours or days. Canvas enables analysts to easily use available data in data lakes, data warehouses, and operational data stores; build ML models; and use them to make predictions interactively and for batch scoring on bulk datasets—all without writing a single line of code.
In this post, we show how to use Canvas to generate sales forecasts at the retail store level.
In this post, we use Amazon Redshift cluster-based data with Canvas to build ML models to generate sales forecasts. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Retail industry customers use Amazon Redshift to store and analyze large-scale, enterprise-level structured and semi-structured business data. It helps them accelerate data-driven business decisions in a performant and scalable way.
Generally, data engineers are responsible for ingesting and curating sales data in Amazon Redshift. Many retailers have a data lake where this has been done, but we show the steps here for clarity, and to illustrate how the data engineer can help the business analyst (such as the sales manager) by curating data for their use. This allows the data engineers to enable self-service data for use by business analysts.
In this post, we use a sample dataset that consists of two tables: storesales and storepromotions. You can prepare this sample dataset using your own sales data.
The storesales table keeps historical time series sales data for the stores. The storepromotions table contains historical data from the stores regarding promotions and school holidays, on a daily time frame; the promotion and school-holiday indicators are INT columns holding 0/1 flags.
We combine data from these two tables to train an ML model that can generate forecasts for the store sales.
Canvas is a visual, point-and-click service that makes it easy to build ML models and generate accurate predictions. There are four steps involved in building the forecasting model:
- Select data from the data source (Amazon Redshift in this case).
- Configure and build (train) your model.
- View model insights such as accuracy and column impact on the prediction.
- Generate predictions (sales forecasts in this case).
Before we can start using Canvas, we need to prepare our data and configure an AWS Identity and Access Management (IAM) role for Canvas.
Create tables and load sample data
To use the sample dataset, complete the following steps:
- Download the storesales and storepromotions sample data files.
- Upload the sample CSV files, including store_promotions.csv, to an Amazon S3 bucket. Make sure the bucket is in the same Region where you run the Amazon Redshift cluster.
- Create an Amazon Redshift cluster (if not running).
- Access the Amazon Redshift query editor.
- Create the tables and run the COPY command to load data. Use the appropriate IAM role for the Amazon Redshift cluster in the following code:
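A sketch of what the DDL and COPY could look like; the column names are inferred from the rest of the post, the promotion column name is hypothetical, and the bucket path and IAM role ARN are placeholders:
-- Hypothetical DDL for the two tables
CREATE TABLE public.storesales (
    store INT,
    saledate DATE,
    totalsales DECIMAL(12,2)
);
CREATE TABLE public.storepromotions (
    store INT,
    saledate DATE,
    promo INT,
    schoolholiday INT
);
-- Load from Amazon S3 (repeat a similar COPY for the storesales CSV file)
COPY public.storepromotions
FROM 's3://<your-bucket>/store_promotions.csv'
IAM_ROLE 'arn:aws:iam::<account-id>:role/<redshift-copy-role>'
CSV
IGNOREHEADER 1;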
By default, the sample data is loaded in the storesales and storepromotions tables in the public schema of the dev database. But you can choose to use a different database and schema.
Create an IAM role for Canvas
Canvas uses an IAM role to access other AWS services. To configure your role, complete the following steps:
- Create your role. For instructions, refer to Give your users permissions to perform time series forecasting.
- Replace the code in the Trusted entities field on the Trust relationships tab.
The following code is the new trust policy for the IAM role:
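A sketch of such a trust policy, assuming (per the linked time series forecasting instructions) that the role must be assumable by both the SageMaker and Amazon Forecast service principals:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": ["sagemaker.amazonaws.com", "forecast.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}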
- Provide the IAM role permission to Amazon Redshift. For instructions, refer to Give users permissions to import Amazon Redshift data.
The following screenshot shows your permission policies.
The IAM role should be assigned as the execution role for Canvas in the Amazon SageMaker domain configuration.
- On the SageMaker console, assign the IAM role created as the execution role when configuring your SageMaker domain.
The data in the Amazon Redshift cluster database and the Canvas configuration are both ready. You can now use Canvas to build the forecasting model.
After the data engineers prepare the data in Amazon Redshift data warehouse, the sales managers can use Canvas to generate forecasts.
To launch Canvas, the AWS account administrator first performs the following steps:
- Create a SageMaker domain.
- Create user profiles for the SageMaker domain.
For instructions, refer to Getting started with using Amazon SageMaker Canvas, or contact your AWS account administrator for guidance.
Launch the Canvas app from the SageMaker console. Make sure to launch Canvas in the same AWS Region where the Amazon Redshift cluster is.
When Canvas is launched, you can start with the first step of selecting data from the data source.
Import data in Canvas
To import your data, complete the following steps:
- In the Canvas application, on the Datasets menu, choose Import.
- On the Import page, choose the Add connection menu and choose Redshift.
The data engineer or cloud administrator can provide Amazon Redshift connection information to the sales manager. We show an example of the connection information in this post.
- For Type, choose IAM.
- For Cluster identifier, enter your Amazon Redshift cluster ID.
- For Database name, enter dev.
- For Database user, enter your Amazon Redshift database user name.
- For Unload IAM role, enter the IAM role you created earlier for the Amazon Redshift cluster.
- For Connection name, enter redshiftconnection.
- Choose Add connection.
The connection between Canvas and the Amazon Redshift cluster is established. You can see the redshiftconnection icon at the top of the page.
- Drag and drop the storesales and storepromotions tables under the public schema to the right panel.
It automatically creates an inner join between the tables on their matching column names.
You can update joins and decide which fields to select from each table to create your desired dataset. You can configure the joins and field selection in two ways: using the Canvas user interface to drag and drop joining of tables, or update the SQL script in Canvas if the sales manager knows SQL. We include an example of editing SQL for completeness, and for the many business analysts who have been trained in SQL. The end goal is to prepare a SQL statement that provides the desired dataset that can be imported to Canvas.
- Choose Edit in SQL to see SQL script used for the join.
- Modify the SQL statement so that it joins the two tables and selects the fields you need; a sketch of such a query is shown after these steps.
- Choose Run SQL to run the query.
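A sketch of a join matching that description (the join keys are the shared store and saledate columns; the promotion column name is hypothetical):
SELECT s.store,
       s.saledate,
       s.totalsales,
       p.promo,
       p.schoolholiday
FROM public.storesales AS s
INNER JOIN public.storepromotions AS p
        ON s.store = p.store
       AND s.saledate = p.saledate;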
When the query is complete, you can see a preview of the output. This is the final data that you want to import in Canvas for the ML model and forecasting purposes.
- Choose Import data to import the data into Canvas.
When importing the data, provide a suitable name for the dataset, such as
The dataset is ready in Canvas. Now you can start training a model to forecast total sales across stores.
Configure and train the model
To configure model training in Canvas, complete the following steps:
- Choose the Models menu option and choose New Model.
- For the new model, give a suitable name such as
- Select the dataset
- Choose Select dataset.
On the Build tab, you can see data and column-level statistics as well as the configuration area for the model training.
- Choose totalsales for the target column.
Canvas automatically selects Time series forecasting as the model type.
- Choose Configure to start configuration of the model training.
- In the Time series forecasting configuration section, choose store as the unique identity column because we want to generate forecasts for the store.
- Choose saledate for the time stamps column because it represents historical time series.
- Enter 120 as the number of days, because we want to forecast sales for a 3-month horizon.
- Choose Save.
- When the model training configuration is complete, choose Standard build to start the model training.
The Quick build and Preview model options aren’t available for the time series forecasting model type at the time of this writing. After you choose the standard build, the Analyze tab shows the estimated time for the model training.
Model training can take 1–4 hours to complete depending on the data size. For the sample data used in this post, the model training was around 3 hours. When the model is ready, you can use it for generating forecasts.
Analyze results and generate forecasts
When the model training is complete, Canvas shows the prediction accuracy of the model on the Analyze tab. For this example, it shows a prediction accuracy of 79.13%. We can also see the impact of the columns on the prediction; in this example, columns such as schoolholiday don't influence the prediction. Column impact information is useful in fine-tuning the dataset and optimizing the model training.
The forecasts are generated on the Predict tab. You can generate forecasts for all the items (all stores) or for the selected single item (single store). It also shows the date range for which the forecasts can be generated.
As an example, we choose to view a single item and enter 2 as the store to generate sales forecasts for store 2 for the date range 2015-07-31 00:00:00 through 2015-11-28 00:00:00.
The generated forecasts show the average forecast as well as the upper and lower bounds of the forecasts. These bounds help in taking either an aggressive or a balanced approach to handling the forecast.
You can also download the generated forecasts as a CSV file or image. The generated forecasts CSV file is generally used to work offline with the forecast data.
The forecasts are generated based on time series data for a period of time. When the new baseline of data becomes available for the forecasts, you can upload a new baseline dataset and change the dataset in Canvas to retrain the forecast model using new data.
You can retrain the model multiple times as new source data is available.
Generating sales forecasts using Canvas is configuration driven and an easy-to-use process. We showed you how data engineers can help curate data for business analysts to use, and how business analysts can gain insights from their data. The business analyst can now connect to data sources such as local disk, Amazon S3, Amazon Redshift, or Snowflake to import data and join data across multiple tables to train an ML forecasting model, which is then used to generate sales forecasts. As the historical sales data updates, you can retrain the forecast model to maintain forecast accuracy.
Sales managers and operations planners can use Canvas without expertise in data science and programming. This expedites decision-making time, enhances productivity, and helps build operational plans.
To get started and learn more about Canvas, refer to the following resources:
- Amazon SageMaker Canvas documentation
- Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts
About the Authors
Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817765.59/warc/CC-MAIN-20240421101951-20240421131951-00495.warc.gz | CC-MAIN-2024-18 | 12,148 | 106 |
https://forum.tezosagora.org/t/the-science-of-blockchain-conference-2023/5340 | code | The Science of Blockchain Conference 2023 is happening at the Arrillaga Alumni Center, Stanford University, on Aug 28 - 30, 2023. Link
I think it would be very beneficial to see some "made by Tezos" technical innovations there!
There is a call for papers:
It is a great chance to present the innovative scaling solution of Rollups there; there is a dedicated topic of interest for it: (tagging @d4hines & @NomadicLabs )
Another one would be flashbake, which could be a fit for the MEV topic of interest (tagging @nicolasochem ):
I also know that Tezos is strong on the formal verification topic, so that's here too:
The conference focuses on technical innovations in the blockchain ecosystem, and brings together researchers and practitioners working in the space. We are interested in the application of cryptography, decentralized protocols, formal methods, and empirical analysis, to improving the security and scalability of blockchain deployments. We aim to foster collaboration among practitioners and researchers working on blockchain protocol development, cryptography, distributed systems, secure computing, crypto-economics, and economic risk analysis.
Would be great to get some Tezos representation there
Would be great to see technical contributors from the community participate in this! We all know Tezos has a lot to contribute in these areas.
I believe this is an academic conference. I once made the mistake of showing up to an academic conference on Datalog and logic programming, not knowing the difference between that and a “programmer conference”. It was an ice-cold dunk into the deep end of the pool (“Uhhh, which way to the 20-min ‘Getting started with Datalog’ talk’? Oh, you, that’s not a thing? Ok, I guess I’ll attend your talk on ‘Polynomial Datalog Rewritings for Ontology Mediated Queries with Closed Predicates’ ”). Very rewarding though!
Nonetheless, Tezos is a great place for scientists! If you want your ideas to see the light of day in other ecosystems, you have to write your own paper, design your own L1 or L2, get VC funding, build all of it, and ship it. On Tezos, if you've developed a good idea, you can just write an amendment, and in a few months it can ship to millions of users (or, as of Mumbai, just write your own L2).
I have nothing I can teach blockchain scientists (though I'd love for them to teach me!) but I'd be happy to advocate for them to bring their work to Tezos.
That said, Tezos does have a small army of real blockchain scientists, especially at @NomadicLabs, who could perhaps present their research, which would be cool as well.
Tezos is far behind on the topic of MEV. We have much to learn but not much to share.
I think the most interesting Tezos topic to present at this conference would be aPlonk and Epoxy. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647895.20/warc/CC-MAIN-20230601143134-20230601173134-00717.warc.gz | CC-MAIN-2023-23 | 2,802 | 15 |
https://forums.xamarin.com/profile/reactions/JonathanDibble?reaction=like | code | @mattward Thank You !!!
Your suggestion works perfectly. I had to close down VS4Mac and re-open it (and the project) to get the code behind file to sit correctly under the xaml files.
Thank yo… (View Post)
After upgrading VS4Mac to I am unable to open a project that opened perfectly an hour ago.
I receive a series of error screens as shown below, after which any existing code pages are blank !
… (View Post)
The latest release will not deploy my android app to either a device or to a simulator (Xamarin Android Player).
In the Deploying to Device window I get
Deployment failed because of an internal… (View Post) | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526807.5/warc/CC-MAIN-20190418201429-20190418223429-00236.warc.gz | CC-MAIN-2019-18 | 623 | 9 |
https://support.oneidentity.com/safeguard-authentication-services/kb/90587/firefox-sso-configuration-on-mac-os-x | code | Firefox does not allow SSO unless the native gsslib is used. What is the correct config for Firefox's network.negotiate-auth.using-native-gsslib setting on Mac OS X?
Use the "network.negotiate-auth.using-native-gsslib = true". Using Quests is not best practices on a Mac. Using the system one is. We store the kerberos tickets in the local ccache. When Firefox uses the native gsslib, it will use the tickets that we created and maintain. | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00512.warc.gz | CC-MAIN-2021-49 | 438 | 2 |
https://matthewshotton.wordpress.com/2010/08/17/quick-update-3d-scanner-mk2/ | code | Just a quick update to say the 3D scanner project is starting its second phase. Gonna throw up some photos of it in a bit. The google code repo is broken at the moment (hell, it's been broken for the past few months), but within the week a new cross-platform version of the software will be available. | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946165.56/warc/CC-MAIN-20180423184427-20180423204427-00008.warc.gz | CC-MAIN-2018-17 | 1,892 | 7
https://sway.uservoice.com/forums/264674-sway-suggestion-box/suggestions/41447950-allow-to-edit-each-cards-playback-time | code | Allow to edit each cards playback time
Allow to edit each cards playback time.
Some cards don't need a lot of time to present in playback, like introductory cards. But other cards contain a bit more context and you need to offer the audience more time to read the content.
Setting a preset time holistically for all cards can dull the audience during intro cards if the playback duration is too long, and can cause the audience to give up on cards with more substance if they can't read them because the playback duration is too short. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621699.22/warc/CC-MAIN-20210616001810-20210616031810-00209.warc.gz | CC-MAIN-2021-25 | 525 | 4
http://weather.ou.edu/~oubliss/blumberg | code | Now a Ph.D candidate at University of Oklahoma, Greg graduated with his B.S. and M.S. in meteorology from OU in 2011 and 2013. Greg uses sounding datasets from boundary-layer profilers and radiosondes to answer the question: “What new ways can we use soundings to better understand the environments that are conducive to thunderstorms?” Datasets he works with come from both operational data streams and field projects (e.g. PECAN, IHOP, and VORTEX-SE). Recently, he has been focusing on using high-temporal resolution convection indices derived from the AERI sounding instrument to improve our understanding of the atmospheric variability within convective environments and how that variability relates to thunderstorm evolution. Greg is an author of the internationally-used open-source sounding analysis program called SHARPpy. Outside of research, Greg actively mentors and teaches numerous high-school and college students from around the country who have an interest in meteorology.
Office: NWC 3230 | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688103.0/warc/CC-MAIN-20170922003402-20170922023402-00232.warc.gz | CC-MAIN-2017-39 | 1,009 | 2 |
https://zh.coursera.org/learn/bioinformatics/reviews?page=3 | code | Apr 18, 2016
I give this course 5 stars because I did Bioinformatics I and I totally enjoyed it. This is where programming can be fun, and practical, and you'll learn some basic biology too. What's not to love?
Aug 16, 2017
I learned a lot from this difficult and time-consuming course! It covers biological concepts using Python. It made learning Python more interesting for me, since I have always loved biology.
By Brian B•
Jun 21, 2017
Excellent intro to Bioinformatics - the course is not afraid of asking some seemingly tough questions and I took a lot away.
By Jiseok L•
Feb 21, 2020
Overall very good introduction to Bioinformatics and python coding, but some materials and explanations are not very clear.
By Sara M V•
Feb 10, 2017
It was a really interesting course! Some exercises were challenging but it was very rewarding when you found the solution.
By Song C•
Apr 03, 2018
Very specific introduction ranging from molecular basis to a appropriate intro of info theory. Good course to begin with!
By Mauro G C•
Jan 08, 2018
I really the course. However, if you are an engineer you will have some problems to understand some biological concepts.
Jun 24, 2019
The design of class is very sophisticated, which is quite friendly to beginners. Looking forward to the rest classes.
By Gabriel C F•
Jun 29, 2019
I am really enjoying the course. The content is rich in information that, in the end, helps you go after even more content.
By Juris L•
Jul 07, 2019
Amazing course with amazing pace and approach. Would recommend just one thing: describe and illustrate a bit more.
By Antonio D C C•
Apr 04, 2017
A very entertaining way to learn programming and applying this new acquired knowledge to real problems in biology.
By Sofia R•
Jul 14, 2017
Awesome! There's biology information and there's programming practice!
It's fantastic, thanks to the instructors!
By Eyal W•
May 23, 2018
Great course, very interesting.
Some of the python quizes are confusing but the comment are surely helpful!
Dec 07, 2016
thank you guys,the most difficult lesson I have got,but I love the feelings when I finally accomplish it.
By Satavisha R•
Jun 16, 2017
Amazing course. I learned plenty and can't wait to learn further more and apply the concepts to my work.
By Martin H•
Jun 18, 2018
Great course. Its challenging but also very rewarding. Will try to keep up with the subsequent courses.
By brennon s•
Jul 31, 2017
Well written and easy-to-follow. Requires very little searching for information not presented in course
By Juan M•
Aug 27, 2019
Very helpful, definitely recommend the course. Helped me learn Python and the basics of Bioinformatics
By Franco D•
Oct 24, 2019
Pretty good course, be ready from a Programming perspective (Python) because it ramps up pretty fast.
Jan 22, 2017
Good course for someone beginning in bioinformatics with background on computer programming.
Aug 13, 2019
An excellent course for the beginners who want to start learning about bioinformatics!
Jul 28, 2016
Excellent course for beginner like me!!!
Thank again for introducing me to this matter.
By Qiang C•
Jan 01, 2018
I have got to understand how to search motifs and do motifs alignments using pythons
By Pablo R M•
Apr 10, 2017
Excellent course! Instructors are great and the exercises using Stepik are awesome
By Ahmed A A M S•
Aug 25, 2017
Very insightful in the Bioinformatics field, for those considering it as a career
By Thomas D•
Nov 08, 2019
Nice course for somebody with just a little knowledge of biology, genetics etc.
By Teng W•
Jan 19, 2019
The course content explains deep ideas in an accessible way and the language is full of fun. I really like this course, and I really like Pavel Pevzner. Students with a suitable Python foundation will find the learning fairly smooth; otherwise it will feel like hard work. | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146127.10/warc/CC-MAIN-20200225172036-20200225202036-00281.warc.gz | CC-MAIN-2020-10 | 3,912 | 77
https://worldofpadman.net/en/news/healthstation/ | code | Here it is, the HealthStation, formerly known as the LoadingStation. 😉 We wanted to present this one some time ago, but we ran into some trouble with the effects. We had some nice stars around the cross, but the performance they needed was too high, so we had to remove them. 🙁
But there are still cool effects: the cross shrinks when the station has only a few health points left for you, and the ring around the cross rotates in the other direction than the cross! So the station still looks really cool, and it works fine thanks to our coder raute. Maybe we will change the yellow rings … we will see!? If you are low on health, find one of these stations, step on it, and wait a few seconds until you think you have enough health. Easy, isn't it?!
(Translation by Harmonieman) | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00109.warc.gz | CC-MAIN-2022-21 | 784 | 3 |
https://kpabijith.wordpress.com/tag/coding/ | code | Until the git session at Fossmeet 2013 at NITC, I was using git only as a method/tool to upload my code into Github. That particular session by Mr. Noufal Ibrahim really changed my views on the usage of git. The word particular was added only because of the satisfaction that I got after attending that workshop.
Me and one of my friends, Krispin, were attending the programme together. Actually git was the only version control system we had used till then (we had just heard a few names like Subversion, Mercurial, etc). For pushing code into a Github repository we only needed a few commands like add, commit and push, and very rarely I used commands like pull. When we first used the branching technique in git, we were really amazed.
In reality I was searching for the file that I had created on one branch, when I couldn't find it on the other. Once I heard one of my seniors tell his friend that all code is maintained using git. Actually I didn't get why he was wasting his time with all this stuff. But that session was the point where I really got the point of why people use version control systems.
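For anyone who gets confused the same way, a minimal branching session looks roughly like this (illustrative):

git checkout -b experiment       # create and switch to a new branch
echo "scratch work" > notes.txt
git add notes.txt
git commit -m "try an idea"
git checkout master              # notes.txt disappears from the working tree...
git checkout experiment          # ...and reappears here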
We have a compiler lab this semester and I could have saved a lot of time if I had used git. Whenever I changed the code I made a copy of it and edited the copy only, and since I have a habit of changing the code very often, I ended up with many copies of the same file with only a small difference between them, or sometimes none at all.
Now the next thing on the list would be to study the internals and some advanced topics in git. (Well, before the Fossmeet itself I had tried to learn git by myself, many times. And all those tries resulted in nothing. But this time it would be different. For sure…. :)) | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700435.69/warc/CC-MAIN-20191019214624-20191020002124-00085.warc.gz | CC-MAIN-2019-43 | 1,676 | 5
https://wiki.wireshark.org/DCT2000?action=diff&rev1=1&rev2=2 | code | Revision 1 as of 2006-04-21 20:39:53
DCT2000 page should be here instead...
Add link to sample capture file for this file type
Deletions are marked with "-", additions with "+".
Line 20:
- XXX - Add a simple example capture file to the SampleCaptures page and link from here (see below). Keep this file short, it's also a good idea to gzip it to make it even smaller, as Ethereal can open gzipped files automatically.
+ Here is a short example file of this format, that has examples of packets using most supported link types
Line 22:
- * attachment:SampleCaptures/PROTO.pcap
+ * attachment:SampleCaptures/dct2000_test.out
Line 34 / Line 35:
- There is no support for directly capturing dct2000 packets - they will only be seen by opening DCT2000 .out files.
+ There is no way to directly capture dct2000 packets - they will only be seen by opening DCT2000 .out files.
Catapult DCT2000 .out file packet header
This protocol / header format consists of some information associated with a packet read from a Catapult DCT2000 .out file. The fields that comprise this protocol (context, direction, original timing information) should be useful for filtering, and also make it easy to correlate entries in the Ethereal packet list with the DCT2000 decodes.
The DCT2000 dissector shows the fields of this protocol before handing off to the appropriate link-type dissector (ip, ethernet, atm, sscop, lapd, ppp, frame relay or mtp2). Support for this file support/protocol is not yet in any officially released version of Ethereal (and it may be quite some time before it is), but is available for download in recent [http://www.ethereal.com/distribution/buildbot-builds/ buildbot] builds.
There is a single preference setting.
* Only show known 'board-port' protocols. Default OFF (i.e. show all messages)
.out files can contain non-standard messages sent between contexts running on the same card, so by setting this to ON, you can tell the wiretap module not to load these messages. When creating .out files for use by Ethereal you should obviously turn on logging for board ports.
Example capture file
Here is a short example file of this format, that has examples of packets using most supported link types
A complete list of DCT2000 display filter fields can be found in the [http://www.ethereal.com/docs/dfref/d/dct2000.html display filter reference]
Show only the dct2000 based traffic:
 dct2000
(Note that a capture file will either all be DCT2000 packets, or none at all, so the above filter is not very useful)
There is no way to directly capture dct2000 packets - they will only be seen by opening DCT2000 .out files.
See the [http://www.catapult.com Catapult Communications] website for DCT2000 product information. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622113.11/warc/CC-MAIN-20210625054501-20210625084501-00027.warc.gz | CC-MAIN-2021-25 | 2,750 | 23 |
http://www.computerforums.org/forums/hardware/help-please-dunno-what-seems-matter-140821.html | code | Help! PLEASE! dunno what seems to be the matter!
OK, i recently bought a caddy so that i could boot up linux when i wanted just by putting the caddy in.... ah if only it were that simple!
i put the caddy in my pc, Primary Master IDE and i put in the linux cd (i also have 2 sata hard drives installed) . i put in the caddy and i put in the linux cd and the computer begins to boot from the CD, brings up the installation, installs everything wonderfully... but when the pc goes to restart... IT WONT LOAD UP LINUX! i get a black screen with a while cursor sitting there and like once a minute it displays the letter 'd' on the screen and if i leave it long enough the screen looks like "ddddddddddddddddd.." etc... what the heck is the problem?!
so like i decided... to take the harddrive out of the caddy and put it in another PC where it is again primary master IDE. i put in the linux cd, re install linux and when the pc restarts, linux works.... so i took that working harddrive, put it back in the caddy and back into my pc, BUT IT WONT WORK! back to that silly black screen with the 'd's all over it.... this is weird, but cmon guys!!! im sure one of you have the solution to this dilemma!
thank you soooo very much!
IBM - Idiots Become Managers | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720967.29/warc/CC-MAIN-20161020183840-00545-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 1,252 | 6 |
https://english.stackexchange.com/questions/493406/if-they-were-is-real-condition-or-hypothetical/493407 | code | If he wasn't going on the camping, he would stay home. (he might go to the camping) If he weren't going on the camping, he would stay home. (he went to the camping and didn't stay home.)
The first is a real condition, and the second is hypothetical because of was and were. but if the subject is "they", both will be the same. So how to know which is the hypothetical or real condition? | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145746.24/warc/CC-MAIN-20200223032129-20200223062129-00445.warc.gz | CC-MAIN-2020-10 | 386 | 2 |
https://halldweb.jlab.org/wiki/index.php?title=GlueX-related_shared_accounts_on_the_JLab_CUE&oldid=17104 | code | GlueX-related shared accounts on the JLab CUE
We have a group account on the JLab CUE for gluex-related tasks that are generally helpful for the entire collaboration. The username is "gluex" and is a member of the "halld" unix group.
Uses of the gluex account
Some of these tasks are run in cron jobs. By having them in a group account we can have more than one person have access to the crontab and the scripts, and it facilitates having a standard environment, i. e., it frees individuals to change the environment in their private accounts at will and not worry about breaking public jobs.
Another task is creating the standard builds of the offline software. Since all the files are owned by the system account, individual users cannot inadvertently overwrite or modify these builds.
How to log into the account
This account does not have a password; it is accessed exclusively via ssh using your public/private key pair. To log into the account you have to have your public key on file in the gluex account in the file /home/gluex/.ssh/authorized_keys. Contact the Software Coordinator (or any person who currently has access to the account) to have your public key added. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00656.warc.gz | CC-MAIN-2022-27 | 1,179 | 7
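For illustration, the steps look roughly like this (the login hostname is a placeholder, not taken from this page):

ssh-keygen -t ed25519                # only if you do not already have a key pair
cat ~/.ssh/id_ed25519.pub            # send this line to the Software Coordinator
ssh gluex@<CUE login host>           # works once the key is in /home/gluex/.ssh/authorized_keys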
http://www.indiegamehunt.com/games/5276-grow-titanio-be-the-biggest-warrior-and-fight-free-to-play | code | Grow Titan.io is a battle royale io game of warriors who get bigger as they level up.
In the limited battlefield where you die if you fall, the character carries only one blunt weapon against other players.
You can experience an exhilarating battle with just simple controls. Hammer down enemies and blow them away with a whirlwind!
Have fun growing a character from a small warrior into a giant Titan! You can download it on the Apple App Store & Google Play and enjoy playing right now!
★Download ☞ Apple: https://apps.apple.com/app/id1558975702☞ Google: https://play.google.com/store/apps/details?id=com.dreamplay.titanmaker.google
★Official Trailer ☞ https://youtu.be/rMKIJSi1gTc | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00081.warc.gz | CC-MAIN-2021-31 | 680 | 6 |
http://hoarding.altneu.me/ | code | lampinCMS. Private. Personal. Anti-social. Any screen.
To register and receive an invitation with a private lampinCMS install all that is needed is an E-Mail address.
HOARDBOARD, a private image hosting service for all screens.
HOARDBOARD. Private. Personal. Uncensored. Up to 10MB per upload. Direct image links, BBCode and HTML thumbnails. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00533.warc.gz | CC-MAIN-2019-26 | 341 | 4 |
http://swish-e.org/archive/2002-09/4483.html | code | At 10:56 AM 09/11/02 -0700, Jody Cleveland wrote:
>> Regardless, switch to <swishdocpath> and <swishdescription> would probably
>I'm not understanding. Switch it where?
See the section on -x.
>> You need two config options. I think this is all described in the
>> swish.cgi docs, too.
>> IndexContents HTML .html
>> StoreDescription HTML <body>
>When I index, I run swish-e -S prog -c spider.config
>In that config file, I have this:
>StoreDescription HTML <body> 200000
>IndexContents HTML2 .htm .html
>IndexContents TXT .txt .conf
So all your .htm, .html are type HTML2, and .txt and .conf are type TXT,
but StoreDescription is only saving the <body> for docs of type HTML.
IndexContents TXT2 .txt .conf
StoreDescription HTML2 <body> 200000
StoreDescription TXT2 200000
That's saying all docs are HTML2, with the exception of .txt and .conf which are TXT2. And then two StoreDescriptions are needed because docs are now of type HTML2 or type TXT2.
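Putting those directives together, the relevant lines of the config would read (sketch):

IndexContents HTML2 .htm .html
IndexContents TXT2 .txt .conf
StoreDescription HTML2 <body> 200000
StoreDescription TXT2 200000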
>Which I believe I took right from the docs.
Quite possible. I'll fix if you can point it out.
Sorry for all confusion about the document types. That's all due to having
two sets of parsers possible -- not to mention that we talk about HTML docs
in the general sense, and also HTML and HTML2 "types" as far as swish-e
processing is concerned.
Received on Wed Sep 11 18:25:27 2002 | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258944256.88/warc/CC-MAIN-20160723072904-00243-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 1,331 | 28 |
https://www.mytechlogy.com/IT-jobs-careers/37776672/lead-infrastructure-architect/ | code | Insight Global is seeking a Lead Infrastructure Architect to sit onsite for a large client here in Cleveland, OH. This person is being brought on because the IT department is expanding and they need someone to be able to lead infrastructure projects, design projects as well as implement them. This person isn't going to be niche to just one type of IT skillset, the manager is looking for someone who has dabbled in many different areas of IT. Some of the IT Infrastructure projects they will be working on are LAN, WAN, SAN, firewalls, servers, (Linux/Windows). They will also oversee scouting and researching new technologies focusing on continuous improvement and value to the business. They will also be providing production support, performance monitoring, capacity monitoring, disaster recovery and trouble ticket resolution. The manager is looking for someone who can lead and contribute to projects, be accountable for deliverables and have successful implementation.
Ex: They have a new distribution center being built, they might need this person to scope out wireless transmission, put up a website for the development digital object system in China, etc.
Has worked in a Manufacturing Industry before.
Personality fit: Someone who can pick up and apply new technologies, communicate effectively, and work effectively on project teams as a leader.
Bachelors Degree or equivalent experience (manager said for a rock star they could take someone with an Associates but prefers Bachelors).
8+ years of IT Infrastructure experience.
Someone who can effectively provide technology solutions.
Must have prior experience with enterprise-class network solutions including, routers, switches, firewalls, WAN optimization solutions, load balancers, IT security and VPN solutions.
Understanding of SAN/LAN/WAN Infrastructure (Cisco, EMC, VPN, etc.)
Problem solving through Scripting (Ansible/Salt, PowerShell, Python, Perl, etc.)
Someone who has monitored with systems management and monitoring systems (Splunk, Swatch, PRTG, SCCM, etc.) | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359976.94/warc/CC-MAIN-20211201083001-20211201113001-00342.warc.gz | CC-MAIN-2021-49 | 2,039 | 10 |
https://lists.mysql.com/mysql/123303 | code | > corrupted. The most common error messages which I encountered are : "Can't
> open file: 'tablename.MYD'. (errno: 145)" and "Got error 127 from table
> I rectified it by shutting down the MySQL Server and using "myisamchk"
> with the options -r and sometimes -o.
145 = Table was marked as crashed and should be repaired
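(For reference, a repair session along those lines might look like this; the paths are illustrative:)

# with the MySQL server stopped:
myisamchk -r /var/lib/mysql/mydb/tablename.MYI
# for tables that -r cannot fix:
myisamchk -o /var/lib/mysql/mydb/tablename.MYI
# or, from SQL while the server is running:
#   REPAIR TABLE tablename;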
> Is there anything I can do to avoid such errors to occur in future ?
Yes. Stop your machine from crashing :)
> Don't these errors make MySQL unreliable ?
Not at all, considering the nature of the problem. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00493-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 527 | 9 |
http://corporate-training.prismtechnologies.co.in/microsoft-training/sharepoint-2016-training/ | code | Prism Technologies deliver the Sharepoint 2016 Training in End User, Power User & Site owners.
We also deliver MS SharePoint 2016 training in Bangalore, Hyderabad, Chennai, Kolkata, and Ahmedabad.
Prism Technologies delivers MS SharePoint 2016 training in Mumbai, Pune, Delhi, and Nagpur, India.
We also deliver MS SharePoint 2016 training in Noida, Gurgaon, Ahmedabad, Nagpur, Surat, Baroda, and Nasik.
The truth is that we cannot survive in business today without MS Office & collaborating on it using SharePoint.
But most people only know how to use a small part of MS Office 2016 product’s functionality and that puts them at a big disadvantage.
That’s why Microsoft Sharepoint 2016 training is so important.
And that’s why professional office skills training using these packages is so critical to your career.
Sharepoint 2016 Training Courses
Introduction to SharePoint 2016 Training Course
This Introduction to SharePoint 2013 class is for end users working in a SharePoint 2013 environment. It is an abbreviated version of our complete SharePoint End User class and intended for people new to using SharePoint who will not be responsible for managing a SharePoint site.
SharePoint 2016 End User Training Course
SharePoint 2016 End User class is for end users and site owners/managers new to working in SharePoint 2016 environment. The course teaches SharePoint basics such as working with lists and libraries, basic page customization, working with forms and also managing site permissions and users.
SharePoint 2016 Power User Training Course
This SharePoint 2016 Power User training class is designed for individuals who need to learn the fundamentals of managing SharePoint sites
SharePoint 2016 Site Owner Training
This SharePoint 2016 Site Owner class is for site owners & also managers new to working in a SharePoint 2016 environment.
Contact us for more details on the above courses. | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867559.54/warc/CC-MAIN-20180526151207-20180526171207-00599.warc.gz | CC-MAIN-2018-22 | 1,905 | 18 |
https://www.choicestationery.com/printer_supplies/ribbons/epson/c13s015262 | code | Epson Ribbon Cassette Fabric Nylon Black [for LQ2250 2500 860 1060].
We have this product listed as compatible with the following printers, you can click on any of the printers below to see all of the other products as well.
Any Supporting Documents/Files will be listed here. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394756.31/warc/CC-MAIN-20200527141855-20200527171855-00499.warc.gz | CC-MAIN-2020-24 | 487 | 6 |
https://www.studypool.com/discuss/1156867/answer-question-below-3?free | code | Thank you for the opportunity to help you with your question!
There are 3 possibilities for how a larger eyeball could affect human vision:
1) There are more light-gathering rods and cones in someone with a larger retina than a smaller one, which would result in a higher-resolution image in the brain for a larger retina.
2) The number of cones and rods is the same on a small or large retina, but each cell is larger and therefore gathers more light, resulting in better low-light vision for someone with a large retina. Or
3) The number and size of light-gathering cells is roughly the same for adults with larger or smaller retinas, but there are larger gaps (and more supportive cells) between rods and cones in larger retinas. This would result in equivalent eyesight among adults with different sized retinas. | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689413.52/warc/CC-MAIN-20170923014525-20170923034525-00208.warc.gz | CC-MAIN-2017-39 | 815 | 14
http://elderofziyon.blogspot.com/2009/04/lying-as-natural-as-breathing-part-2.html | code | A couple of people decided to look up Nael Barghouti, the person I mentioned in my last post as supposedly the record-breaking prisoner. The only place we see that name mentioned is as the architect of the Park Hotel bombing that killed 20 in 2002.
If he is in fact that same guy, then the article quoted was even more of a lie than I thought. Barghouti would have only been in prison a maximum of 7 consecutive years and calling him a "political prisoner" would also be an outright lie.
Par for the course.
(h/t Soccer Dad and Henrik) | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917126237.56/warc/CC-MAIN-20170423031206-00303-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 589 | 6
https://www.troyhunt.com/accidental-mvp/ | code | An unexpected email was waiting for me when I got off the plane from a recent work trip to Thailand on Saturday:
Congratulations! We are pleased to present you with the 2011 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Developer Security technical communities during the past year.
Given this was sent out on April 1st, one could be forgiven for being a little sceptical. Being loaded up with healthy doses of overwork and jet lag when I read the email didn’t help make things any clearer but a couple of days on, after the dust has settled, it appears I am an MVP. I even have a very nice letter which they kindly offered to send to my boss:
Apparently the writing I’ve been doing around the application security space has resonated and somehow I’ve emerged with the award. I can honestly say that this came absolutely, positively out of the blue and was not something I expected nor set out to achieve.
In complete honesty, my writing on security – predominantly the OWASP Top 10 for .NET Developers series – was driven by two very simple objectives:
- I wanted to help those I work with understand how to actually deliver on the OWASP Top 10 at a practical level, specifically within the ASP.NET framework.
- I wanted to improve my own understanding of security in ASP.NET. I just seem to learn more efficiently when I push myself to articulate things clearly to other people.
So what next? Well, I get to use the shiny blue MVP badge over on the right side of my blog and have a couple of presents from Microsoft but other than that, nothing much changes, at least not as far as I know. I still need to work through the remainder of the OWASP Top 10 series (part 7 on crypto is mostly there) and perhaps I feel a little more driven now. Same deal with other app sec writing which again is as much about my own discovery process as it is anything else.
Ultimately the award is recognition for what the MVP has been doing and on that basis, I’ll mostly keep doing the same thing as its obviously working. In fact that’s what has always appealed to me about the MVP status; people achieve it not because they’ve rote learned some text books and paid for a certification, rather it’s because they’ve actually done something of substance which has helped other people. And that’s a very nice feeling indeed.
A few brief mentions before I sign off: Barry Dorrans (AKA @blowdart, AKA author of Beginning ASP.NET Security), who obviously had more than a bit of passing input in the MVP nomination. David Tchepak (AKA @davetchepak, AKA half the brains trust behind the very excellent NSubstitute), who I watched closely (from the next desk) as he began his blogging back in ‘06 and who encouraged and supported me to do the same. The very beautiful Mrs. Hunt for tolerating the frequent appearance and use of technology devices in each corner of the house and at all hours of the day. And to everyone else who commented, contributed, argued and generally encouraged me to write more. Thanks folks. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474690.22/warc/CC-MAIN-20240228012542-20240228042542-00797.warc.gz | CC-MAIN-2024-10 | 3,178 | 10 |
http://www.ign.com/boards/threads/there-seems-to-be-a-big-lack-of-story-driven-games-on-vita.207887089/ | code | Uncharted is the only one I know of. I know they are working on a Bioshock game but in terms of launch games, or any game we've even seen a screenshot of, none of them seem to be story driven. I guess there's Resistance but I'm not really into that. Am I missing any games? Maybe something has gone under my radar. | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806310.85/warc/CC-MAIN-20171121021058-20171121041058-00465.warc.gz | CC-MAIN-2017-47 | 313 | 1
http://xml.coverpages.org/mahExtremePoster.html | code | Full paper at http://www.stg.brown.edu/pub/filerep.html
We need to represent information about a set of hierarchically linked documents in order to manipulate them as we perform a conversion from HTML->XHTML->OEB (XML).
Specifically, we have developed a web-based system for grabbing web sites (or sub-trees) and converting them to OEB documents. This system must allow easy manipulation, reconfiguration of file set, file attributes, etc.
-- a strategy that turned out to efficiently & naturally support a wider range of uses than originally anticipated.
The general structure of the DTD is that of <document> and <media> elements, each one of which corresponds to a file. The first few element declarations in the DTD are as follows (attribute declarations trimmed for brevity):
<!ELEMENT filerep (user, metadata, spine+, directoryroot) > <!ELEMENT directoryroot ( (directory | document | media)+ ) > <!ELEMENT directory ( (directory | document | media)+ ) >
Although we gave <document> elements IDs, so we could refer to them, we realized it was expedient to retain the actual file path, in order to (for example) transform or validate the actual files via system commands.
One uses ID/IDREFs for things such as easily finding all the files to which a given file has links (or conversely which files point to a given file), or displaying the titles of items in the spine. XML/SGML mandates that IDs/IDREFs may contain only [ . - a-z A-Z 0-9]. However, file paths may have other characters, especially slash and underscore.
Solution: Modified version of the file path for IDs/IDREFs, and also add a separate CDATA attribute containing the actual file path. Here is an example of a <document> element:
<document name="index.html" id="root-index.html" source="fetched"> <uri>index.html</uri> <origuri>http://www.guildhallinn.com/index.html</origuri> <title>Guildhall Inn Bed and Breakfast</title> </document>
The solution described above works very well for a standard website whose home page functions as the root of the document hierarchy, but our requirements include other structures:
a.) Multiple hierarchies
The user may wish to include several distinct websites in their OEB book.
b.) Non-hierarchical document sets
The user may wish to make a book using XML source documents which may not have any links to one another. For example, The user might want to create a book from several disparate XML TEI documents.
The solution to both problems was to create a "dummy" root element. For example, this is useful both for adding a whole new tree to an already-uploaded HTML hierarchy, and for uploading multiple, non-linked TEI documents. For example, an upload of non-linked TEI documents would have a fake root:
            MyDoc (fake root)
            /      |      \
           /       |       \
    chap1.xml  chap2.xml  chap3.xml
As mentioned, one of the reasons to make sure IDs work correctly is to make it easy to add and delete documents or subtrees from the hierarchy. To achieve this, we first decided to set up a pointer (using an IDREF on the <link> element) to each <document> from within the <document>s that pointed to them.
However, for reasons of ease and speed, we replaced this uni-directional linking with bi-directional linking. That is, we added the <linkers> element, which has pointers to the other <documents> which point to the <document> within which the <linkers> element is located:
<document name="index.html" id="root-index.html" source="fetched"> <uri>index.html</uri> <origuri>http://www.guildhallinn.com/</origuri> <title>Guildhall Inn Bed and Breakfast</title> <link id="root-index.html-link1" idref="root-mapDir.html"> <origuri>./mapDir.html</origuri> <fulluri>http://www.guildhallinn.com/mapDir.html</fulluri> </link> <link id="root-index.html-link2" idref="root-touristInfo.html"> <origuri>./touristInfo.html</origuri> <fulluri>http://www.guildhallinn.com/touristInfo.html</fulluri> </link> </document> <document name="mapDir.html" id="root-mapDir.html" source="fetched"> <uri>mapDir.html</uri> <origuri>http://www.guildhallinn.com/mapDir.html</origuri> <title>Map and Directions</title> <linkers> <itemref idref="root-index.html"/> </linkers> </document>
As stated before, this approach to the representation of information about file hierarchies and links is generalizable, and is not limited to the use to which we have put it. Anyone attempting to use this approach should be aware of the inherent pitfalls described here which arise when representing these hierarchies using XML; I have described the most important ones, and their solutions. However, one should also remember that using XML to represent this information really makes one's life easy, if for no other reason than because XSLT is such a wonderful tool. If we had not used XML we might have had to multiply the number of tools needed. XML also made it very easy to re-purpose the information gathered about the file hierarchies. This is an advantage of SGML/XML in which we have believed for quite some time; a practical demonstration of its truth is nonetheless striking and gratifying. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00208.warc.gz | CC-MAIN-2022-33 | 5,037 | 21 |
https://www.allinterview.com/company/197/aurobindo/interview-questions/361/group-i.html | code | Define Job control?
what for cmmi and iso(what are they) are used and how we can use them in our testing and whole work related to sqa.Do it has any kind of format if it has kindly tell me from whr i will get it kindly give detalied answer?
Why you want to join banking sector, even after having work experience?
What is processing tables?
I customized the tax procedures, after posting normal g&l (f-02), i got an error, error is complete lineitem display, its popup error message num, how can i find the message error, whats the t-code
What is an artificial (derived) primary key?
Is python a single thread?
What is ms powerpoint and its features?
What is Syntax of any Linux command?
What are the differences between Machine Learning and Deep Learning?
How to launch vagrant box?
Explain how to debug my ant script?
Tell me what is embedded system in a computer system?
What are the difference between RMI and CORBA?
What is toast notification? | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518627.72/warc/CC-MAIN-20191209093227-20191209121227-00162.warc.gz | CC-MAIN-2019-51 | 947 | 15 |
http://www.linuxquestions.org/questions/linux-newbie-8/how-to-get-hot-corners-working-in-kde-928551/ | code | How to get hot corners working in KDE
I was using gnome with a compiz plugin so that when I move my mouse to the upper right of the screen it would show all open windows on all desktops.
Then when I upgraded to Ubuntu 11.10, it was using Unity, so I'm not using the gnome session anymore. Instead I'm using the KDE session.
So how can I get hot corners working in here? | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122159.33/warc/CC-MAIN-20170423031202-00431-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 369 | 4 |
https://wiki.genexus.com/commwiki/wiki?50340,Difference+from+property | code | Specifies the base value for a difference calculation.
|First value ||The first value of the series is used as the base value|
|Previous value ||The previous value is used as the base value (this is the default value)|
Generators: .NET, .NET Framework, Java
Level: Query element
Note that if previous value is selected as the base value for the comparison, since the first value of the series has no previous value, the difference computed for the first value is zero (it is defined to be zero).
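For example (illustrative): for the series 10, 12, 11 the computed differences are 0, 2, -1 when the base value is the previous value, and 0, 2, 1 when the base value is the first value.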
This property applies only at design-time.
To apply the corresponding changes when the property value is configured, Run the main object.
This property is available since GeneXus 17 Upgrade 9.
Show as percentage property
Show values as property | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00157.warc.gz | CC-MAIN-2023-50 | 740 | 11 |
http://ww-vb.mine.nu/rss1/news/i-love-movies-jason-aldean-field-of-dreams-2015-hd?uid=719 | code | Subscribe to TRAILERS: http://bit.ly/sxaw6h
Subscribe to COMING SOON: http://bit.ly/H2vZUn
Like us on FACEBOOK: http://goo.gl/dHs73
Follow us on TWITTER: http://bit.ly/1ghOWmt
I Love Movies: Jason Aldean - Field of Dreams (2015) HD
Country artist and "Field of Dreams" fanatic Jason Aldean talks about the life-shaping choice he shared with ‘Moonlight’ Graham.
Celebrities sit down to talk about their favorite films and how movies have impacted them personally and professionally.
Watch more I Love Movies: https://goo.gl/r9MKAl
Watch more Field of Dreams: https://goo.gl/8Hyiym
"Chillin Spree" by Rex Paul Schnelle
"Mighty Few" by Rex Paul Schnelle
"Ain't My Problem" by Rex Paul Schnelle
"Banger for Bucks" by Nicholas Joseph Nolan
Fandango MOVIECLIPS Originals is the ultimate destination for all of your favorite original shows and videos. From Weekend Ticket with Dave Karger, to Movie3Some with Tiffany Smith and Kristian Harloff, we are making the shows you need, want, and have to watch. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00784.warc.gz | CC-MAIN-2022-40 | 1,000 | 14 |
http://talk.maemo.org/showthread.php?t=35156 | code | My N900 has not been delivered yet, but thoought I would ask some of you that already have yours.
Ok so lets say I do a sync with my google contacts to N900. Then I login with my gtalk and IM and Skype details on my device. I then merge all my contacts so everything is 100% integrated. Then I use hermies to get more b-day and missing info from face book.
So as far as I am now concerned I have a 100% complete contacts list on my device. If I then do a sync with google again, will I now have all the same details in my google contacts (b-days, phone numbers, email, skype, gtalk IM details etc)? | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705936437/warc/CC-MAIN-20130516120536-00028-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 593 | 3
https://blogs.gnome.org/mclasen/2016/04/29/yet-another-gtk-update/ | code | GTK+ 3.20 was released a while ago; we’re up to 3.20.3 now. As I tried to explain in earlier posts here and here, this was a pretty active development cycle for GTK+. We landed a lot of of new stuff, and many things have changed.
I’m using the neutral term changed here for a reason. How you view changes depends a lot on your perspective. Us, who implemented the changes, are of course convinced that they are great improvements. Others who maintain GTK+ themes or applications may have a different take, since changes often imply that they have to do work to adapt.
What changed in GTK+
A big set of changes is related to the inner workings of GTK+ CSS.
The CSS box model is much better supported in widgets. This includes padding, margins, borders, shadows, and the min-width and min-height properties. Since many widgets are complex, they typically have many CSS boxes. Here is how the box tree GtkNotebook looks:
In the past (up to and including GTK+ 3.18), we used a mixture of widget class names (like GtkNotebook), style classes (like .button) and widget names (like #cancel_button) for matching styles to widgets. Now, we are using element names for each box (e.g. header, tabs and tab in the screenshot above). Style classes are still used for optional things and variants.
The themes that are included in GTK+ (Adwaita, Adwaita dark, HighContrast, HighContrastInverse and the win32 theme) have of course been updated to follow this new naming scheme. Third-party themes and application-specific CSS need to be updated for this too.
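As a sketch of what such an update looks like (the selectors follow the notebook example above; the values are made up):

notebook > header > tabs > tab {
  padding: 4px 8px;
}
notebook > header > tabs > tab:checked {
  background-color: shade(@theme_bg_color, 0.95);
}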
To help with this, we have expanded both the general documentation about CSS support in GTK+ here and here, and we have documented the element names, style classes and the node hierarchy for each widget. Here, for example, is the notebook documentation.
The documentation is also a good place to learn about style properties that have been deprecated in favor of equivalent CSS properties, like the secondary cursor color property. We warn about deprecated style properties that are used in themes or custom CSS, so it is easy to find and replace them:
(gtk3-demo:14116): Gtk-WARNING **: Theme parsing error: gtk-contained.css:18:37: The style property GtkWidget:secondary-cursor-color is deprecated and shouldn't be used anymore. It will be removed in a future version
There’s also a number of new features in CSS. We do support the CSS syntax for radial gradients, we let you load and recolor symbolic icons, image() and calc() are supported, as well as the rem (‘root em’) unit.
Beyond CSS, the drag-and-drop code as been rearchitected to move the drag cancel animation and most input handling into GDK, thereby dropping most of the platform-dependent code out of GTK+. The main reason for doing this was to enable a complete DND implementation for Wayland. As a side-effect, we gained the ability to use non-toplevel widgets as drag icons, and we dropped the X11 specific feature to use RGBA cursors as drag icons.
The Wayland backend has grown most features that it was missing compared to X11: the already mentioned full DND support, kinetic scrolling, startup notification, primary selection, presenting windows, a bell.
Changes in applications
Here is an unsorted list of issues that may show up in applications with GTK+ 3.20, with some advice on how to handle them.
One of the motivations for the changes is to enable animations and transitions. If you use gtk_style_context_save/restore in your draw() function, that prevents GTK+ from keeping the state that is needed to support animations; so you should avoid it when you can.
There is one place where you need to use gtk_style_context_save(), though: when using “theme colors”. The function gtk_style_context_get_color() will warn when you pass a state other than the current state of the context. To avoid the warning, save the context and set the state:
gtk_style_context_save (context);
gtk_style_context_set_state (context, state);
gtk_style_context_get_color (context, state, &color);
gtk_style_context_restore (context);
And yes, it has been pointed out repeatedly that this change makes the state parameter of gtk_style_context_get_color() and similar functions largely useless – this API has been around since 3.0, when the CSS machinery was much less developed than it is now. Back then, passing in a different state was not a problem (because animations were not really supported).
Another word of caution about “theme colors”: CSS has no concept of foreground/background color pairs. The CSS background is just an image, which is why gtk_style_context_get_background_color() is deprecated and we cannot generally make it return a useful color. The proper way to have a theme-provided background in a widget is to call gtk_render_background() in your draw() function.
If you are using type names of GTK+ widgets in your CSS, look up the element names in the documentation and use them instead. For your own widgets, use gtk_widget_class_set_css_name() to give them an element name, and use it in the CSS.
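A minimal sketch (the usual GObject boilerplate for the widget is assumed to exist elsewhere):

/* in the class_init function: give instances the CSS element name "mywidget" */
static void
my_widget_class_init (MyWidgetClass *klass)
{
  gtk_widget_class_set_css_name (GTK_WIDGET_CLASS (klass), "mywidget");
}

Instances can then be matched from CSS simply as mywidget { ... }.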
A problem that we’ve seen in some applications is the interaction between size_allocate() and draw(). GTK+’s CSS boxes need to know their size before they can draw. If you derive from a GTK+ widget and override size_allocate without chaining up, then GTK+ does not get a chance to assign sizes to the boxes. This will lead to critical warnings from GTK+’s draw() function if you don’t override it. The possible solutions to this problem are either to chain up in size_allocate or to provide your own draw implementation.
If you are using GTK+ just for themed drawing without using GTK+ widgets, you probably need to make some changes in the way you are getting theme information. We have added a foreign drawing example to gtk3-demo that shows how this can be done. The example was written with the help of libreoffice and firefox developers, and we intend to keep it up-to-date to ensure that this use case is not neglected.
If you are maintaining a GTK+ application (in particular, a big one like, say, inkscape), and you are looking at porting from GTK+ 2 to GTK+ 3, or updating it to keep up with the changes in 3.20, please let us know about the issues you find. Such feedback will be useful input for us when we get together for a GTK+ hackfest in a few weeks.
One of the big incoming changes for 3.22 is a GL-based renderer and scene graph. Emmanuele has been working on this on-and-off for quite a while – you may have seen some of his earlier presentations. Together with the recent merge of (copies of) clutter and cogl into mutter, this will put clutter on the path towards retirement. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00319.warc.gz | CC-MAIN-2024-18 | 6,661 | 25 |
https://forums.adobe.com/thread/2465821 | code | I heard that Flash will be shut down in 2020. After that, will Flash-created content become unavailable on the web? There is a lot of Flash learning content created by our school. After 2020, do we have to convert all of it to other languages like HTML5? And do you know exactly when the service will be discontinued? | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516003.73/warc/CC-MAIN-20181023023542-20181023045042-00314.warc.gz | CC-MAIN-2018-43 | 323 | 1
https://communities.intel.com/thread/98430 | code | Hello anthony Palermo,
Thank you for joining the Intel communities.
We do not support overclocking, the CPUs we make that are unlocked and the Intel® Extreme Edition Processors have a manufacturing process that makes them more robust to support customizations. This is because there is a sector of the PC market composed of power users, gamers and computer enthusiasts who wants to take the hardware beyond the factory configurations. Intel wants to provide to these people the ability to do so with our processors, but it is pretty known by the industry that any CPU being over clocked will be always at risk and will malfunction sooner or later; they are also pretty aware that they do that under their own risk as the product warranty doesn’t cover over clocking.
Altering PC clock or memory frequency and/or voltage may reduce system stability and the useful life of the system, memory and processor; it can cause the processor and other system components to fail; it can cause reductions in system performance; it can cause additional heat and other damage; and it can affect system data integrity. Intel assumes no responsibility that the memory, including if used with altered clock frequencies and/or voltages, will be fit for any particular purpose.
Over clocking is the process used to increment the processor frequency out of the processor specifications.
At this point what I can suggest is to access the BIOS and set the BIOS to defaults by pressing F9 on your keyboard and F10 to save all the changes, this will set the frequency and voltage to defaults.
Here is a Windows*-based tool for overclocking unlocked Intel® Core™ processors. Download the Intel® Extreme Tuning Utility (Intel® XTU) for quick access to the features and settings needed to overclock your system. Easily adjust power, voltage, core, and memory settings, as well as other key system values.
If you are more interested about overclocking you will be able to get better assistance at: | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742937.37/warc/CC-MAIN-20181115203132-20181115224613-00058.warc.gz | CC-MAIN-2018-47 | 1,957 | 8 |
https://www.blackhatworld.com/seo/free-facebook-adwords-coupon-method-for-promoting-appz-2015-d.763065/ | code | Now I'm asking you if you know any other free method of getting some traffic on the mobile games, at least on their release if not constantly. I'm 1000% sure there are thousands of occult, FREE, even more effective ways to get free advertising than those facebook and adwords coupons, so if anyone could share one I'd be deeply grateful. I tried before, for about 2 months, to create PVA facebook accounts, join groups and post in them using a self-made bot. Beside the fact that it wasn't an efficient method, the bot slowed down after more than 10 active accounts, I had an enormously hard time getting the proxies and phone numbers, verifying them, completing them and managing them, and finally, as a small present for my work, they all got banned. I currently cannot create any more facebook accounts because I always get the verification even if I create them on a private vps/vpn/proxy where everything previously worked perfectly well! This might be a divine sign for me that I should focus on something else for promoting my apps than fighting facebook. Ok so briefly, if anyone knows another working method of getting a facebook or adwords coupon, no matter how black it is, or any other efficient method of getting mobile traffic, please let me/us know. Have a great day! | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864626.37/warc/CC-MAIN-20180522053839-20180522073839-00334.warc.gz | CC-MAIN-2018-22 | 1,250 | 1
https://itzone.com.vn/en/article/is-it-easy-to-classify-spam-users/ | code | Today the whole company is on vacation and I am the one staying on duty. The work is already wrapped up, so I have a lot of free time, and I'm using it to write a few words about my work. About a year ago, I was given a spam-user identification problem. When I first received it I smiled to myself, thinking "this is simple". But when I actually did it, I found out I was mistaken.
# The hard ones
Difficulty 1: Definition of spam
The most commonly used definition is:
Spam is the act of sending messages that are not meaningful and annoying to the recipient
The question is: what information is meaningless and annoying to the recipient? The recipient is not one person but many people. For example, person A texts to request a sex chat with person B and person C. Person B finds it annoying and insulting, but person C agrees and enjoys it. So is this spam or not?!
Difficulty 2: Data
When building a Machine Learning model, the most time-consuming and most important part is data processing. What does data processing mean here? It means analyzing the data to derive attributes, and then labeling the data. To give you an idea: I have a data set of 10 million records, i.e. 10 million lines of user behavior, and my job is to analyze that data to find the behaviors that distinguish spam users from non-spam users, with no hints to start from. That is the job of a Data Analyst.
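For illustration only, deriving per-user attributes from raw behavior logs might look like the pandas sketch below; the column names (user_id, recipient_id, sent_at) are hypothetical stand-ins, not the real schema of the system described here.
import pandas as pd

# Hypothetical log: one row per sent message, columns user_id, recipient_id, sent_at.
events = pd.read_csv("behavior_log.csv", parse_dates=["sent_at"])
events = events.sort_values(["user_id", "sent_at"])

# Seconds between consecutive messages from the same user.
events["gap_s"] = events.groupby("user_id")["sent_at"].diff().dt.total_seconds()

features = events.groupby("user_id").agg(
    msg_count=("recipient_id", "size"),            # total messages sent
    unique_recipients=("recipient_id", "nunique"), # distinct people contacted
    median_gap_s=("gap_s", "median"),              # typical pause between messages
)
# A high ratio of distinct recipients per message is one candidate spam attribute.
features["recipients_per_msg"] = features["unique_recipients"] / features["msg_count"]
print(features.head())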
Once the attributes are found, the heavy lifting is labeling. Labeling is like teaching a kid (the kid here being the computer) how to tell an orange from an apple. That is to say, labeling is hugely important and determines the success of the model. If you keep telling a child that an apple is an orange, and vice versa, then when he grows up he will still call it an orange; once grown, it is very hard to re-teach him, and the easiest way to re-teach is a "reset" :-<
But the most accurate labels are the ones made by hand. Meanwhile, to solve this spam problem there was only me, doing everything from A to Z, from data analysis to system design and development. It's impossible to label 10,000,000 records alone, so I could only label about 3,000.
Difficulty 3: Policy protecting user privacy
In order to protect users, all user data such as phone numbers and account names are encrypted; instead, to identify a user I only have access to that user's id, which is not an incrementing integer but an encoded string, and I don't know how it was encoded or generated. This is completely understandable and does not get in the way of building my system.
But people often recognize spam based on content, such as email or SMS text. And here I can't do that: the content of the messages cannot be viewed at all. The question is, can spam users still be identified? The answer is yes, but the accuracy will not be as good as when user behavior and content are combined.
Suppose spam were identified based on content; that is also very difficult. If it is email spam, or a system with fairly standard writing like Spiderum, the NLP handling is easy. But for SMS/chat systems such as Mocha and Zalo it is a nightmare, because users often use abbreviations, teen code, or a private language that only ... they themselves can understand, so to make the machine understand it you have to analyze, label and build models all over again. In other words, you rework the NLP core for each of those codes or languages.
Difficulty 4: Real-time processing of big data
When I first started, I didn't have any experience handling big data with Hadoop, Spark and the like; I only used pure Python and back-end libraries such as pandas and sklearn. Initially, detecting spam behavior, starting from the attribute extraction, took 2 hours for about 200,000 users / 10,000,000 records per day. After that, I used a queue-and-split strategy for the rules plus parallel and distributed processing (all code written by myself, without any big-data processing framework), and it took only about 20 minutes to finish.
Another headache is identifying a user with spam behavior in the shortest possible time; in other words, identifying a spam user from behavior with as little delay as possible?! That's right, you didn't read it wrong: behavior is something that has to accumulate over a period of time, yet it has to be identified in real time? =((((((((
Necessity is the mother of invention. I handle it by continuously scanning user behavior at a very small interval delta t, e.g. once per minute. So at time t0 a user has not spammed yet, but at time t1 that user has accumulated enough spam behavior and gets identified. Because the scan interval delta t is very small, this can be considered real time, which satisfies the requirement of detecting a user's spam behavior as soon as possible :3
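The "scan every small delta t" loop, sketched with hypothetical helpers (load_recent_behavior, apply_spam_rules and flag_user stand in for the real implementation):
import time

SCAN_INTERVAL_S = 60  # delta t: one scan per minute

def scan_forever(load_recent_behavior, apply_spam_rules, flag_user):
    while True:
        started = time.time()
        # Only the behavior accumulated since the previous scan window is examined.
        for user_id, behavior in load_recent_behavior(window_s=SCAN_INTERVAL_S):
            if apply_spam_rules(behavior):
                flag_user(user_id)  # detected at most ~1 minute after the behavior
        # Sleep away whatever is left of the interval before scanning again.
        time.sleep(max(0.0, SCAN_INTERVAL_S - (time.time() - started)))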
Difficulty 5: Finding the attributes of spam users
Back to the difficulty of analyzing user data. Finding the attributes that distinguish spam users from ordinary users is like panning for gold. The first thing is to ask: what do users usually do? What does a spammer want? From there, derive the attributes.
But life is never that simple: spammers behave very much like normal users. For example, the time between messages sent by spammers is about the same as for regular texters, and so on.
So how do you find the attributes? The answer is to keep thinking, query the data, and visualize it as charts and metrics that compare spam users with normal users =))) then explain what each attribute means and where it comes from!
Difficulty 6: Evaluation
Usually, when evaluating a classification model, people use Accuracy or True/False Positive/Negative counts. But that assumes we already have a labeled dataset for research. In practice, the data we have is purely unlabeled, so how do you evaluate a model when you would have to label about 10,000,000 records per day? =(((((( (The only way to know exactly would be to hire a team of labelers.)
But the poor can't play it that way; all we can do is compute statistics and judge the model by the results it returns. For example, if the model flags 100 spam users per day and 90 of them really are spam, we can provisionally call it 90% correct. On top of that, compare the spam-user rate we estimated earlier with what the model reports, to see whether it matches reality. If the earlier statistical rate and the model's rate differ by no more than 10%, the odds are very high that your model is correct.
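The back-of-the-envelope evaluation described above, written out as a tiny calculation (the numbers are the example's, not real measurements):
flagged = 100        # users the model reports as spam in one day
confirmed = 90       # of those, the number a manual spot check agrees are spam
print(f"estimated precision: {confirmed / flagged:.0%}")   # 90%

# Sanity check: compare the model's flag rate with the rate estimated beforehand;
# a gap of more than about 10 percentage points would be a warning sign.
daily_users = 200_000
print(f"model flag rate: {flagged / daily_users:.3%}")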
After about 2 months I have built the model and the rate is quite good at 96% – a number judged only from the spam detections the model returns. Well, temporarily satisfied. But there is still a lot of work to do and to optimize, because users are always probing for ways to get past the AI model. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00132.warc.gz | CC-MAIN-2022-40 | 6,709 | 26
https://www.bzpower.com/topic/13813-t-mg-music/ | code | Kazi the Matoran Posted July 22, 2014 Share Posted July 22, 2014 (edited) Welcome! Here, I would like to share all my musical endeavors with the BZP community. I used to be relatively well known on this site for my work in the comics forum several years ago. Now, I hope to do better with music. I make a lot of electro and progressive house and have started to work with drum & bass. I'll release a song on here every week and a single EP (single w/ b-side) every month. In the fall, I'm putting out my first LP, so if you like what you hear, be sure to keep in tune for that. On to the music! Latest tracks: A Song For You - Made over the 2015 Holiday season while I was home from school. This track was the beginning of something beautiful in terms of my production quality. From here, expect my music to improve drastically with each post. Sunset Sky - Yay, progressive house. Virtual Riot - We're Not Alone (T-MG Remix) - Dubstep remix! Infinite - My own unique style of glitch hop. Old music: Have A Nice Day, Spring, Who We Are, Impulse (Original Extended Mix), Impulse (Short version), Astronaut - Rain (T-MG and balsamfir remix), Stargazer. Edited August 31, 2016 by Kazi the Matoran 2 Quote My comics: Kazi's Comics 3.0 My music: T-MG Music "And now for something completely different." -- Monty Python
| s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649293.44/warc/CC-MAIN-20230603133129-20230603163129-00505.warc.gz | CC-MAIN-2023-23 | 1,544 | 4
https://felsenst.github.io/popg/popg.html | code | At its web site https://felsenst.github.io/popg/popg.html is a downloadable "zip archive" which contains a Java archive file which has the Java executable as well as the Java source code. It also has this web page. There is also a folder images for the screenshot images used in this web page. The compiled Java code in the Java archive file can be run on any system that has a reasonably recent version of Java installed.
The Java source for popg is called PopGUserInterface.java and is in the folder src. It was developed in the Eclipse development environment but any environment you are comfortable with will do. Note: Normally you will not need the source code and you will not need to recompile the program. The source code is there so you can see how things were done and use it as a base to make changes to the program and extensions of the program, should you wish to do so. Most users can ignore it.
You can fetch PopG using the links below.
We have posted a Zip archive of PopG, including Java archives and documentation files. This archive is file PopG.zip. To get it
Here are instructions for saving, unpacking, and installing PopG from different browsers, and on operating systems. We cover the Chrome, Firefox, Safari, and Internet Explorer browsers on the Windows, Mac OS, and Linux operating systems.
Using Chrome on Windows, Mac OS, or Linux
Using Firefox on Windows, Linux, or Mac OS
Using Internet Explorer on Windows
Using Safari on Mac OS
A MAC PROBLEM: On Mac OS systems, when you attempt to extract the Zip archive, or when you attempt to run the Java executable, the system may complain that this is from an unknown developer. That is simply because I did not sign the file with my Apple Developer ID. You should be able to make the operation work by control-clicking on the icon and selecting the option to open the file, using the defaults suggested. Once it successfully gets past this, it will not bother you with this again the next time you try to run the program.
The Java archive file PopG.jar will exist in the folder PopG once you have downloaded and installed PopG. If you have Java installed on your system, you should be able to run the Java program by finding the folder PopG and clicking or double-clicking on the icon of the file PopG.jar
The PopG folder also includes the present documentation web page which you are now reading. This can be read here or you can use the Save As menu item in the File menu of your browser to save a copy of it on your machine. The latest version of this page can be read on the Web using this link.
There are also older executable versions compiled for Windows, Mac OS, and Linux systems, plus some even older operating systems. These can be fetched from folder old at our PopG site. Most users should not use these older executables, but if you do, you should start by reading the README file in that folder. One of the versions there is version 3.4, which has compiled executables for the three major operating systems available as well as C source code. These may be useful if you do not have Java and cannot install it on your system.
We would like to make versions available for tablets and even phones. Unfortunately, a version of Java that can use the graphics functions does not seem to exist on the Android operating system and the iOS operating system. We would have to rewrite the program separately for each of those. If you know of a way to run our Java executables on either of those operating systems, and get it to work, please let us know how you did that.
If you have Java installed you can run the PopG program. Generally, Java will be already installed on Mac OS systems and on Linux systems. If you aren't sure if you have Java installed, you can type java -version in a command window and, it Java exists, it will tell you what the version is. If you get back a blank line, you need to either download Java or append where it is to your search path. On Windows systems and on Mac OS or Linux systems that do not have Java, you can install a recent version of Java at no cost by using this link: java.com. Recent Linux and Mac OS systems usually have a recent-enough version of Java already installed. Mac OS systems 10.4 (Leopard) and earlier may not have a recent-enough Java to be able to run PopG. Windows systems do not come with Java already installed, but it can be installed on them from the above web site.
To run the PopG Java program you should simply click (or double-click) on the icon of the PopG.jar file (you can also run it from a command window by navigating to where PopG.jar is stored and typing java -jar PopG.jar). The start up screen looks like this:
There are two menus, File and Run, that control PopG. They are in the upper left of the main PopG window.
The Run menu contains five items: Continue w/, Continue, New Run, Restart, and Display Whole Plot.
The first time it is picked, it looks like:
with all but New Run grayed out. Once you have done your first run, all the selections will be active.
It contains all the parameters that control a PopG run. Note that usually you do not enter a Random Number seed unless you want to do two identical runs. When you are finished editing you can click the OK box to start the run. You can also click Cancel to not start the run and Defaults to reset all the data entry boxes to their default values.
which allows you to change the number of generations run in the next continuation of the run.
The program uses a random number generator which automatically initializes from your system clock. Thus it should give you a different sequence of random numbers and thus a different result every time you run the program. In the menu for a new run, there is a setting for Random number seed which is set by default to (Autogenerate), which will initialize from the system clock. You probably won't have any reason to change this, unless you are debugging PopG and want to do the same run, with the same random outcomes, twice. If you do wish to do the same exact run twice, enter a value in place of the (Autogenerate) string and PopG will use that to initialize the random number generator. Assuming you have not modified the calcPopG routine within the Java code, every time you start with that random number you will get exactly the same results.
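The same seeding behavior can be illustrated in a few lines of Python (this is only an illustration of the idea, not PopG's Java code):
import random, time

def toy_run(seed=None, generations=5):
    rng = random.Random(seed if seed is not None else time.time_ns())
    return [round(rng.random(), 3) for _ in range(generations)]

print(toy_run(seed=12345))  # the same seed reproduces the identical sequence
print(toy_run(seed=12345))
print(toy_run())            # clock-seeded: different on every call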
This contains four menu items. They are Save, Print, About and Quit.
The first time it is displayed, it looks like:
with Save and Print grayed out. Once you have done your first run, they will be active.
Most people will not need to compile the program themselves as the Java Jar package supplied should run on most versions of Java. So you should probably skip this section. But if you wish to modify the functionality of PopG or if you have some unusual Java environment that will not run the supplied Jar file you will need a Java compiler. We repeat: If you just need to run the program, you should run the Jar file that comes in our distribution. You do not need to compile anything (though you may need to install Java).
If you do need to compile the program, you will find a src directory in the downloaded and unzipped folder PopG which you got from our site. Import the file PopGUserInterface.java from src into your favorite Java editor (we used Eclipse). You can either execute it directly from there or export a Java Jar from the editor and execute it. PopGUserInterface.java does not reference any external libraries; everything it needs is in the JavaSE-1.6 system library. If you are modifying our program, once you have finished doing that you should have no problems creating the Java Jar.
If you cannot do so, tell us, since that would be a bug.
This program simulates the evolution of random-mating populations with two alleles, arbitrary fitnesses of the three genotypes, an arbitrary mutation rate, an arbitrary rate of migration between the replicate populations, and finite population size.
The programs simulate simultaneously evolving populations with you specifying the population size, the fitnesses of the three genotypes, the mutation rates in both directions (from A to a and from a to A), and the initial gene frequency. They also ask for a migration rate among all the populations, which will make their gene frequencies more similar to each other. Much of the time (but not always!) you will want to set this migration rate to zero. In most respects the program is self-explanatory.
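As a rough illustration of the model being simulated, here is one generation of a two-allele Wright-Fisher population with selection, mutation and drift, written here as an illustration (it is not PopG's code and it leaves out migration between the replicate populations):
import random

def next_gen_freq(p, N, wAA, wAa, waa, mu_fwd, mu_back):
    # Selection on the three genotypes, then mutation A->a and a->A, then drift.
    q = 1.0 - p
    w_bar = p * p * wAA + 2 * p * q * wAa + q * q * waa
    p_sel = (p * p * wAA + p * q * wAa) / w_bar            # freq. of A after selection
    p_mut = p_sel * (1.0 - mu_fwd) + (1.0 - p_sel) * mu_back
    copies_of_A = sum(random.random() < p_mut for _ in range(2 * N))
    return copies_of_A / (2.0 * N)                         # binomial drift, N diploid individuals

p = 0.2
for _ in range(100):
    p = next_gen_freq(p, N=100, wAA=1.08, wAa=1.04, waa=1.0, mu_fwd=0.0, mu_back=0.0)
print(f"frequency of A after 100 generations: {p:.3f}")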
Initially there are ten populations. You can set the number of simultaneously-evolving populations to any number from 0 to 1000. The population size for each population can be any number from 1 to 10000. Note that a larger population, a larger number of generations run, and a larger number of populations can lead to longer runs.
When you make a menu selection that causes the program to run, a graph of the gene frequencies of the A allele in each of the populations will be drawn in the window. Here is what the graph looks like when we run with an initial gene frequency of 0.2 and fitnesses of AA, Aa, and aa set to 1.08, 1.04, and 1, with all other parameters in their default values. (Note that if you try this run, there will be different random numbers, so your result will be a bit different).
Note that once the plot of the gene frequency curves reaches the right-hand side of the graph, the program prints there the number of populations that fixed for the A allele (ended up with a frequency of 1.0) and the number that lost this allele.
The program can simulate a wide variety of cases, and you should explore some of these. Here are some suggestions:
Version 4.0 of PopG, the first Java version, was written by Ben Zawadzki. His enormously effective programming made good use of mentorship and advice from our lab's Java wizard, Jim McGill.
The original version of PopG was written in the 1970s in FORTRAN by Joe Felsenstein. The interactive version then was written in C with much work by Hisashi Horino, Sean Lamont, Bill Alford, Mark Wells, Mike Palczewski, Doug Buxton, Elizabeth Walkup, Ben Zawadzki and Jim McGill. Hisashi and Sean wrote the C version, and the screen graphics for IBM PC and the first part of the Postscript printing system. Bill greatly improved and expanded the Postscript printing and the X windows graphics. Mark Wells did the original Macintosh version. Mike Palczewski greatly improved the Windows, Macintosh and X Windows graphical user interface, and Doug Buxton modified the program to the 3.0 version and prepared the executables for different operating systems. Elizabeth Walkup improved the X windows interaction and prepared version 3.3. Small documentation changes after version 4.0 were made by me.
Copyright 1993-2016. University of Washington and Joseph Felsenstein. All rights reserved. Permission is granted to reproduce, perform, and modify this program. Permission is granted to distribute or provide access to this program provided that this copyright notice is not removed, this program is not integrated with or called by any product or service that generates revenue, and that your distribution of this program is free. Any modified versions of this program that are distributed or accessible shall indicate that they are based on this program. Educational institutions are granted permission to distribute this program to their students and staff for a fee to recover distribution costs. Permission requests for any other distribution of this program should be directed to license felsenst(at) uw.edu.
| Joe Felsenstein
Department of Genome Sciences
University of Washington
Seattle, WA 98195-5065, USA
email: felsenst (at) gmail.com | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817144.49/warc/CC-MAIN-20240417044411-20240417074411-00882.warc.gz | CC-MAIN-2024-18 | 11,622 | 43 |
http://www.tomshardware.com/forum/97279-13-supcom-post-stats | code | I've been playing it on an E6400 @ 3.2 with a 1950 Pro. Running @ 1440x900, the framerate started hurting around ~2,500 units on the map, but it wasn't at all unplayable. Split-screen definitely needs beefy hardware.
I simply love it. I'm glad it came out 1 month before CnC3, since the C&C3 demo is also great.
The zoom function has spoiled me for other RTS games (play SupCom, then play War3). I loved TA and TA:K, so I was incredibly excited about this release.
I run high at 1280 (can't do much more with a 17-inch flat panel) and have no lag on a full skirmish, even when using the spacebar.
P4 Prescott 3.0Ghz (typically...has no problem doing 3.8, but I like my system silent :-D - can almost do a fanless system)
Asus Nvidia 6800 GT 256MB
Asus P5LD2 Mobo
1 GB Corsair DDR2 RAM (2x512)
At moment play on (important components listed)
Gainward Gefore 7950 GT 512MB
2 gigs ram
Raptor 150 gig 10,000rpm
Laptop/desktop replacement - (occasionally play on this)
Dell inspiron 9100 with
P4 3.2 ghz
Ati mobility 9800 256MB
1 gig ram
Hitachi 80 gig 7200rpm 2.5" Hdd
Works great on the desktop, I run at 1280x1024, everything on medium and no anti-aliasing or anything.
-Plays without any difficulty
-Lag is visible (my friend and I hook up our comps; he has roughly the same machine as me but better graphics, connected using a gigabit switch) when it's two of you vs 6 supreme AI on a 500 unit limit. It's not too bad, but for a four hour game it probably increases playing time by 30-60 mins due to lag. Lag only really starts to happen when you are trying to break out of the defensive build-up into the offensive onslaught, i.e. when you try unleashing your forces about 1 hr into the game. Lag depends on the map also; the biggest map we have played on is 40km x 40km
Works OK on the laptop, run at 1024x768, everything on medium (might have some things on low) and no anti-aliasing or anything.
-Plays fine, but can have lag depending on mission and units (have not done many missions on laptop)
-Usable is about as far as I would go. Lag is guaranteed after 30 mins on any map with more than 2 players, map size has to stay at 40x40 or below; I think we mainly play 20x20 if using my laptop. Lag when using my laptop probably easily doubles game length. Games can become unfinishable because of lag
I would like to see the computer specs that can play supcom at maximum everything with 1000 unit limit on 80x80 km map with 8 players, and have no lag in game once players start reaching unit limits while launching huge offensives.
I am building a new pc, so hoping I get better performance. It still runs decent on my current ( Athlon 64 3000,K8N-Neo Platinum 1 gig OCZ dual channel ram, Nvidia 7600 GT OC ) pc but the new rig is a an AMD dual core and also going with the sweet 8800GT- the 320 meg version and also 2 gigs of ram)..
It's so weird trying to play my other rts games like Dawn of War. I find myself trying to use the mouse wheel to zoom out like in SupCom. Man, that;s such a great and intuitive feature! | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295192.28/warc/CC-MAIN-20160823195815-00255-ip-10-153-172-175.ec2.internal.warc.gz | CC-MAIN-2016-36 | 2,982 | 27 |
https://coderanch.com/t/222866/java/passing-string-attachment | code | I am trying to pass a string as an attachment, but when I examine what is being sent it looks as if the string is being passed in the Body.
According to weblogic : Certain Java data types, if used as parameters or return values of a method that implements a Web Service operation, are automatically transported as SOAP Attachments (rather than elements in the SOAP body) when going over the wire.
As I understand it this means passing a parameter as a datahandler.
Here is my code snippet
String attachement = attach.getAttachments().toString();
StringDataSource sds = new StringDataSource(attachement, "String", "Attachment");
DataHandler dh = new DataHandler(sds);
Results res = new Results();
// The remote soap call.
port.scanForViruses(dh);
I have created a DataHandler and am passing it as a parameter, as laid down in weblogic.
I have set up a sniffer, but I am not sure what I should be looking for to confirm it has been passed as an Attachment. Here is the sniffer output:
------ 10.101.193.171:11001->localhost:3669 ------ HTTP/1.1 200 OK Date: Thu, 08 Jun 2006 09:53:52 GMT Content-Length: 2338 Content-Type: text/xml Connection: Keep-Alive | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247492825.22/warc/CC-MAIN-20190219203410-20190219225410-00079.warc.gz | CC-MAIN-2019-09 | 1,139 | 10 |
https://lists.debian.org/debian-devel/2009/09/msg00028.html | code | Mike Hommey wrote:
An interesting corollary is how will upgraded systems behave ? A lot of the currently installed ones have a separate /usr. It would be a shame to tell users they have to reinstall (or go through hoops to put /usr in /)
Well, /usr is supposed to be shareable between hosts; forcing it to be on the same filesystem as / is very sub-optimal.
-- Chris Jackson Shadowcat Systems Ltd. | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646636.25/warc/CC-MAIN-20180319081701-20180319101701-00470.warc.gz | CC-MAIN-2018-13 | 397 | 4 |
https://indico.cern.ch/event/587955/contributions/2936828/ | code | The life cycle of the scientific data is well defined: data is collected, then processed,
archived and finally deleted. Data is never modified. The original data is used or new,
derived data is produced: Write Once Read Many times (WORM). With this model in
mind, dCache was designed to handle immutable files as efficiently as possible. Currently,
data replication, HSM connectivity and data-server independent operations are only
possible due to the immutable nature of the stored data.
dCache is seen increasingly as a general-purpose file system, helped by its support for
the NFSv4.1standard,especially by new communities, such as photon science and
microbiology. Although many users are aware of the immutability of data stored in
dCache, some applications and use cases still require in-place update of stored files.
Satisfying these requires some fundamental changes to dCache's core design. However,
those changes must not compromise any aspect of existing functionality.
In this presentation we will show the new developments in dCache that will turn it
into a regular file system. We will discuss the challenges of building a POSIX-compliant
distributed storage system, one that can handle multiple replicas and that remains
backward compatible by providing both WORM and non-WORM capabilities within
the same system. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00602.warc.gz | CC-MAIN-2021-21 | 1,328 | 17 |
http://trc2009.htsv.de/ | code | CMSimple version 2.3 - April 14. 2004
Small - simple - smart
© 1999-2004 Peter Andreas Harteg - [email protected]
This program is free software; you can redistribute it and/or modify it under the terms of the Affero General Public License (AGPL) as published by Affero, Inc. version 1. All files in the CMSimple distribution (except other language files than English and Danish) are covered by this license.
IMPORTANT NOTICE: As covered by the AGPL Section 2(d), the "Powered by CMSimple"-link to cmsimple.dk must under no circumstances be removed from pages generated by this program (except in the print facility). If you want to remove or hide this link from your pages, you must purchase CMSimple under a commercial license. This also applies to testing purposes and to setups on an intranet or internal network.
Please be aware, that the AGPL in Section 2(d) requires, that any modified version of the program runned public (ie. on the Internet) must have an additional download facility for the modified version. This also applies to any modification of the template. Therefore you should purchase a commercial license, if you do not want the design of your internet site to fall under the AGPL license.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the AGPL for more details.
A copy of the Affero General Public License is available at http://www.cmsimple.dk/?License/AGPL, if not, write to Affero, Inc., 510 Third Street, Suite 225, San Francisco, CA 94107 USA.
This copyright note must under no circumstances be removed from this file and any distribution of code covered by this license.
For further information about this license and how to purchase a commercial license, please see http://www.cmsimple.dk/?License
For downloads and information about installation, please see http://www.cmsimple.dk
Trainer-C-Kurs 2008/2009 - Trainer-C-Kurs 2008/2009 | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506559.11/warc/CC-MAIN-20230924023050-20230924053050-00014.warc.gz | CC-MAIN-2023-40 | 2,002 | 12 |
https://eraclito450.blogspot.com/2017/05/wikileaks-releases.html | code | Today, May 5th 2017, WikiLeaks publishes "Archimedes", a tool used by the CIA to attack a computer inside a Local Area Network (LAN), usually used in offices. It allows the re-directing of traffic from the target computer inside the LAN through a computer infected with this malware and controlled by the CIA. This technique is used by the CIA to redirect the target's computers web browser to an exploitation server while appearing as a normal browsing session.
The document illustrates a type of attack within a "protected environment" as the tool is deployed into an existing local network, abusing existing machines to bring targeted computers under control and allowing further exploitation and abuse. <<< https://wikileaks.org/vault7/releases/#Archimedes >>~~ | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589222.18/warc/CC-MAIN-20180716060836-20180716080836-00575.warc.gz | CC-MAIN-2018-30 | 768 | 2
https://namethatporn.com/post/328016-where-can-i-find-this-lena-paul-video.html | code | where can i find this Lena Paul video?
This video was featured in an ad for Reality Kings, and I think the model's name is Lena Paul, but could anyone link to this scene?
| s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948531226.26/warc/CC-MAIN-20171213221219-20171214001219-00357.warc.gz | CC-MAIN-2017-51 | 240 | 4
https://alternativeto.net/software/scrapehero/ | code | ScrapeHero is described as 'Main features of ScrapeHero: - No Software, No Programming, No DIY tools - Crawl complex websites with ease - Never worry about data quality - Perform complex data transformations - Real-time Website Scraping API for Price Monitoring ScrapeHero provides reliable web' and is an app in the Development category. There are more than 50 alternatives to ScrapeHero for a variety of platforms, including Online / Web-based, Windows, SaaS, Mac and Linux. The best alternative is Scrapy, which is both free and Open Source. Other great apps like ScrapeHero are ParseHub (Freemium), Portia (Free, Open Source), import.io (Paid) and UiPath (Free Personal).
ParseHub is a web scraping tool built to handle the modern web. You can extract data from anywhere. ParseHub works with single-page apps, multi-page apps and just about any other modern web technology.
Almost everyone thinks ParseHub is a great alternative to ScrapeHero.
Portia is an open source visual scraping tool, allows you to scrape websites without any programming knowledge required! Simply annotate pages you're interested in, and Portia will create a spider to extract data from similar pages.
import.io is a web-based platform that puts the power of the machine readable web in your hands. Using our tools you can create an API or crawl an entire website in a fraction of the time of traditional methods, no coding required.
import.io vs ScrapeHero opinions
pros, cons and recent comments
import.io is a self service tool and ScrapeHero is a full service DaaS provider
A free, fully-featured, and extensible tool for automating any web or desktop application. UiPath Studio Community is free for individual developers, small professional teams, education and training purposes.
Want to build a SaaS? Or find new customers? Or supercharge your marketing? ScrapeHunt gives you the benefits of scraping without the headache of scraping. Get a scraped database in less than 60 seconds 🚀
Aggregatus is a service that helps you to aggregate information of the same meaning but from the different websites and make it searchable, filterable and sortable as if it all was from the one website. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00673.warc.gz | CC-MAIN-2021-43 | 2,190 | 12 |
https://support.winshuttle.com/hc/en-us/articles/360057874512-Migration-Manager-installation-issue- | code | After extracting the Migration Manager zip file, the user launches the utility and clicks Continue, and nothing happens after that. The utility does get launched, and the user does not get any error.
With a recent upgrade of Microsoft Windows, Microsoft added a security feature whereby any .exe contained in a downloaded zip folder is blocked (even after the zip is extracted) and cannot be run on a client machine. This issue may occur on both Windows Client and Windows Server machines.
For Script Migration:
- Right-click on the downloaded zip file
- Go to properties
- In the General tab, there will be an Unblock Checkbox at the bottom. Check this checkbox
- Click Apply and then click OK
- Now unzip
- Launch the utility.
For Forms Migration: The Form Migration capability is by default disabled in this utility. Follow the steps below to enable it:
- Follow the steps mentioned above from Step 1 to Step 5
- Before launching the utility, open Winshuttle.MigrationManager.exe.config file.
- Change value of key ComposerSolutionsEnabled to true.
- Save the file and then launch the utility.
Note: We support this utility on both client and server machines as long as these machines have access to both source and destination sites.
| s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00738.warc.gz | CC-MAIN-2023-14 | 1,252 | 16
https://blackmoormystara.blogspot.com/2011/03/video-der-hexentoter-von-blackmoor.html | code | DungeonDevil pointed me to this video, called Hexentoter von Blackmoor in German, although the original title was Bloody Judge. I like the German title better. ;)
Warning: this video is not for the faint of heart.
I am almost tempted to throw in some witch hunting for my next Blackmoor adventure. Or does this represent the work of the Wizards Cabal? | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629632.54/warc/CC-MAIN-20210617072023-20210617102023-00566.warc.gz | CC-MAIN-2021-25 | 351 | 3 |
https://community.cypress.com/thread/31232 | code | You can get scripts to program the chip in Perl, Python, or C++, which are provided at this location on your system.
C:\Program Files (x86)\Cypress\Programmer\Examples\Programming\PSoC3_5\SWD
You can refer this App Note too-
Thanks Anks, I have got scripts to program Cortex chips in C#, and they work, but I am looking for the hex code to program a blank CY8C5868LTI-LP038 chip so I can use it in my own programmer
You can get Kitprog or kitprog2 hex code at this path in your system-
C:\Program Files (x86)\Cypress\Programmer
Thanks Anks! I found it on another PC; there weren't hex codes on my PC, I do not know why | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141164142.1/warc/CC-MAIN-20201123182720-20201123212720-00514.warc.gz | CC-MAIN-2020-50 | 603 | 7
http://www.buhelo.com/privacy-policy-and-disclosure/ | code | This is my personal blog. I, Tom Buhelo, write and edit the articles published in Buhelo.com
The content of this blog provides information about various merchants and products. We are not affiliated with any of them, except for 123inkjets.com and 4inkjets.com. These are two merchants we highly endorse because they have a high quality, low cost product. | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823588.0/warc/CC-MAIN-20181211061718-20181211083218-00215.warc.gz | CC-MAIN-2018-51 | 354 | 2 |
https://www.microsoftpartnercommunity.com/t5/Partner-events-and-seminars/Get-certified/m-p/48791/highlight/true | code | Hello Microsoft Partner Community!
Working on your certifications? Join us in Microsoft's free certification prep sessions:
MB-230 Microsoft Dynamics 365 Customer Service
MB-300 Microsoft Dynamics 365: Core Finance and Operations
MB-310 Microsoft Dynamics 365 Finance
MB-500 Microsoft Dynamics 365: Finance and Operations Apps Developer
MB-800 Microsoft Dynamics 365 Business Central Functional Consultant
PL-100 Microsoft Power Platform App Maker
PL-200 Microsoft Power Platform Functional Consultant
PL-400 Microsoft Power Platform Developer
PL-600 Microsoft Power Platform Solution Architect
PL-900 Microsoft Power Platform Fundamentals
AZ-204 Developing Solutions for Microsoft Azure
AZ-220 Microsoft Azure IoT Developer
AZ-400 Microsoft Azure DevOps Solutions
AI-900 Microsoft Azure AI Fundamentals
DP-203 Data Engineering on Microsoft Azure
DP-300 Administering Relational Databases on Microsoft Azure
AI-102 Designing and Implementing a Microsoft Azure AI Solution
AZ-900 Microsoft Azure Fundamentals
AZ-104 Microsoft Azure Administrator
AZ-120 Planning and Administering Microsoft Azure for SAP Workloads
AZ-140 Configuring and Operating Windows Virtual Desktop on Microsoft Azure
AZ-600 Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack Hub
AZ-303 Microsoft Azure Architect Technologies
AZ-304 Microsoft Azure Architect Design
Modern Work and Security
MS-600: Building Apps and Solutions
MD-100: Modern Desktop Administrator Associate
MD-101: Modern Desktop Administrator
MS-700: Managing Microsoft Teams
SC-900: Security, Identity and Compliance Fundamentals
SC-200: Security Operations Analyst Associate
SC-300: Identity and Access Administrator Associate
SC-400: Information Protection Administrator Associate
Take a Practice Test and you'll receive a free exam voucher!
IS IT TIME TO RENEW YOUR MICROSOFT CERTIFICATION?
Great news! Microsoft has recently introduced a new approach to help learners stay current. Those who have an active certification expiring within six months can renew their certifications annually - at no cost - by passing a renewal assessment on Microsoft Learn. Visit your Certification Dashboard to see which certifications are available for you to renew.
Read our article here to learn about renewal requirements, how to go about your renewal and our FAQs.
OTHER TRAINING RESOURCES
- Visit Microsoft Learn, where you can find on-demand/self-paced certification focused training resources
- View the Partner Training Calendar, for all upcoming training sessions and certification support
- Our ongoing Certification Campaign trainings to help you get ready to take a certification exam
- Use our Cloud Champion Webinars and Learning Paths
- If you're just starting out on your certification journey, check out our Microsoft Virtual Training Days.
We look forward to supporting you on your Microsoft learning journey! | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00541.warc.gz | CC-MAIN-2022-49 | 2,871 | 46 |
http://www.radiosim.com/constellation.htm | code | Constellation of a digitally modulated signal (positions of the signal in the Fresnel plan at sampling times)
This graph shows a 128 QAM / 140 Mbit/s constellation, after adaptive equalization of a selective fading by the "Auto-adaptive equalizer" module of Radiosim. Number of symbols = 4096. Radiosim can also compute a full signature, varying the notch frequency by a "Loop" module.
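As an aside, the general shape of such a plot is easy to reproduce with a short NumPy simulation (this sketch is unrelated to Radiosim's own modules; it just scatters noisy samples around an ideal cross-shaped 128-QAM grid):
import numpy as np
import matplotlib.pyplot as plt

# Cross-shaped 128-QAM: 12x12 grid of odd levels with the four 2x2 corners removed.
levels = np.arange(-11, 12, 2)
ideal = np.array([complex(i, q) for i in levels for q in levels
                  if not (abs(i) > 7 and abs(q) > 7)])
assert ideal.size == 128

rng = np.random.default_rng(0)
symbols = rng.choice(ideal, size=4096)                  # 4096 transmitted symbols
noise = rng.normal(0, 0.15, 4096) + 1j * rng.normal(0, 0.15, 4096)
received = symbols + noise                              # residual noise after equalization

plt.scatter(received.real, received.imag, s=2)
plt.gca().set_aspect("equal")
plt.title("Simulated 128-QAM constellation")
plt.show()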
The following figure shows parameters of the "Auto-adaptive equalizer" entered for this example :
Other examples : Fresnel plan Eye diagram Group delay of a feeder
Group delay of a filter
Interferences Bit Error Rate Multicarrier amplification Constellation Differential gain | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00334.warc.gz | CC-MAIN-2022-05 | 661 | 6 |
https://www.aniruddhafriend-samirsinh.com/day-2-of-national-homoeopathy-seminar-attended-by-bapu/ | code | On second day of the National Homoeopathy Seminar organizers again requested Bapu to speak and he humbly accepted the request. Dr. Vikas Badhe was also invited to speak during which he narrated his personal experience of Bapu. A Group Discussion was also organized where Bapu even participated as a student.
Bapu at National Homoeopathy Seminar
Published at Mumbai, Maharashtra – India | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00094.warc.gz | CC-MAIN-2023-06 | 387 | 3 |
http://lostiesarestillaround.tumblr.com/ | code | This is me when I finally get the joke.
“I’m sinking. Water goes out, takes the sand with it and you sink.”
Remember when all you had to give me was a flower?
lost meme ~ ten themes (10/10) ~ rain
….and then it started to rain | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642168/warc/CC-MAIN-20140305060722-00053-ip-10-183-142-35.ec2.internal.warc.gz | CC-MAIN-2014-10 | 234 | 5 |
https://www.fedoraproject.org/w/index.php?title=User:Penasio&direction=next&oldid=285986 | code | Claudio Penasio Junior
I graduated with a Bachelor's degree in Mathematics in 2000. In 2002 I took my first RHCE RH300 class. Since 2000 I have been working with computers and technology, mostly with Linux. I am currently in charge of back-office support at IPEM in Sao Paulo. I have been a user of Fedora since Fedora Core 1. I joined the group of ambassadors for the first time in August 2009.
My Fedora Events
- FLISoL- Santo André- SP - 2010 | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00468.warc.gz | CC-MAIN-2021-39 | 413 | 4 |
https://docs.oracle.com/cd/A91202_01/901_doc/appdev.901/a89857/oci16m36.htm | code | |Oracle Call Interface Programmer's Guide
Part Number A89857-01
More OCI Relational Functions, 36 of 97
This call is used for an Advanced Queuing dequeue operation using the OCI.
sword OCIAQDeq ( OCISvcCtx *svch, OCIError *errh, text *queue_name, OCIAQDeqOptions *dequeue_options, OCIAQMsgProperties *message_properties, OCIType *payload_tdo, dvoid **payload, dvoid **payload_ind, OCIRaw **msgid, ub4 flags );
OCI service context.
An error handle you can pass to OCIErrorGet() for diagnostic information in the event of an error.
The target queue for the dequeue operation.
The options for the dequeue operation; stored in an OCIAQDeqOptions descriptor.
The message properties for the message; stored in an OCIAQMsgProperties descriptor.
The TDO (type descriptor object) of an object type. For a raw queue, this parameter should point to the TDO of SYS.RAW.
A pointer to a pointer to a program variable buffer that is an instance of an object type. For a raw queue, this parameter should point to an instance of OCIRaw.
Memory for the payload is dynamically allocated in the object cache. The application can optionally call OCIObjectFree() to deallocate the payload instance when it is no longer needed. If the pointer to the program variable buffer (*payload) is passed as null, the buffer is implicitly allocated in the cache.
The application may choose to pass null for payload the first time OCIAQDeq() is called, and let the OCI allocate the memory for the payload. It can then use a pointer to that previously allocated memory in subsequent calls to OCIAQDeq().
The OCI provides functions which allow the user to set attributes of the payload, such as its text. For information about setting these attributes, refer to "Manipulating Object Attributes".
A pointer to a pointer to the program variable buffer containing the parallel indicator structure for the object type.
The memory allocation rules for payload_ind are the same as those for payload.
The message ID.
Not currently used; pass as OCI_DEFAULT.
Users must have the aq_user_role or privileges to execute the dbms_aq package in order to use this call. The OCI environment must be initialized in object mode to use this call. | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315809.69/warc/CC-MAIN-20190821043107-20190821065107-00538.warc.gz | CC-MAIN-2019-35 | 2,181 | 29
https://www.crownaudio.com/en-US/softwares/iqwic-v8-01-windows | code | Last Updated: Aug 20
Released April 13, 2006
It is strongly recommended that the firmware in all TCP/IQ components be upgraded to the latest version before using IQwic 8. There are two options, HiQnet to be used with Harman Pro System Architect or IQwic, please see the "file information" link with the latest firmware for more information.
The updates from IQwic 8.00 to 8.01 are:
- Support for PIP/Itech versions 2.01, 2.02, 2.03
- Changing network settings now makes the same changes for TCP/IQ Utility.
- System AUX - grid is now single line per item
- System Aux - devices that don't have AUX are not included in the list.
- Changed 'IQ Audio Systems' on menu items to 'Crown Audio'.
- Added SNMP support for PIP versions 2.00 and later.
- Improved SNMP support in the case where non-SNMP devices are in the network.
- Removed support for MIDI Time Code scheduler.
- Fixed some minor anomalies in the Network Wizard.
- Fixed bug that can result in duplicate entries in IP address list after running Network Wizard.
- TCP/IQ Utility: Added subnet mask to status bar.
- TCP/IQ Utility: Changed 'Settings' on menu to 'Network Settings'.
- TCP/IQ Utility: Help now works from both Network Wizard and Network Settings.
- TCP/IQ Utility: Fixed bug that can cause duplicate entries under high network traffic.
- IQ Gateway: Added subnet mask to status bar.
System Requirements for IQwic Software:
- Intel Pentium 500 Mhz or equivalent processor (800 Mhz recommended)
- Microsoft Windows Vista, XP, 2000, NT 4.0 (w/Service Pack 5 or later)
(Windows 2000, XP, or Vista is strongly recommended)
- 64 MB RAM (128 MB recommended)
- 20 MB hard drive space
- SVGA (800x600) or higher resolution display card and monitor
- Ethernet card (100 Mbps recommended)
- Mouse or other pointing device | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.17/warc/CC-MAIN-20231205105136-20231205135136-00380.warc.gz | CC-MAIN-2023-50 | 1,782 | 28 |
https://www.mail-archive.com/[email protected]/msg08649.html | code | I am new to graylog2 and I am having an issue with the timestamp that is
displayed in each message.
I understand that the timestamp reflects the time that graylog imported the
log messages, and not the timestamp associated with the application log
message. For example, if I send a log file from my application server to
graylog server, the timestamp of my application log message is a different
field (when extracted) in graylog UI
I was able to configure my application log message timestamp to be a date type, but search queries then have to be formulated to reflect the time zone difference, since "now" is going to be UTC time. So to query the latest 5 minute time frame I end up with an awkward query that offsets "now" by that difference.
The BEST solution is to replace/overwrite the timestamp of the graylog
server with the timestamp of the application log message that is shipped
over to graylog2. This is because the web interface is using the timestamp
to do query.
I was able to do it with Logstash by using a date filter, and I was able to
do it with Fluentd by using a plugin. Both worked beautifully. However, I
have not found a solution for graylog2.
Is there a workaround?
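One possible workaround, sketched below on the assumption that you control the log shipper rather than Graylog itself: build the GELF message yourself and put the application's own timestamp into the GELF timestamp field (epoch seconds), so the server-side arrival time is never used. This is only an illustration, not an official Graylog recommendation.
import datetime
import json
import socket

def ship_to_graylog(host, port, message, app_time_str):
    # The timestamp format and host name below are assumptions for the example.
    app_time = datetime.datetime.strptime(app_time_str, "%Y-%m-%d %H:%M:%S")
    # If the application logs in a non-UTC zone, attach the proper tzinfo here
    # before calling timestamp(), otherwise local-time rules are applied.
    gelf = {
        "version": "1.1",
        "host": "app-server-01",
        "short_message": message,
        "timestamp": app_time.timestamp(),  # used by Graylog instead of arrival time
        "level": 6,
    }
    payload = json.dumps(gelf).encode("utf-8")
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, (host, port))

ship_to_graylog("graylog.example.com", 12201, "user login ok", "2014-07-01 09:15:02")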
| s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948585297.58/warc/CC-MAIN-20171216065121-20171216091121-00606.warc.gz | CC-MAIN-2017-51 | 1,441 | 24
https://www.vn.freelancer.com/projects/javascript/software-developer-web-based-application/ | code | We are looking for Part-time Backend and Front end developer who can give minimum 2 hrs on daily basis and assist on our ongoing projects. Payment would be monthly basis.
Below are the requirements:
1. Candidates from Pune would be preferred, as project development may require physical meetings as well
2. 1-3 years experience
3. Back-end technology stack (Java Spring, Hibernate, mySQL). Should have knowledge on REST APIs development
4. Front end (NodeJS, Angular JS 4.0, HTML5, CSS, bootstrap). Good to have knowledge on integrating different APIs with front end.
5. Should be a dedicated and serious resource who can work as an individual contributor.
40 freelancers are bidding an average of ₹30140 for this job
Hello, I have more than 10 years of experience in Java/J2EE and Spring development. Working as a technical architect in a multinational company in Pune. Regards, Krishna
I have worked with similar project (java) in the past for MNC, and am confident I can meet and exceed your [login to view URL] forward to receiving your response. | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531994.14/warc/CC-MAIN-20190421160020-20190421182020-00404.warc.gz | CC-MAIN-2019-18 | 1,055 | 10 |
https://forums.digi.com/t/serial-port-problems-when-running-a-net-50-at-33mhz/5210 | code | We are trying to run a net+50 at 33Mhz and are having trouble with one of the serial ports. The reason we are running at the slower speed is because we had a bad board layout and the system/memory is unstable at 44Mhz because of it. we are using an external ttl oscillator. I changed the define XTAL1_FREQUENCY to 33177600, and the define PLLTST_SELECT to SELECT_THE_XTAL1_INPUT. With the previous setup the following occurs. Serial port 1 used as a console at 9600 baud. This serial port works correctly. Measuring the baud rate on an oscilloscope shows error less than 0.2% Serial port 2 used to talk to a modem at 2400 baud. This serial port is off by 2.5% as measured by an oscilloscope. It makes no sense why the 9600 works and the 2400 does not. any help would be appreciated. I belive our net+50 version number is 24. -Blake
Are you using an external crystal or oscillator? Does it work fine at 44 MHz (your board or the Dev. Board)? Is it a problem only with serial port 2? Have you tried 9600 on serial port 2 or 2400 on serial port 1? What kind of modem are you using? Normally it's OK and even recommended to set the serial speed to the modem higher than the actual baud rate between modems. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00223.warc.gz | CC-MAIN-2024-18 | 1,192 | 2
https://www.mobilejobs.com/jobs/7053785/soc-microarchitecture-design-engineer.asp | code | As a Microarchitecture Design Engineer, you will be working as a part of the SOC design team within the Scalable performance CPU Development Group, working on next-generation Xeon products/IPs for Server markets.
You will be responsible for all phases of front-end architecture and design. This includes micro-architectural design and specification, working with architects on feature scoping and approvals, feasibility studies, logic design, integration of third party IPs etc.
You will be engaged in activities ranging from uArchitecture development and design trade-off analysis, RTL coding, creating Specification documents, test-plan generation, design reviews, timing analysis, ECOs, and post silicon debug.
Close collaboration with planning teams, architects, validation and physical design teams will be required.
Also, you will provide IP integration support to SoC customers.
You will have at-least a BS or MS degree in Electrical/Computer Engineering, or Computer Science.
Minimum 5 years' experience as a Logic designer.
Additional qualifications include:
Familiarity with Verilog/SystemVerilog RTL coding and logic design, as well as the validation/physical design aspects of the work, is required.
Familiarity with a range of internal and 3rd-party logic design tools is also required. | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866965.84/warc/CC-MAIN-20180624141349-20180624161349-00036.warc.gz | CC-MAIN-2018-26 | 1,291 | 10 |
https://rao.livejournal.com/60896.html | code | [openssh-unix-announce] OpenSSH Security Advisory: buffer.adv
Markus Friedl markus at openbsd.org
Tue Sep 16 14:32:18 EST 2003
This is the 1st revision of the Advisory.
This document can be found at: http://www.openssh.com/txt/buffer.adv
1. Versions affected:
All versions of OpenSSH's sshd prior to 3.7 contain a buffer
management error. It is uncertain whether this error is
potentially exploitable, however, we prefer to see bugs
Upgrade to OpenSSH 3.7 or apply the following patch.
RCS file: /cvs/src/usr.bin/ssh/buffer.c,v
retrieving revision 1.16
retrieving revision 1.17
diff -u -r1.16 -r1.17
--- buffer.c 26 Jun 2002 08:54:18 -0000 1.16
+++ buffer.c 16 Sep 2003 03:03:47 -0000 1.17
@@ -69,6 +69,7 @@
buffer_append_space(Buffer *buffer, u_int len)
+ u_int newlen;
if (len > 0x100000)
@@ -98,11 +99,13 @@
/* Increase the size of the buffer and retry. */
- buffer->alloc += len + 32768;
- if (buffer->alloc > 0xa00000)
+ newlen = buffer->alloc + len + 32768;
+ if (newlen > 0xa00000)
fatal("buffer_append_space: alloc %u not supported",
- buffer->buf = xrealloc(buffer->buf, buffer->alloc);
+ buffer->buf = xrealloc(buffer->buf, newlen);
+ buffer->alloc = newlen;
/* NOTREACHED */
Sanely formatted patch here.
The official word is that the openssh developers don't know if the bug is remotely exploitable. But given all I've heard over the past couple of weeks, plus the increased port 22 scanning reported at http://www.heise.de/security/news/meldung/40331, I'm inclined to assume the worst -- remote root exploit is out there in the wild.
The bug will be assigned CAN-2003-0693. Number was assigned on 2003-08-14. Looks like a botched coordinated release.
new Debian x86 debs have been uploaded to security.debian.org.
Red Hat has new RPMs now. | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512323.79/warc/CC-MAIN-20181019041222-20181019062722-00125.warc.gz | CC-MAIN-2018-43 | 1,751 | 36 |
https://mcpedl.com/lukass-java-parity-pack/ | code | Lukas' Java Parity Pack is a resource pack that aims to close the parity gap between Java Edition and Bedrock Edition. This resource pack accomplishes this by changing small parity issues to make them closer to the Java Edition. This resource pack was designed to be compatible with nearly every type of content available: UI packs, resource packs, shaders, marketplace content, and other types of content. Additionally, all devices are supported - from Windows 10 with RTX to consoles to mobile devices.
Features in Version 2.5.1: The Shulker Update
-Now compatible with the color coded shulker ui pack
-**This pack is best suited for use with Java Aspects by @agentms_. Please consider downloading Aspects and applying this pack above Aspects**
Features from previous releases:
-Heavily reduced the amount of #item_id_aux integers used in bindings for better compatibility for future versions of Minecraft.
-Fixed some HUD screen performance issues on lower end devices when changing hotbar slots.
-Item rarity now uses the items folder instead of UI.
-As a result, the elytra won't have an aqua rarity when enchanted (this is a game bug).
-Updated to support 1.16.100 and 1.16.200 beta*
-*Enabling Caves and Cliffs features is not fully supported due to Item ID numbering issues in the 188.8.131.52+ beta releases
-Be advised if you do enable these features, a random item may render as a shield in the inventory and the hotbar item colors may not be 100% accurate
-Minimum supported game version: 1.16.100
-Fixed some clipping issues related to selecting items and "clips_children" in releases with RenderDragon
-Fixed hotbar text positioning in releases with RenderDragon
-Fixed selected durability bar in releases with RenderDragon (for some reason it was not rendering at all???)
-Durability bar now has Java Edition colors
-Selecting a hotbar slot without an item immediately after selecting a slot with an item will make the item text disappear immediately
(matches Java Edition behavior)
-Item rarity colors now behave correctly in the hotbar and inventory
-Example: An unenchanted elytra will have yellow colored text, while an enchanted elytra will have aqua colored text
-Banner Patterns now have yellow and magenta coloring when hovering over items and in the hotbar
-Updated English and Spanish translations to include correct formatting of the Ominous Banner
-Conduit now has correct inventory icon shading to match Java Edition
-Improved inventory performance again by removing more icon bindings
-Fixed all content errors/warnings. Note: Other packs may have these errors. If you see these, they are not a part of LJP.
-Removed flying item renderer (items flying across screen when crafting or shift clicking)
-Removed popping item animation in the hotbar when interacting or using an item
-Removed "pop" sound effect when dropping items
-Removed category text from Survival Mode (i.e. the blue text underneath the item name saying "Nature" and others)
-Changed banner, potion, fireworks, and fireworks star text colors to match Java Edition
-Debuffs will appear red and buffs will appear blue in the effects list as a result of the previous change
-Effect command output changed a bit as a result of the previous change
-Reduced default mipmapping level to better match Java Edition
-Changed red overlay when attacking an entity to better match Java Edition (only on non-RTX Windows 10 and mobile) *APPLY THIS PACK ABOVE ANY SHADER PACKS*
-Changed primed TNT to match the Java Edition (only on non-RTX Windows 10 and mobile) *APPLY THIS PACK ABOVE ANY SHADER PACK*
-OptiFine Feature: Enable/Disable Vignette (the dark border at low Y-levels when fancy graphics is on), use the subpacks slider (RESTART GAME TO APPLY CHANGES)
-You can now hover over locked trades and see the item's full details (including enchantments)
-Roofed forest grass is now properly colored
-Fully supported with Java Aspects/Java Aspects+ and Vanilla Deluxe v9.1.1 (apply above both packs)
-Only American English (en_US) is supported at the moment, more translations to come
-Removed chat filter (only chat screens/panels, effects are only client-side)
-Best to just delete the contents of profanity_filter.wlist in the game files for global effect
-Removed extra HUD tooltip lines that are not on Java Edition (enchantments, potion effects, etc)
-Removed black background on HUD item tooltips
-There is now a shadow on HUD item tooltips
-F3 Java Edition style coordinates
-Enderman Eyes now match Java Edition
-Experimental Features are no longer required to be enabled
-Effects can now be viewed in the inventory
-Chests, trapped chests, and ender chests now use the Java Edition icon when in the inventory
-Made a couple of tweaks to the selected item stack text as a result of the previous change
-Selection box on blocks is now slightly transparent, matches Java Edition (working on width of it, coming soon!)
-Tweaked the shading of entities when attacked to slightly better match Java Edition
-Fixed GLSL shading issue, mobs should now be visible on some mobile devices again (sorry!)
-Entity shading when attacking is actually tweaked this time (oopsies!)
-Chest, trapped chest, and ender chest sizes are now the right size in Pocket UI
-Fixed chests, trapped chests, and ender chests not showing particles when broken
~Chests and trapped chests now use oak plank particles and ender chests now show obsidian particles
-The following icons now use Java Edition shading in the inventory and hotbar (more icons to come soon):
~All chest types
~Jack O' Lanterns
-Glass pane and colored glass panes render 2D in the inventory and hotbar
**Note: the above 2 features will not appear in villager trades or wandering trader trades for performance and compatibility reasons**
-Removed item rendering using /give as a result of the previous 2 changes (also happens to match Java Edition but might be added back in the future)
-Fixed chest not showing in inventory pocket UI tab and trade screen inventory pocket UI tab
-Fixed an issue where the chest icon would follow the cursor after dragging a chest stack to 0 items
-Reduced lag spikes when opening containers for the first time
-Creative inventory is more optimized and won't lag as much compared to previous versions of the pack
-Added Java Edition wall icons for all walls
-More blocks use Java Edition shading in the inventory
-Fixed the size of the durability bar that Bedrock uses
-Fixed an issue where trade icons would sometimes not appear
-Better optimized trading screen
-Fixed icons not appearing in stonecutter screen
-Fixed chest icons in several pocket UI tabs
-Trading screens now use 2D glass icons and shaded blue ice and shaded glowstone
-Changed layering and offsets of lots of things related to pocket UI and mounts in the HUD
-Cartographer villagers will always show the 2D glass pane icons, fixes the issue where sometimes glass panes would render with the 3D model
-Fixed an issue where sometimes the shading overlay in the trade screen would not work
-Rewrote GLSL entity shader, hopefully this fixes the invisible entity bug on mobile (I can't reproduce this issue so I can't confirm that this fixes it)
-Java shield and enchanted shield icons now work in 1.16.20 release and 1.16.100 beta (they were previously in the pack files, but were not fully implemented)
-Revamped a lot of UI code, should be better compatible with other packs now
-Updated and added a couple of English (en_US, en_GB) translations
-Fixed a couple of icons so that they are 100% true to Java Edition (End Rod and Scaffolding)
-Fixed chest icon in pocket horse screen
-Changed how selected/hovered icons render. This fixes the issue where transparent images (i.e. glass panes) would briefly render the old model when
clicking back into the inventory
-Removed all shader files in preparation for Render Dragon release (this means that there are now 2 download links instead of 4, they work for all devices except PCs on the RTX beta)
-Lodestone Compass now uses the enchanting overlay instead of a blue overlay
-The enchanting overlay will no longer be scaled down (this may impact performance on some devices and is now a subpack option)
-Subpacks have been changed. Here are the new subpack settings:
-Vignette Off + Bedrock Enchanting Overlay
-Vignette Off + Java Enchanting Overlay
-Vignette On + Bedrock Enchanting Overlay
-Vignette On + Java Enchanting Overlay
-Changed the hurt overlay so that it will work on all devices, including consoles (Windows 10 RTX will currently not be supported, however)
-Unfortunately, horses and the ender dragon will still have the Bedrock hurt overlay since those entities are currently hardcoded
-Fixed shield icons on the 184.108.40.206+ beta releases
-Removed several bindings in ui_common.json. This should translate to slightly better performance on some devices.
-Fixed several item categories in the 220.127.116.11/57 beta
-Removed more category texts from more items (more technical items and chemistry items)
-The minimum version required to run the pack will now be 1.16.40, meaning the RTX beta release will not be completely supported
Pictures from v2.0.0-v2.5.0:
This photo shows how effects look in the inventory, as well as the correct chest icon.
Here you can see how coordinates look in the top left corner!
Enderman eyes look like those seen in Java Edition.
The red overlay when dealing damage has been adjusted a little bit.
Glass panes now render 2D in the inventory!
Walls now render similar to Java Edition when in the inventory.
The shield icon uses the Java Edition icon
The durability bar matches Java Edition and more items have the correct item rarity color.
- When applying this pack, please restart your game AND your world for changes to fully take effect.
- Please apply this pack directly to your world if you plan on using Lukas' Java Parity Pack with Education Features enabled or when using custom add-ons!
- Please apply this pack above all other resource packs (including shader packs!)
- When enabling/disabling the vignette, please restart your game for changes to fully take effect.
- Please do not re-distribute this pack to other websites, make profit off of this pack, or remix this pack without explicit permission of @MCGaming_Lukas.
- Please credit @MCGaming_Lukas when discussing/talking about this resource pack.
So what are you waiting for? Download today!
AgentMindStorm: For letting me use and edit some render controllers from Java Aspects to provide out-of-the-box compatibility with Java Aspects.
-CrisXolt: For letting me use Vanilla Deluxe: Java UI source code to provide out-of-the-box compatibility with VDX 10.
http://markcrocker.com/~mcrocker/Computer/Panorama.shtml | code | My first foray into Java programming was an online Java Applet Workshop created by IBM. The class is available from:
It's really not a very good first Java class, but that's only because it assumes that you already know a bit of Java. They also have another class, Introduction to Java that wasn't available for free at the time I took the Applet class that should be considered a prerequisite for the Applet class. Naturally, I took this Introduction class later, but by then I had already read most of Thinking In Java, 2nd ed., which is a much better way to learn Java.
The project in the Applet class was to make an applet that scrolled a panoramic image around in a loop. Just to show that I actually did take the class (all 37 parts), here's my version of the class project:
Source code (corresponding Javadoc)
Naturally, my version has a few enhancements and some of my own style added in. Enhancements include:
http://www.work-at-home-forum.com/threads/more-information-about-sql-database.705/ | code | I would like to know more about SQL databases. What is the total number of entries (rows) permitted in each SQL table? Is it possible to transfer an SQL database from one server to another (a different web hosting company)? Is it possible to reduce the size of a database by deleting some of the pending articles? Please give a good reply.
http://www.amibroker.org/userkb/2007/06/19/introduction-to-real-time-system-design/ | code | June 19, 2007
Developing trading systems is a very personal activity, and opinions vary widely regarding what is the best approach. Most of the solutions presented here were developed by Herman van den Bergen. They may not be compatible with your personal preferences and you are encouraged to explore other alternatives more suited to your own trading style before deciding on a possible solution. To develop a Real-Time Automated Trading (RT/AT) system, you must have a trading system to automate. If you haven’t developed one yet, you may find some ideas in the Research and Exploration or Trading-Systems categories.
Modular design, readability, and simplicity of the system code are desirable to facilitate future maintenance. Posts in this category will progress through the various phases of developing a Real-Time Automated system.
Edited by Al Venosa | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00035-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 861 | 4 |
https://pedrovallejo.works/Daniel-Ellsberg-s-Dream-Team | code | 1. Retired New York architect and notorious intelligence-leak facilitator | 2. Euro cryptographer / programmer | 3. Pacific physicist and illustrator | 4. A Pacific author and economic policy lecturer | 5. Euro, ex-Cambridge mathematician / cryptographer / programmer | 6. Euro businessman and security specialist / activist | 7. Author of software that runs 40% of the world’s websites | 8. US pure mathematician with criminal law background | 9. An infamous US ex-hacker | 10. Pacific cryptographer / physicist and activist | 11. US-Euro cryptographer and activist / programmer | 12. Pacific programmer | 13. Pacific architect / foreign policy wonk.